Sample records for iterative soft bit

  1. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    Excerpt fragments: "Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder" … for AWGN channels has long been studied. There are well-known soft-decision codes, like turbo codes and LDPC codes, that can approach capacity to… bits) low density parity check (LDPC) code… The coded bits are randomly interleaved so that bits nearby go through different sub-channels, and are…

  2. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies for achieving densities of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between an iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10^-6.

  3. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
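
    The interaction rule described above reduces to a simple control loop. The sketch below is a schematic Python rendering with trivial stand-in decoders (`turbo_inner_iteration` and `rs_outer_decode` are hypothetical stubs, not the paper's actual turbo and reliability-based RS decoders); only the early-termination logic mirrors the proposed system.

```python
# Schematic sketch of the outer/inner decoder interaction; both decoders
# below are hypothetical stubs, not the paper's turbo and RS decoders.

def turbo_inner_iteration(llrs):
    """One inner turbo iteration (stub: confidence grows each pass)."""
    return [l * 1.5 for l in llrs]

def rs_outer_decode(llrs):
    """Reliability-based outer decode (stub: succeed once LLRs are firm)."""
    success = all(abs(l) > 4.0 for l in llrs)
    bits = [1 if l < 0 else 0 for l in llrs]
    return success, bits

def decode_concatenated(llrs, max_inner_iters=8):
    """Run inner iterations only until the outer decoder succeeds; this
    early termination is what reduces the average decoding delay."""
    for iteration in range(1, max_inner_iters + 1):
        llrs = turbo_inner_iteration(llrs)
        success, bits = rs_outer_decode(llrs)
        if success:
            return bits, iteration        # outer decode succeeded: stop early
    return bits, max_inner_iters          # preset maximum reached

print(decode_concatenated([0.9, -1.2, 2.0, -0.7]))  # -> ([0, 1, 0, 1], 5)
```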

  4. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  5. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best of the investigated low complexity algorithms, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
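
    As a point of reference for what a soft demapper computes, here is a minimal max-log LLR demapper for Gray-mapped 16-QAM (an illustrative sketch assuming one particular Gray labeling; it reproduces neither the paper's selected low complexity algorithm nor its fixed-point architecture).

```python
import numpy as np

# Minimal max-log soft demapper for Gray-mapped 16-QAM. Gray 16-QAM
# factors into two independent PAM-4 coordinates, so each axis yields
# two bit LLRs (convention here: positive LLR favors bit 0).

LEVELS = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10)  # unit-energy PAM-4
LABELS = [(0, 0), (0, 1), (1, 1), (1, 0)]                # assumed Gray labels

def maxlog_llr_1d(y, noise_var):
    """Max-log LLRs for the two Gray bits carried by one coordinate."""
    d2 = (y - LEVELS) ** 2 / noise_var                   # scaled distances
    llrs = []
    for b in range(2):
        m0 = min(d for d, lab in zip(d2, LABELS) if lab[b] == 0)
        m1 = min(d for d, lab in zip(d2, LABELS) if lab[b] == 1)
        llrs.append(m1 - m0)
    return llrs

def demap_16qam(symbol, noise_var):
    """Return the four bit LLRs carried by one received 16-QAM symbol."""
    return (maxlog_llr_1d(symbol.real, noise_var) +
            maxlog_llr_1d(symbol.imag, noise_var))

print(demap_16qam(0.9 + 0.3j, noise_var=0.1))
```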

  6. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding for binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  7. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3: The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability, so MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
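
    In the usual notation (assumed here rather than quoted from the chapter), the soft output of the MAP algorithm for information bit u_k given the received sequence r is the bit-level log-likelihood ratio

```latex
L(u_k) = \ln \frac{P(u_k = 1 \mid \mathbf{r})}{P(u_k = 0 \mid \mathbf{r})}
       = \ln \frac{\sum_{\mathbf{c}\,:\,u_k = 1} P(\mathbf{c} \mid \mathbf{r})}
                  {\sum_{\mathbf{c}\,:\,u_k = 0} P(\mathbf{c} \mid \mathbf{r})}
```

    The sign of L(u_k) gives the minimum-bit-error-probability hard decision, and its magnitude is the soft reliability passed to later decoding stages.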

  8. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Moreover, its BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  9. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at a BER of 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  10. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
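
    Gallager's hard-decision bit flipping algorithm mentioned above fits in a few lines. The sketch below runs it on the incidence matrix of the projective plane PG(2,2) (the Fano plane), the smallest example of the finite-geometry construction; it is illustrative only, and the soft-decision belief propagation alternative is omitted.

```python
import numpy as np

# Bit-flipping decoding on the Fano plane incidence matrix: 7 checks
# (lines) on 7 bits (points), every bit in exactly 3 checks.

H = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 1, 0, 0, 1],
              [0, 0, 1, 0, 1, 1, 0]])

def bit_flip_decode(H, r, max_iters=20):
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2            # 1 marks an unsatisfied check
        if not syndrome.any():
            return r, True              # all parity checks satisfied
        votes = H.T @ syndrome          # unsatisfied checks touching each bit
        r[votes == votes.max()] ^= 1    # flip the most-suspect bit(s)
    return r, False

received = np.array([0, 0, 0, 0, 1, 0, 0])  # all-zero codeword, error in bit 4
print(bit_flip_decode(H, received))         # -> (all-zero array, True)
```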

  11. Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.

    PubMed

    Majumder, Saikat; Verma, Shrish

    2015-01-01

    Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low cost nodes in wireless networks. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon convolutional code. Simulation results show significant improvement in performance compared to an existing scheme based on compound codes.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana

    The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device whose configuration is stored in Static RAM (SRAM), with functional data stored in dedicated Block RAM (BRAM) and functional state logic in flip-flops. Single Event Upsets (SEU) due to the ionizing radiation of neutrons cause soft errors, unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and soft error repair, where possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEU can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of the tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)

  13. Performance Enhancement of MC-CDMA System through Novel Sensitive Bit Algorithm Aided Turbo Multi User Detection

    PubMed Central

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi carrier code division multiple access (MC-CDMA) is a promising multi carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. MC-CDMA is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter symbol interference (ISI) and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper a low complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding by mitigating the detrimental effects of MAI. PMID:25714917

  14. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder, and the log-likelihood ratios (LLR) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
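
    A minimal sketch of the feedback step, with assumed weighting values rather than the paper's tuned function of channel condition: bits the JPEG2000 error-resilience checks confirm get their LLRs amplified, flagged-erroneous bits get attenuated, and the factor grows as channel SNR drops.

```python
import numpy as np

# Illustrative LLR reweighting from source-decoder feedback; the factor
# values and SNR threshold below are assumptions for demonstration.

def reweight_llrs(llrs, known_correct, known_error, channel_snr_db):
    w = 2.0 if channel_snr_db < 1.0 else 1.3   # larger weighting at low SNR
    out = llrs.copy()
    out[known_correct] *= w                    # reinforce confirmed bits
    out[known_error] /= w                      # damp bits flagged as wrong
    return out

llrs = np.array([3.1, -0.4, 1.7, -2.2])
print(reweight_llrs(llrs, known_correct=[0], known_error=[2],
                    channel_snr_db=0.5))       # -> [ 6.2  -0.4  0.85 -2.2 ]
```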

  15. Iterative current mode per pixel ADC for 3D SoftChip implementation in CMOS

    NASA Astrophysics Data System (ADS)

    Lachowicz, Stefan W.; Rassau, Alexander; Lee, Seung-Minh; Eshraghian, Kamran; Lee, Mike M.

    2003-04-01

    Mobile multimedia communication has rapidly become a significant area of research and development, constantly challenging boundaries on a variety of technological fronts. The processing requirements for the capture, conversion, compression, decompression, enhancement, display, etc. of increasingly higher quality multimedia content place heavy demands even on current ULSI (ultra large scale integration) systems, particularly for mobile applications where area and power are primary considerations. The ADC presented in this paper is designed for a vertically integrated (3D) system comprising two distinct layers bonded together using Indium bump technology. The top layer is a CMOS imaging array containing analogue-to-digital converters and a buffer memory. The bottom layer takes the form of a configurable array processor (CAP), a highly parallel array of soft programmable processors capable of carrying out complex processing tasks directly on data stored in the top plane. This paper presents an ADC scheme for the image capture plane. The analogue photocurrent or sampled voltage is transferred to the ADC via a column or a column/row bus. In the proposed system, an array of analogue-to-digital converters is distributed so that a one-bit cell is associated with one sensor. The analogue-to-digital converters are algorithmic current-mode converters, and eight such cells are cascaded to form an 8-bit converter. Additionally, each photo-sensor is equipped with a current memory cell, and multiple conversions are performed with scaled values of the photocurrent for colour processing.
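
    The conversion performed by the cascaded one-bit cells is the standard algorithmic (cyclic) current-mode scheme; a behavioral sketch with ideal floating-point values standing in for currents (component nonidealities such as mirror mismatch are ignored):

```python
# Behavioral sketch of an algorithmic (cyclic) current-mode ADC: each of
# the cascaded one-bit cells doubles the residue current and subtracts
# the reference when the doubled residue exceeds it.

def algorithmic_adc(i_in, i_ref=1.0, n_bits=8):
    bits, residue = [], i_in
    for _ in range(n_bits):
        residue *= 2                 # current doubling in the cell
        if residue >= i_ref:
            bits.append(1)
            residue -= i_ref         # subtract the reference current
        else:
            bits.append(0)
    return bits                      # MSB first

print(algorithmic_adc(0.6377))       # -> [1, 0, 1, 0, 0, 0, 1, 1] ~ 163/256
```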

  16. Soft-Input Soft-Output Modules for the Construction and Distributed Iterative Decoding of Code Networks

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.

    1998-01-01

    Soft-input soft-output building blocks (modules) are presented to construct, and iteratively decode in a distributed fashion, code networks, a new concept that includes and generalizes various forms of concatenated coding schemes.

  17. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper a low complexity and energy efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long block length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s; at 45 GSamples/s the power consumption is estimated to be 4.95 W.

  18. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of an optoelectronic random bit generator (RBG) based on laser chaos is experimentally analyzed from the two aspects of the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, and that they pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together these results give mathematically provable evidence that an ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
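
    The multi-bit extraction step can be sketched as keeping the m least-significant bits of each digitized chaotic sample, where the amplified fluctuations dominate any deterministic structure (m = 4 and 8-bit samples are assumed here for illustration; the paper's exact extraction parameters may differ):

```python
import numpy as np

# Illustrative multi-bit extraction: mask off the m LSBs of each 8-bit
# sample and unpack them into a flat 0/1 stream.

def extract_bits(samples_8bit, m=4):
    lsbs = samples_8bit & ((1 << m) - 1)                  # keep m LSBs
    return ((lsbs[:, None] >> np.arange(m)) & 1).ravel()  # unpack to 0/1

rng = np.random.default_rng(1)                            # stand-in for the ADC
samples = rng.integers(0, 256, size=1000, dtype=np.uint8)
bits = extract_bits(samples)
print(bits.size, round(float(bits.mean()), 3))  # monobit check: mean near 0.5
```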

  19. Multi-bits error detection and fast recovery in RISC cores

    NASA Astrophysics Data System (ADS)

    Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu

    2015-11-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasingly common due to the rapidly shrinking feature size of the ICs on a chip. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical path delay, area and power penalty. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.

  20. 16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.

    PubMed

    Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2012-05-21

    We transmit 160 x 100 G PDM RZ 16 QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16 QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.

  1. Asymmetric Memory Circuit Would Resist Soft Errors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Perlman, Marvin

    1990-01-01

    Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.

  2. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  3. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.

    Program summary
    Program title: ITER-REF
    Catalogue identifier: AECO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7211
    No. of bytes in distributed program, including test data, etc.: 41 862
    Distribution format: tar.gz
    Programming language: FORTRAN 77
    Computer: desktop, server
    Operating system: Unix/Linux
    RAM: 512 Mbytes
    Classification: 4.8
    External routines: BLAS (optional)
    Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
    Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied which produces a correction to the computed solution at each iteration, yielding the method commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
    Running time: seconds/minutes
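
    The iterative refinement loop described under "Solution method" is easy to sketch. The NumPy version below is an illustrative stand-in for the distributed FORTRAN 77 code (the test matrix, tolerance and iteration cap are arbitrary choices, and a real implementation would factor once in float32 and reuse the LU factors instead of calling solve() repeatedly):

```python
import numpy as np

# Mixed precision iterative refinement: solve in float32, refine the
# solution with float64 residuals until double precision is reached.

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test system
b = rng.standard_normal(n)

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(10):
    r = b - A @ x                                 # residual in float64
    if np.linalg.norm(r) <= 1e-14 * np.linalg.norm(b):
        break                                     # working precision reached
    x += np.linalg.solve(A32, r.astype(np.float32))  # cheap float32 correction
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # ~1e-15 relative residual
```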

  4. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, which directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server: apply k-means based adaptive quantization on the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
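
    The server-side quantization step can be sketched with a generic one-dimensional k-means over the layer weights (this is only the clustering half of the proposed supervised iterative quantization; the retraining step it alternates with is omitted, and the quantile initialization is an arbitrary choice):

```python
import numpy as np

# Quantize a weight tensor to 2**bits centroids via 1-D k-means, then
# snap every weight to its nearest centroid.

def kmeans_quantize(weights, bits=4, iters=20):
    w = weights.ravel()
    centers = np.quantile(w, np.linspace(0, 1, 2 ** bits))  # spread codebook
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for k in range(centers.size):
            if np.any(idx == k):                 # update non-empty clusters
                centers[k] = w[idx == k].mean()
    return centers[idx].reshape(weights.shape), centers

rng = np.random.default_rng(0)
layer = 0.1 * rng.standard_normal((64, 32))      # stand-in layer weights
quantized, codebook = kmeans_quantize(layer, bits=4)
print(codebook.size, float(np.abs(layer - quantized).max()))
```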

  5. Reed Solomon codes for error control in byte organized computer memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.

  6. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.

  7. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  8. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

    During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include the development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROP's) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  9. Static Computer Memory Integrity Testing (SCMIT): An experiment flown on STS-40 as part of GAS payload G-616

    NASA Technical Reports Server (NTRS)

    Hancock, Thomas

    1993-01-01

    This experiment investigated the integrity of static computer memory (floppy disk media) when exposed to the environment of low earth orbit. The experiment attempted to record soft-event upsets (bit-flips) in static computer memory. Typical conditions that exist in low earth orbit that may cause soft-event upsets include: cosmic rays, low level background radiation, charged fields, static charges, and the earth's magnetic field. Over the years several spacecraft have been affected by soft-event upsets (bit-flips), and these events have caused a loss of data or affected spacecraft guidance and control. This paper describes a commercial spin-off that is being developed from the experiment.

  10. Performance enhancement of wireless mobile adhoc networks through improved error correction and ICI cancellation

    NASA Astrophysics Data System (ADS)

    Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar

    2012-12-01

    Mobile adhoc network (MANET) refers to an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to the high data rates it supports, a consequence of its high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as a primary feature. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades signal quality due to crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article which removes the effect of channel frequency offsets from the received OFDM symbols. The second problem addressed in this article is the noise induced into the received symbol by various sources, which increases its bit error rate and makes it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, maximum a posteriori (MAP) decoding is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in every iteration.

  11. Micromagnetic Study of Perpendicular Magnetic Recording Media

    NASA Astrophysics Data System (ADS)

    Dong, Yan

    With increasing areal density in magnetic recording systems, perpendicular recording has successfully replaced longitudinal recording to mitigate the superparamagnetic limit. The extensive theoretical and experimental research associated with perpendicular magnetic recording media has contributed significantly to improving magnetic recording performance. Micromagnetic studies on perpendicular recording media, including aspects of the design of hybrid soft underlayers, media noise properties, inter-grain exchange characterization and ultra-high density bit patterned media recording, are presented in this dissertation. To improve the writability of recording media, one needs to reduce the head-to-keeper spacing while maintaining a good texture growth for the recording layer. A hybrid soft underlayer, consisting of a thin crystalline soft underlayer stacked above a non-magnetic seed layer and a conventional amorphous soft underlayer, provides an alternative approach for reducing the effective head-to-keeper spacing in perpendicular recording. Micromagnetic simulations indicate that media using a hybrid soft underlayer help enhance the effective field and the field gradient in comparison with conventional media that use only an amorphous soft underlayer. The hybrid soft underlayer can support a thicker non-magnetic seed layer yet achieve an equivalent or better effective field and field gradient. A noise plateau for intermediate recording densities is observed for a recording layer of typical magnetization. Medium noise characteristics and transition jitter in perpendicular magnetic recording are explored using micromagnetic simulation. The plateau is replaced by a normal linear dependence of noise on recording density for a low magnetization recording layer. We show analytically that a source of the plateau is similar to that producing the Non-Linear Transition Shift of the signal. In particular, magnetostatic effects are predicted to produce positive correlation of jitter and thus negative correlation of noise at the densities associated with the plateau. One focus for developing perpendicular recording media is on how to extract intergranular exchange coupling and intrinsic anisotropy field dispersion. A micromagnetic numerical technique is developed to effectively separate the effects of intergranular exchange coupling and anisotropy dispersion by finding their correlation to differentiated M-H curves with different initial magnetization states, even in the presence of thermal fluctuation. The validity of this method is investigated with a series of intergranular exchange couplings and anisotropy dispersions for different media thicknesses. This characterization method allows for an experimental measurement employing a vibrating sample magnetometer (VSM). Bit patterned media have been suggested to extend areal density beyond 1 Tbit/in2. The feasibility of 4 Tbit/in2 bit patterned recording is determined by aspects of write head design and media fabrication, and is estimated by the bit error rate. Micromagnetic specifications including 2.3:1 BAR bit patterned exchange coupled composite media, trailing shield, and side shields are proposed to meet the requirement of 3x10^-4 bit error rate, 4 nm fly height, 5% switching field distribution, 5% timing and 5% jitter errors for 4 Tbit/in2 bit-patterned recording. Demagnetizing field distribution is examined by studying the shielding effect of the side shields on the stray field from the neighboring dots.
    For recording self-assembled bit-patterned media, the head design writes two staggered tracks in a single pass and has maximum perpendicular field gradients of 580 Oe/nm along the down-track direction and 476 Oe/nm along the cross-track direction. The geometry demanded by self-assembly reduces recording density to 2.9 Tbit/in2.

  12. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.

  13. A sparse matrix algorithm on the Boolean vector machine

    NASA Technical Reports Server (NTRS)

    Wagner, Robert A.; Patrick, Merrell L.

    1988-01-01

    VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), which is a large network of very small processors with equally small memories that operate in SIMD mode; these use bit-serial arithmetic and communicate via a cube-connected cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.

  14. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  15. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    PubMed Central

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274

  16. Low-bit rate feedback strategies for iterative IA-precoded MIMO-OFDM-based systems.

    PubMed

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge.

  17. Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications

    DTIC Science & Technology

    2010-09-01

    Excerpt fragments: …four proposed CPM modes, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver with both inner… Figure 1: Bit error rate performance of various CPM modes with coherent and noncoherent detection (coherent results: crosses; noncoherent results: diamonds). Figure 3 shows the corresponding relationship… Table 2 summarises the parameter…

  18. Coded excitation speeds up the detection of the fundamental flexural guided wave in coated tubes

    NASA Astrophysics Data System (ADS)

    Song, Xiaojun; Moilanen, Petro; Zhao, Zuomin; Ta, Dean; Pirhonen, Jalmari; Salmi, Ari; Hæggström, Edward; Myllylä, Risto; Timonen, Jussi; Wang, Weiqi

    2016-09-01

    The fundamental flexural guided wave (FFGW) permits ultrasonic assessment of the wall thickness of solid waveguides, such as tubes or, e.g., long cortical bones. Recently, an optical non-contact method was proposed for ultrasound excitation and detection with the aim of facilitating the FFGW reception by suppressing the interfering modes from the soft coating. This technique suffers from low SNR and requires iterative physical scanning across the source-receiver distance for 2D-FFT analysis. This means that SNR improvement achieved by temporal averaging becomes time-consuming (several minutes) which reduces the applicability of the technique, especially in time-critical applications such as clinical quantitative ultrasound. To achieve sufficient SNR faster, an ultrasonic excitation by a base-sequence-modulated Golay code (BSGC, 64-bit code pair) on coated tube samples (1-5 mm wall thickness and 5 mm soft coating layer) was used. This approach improved SNR by 21 dB and speeded up the measurement by a factor of 100 compared to using a classical pulse excitation with temporal averaging. The measurement now took seconds instead of minutes, while the ability to determine the wall thickness of the phantoms was maintained. The technique thus allows rapid noncontacting assessment of the wall thickness in coated solid tubes, such as the human bone.
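
    The reason a Golay code pair buys SNR without losing resolution is that the autocorrelations of the two complementary ±1 sequences cancel sidelobe-for-sidelobe, so matched filtering both transmissions and summing compresses the long excitation into one clean spike. A short NumPy check using the standard doubling construction and a 64-bit pair as in the paper (the base-sequence modulation itself is not reproduced):

```python
import numpy as np

# Golay complementary pair: the summed autocorrelations equal a single
# spike of height 2N, with every sidelobe cancelling exactly.

def golay_pair(n_bits):
    a = np.array([1.0])
    b = np.array([1.0])
    while a.size < n_bits:                    # doubling construction
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(64)
corr = np.correlate(a, a, "full") + np.correlate(b, b, "full")
peak = corr[corr.size // 2]
sidelobes = np.abs(np.delete(corr, corr.size // 2)).max()
print(peak, sidelobes)   # -> 128.0 0.0: all sidelobes cancel
```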

  19. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    PubMed Central

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963

  20. Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1997-01-01

    In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).

  1. Error control for reliable digital data transmission and storage systems

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, R. H.

    1985-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
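
    For the d_min = 4 SBEC-DBED code, "directly from the syndrome" means the error location and value fall out of the syndrome bytes with no iterative locator-polynomial step: a single byte error e at position i gives S0 = e and S1 = e·α^i, so i = log_α(S1/S0). The sketch below is a generic GF(256) illustration of that step (the field polynomial and code length are assumptions, not the paper's parameters, and the extra syndrome used for double-error detection is omitted):

```python
# Single-byte-error correction straight from the syndromes over GF(256)
# with primitive polynomial x^8 + x^4 + x^3 + x^2 + 1.

EXP, LOG = [0] * 512, [0] * 256
x = 1
for k in range(255):                     # build alpha^k tables for GF(256)
    EXP[k] = EXP[k + 255] = x
    LOG[x] = k
    x <<= 1
    if x & 0x100:
        x ^= 0x11D                       # reduce by the primitive polynomial

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def syndromes(received):
    """S_j = sum_i r_i * alpha^(i*j), computed here for j = 0 and j = 1."""
    s0 = s1 = 0
    for i, byte in enumerate(received):
        s0 ^= byte
        s1 ^= gf_mul(byte, EXP[i % 255])
    return s0, s1

def correct_single_byte(received):
    s0, s1 = syndromes(received)
    if s0 == 0 and s1 == 0:
        return received                  # syndrome zero: no error
    out = list(received)
    pos = (LOG[s1] - LOG[s0]) % 255      # location read off the syndromes
    out[pos] ^= s0                       # the error value is S0 itself
    return out

word = [0] * 16                          # the all-zero word is a codeword
word[5] ^= 0x3C                          # inject one byte error
print(correct_single_byte(word))         # error at position 5 removed
```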

  2. Blind One-Bit Compressive Sampling

    DTIC Science & Technology

    2013-01-17

    [14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0

  3. Origami mechanologic.

    PubMed

    Treml, Benjamin; Gillman, Andrew; Buskohl, Philip; Vaia, Richard

    2018-06-18

    Robots autonomously interact with their environment through a continual sense-decide-respond control loop. Most commonly, the decide step occurs in a central processing unit; however, the stiffness mismatch between rigid electronics and the compliant bodies of soft robots can impede integration of these systems. We develop a framework for programmable mechanical computation embedded into the structure of soft robots that can augment conventional digital electronic control schemes. Using an origami waterbomb as an experimental platform, we demonstrate a 1-bit mechanical storage device that writes, erases, and rewrites itself in response to a time-varying environmental signal. Further, we show that mechanical coupling between connected origami units can be used to program the behavior of a mechanical bit, produce logic gates such as AND, OR, and three input majority gates, and transmit signals between mechanologic gates. Embedded mechanologic provides a route to add autonomy and intelligence in soft robots and machines. Copyright © 2018 the Author(s). Published by PNAS.

  4. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    PubMed

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance as well as the channel capacity of these systems and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving BER improvements of two orders of magnitude and a capacity improvement of 0.1 nats over the conventional chip-level OCDMA systems at a coding rate of 1/10.

  5. Iterative Overlap FDE for Multicode DS-CDMA

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, that requires no guard interval (GI) insertion was proposed. However, the residual inter/intra-block interference (IBI) cannot be completely removed. In addition, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual interchip interference (ICI) after FDE distorts the orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE)-FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
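
    The step repeated inside the iterative overlap FDE is a per-subcarrier MMSE equalization. Below is a minimal sketch of that single step in Python, without the overlap windowing, despreading, or ICI cancellation stages the abstract describes; the three-tap channel, block length, and linear snr value are illustrative assumptions.

      import numpy as np

      def mmse_fde(r, h, snr):
          """One MMSE frequency-domain equalization step.
          r: received block; h: channel impulse response; snr: Es/N0 (linear)."""
          N = len(r)
          H = np.fft.fft(h, N)
          W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # per-bin MMSE weight
          return np.fft.ifft(W * np.fft.fft(r))

      rng = np.random.default_rng(2)
      h = np.array([1.0, 0.5, 0.2])                       # assumed channel taps
      s = rng.choice([-1.0, 1.0], 64)                     # one BPSK block
      r = np.fft.ifft(np.fft.fft(h, 64) * np.fft.fft(s)).real \
          + 0.05 * rng.standard_normal(64)                # circular channel + noise
      print((np.sign(mmse_fde(r, h, snr=400).real) == s).mean())   # ~1.0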

  6. Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software

    NASA Astrophysics Data System (ADS)

    Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg

    2017-09-01

    100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to their limits. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low-power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.

  7. Performance Evaluation of LDPC Coding and Iterative Decoding System in BPM R/W Channel Affected by Head Field Gradient, Media SFD and Demagnetization Field

    NASA Astrophysics Data System (ADS)

    Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki

    We evaluate the performance of the write-margin for the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in R/W channel using BPM at 3 Tbit/inch2 has a write-margin of about 20%.

  8. [A fast iterative algorithm for adaptive histogram equalization].

    PubMed

    Cao, X; Liu, X; Deng, Z; Jiang, D; Zheng, C

    1997-01-01

    In this paper, we propose an iterative algorithm called FAHE, which is based on the relationship between the current local histogram and the one before the sliding window moves. Compared with basic AHE, the computing time of FAHE is decreased from 5 hours to 4 minutes on a 486dx/33-compatible computer, when using a 65 x 65 sliding window on a 512 x 512 image with an 8-bit gray-level range.
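
    The reported speedup comes from updating the local histogram incrementally as the window slides, rather than recomputing it for every pixel. Here is a sketch of that idea in Python, assuming an 8-bit grayscale image and windows clipped at the borders; FAHE's exact update rule may differ in detail.

      import numpy as np

      def local_histogram_equalize(img, radius=32):
          """Adaptive histogram equalization with an incrementally updated
          window histogram: sliding one column only changes two columns of
          pixels, so the histogram is patched instead of rebuilt."""
          h, w = img.shape
          out = np.empty_like(img)
          for r in range(h):
              r0, r1 = max(0, r - radius), min(h, r + radius + 1)
              # Full histogram only at the start of each row...
              hist = np.bincount(img[r0:r1, 0:min(w, radius + 1)].ravel(),
                                 minlength=256)
              for c in range(w):
                  if c > 0:                      # ...then incremental updates
                      if c - radius - 1 >= 0:    # column leaving the window
                          for v in img[r0:r1, c - radius - 1]:
                              hist[v] -= 1
                      if c + radius < w:         # column entering the window
                          for v in img[r0:r1, c + radius]:
                              hist[v] += 1
                  cdf = hist.cumsum()
                  out[r, c] = (cdf[img[r, c]] * 255) // cdf[-1]
          return out

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      print(local_histogram_equalize(img, radius=8)[0, :4])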

  9. Inferring the Presence of Reverse Proxies Through Timing Analysis

    DTIC Science & Technology

    2015-06-01

    Figure 3.2: The three different instances of timing measurement configurations. Figure 3.3: Permutation of a web request iteration...Their data showed that they could detect at least 6 bits of entropy between unlike devices and that it was enough to determine that they are in fact...depending on the permutation being executed so that every iteration was conducted under the same distance

  10. Universal Decoder for PPM of any Order

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.

    2010-01-01

    A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The vector of PPM symbols, x, is transmitted over a Poisson channel, such that the receiver obtains a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
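
    The slot counts enter the decoder through conditional likelihoods. The sketch below computes per-bit LLRs for one symbol by marginalizing the M Poisson symbol likelihoods, which is the kind of quantity a 1-bit marginalizer outputs; the natural bit-to-slot mapping and the parameter values are illustrative assumptions, not the record's exact sub-algorithm.

      import numpy as np

      def ppm_bit_llrs(counts, ns, nb):
          """Per-bit LLRs for one M-ary PPM symbol on a Poisson channel.
          counts: photon counts in the M slots; ns, nb: mean signal and
          background counts (assumed known). For Poisson statistics the
          symbol log-likelihood reduces to counts[s] * ln(1 + ns/nb)
          up to a constant common to all slots."""
          M = len(counts)
          m = int(np.log2(M))
          sym_ll = np.asarray(counts, float) * np.log1p(ns / nb)
          llr = np.empty(m)
          for bit in range(m):
              # Natural mapping: bit value = that bit of the slot index.
              mask = np.array([(s >> bit) & 1 for s in range(M)], bool)
              ll0 = np.logaddexp.reduce(sym_ll[~mask])   # marginalize bit = 0
              ll1 = np.logaddexp.reduce(sym_ll[mask])    # marginalize bit = 1
              llr[bit] = ll0 - ll1
          return llr

      # Example: 16-ary PPM, pulse actually in slot 5 (binary 0101)
      rng = np.random.default_rng(0)
      counts = rng.poisson(0.2, 16); counts[5] += rng.poisson(3.0)
      print(ppm_bit_llrs(counts, ns=3.0, nb=0.2))
      # bits 0 and 2 should come out negative (bit = 1), bits 1 and 3 positive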

  11. New Bandwidth Efficient Parallel Concatenated Coding Schemes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.

    1996-01-01

    We propose a new solution to parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable iterative decoding structure. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK and 16QAM signal constellations.

  12. A novel image encryption algorithm using chaos and reversible cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. The algorithm uses an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata. We split each pixel of the image into units of 4 bits and then, in the confusion stage, adopt a pseudorandom keystream generated by the intertwining logistic map to permute these units. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated over many rounds to achieve diffusion at the bit level, in which we only consider the higher 4 bits of each pixel because they carry almost all of the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and good performance against common attacks like differential attack and statistical attack. This algorithm belongs to the class of symmetric systems.
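
    A sketch of the confusion stage described above, permuting only the high 4-bit units of each pixel with a chaotic keystream; the plain logistic map x <- r*x*(1-x) stands in for the paper's intertwining logistic map, and the key values x0 and r are arbitrary.

      import numpy as np

      def permute_high_nibbles(img, x0=0.3579, r=3.99):
          """Confusion stage: permute the high 4-bit units of each pixel
          using a permutation derived from a logistic-map keystream."""
          flat = img.ravel()
          high, low = flat >> 4, flat & 0x0F    # split pixels into 4-bit units
          n = flat.size
          x, ks = x0, np.empty(n)
          for i in range(n):                    # chaotic keystream
              x = r * x * (1.0 - x)
              ks[i] = x
          perm = np.argsort(ks)                 # pseudorandom permutation
          out = (high[perm] << 4) | low         # permute high nibbles only
          return out.reshape(img.shape), perm

      def inverse_permute(enc, perm):
          flat = enc.ravel()
          high, low = flat >> 4, flat & 0x0F
          restored = np.empty_like(high)
          restored[perm] = high                 # undo the permutation
          return ((restored << 4) | low).reshape(enc.shape)

      img = (np.arange(16, dtype=np.uint8) * 16).reshape(4, 4)
      enc, perm = permute_high_nibbles(img)
      assert np.array_equal(inverse_permute(enc, perm), img)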

  13. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    NASA Astrophysics Data System (ADS)

    Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  14. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application.

    PubMed

    Mazon, D; Liegeard, C; Jardin, A; Barnsley, R; Walsh, M; O'Mullane, M; Sirinelli, A; Dorchies, F

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  15. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

    High-order modulation and demodulation technology can resolve the frequency conflict between wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is too complex for hardware implementation, an improved phase soft demodulation algorithm that reduces hardware complexity is put forward for the visual prosthesis. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the Chase algorithm is combined with hard decoding to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability as the product of bit reliabilities is derived, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological channel attenuation model is included in the ECC circuit, and the data rate is 8 Mbps in both the simulation and the FPGA experiments. MATLAB results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA results show that the system can correct demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil distance. The RS ECC circuit therefore provides more reliable communication in the system. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable more reliable data communication than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
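
    The symbol-reliability rule described above (multiply bit reliabilities to rank symbols for the Chase decoder) can be sketched in a few lines; the 4-bit symbol width, the normalization of |LLR| values into (0, 1], and the number of test positions are illustrative assumptions rather than the paper's exact parameters.

      import numpy as np

      def chase_test_positions(bit_reliabilities, bits_per_symbol=4, num_test=2):
          """Symbol reliability = product of its bits' reliabilities
          (reliabilities assumed normalized into (0, 1]). Returns the
          least-reliable symbol positions, which a Chase decoder perturbs
          to build its reduced set of test vectors."""
          r = np.asarray(bit_reliabilities, float).reshape(-1, bits_per_symbol)
          symbol_rel = r.prod(axis=1)          # multiply bit reliabilities
          return np.argsort(symbol_rel)[:num_test]

      # Example: 4 RS symbols, 4 bits each; |LLR|s scaled into (0, 1]
      llr_mag = np.array([3.1, 2.8, 0.4, 2.2,   1.0, 0.2, 0.3, 0.9,
                          2.5, 2.9, 3.3, 2.4,   1.8, 2.0, 1.7, 2.6])
      rel = llr_mag / llr_mag.max()
      print(chase_test_positions(rel))         # -> [1 0], the two weakest symbols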

  16. Comparisons of single event vulnerability of GaAs SRAMS

    NASA Astrophysics Data System (ADS)

    Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.

    1986-12-01

    A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.

  17. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.

  18. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers easy zooming, a simple optical layout, and so on. However, the image is blurred by the diffraction of X-rays, degrading the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. Earlier studies confirmed this method to be effective. Nevertheless, it was not sufficient for some images with very low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected by the iteration procedure alone.

  19. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazon, D., E-mail: Didier.Mazon@cea.fr; Jardin, A.; Liegeard, C.

    2016-11-15

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  20. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

    Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana Fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.

  1. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
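
    The recovery strategy is generic enough to phrase abstractly: keep sweeping until the residual is small relative to the first sweep's residual and changes slowly between sweeps. The sketch below uses placeholder sweep and residual callables (a Newton iteration stands in for an SDC sweep), and the thresholds drop and slow are invented for illustration; the paper states the criteria only qualitatively.

      def sdc_sweeps_with_resilience(sweep, residual, y0, tol=1e-10,
                                     drop=1e-8, slow=0.1, max_sweeps=50):
          """Iterate `sweep` until the residual is (a) small relative to the
          first sweep's residual and (b) changing slowly between sweeps,
          so extra sweeps can absorb transient soft errors."""
          y = sweep(y0)
          r_first = r_prev = residual(y)
          for k in range(1, max_sweeps):
              y = sweep(y)
              r = residual(y)
              small = r <= max(drop * r_first, tol)          # small vs. sweep 1
              stalled = abs(r - r_prev) <= slow * r_prev + tol  # slow change
              if small and stalled:
                  return y, k + 1          # converged; transients ironed out
              r_prev = r
          return y, max_sweeps             # give up (persistent fault?)

      y, sweeps = sdc_sweeps_with_resilience(
          sweep=lambda y: 0.5 * (y + 2.0 / y),   # stand-in sweep: Newton for sqrt(2)
          residual=lambda y: abs(y * y - 2.0),
          y0=1.0)
      print(y, sweeps)                     # -> sqrt(2) after a handful of sweeps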

  2. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  3. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  4. Acoustically assisted spin-transfer-torque switching of nanomagnets: An energy-efficient hybrid writing scheme for non-volatile memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Ayan K.; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha

    We show that the energy dissipated to write bits in spin-transfer-torque random access memory can be reduced by an order of magnitude if a surface acoustic wave (SAW) is launched underneath the magneto-tunneling junctions (MTJs) storing the bits. The SAW-generated strain rotates the magnetization of every MTJ's soft magnet from the easy towards the hard axis, whereupon passage of a small spin-polarized current through a target MTJ selectively switches it to the desired state with > 99.99% probability at room temperature, thereby writing the bit. The other MTJs return to their original states at the completion of the SAW cycle.

  5. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  6. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoder supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over the existing 10 Gb/s infrastructure.

  7. Implementation of cryptographic hash function SHA256 in C++

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash

    2012-02-01

    This abstract explains an implementation of SHA-256 (Secure Hash Algorithm 256) in C++. SHA-2 is a strong hashing algorithm used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting initialization values to be used in the hash computation. The computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value produced by the computation is the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1, and consists of a set of four hash functions with digests of 224, 256, 384, or 512 bits. The algorithm processes 512-bit message blocks with an internal state of 256 bits and outputs a 256-bit digest. The maximum message length is 2^64 - 1 bits, and the hash is computed over a series of 64 rounds consisting of several operations such as AND, OR, XOR, SHR, and ROTR. The code provides a clear understanding of the hash algorithm and generates hash values to retrieve the message digest.
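
    The record's implementation is in C++; for brevity, the sketch below shows the two preprocessing pieces named above (FIPS 180-4 padding and message-schedule expansion) in Python, leaving out the 64 compression rounds.

      import struct

      def sha256_pad(message: bytes) -> bytes:
          """FIPS 180-4 preprocessing: append 0x80, zero-pad to 56 mod 64
          bytes, then append the original bit length as a 64-bit big-endian
          integer, so the result parses into whole 512-bit blocks."""
          bit_len = 8 * len(message)
          padded = message + b"\x80"
          padded += b"\x00" * ((56 - len(padded)) % 64)
          return padded + struct.pack(">Q", bit_len)

      def message_schedule(block: bytes):
          """Expand one 512-bit block into the 64-word schedule W used by
          the compression rounds (XOR/shift/rotate word operations)."""
          rotr = lambda x, n: ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF
          W = list(struct.unpack(">16I", block))
          for t in range(16, 64):
              s0 = rotr(W[t - 15], 7) ^ rotr(W[t - 15], 18) ^ (W[t - 15] >> 3)
              s1 = rotr(W[t - 2], 17) ^ rotr(W[t - 2], 19) ^ (W[t - 2] >> 10)
              W.append((W[t - 16] + s0 + W[t - 7] + s1) & 0xFFFFFFFF)
          return W

      padded = sha256_pad(b"abc")
      assert len(padded) % 64 == 0
      print(len(message_schedule(padded[:64])))   # -> 64 schedule words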

  8. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
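
    For contrast with the A* search, the exhaustive-search baseline it is compared against simply correlates the received vector with the BPSK image of all 2^K codewords. A sketch for a small (7,4) code follows; the generator matrix is an illustrative example, and the record's implementation handles lengths (up to 64) where this brute force is infeasible.

      import numpy as np
      from itertools import product

      G = np.array([[1, 0, 0, 0, 1, 1, 0],    # example (7,4) generator matrix
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def ml_decode_exhaustive(r):
          """Exhaustive maximum-likelihood soft-decision decoding: maximize
          the correlation between the received real vector r and the +/-1
          image of every codeword (this is the 2^K search that A* prunes)."""
          best, best_metric = None, -np.inf
          for info in product([0, 1], repeat=G.shape[0]):
              c = np.dot(info, G) % 2
              metric = np.dot(r, 1 - 2 * c)    # BPSK: bit 0 -> +1, bit 1 -> -1
              if metric > best_metric:
                  best, best_metric = c, metric
          return best

      r = np.array([0.9, 1.1, -0.2, 0.8, 1.0, -1.2, 0.7])  # noisy BPSK samples
      print(ml_decode_exhaustive(r))   # -> [0 0 1 0 0 1 1]: weak last bit fixed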

  9. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    PubMed

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

    Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, statistically significant differences in contrast-to-noise ratios were shown (all, P ≤ .012) for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100, a small effect on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.

  10. Preliminary Design of a Helium-Cooled Ceramic Breeder Blanket for CFETR Based on the BIT Concept

    NASA Astrophysics Data System (ADS)

    Ma, Xuebin; Liu, Songlin; Li, Jia; Pu, Yong; Chen, Xiangcun

    2014-04-01

    CFETR is the "ITER-like" China Fusion Engineering Test Reactor. The design of the breeding blanket is one of the key issues in achieving the required tritium breeding ratio for the self-sufficiency of tritium as a fuel. As one option, a BIT (breeder inside tube) type helium cooled ceramic breeder blanket (HCCB) was designed. This paper presents the design of the BIT-HCCB blanket configuration inside the reactor and its structure, along with neutronics, thermo-hydraulics and thermal stress analyses. These preliminary performance analyses indicate that the design satisfies the requirements and the material allowable limits.

  11. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  12. PolyCheck: Dynamic Verification of Iteration Space Transformations on Affine Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Krishnamoorthy, Sriram; Pouchet, Louis-noel

    2016-01-11

    High-level compiler transformations, especially loop transformations, are widely recognized as critical optimizations to restructure programs to improve data locality and expose parallelism. Guaranteeing the correctness of program transformations is essential, and to date three main approaches have been developed: proof of equivalence of affine programs, matching the execution traces of programs, and checking bit-by-bit equivalence of the outputs of the programs. Each technique suffers from limitations in either the kind of transformations supported, space complexity, or the sensitivity to the testing dataset. In this paper, we take a novel approach addressing all three limitations to provide an automatic bug checker to verify any iteration reordering transformations on affine programs, including non-affine transformations, with space consumption proportional to the original program data, and robust to arbitrary datasets of a given size. We achieve this by exploiting the structure of affine program control- and data-flow to generate at compile-time lightweight checker code to be executed within the transformed program. Experimental results assess the correctness and effectiveness of our method, and its increased coverage over previous approaches.

  13. Noise reduction in heat-assisted magnetic recording of bit-patterned media by optimizing a high/low Tc bilayer structure

    NASA Astrophysics Data System (ADS)

    Muthsam, O.; Vogler, C.; Suess, D.

    2017-12-01

    It is assumed that heat-assisted magnetic recording is the recording technique of the future. For pure hard magnetic grains in high density media with an average diameter of 5 nm and a height of 10 nm, the switching probability is not sufficiently high for the use in bit-patterned media. Using a bilayer structure with 50% hard magnetic material with low Curie temperature and 50% soft magnetic material with high Curie temperature to obtain more than 99.2% switching probability leads to very large jitter. We propose an optimized material composition to reach a switching probability of Pswitch > 99.2% and simultaneously achieve the narrow transition jitter of pure hard magnetic material. Simulations with a continuous laser spot were performed with the atomistic simulation program VAMPIRE for a single cylindrical recording grain with a diameter of 5 nm and a height of 10 nm. Different configurations of soft magnetic material and different amounts of hard and soft magnetic material were tested and discussed. Within our analysis, a composition with 20% soft magnetic and 80% hard magnetic material reaches the best results with a switching probability Pswitch > 99.2%, an off-track jitter parameter σoff,80/20 = 0.46 nm and a down-track jitter parameter σdown,80/20 = 0.49 nm.

  14. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
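
    The gap between the two hard-decision AIRs can be illustrated with a toy calculation: treat bit-wise decoding as m parallel binary symmetric channels, and symbol-wise Hamming-metric decoding through a q-ary symmetric channel. Equal crossover probability on every bit level and independent bit errors are simplifying assumptions, not the paper's fiber channel model.

      import numpy as np

      def h2(p):
          """Binary entropy in bits."""
          return 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)

      def air_bitwise_hard(pb, m):
          """AIR of m parallel BSCs (bit-wise Hamming-metric decoding),
          assuming every bit level sees the same crossover probability pb."""
          return m * (1 - h2(pb))

      def air_symbolwise_hard(pb, m):
          """AIR when hard symbols are modeled as a q-ary symmetric channel
          (symbol-wise Hamming metric), q = 2^m, independent bit errors."""
          q = 2 ** m
          ps = 1 - (1 - pb) ** m                 # symbol error probability
          return np.log2(q) - h2(ps) - ps * np.log2(q - 1)

      for pb in (0.01, 0.05, 0.1):
          # bit-wise AIR exceeds symbol-wise AIR at every pb
          print(pb, air_bitwise_hard(pb, 4), air_symbolwise_hard(pb, 4))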

  15. Hologram recording tubes

    NASA Technical Reports Server (NTRS)

    Rajchman, J. H.

    1973-01-01

    Optical memories allow extremely large numbers of bits to be stored and recalled in a matter of microseconds. Two recording tubes, similar to conventional image-converting tubes, but having a soft-glass surface on which hologram is recorded, do not degrade under repeated hologram read/write cycles.

  16. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.

  17. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio. This phenomenon makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987

  18. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio. This phenomenon makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
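
    The IFSA baseline that the firefly search is measured against is plain exhaustive block matching over a search window; the firefly variant evaluates only a few candidate displacements instead of all of them. Below is a sketch of the baseline with a SAD cost; the block size, search radius, and synthetic frames are arbitrary choices.

      import numpy as np

      def sad(block, ref, y, x):
          """Sum of absolute differences between block and a reference patch."""
          h, w = block.shape
          patch = ref[y:y + h, x:x + w].astype(int)
          return np.abs(block.astype(int) - patch).sum()

      def full_search(block, ref, y0, x0, radius=7):
          """Exhaustive block matching: evaluate the SAD at every
          displacement in a (2r+1)^2 window and keep the best one."""
          best, best_cost = (0, 0), np.inf
          H, W = ref.shape
          h, w = block.shape
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  y, x = y0 + dy, x0 + dx
                  if 0 <= y and 0 <= x and y + h <= H and x + w <= W:
                      cost = sad(block, ref, y, x)
                      if cost < best_cost:
                          best, best_cost = (dy, dx), cost
          return best

      rng = np.random.default_rng(3)
      ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      blk = ref[20:36, 25:41]                   # 16x16 block taken at (20, 25)
      print(full_search(blk, ref, 18, 28))      # -> (2, -3), the true shift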

  19. EBIT spectroscopy of highly charged heavy ions relevant to hot plasmas

    NASA Astrophysics Data System (ADS)

    Nakamura, Nobuyuki

    2013-05-01

    An electron beam ion trap (EBIT) is a versatile device for studying highly charged ions. We have been using two types of EBITs for the spectroscopic studies of highly charged ions. One is a high-energy device called the Tokyo-EBIT, and the other is a compact low-energy device called CoBIT. Their complementary use enables us to obtain spectroscopic data for ions over a wide charge-state range interacting with electrons over a wide energy range. In this talk, we present EBIT spectra of highly charged ions for tungsten, iron, bismuth, etc., which are relevant to hot plasmas. Tungsten is considered to be the main impurity in the ITER (the next generation nuclear fusion reactor) plasma, and thus its emission lines are important for diagnosing and controlling the ITER plasma. We have observed many previously unreported lines to fill the gap in spectroscopic data for tungsten ions. Iron is one of the main components of the solar corona, and its spectra are used to diagnose temperature, density, etc. The diagnostics is usually done by comparing observed spectra with model calculations. An EBIT can provide spectra under a well-defined condition; they are thus useful to test the model calculations. Laser-produced bismuth plasma is one of the candidates for a soft x-ray source in the water window region. An EBIT has a narrow charge state distribution; it is thus useful to disentangle the spectra of laser-produced plasma containing ions with a wide charge-state range. Performed with the support and under the auspices of the NIFS Collaboration Research program (NIFS09KOAJ003) and JSPS KAKENHI Number 23246165, and partly supported by the JSPS-NRF-NSFC A3 Foresight Program in the field of Plasma Physics.

  20. Statistical Deviations From the Theoretical Only-SBU Model to Estimate MCU Rates in SRAMs

    NASA Astrophysics Data System (ADS)

    Franco, Francisco J.; Clemente, Juan Antonio; Baylac, Maud; Rey, Solenne; Villa, Francesca; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul

    2017-08-01

    This paper addresses a well-known problem that occurs when memories are exposed to radiation: determining whether a bit flip is isolated or belongs to a multiple event. As it is unusual to know the physical layout of the memory, this paper proposes to evaluate the statistical properties of the sets of corrupted addresses and to compare the results with a mathematical prediction model in which all of the events are single bit upsets. A set of rules easy to implement in common programming languages can be applied iteratively if anomalies are observed, thus yielding a classification of errors much closer to reality (more than 80% accuracy in our experiments).

  1. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The learning-based R-D model is proposed to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.

  2. Soft-information flipping approach in multi-head multi-track BPMR systems

    NASA Astrophysics Data System (ADS)

    Warisarn, C.; Busyatras, W.; Myint, L. M. M.

    2018-05-01

    Inter-track interference is one of the most severe impairments in bit-patterned media recording systems. This impairment can be effectively handled by a modulation code and a multi-head array jointly processing multiple tracks; however, such a modulation constraint has never been utilized to improve the soft information. Therefore, this paper proposes the utilization of modulation codes whose encoded constraint defines the criteria for soft-information flipping during a three-track data detection process. Moreover, we also investigate the optimal offset position of the read heads to provide the greatest improvement in system performance. The simulation results indicate that the proposed systems, with and without position jitter, are significantly superior to uncoded systems.

  3. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long-term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques which are presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability which can be accomplished during the early stages of product development.

  4. PDC Bit Testing at Sandia Reveals Influence of Chatter in Hard-Rock Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAYMOND,DAVID W.

    1999-10-14

    Polycrystalline diamond compact (PDC) bits have yet to be routinely applied to drilling the hard-rock formations characteristic of geothermal reservoirs. Most geothermal production wells are currently drilled with tungsten-carbide-insert roller-cone bits. PDC bits have significantly improved penetration rates and bit life beyond roller-cone bits in the oil and gas industry, where soft to medium-hard rock types are encountered. If PDC bits could be used to double current penetration rates in hard rock, geothermal well-drilling costs could be reduced by 15 percent or more. PDC bits exhibit reasonable life in hard-rock wear testing using the relatively rigid setups typical of laboratory testing. Unfortunately, field experience indicates otherwise. The prevailing mode of failure encountered by PDC bits returning from hard-rock formations in the field is catastrophic, presumably due to impact loading. These failures usually occur in advance of any appreciable wear that might dictate cutter replacement. Self-induced bit vibration, or "chatter", is one of the mechanisms that may be responsible for impact damage to PDC cutters in hard-rock drilling. Chatter is more severe in hard-rock formations since they induce significant dynamic loading on the cutter elements. Chatter is a phenomenon whereby the drillstring becomes dynamically unstable and excessive sustained vibrations occur. Unlike forced vibration, the force (i.e., weight on bit) that drives self-induced vibration is coupled with the response it produces. Many of the chatter principles derived in the machine tool industry are applicable to drilling. It is a simple matter to make changes to a machine tool to study the chatter phenomenon. This is not the case with drilling. Chatter occurs in field drilling due to the flexibility of the drillstring. Hence, laboratory setups must be made compliant to observe chatter.

  5. Binary weight distributions of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1992-01-01

    The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
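
    The MacWilliams computation can be sketched directly from the Krawtchouk form of the identity, B_j = (1/|C|) * sum_w A_w K_j(w). The example below uses the binary (7,4) Hamming code rather than the binary images of the RS codes in the record, purely because its weight distribution is compact.

      from math import comb

      def macwilliams_dual(A, n, k):
          """Weight distribution of the dual of an [n, k] binary code from
          the MacWilliams identity, using Krawtchouk polynomials K_j(w)."""
          def K(j, w):
              return sum((-1)**s * comb(w, s) * comb(n - w, j - s)
                         for s in range(max(0, j - (n - w)), min(j, w) + 1))
          return [sum(A[w] * K(j, w) for w in range(n + 1)) // 2**k
                  for j in range(n + 1)]

      # Binary (7,4) Hamming code: A(z) = 1 + 7z^3 + 7z^4 + z^7
      A = [1, 0, 0, 7, 7, 0, 0, 1]
      print(macwilliams_dual(A, n=7, k=4))   # -> dual (7,3) simplex: 1 + 7z^4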

  6. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
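
    The premise (a random bit flip in an IEEE 754 number) is easy to reproduce. Flipping a low significand bit of 1.0 perturbs it slightly, while flipping a high exponent bit blows it up, matching the "less than one or very large" dichotomy the abstract exploits for detection.

      import struct

      def flip_bit(x: float, bit: int) -> float:
          """Flip one bit (0 = LSB of the significand, 63 = sign) in the
          IEEE 754 binary64 representation of x."""
          (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
          (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
          return flipped

      x = 1.0
      for bit in (0, 51, 52, 62, 63):   # significand, exponent, and sign bits
          print(bit, flip_bit(x, bit), abs(flip_bit(x, bit) - x))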

  7. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
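
    For the single-error case, the shortcut can be stated directly (a standard identity given here as background, not necessarily the paper's exact procedure). One symbol error of value e at location i, with parity-check roots \alpha and \alpha^2, gives syndromes

        S_1 = e\,\alpha^{i}, \qquad S_2 = e\,\alpha^{2i},

    so the error location and value follow immediately as \alpha^{i} = S_2 / S_1 and e = S_1^2 / S_2, with no locator polynomial to form or factor.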

  8. Closed-Loop Analysis of Soft Decisions for Serial Links

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.

  9. Soft Decision Analyzer

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Steele, Glen; Zucha, Joan; Schlesinger, Adam

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.

  10. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
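
    The classic checksum idea behind ABFT matrix-vector products can be sketched in a few lines (illustrative only; the paper's New-Sum encoding differs in how checksums are updated and decoupled from the computation):

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((6, 6))
        x = rng.standard_normal(6)

        c = A.T @ np.ones(6)     # encode: column-checksum vector c = A^T 1
        y = A @ x                # the protected computation

        y[3] += 100.0            # inject a fault into one entry of the result

        # detect: 1^T y must equal c^T x up to roundoff
        residual = abs(np.sum(y) - c @ x)
        tol = 1e-8 * np.linalg.norm(A) * np.linalg.norm(x)   # illustrative tolerance
        print("fault detected" if residual > tol else "ok")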

  11. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  12. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs

    NASA Astrophysics Data System (ADS)

    Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.

    1982-12-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.

  13. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of Linear Energy Transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
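
    Two of the listed software techniques are simple enough to sketch (hypothetical code, not from the Galileo study):

        def run_redundant(proc, *args):
            # Redundant execution: run a deterministic procedure twice and
            # treat a mismatch as a suspected single-event upset.
            a, b = proc(*args), proc(*args)
            if a != b:
                raise RuntimeError("suspected SEU: results disagree, re-run")
            return a

        def encode_state(state):
            # Encode a state variable so any single-bit change is detectable:
            # append a bit that forces the stored word to odd parity.
            parity = bin(state).count("1") & 1
            return (state << 1) | (parity ^ 1)

        def check_state(word):
            if bin(word).count("1") & 1 == 0:   # even weight: a bit flipped
                raise RuntimeError("state variable corrupted")
            return word >> 1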

  14. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link performance analysis and real-time monitoring are generally covered by a wide range of equipment. Bit error rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general-purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame sync errors, and data quality analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased upgrade of the user's unit as needs evolve.
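
    Deriving a real-time BER from frame sync errors rests on treating the known sync pattern as a sampled subset of the channel. A hedged sketch, with illustrative names and parameters (it assumes errors are independent of position within the frame):

        def ber_from_sync(frames, sync_bit_errors, sync_len):
            # Errors counted in the known sync bits estimate the overall BER.
            observed_bits = frames * sync_len
            return sync_bit_errors / observed_bits if observed_bits else 0.0

        print(ber_from_sync(frames=10_000, sync_bit_errors=24, sync_len=32))  # 7.5e-05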

  15. Areal density optimizations for heat-assisted magnetic recording of high-density media

    NASA Astrophysics Data System (ADS)

    Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk

    2016-06-01

    Heat-assisted magnetic recording (HAMR) is expected to be the future recording technique for high-density storage devices; nevertheless, several realization strategies exist. With a coarse-grained Landau-Lifshitz-Bloch model, we investigate in detail the benefits and disadvantages of continuous and pulsed laser spot recording of shingled and conventional bit-patterned media. Additionally, we compare single-phase grains with bits having a bilayer structure with graded Curie temperature, consisting of a hard magnetic layer with high TC and a soft magnetic layer with low TC. To describe the whole write process as realistically as possible, a distribution of the grain sizes and Curie temperatures, a displacement jitter of the head, and the bit positions are considered. For all these cases, we calculate bit error rates of various grain patterns, temperatures, and write head positions to optimize the achievable areal storage density. Within our analysis, shingled HAMR with a continuous laser pulse moving over the medium reaches the best results and thus has the highest potential to become the next-generation storage device.

  16. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks

    PubMed Central

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-01-01

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636

  17. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks.

    PubMed

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-07-04

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement.
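
    The power-splitting trade-off at a receive node can be sketched as follows (illustrative parameter names and values; the paper's optimization is joint over subcarrier powers, PS ratios, and the MUD iteration count):

        def split(p_rx, rho, eta=0.6, noise=1e-6):
            # A fraction rho of the received power feeds the energy harvester
            # (with conversion efficiency eta); the remainder feeds the decoder.
            harvested = eta * rho * p_rx
            snr = (1.0 - rho) * p_rx / noise
            return harvested, snr

        for rho in (0.2, 0.5, 0.8):   # raising rho trades decoding SNR for energy
            print(rho, split(p_rx=1e-3, rho=rho))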

  18. CT image reconstruction with half precision floating-point values.

    PubMed

    Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc

    2011-07-01

    Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to that of analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32-bit floating-point value (float), a recently standardized 16-bit floating-point value (half) is adopted for data representation in the image domain and in the raw data domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (the gold standard) and half reconstructions are compared visually via difference images and by quantitative image quality evaluation. This is done for analytic reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of the quantization noise caused by reducing the data precision of both raw data and image data during reconstruction is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values thus make it possible to speed up CT image reconstruction without compromising image quality.
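
    The magnitude of the half-precision quantization error is easy to bound and check (synthetic values stand in for raw data here; this is not the paper's reconstruction code). IEEE 754 binary16 carries an 11-bit significand, so the relative rounding error is at most about 2^-11, roughly 4.9e-4:

        import numpy as np

        rng = np.random.default_rng(1)
        raw = rng.uniform(1.0, 4096.0, size=100_000)

        half = raw.astype(np.float16).astype(np.float64)   # round-trip through binary16
        rel_err = np.abs(half - raw) / raw
        print(rel_err.max())   # on the order of 5e-4, consistent with the
                               # negligible quantization noise reported above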

  19. Soybean aphids making their summer appearance early

    USDA-ARS?s Scientific Manuscript database

    Two small, soft-bodied insects have begun showing up in South Dakota soybean. One is the soybean aphid, and the other is a mealybug. Soybean aphids are yellow to yellow/green and are usually found feeding on the underside of leaves. Incidence of soybean aphid has been a bit higher than typical fo...

  20. Soft Decision Analyzer

    NASA Technical Reports Server (NTRS)

    Steele, Glen; Lansdowne, Chatwin; Zucha, Joan; Schlesinger, Adam

    2013-01-01

    The Soft Decision Analyzer (SDA) is an instrument that combines hardware, firmware, and software to perform real-time closed-loop end-to-end statistical analysis of single- or dual-channel serial digital RF communications systems operating in very low signal-to-noise conditions. As an innovation, the unique SDA capabilities allow it to analyze situations where the receiving communication system slips bits due to low signal-to-noise conditions or experiences constellation rotations resulting in channel polarity inversions or channel assignment swaps. The SDA's closed-loop detection allows it to instrument a live system and correlate observations with frame, codeword, and packet losses, as well as Quality of Service (QoS) and Quality of Experience (QoE) events. The SDA's abilities are not confined to analysis in low signal-to-noise conditions; its analysis provides in-depth insight into a communication system's receiver performance in a variety of operating conditions. The SDA incorporates two techniques for identifying slips. The first is an examination of the received data stream's content relative to the transmitted data content, and the second is a direct examination of the receiver's recovered clock signals relative to a reference. Both techniques provide benefits in different ways and give the communication engineer evaluating test results increased confidence and understanding of receiver performance. Direct examination of data contents is performed by two different techniques, power correlation or a modified Massey correlation, and can be applied to soft decision data widths of 1 to 12 bits over a correlation depth ranging from 16 to 512 samples. The SDA detects receiver bit slips within a 4-bit window and can handle systems with up to four quadrants (QPSK, SQPSK, and BPSK systems). The SDA continuously monitors correlation results to characterize slips and quadrant changes and is capable of performing analysis even when the receiver under test is subjected to conditions where its performance degrades to high error rates (30 percent or beyond). The design incorporates a number of features, such as watchdog triggers, that permit the SDA system to recover from large receiver upsets automatically and continue accumulating performance analysis unaided by operator intervention. This accommodates tests that can last on the order of days in order to gain statistical confidence in results and is also useful for capturing snapshots of rare events.
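
    The content-correlation technique for slip detection can be illustrated with hard decisions and a known transmitted pattern (a toy version; the SDA also handles soft-decision widths and uses power or modified Massey correlation):

        import numpy as np

        rng = np.random.default_rng(2)
        tx = rng.integers(0, 2, 512)          # known transmitted bits
        rx = np.roll(tx, 1)                   # receiver slipped by one bit
        rx[rng.random(512) < 0.05] ^= 1       # plus 5% channel errors

        def agreement(offset):
            return np.mean(np.roll(tx, offset) == rx)

        best = max(range(-2, 3), key=agreement)   # search a +/-2 bit slip window
        print("detected slip:", best, "agreement:", agreement(best))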

  1. Real-time minimal-bit-error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error in the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  2. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error in the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  3. Microcircuit radiation effects databank

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Radiation test data submitted by many testers are collated to serve as a reference for engineers who are concerned with, and have some knowledge of, the effects of the natural radiation environment on microcircuits. Total dose damage information and single event upset cross sections, i.e., the probabilities of a soft error (bit flip) or a hard error (latchup), are presented.

  4. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; Kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm, and it is a challenging task to choose one that accurately reconstructs MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding, and hard thresholding techniques at different reduction factors.
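
    A compact sketch of the iteration being described, with a generalized shrinkage step in which p = 1 reduces to classic soft thresholding (the paper's exact p-thresholding operator may differ in detail, and p < 1 requires guarding zero entries):

        import numpy as np

        def p_threshold(x, lam, p):
            mag = np.maximum(np.abs(x) - lam * np.abs(x) ** (p - 1.0), 0.0)
            return np.sign(x) * mag

        def ista(A, y, lam=0.05, p=1.0, steps=300):
            L = np.linalg.norm(A, 2) ** 2      # step size from the Lipschitz constant
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                x = p_threshold(x + A.T @ (y - A @ x) / L, lam / L, p)
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((64, 128)) / 8.0   # toy undersampled operator
        x_true = np.zeros(128)
        x_true[[5, 40, 99]] = (1.0, -2.0, 1.5)
        x_hat = ista(A, A @ x_true)
        print(np.flatnonzero(np.abs(x_hat) > 0.5))  # recovers the sparse support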

  5. Iterated local search algorithm for solving the orienteering problem with soft time windows.

    PubMed

    Aghezzaf, Brahim; Fahim, Hassan El

    2016-01-01

    In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The relaxation adopted in the orienteering problem with soft time windows (OPSTW) studied here is a late-service relaxation that allows linearly penalized late services to customers. We solve this problem heuristically with a hybrid iterated local search. The results of the computational study show that the proposed approach achieves promising solutions on the OPTW test instances available in the literature; one new best solution is found. On the newly generated OPSTW test instances, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.

  6. Five Bit, Five Gigasample TED Analog-to-Digital Converter Development.

    DTIC Science & Technology

    1981-06-01

    pliers. TRW uses two sources at present: materials grown by the Horizontal Bridgman technique from Crystal Specialties, and Czochralski from MRI. The...the circuit modelling and circuit design tasks. A number of design iterations were required to arrive at a satisfactory design. In order to make...made by modeling the TELD as a voltage-controlled current generator with a built-in time delay between impressed voltage and output current. Based on

  7. Simplified flexible-PON upstream transmission using pulse position modulation at ONU and DSP-enabled soft-combining at OLT for adaptive link budgets.

    PubMed

    Liu, Xiang; Effenberger, Frank; Chand, Naresh

    2015-03-09

    We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s, respectively, after transmission over a 20-km standard single-mode fiber without any optical amplification.

  8. Multi-stage decoding of multi-level modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the same code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  9. An analysis of four error detection and correction schemes for the proposed Federal standard 1024 (land mobile radio)

    NASA Astrophysics Data System (ADS)

    Lohrmann, Carol A.

    1990-03-01

    Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.

  10. Iterative Demodulation and Decoding of Non-Square QAM

    NASA Technical Reports Server (NTRS)

    Li, Lifang; Divsalar, Dariush; Dolinar, Samuel

    2004-01-01

    It has been shown that a non-square (NS) 2^(2n+1)-ary (where n is a positive integer) quadrature amplitude modulation [(NS) 2^(2n+1)-QAM] has inherent memory that can be exploited to obtain coding gains. Moreover, it should not be necessary to build new hardware to realize these gains. The present scheme is a product of theoretical calculations directed toward reducing the computational complexity of decoding coded 2^(2n+1)-QAM. In the general case of 2^(2n+1)-QAM, the signal constellation is not square and it is impossible to have independent in-phase (I) and quadrature-phase (Q) mapping and demapping. However, independent I and Q mapping and demapping are desirable for reducing the complexity of computing the log likelihood ratio (LLR) between a bit and a received symbol (such computations are essential operations in iterative decoding). This is because in modulation schemes that include independent I and Q mapping and demapping, each bit of a signal point is involved in only one-dimensional mapping and demapping. As a result, the computation of the LLR is equivalent to that of a one-dimensional pulse amplitude modulation (PAM) system. Therefore, it is desirable to find a signal constellation that enables independent I and Q mapping and demapping for 2^(2n+1)-QAM.
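
    The one-dimensional LLR computation that independent I and Q mapping reduces each bit to looks like this (a sketch with an assumed 4-PAM Gray labeling, not the constellation proposed in the paper):

        import numpy as np

        def bit_llr(r, levels, labels, bit, sigma2):
            # log P(bit = 0 | r) - log P(bit = 1 | r) for AWGN with variance sigma2
            p0 = sum(np.exp(-(r - s) ** 2 / (2 * sigma2))
                     for s, b in zip(levels, labels) if not (b >> bit) & 1)
            p1 = sum(np.exp(-(r - s) ** 2 / (2 * sigma2))
                     for s, b in zip(levels, labels) if (b >> bit) & 1)
            return float(np.log(p0 / p1))

        levels = [-3.0, -1.0, 1.0, 3.0]     # one axis of a square 16-QAM
        labels = [0b00, 0b01, 0b11, 0b10]   # Gray labeling
        print(bit_llr(r=0.8, levels=levels, labels=labels, bit=0, sigma2=0.5))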

  11. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …

  12. On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A

    NASA Astrophysics Data System (ADS)

    Blake, J. B.; Mandel, R.

    1987-02-01

    The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory is dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question is verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
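
    A toy version of the two-level scheme described above: a per-page checksum detects that an error occurred, and a dump-versus-load-file comparison locates the exact bit (the checksum choice and names are illustrative):

        PAGE = 256  # bytes per memory page

        def page_checksums(mem):
            return [sum(mem[i:i + PAGE]) & 0xFF for i in range(0, len(mem), PAGE)]

        def locate_bit_error(mem, load_file):
            for addr, (a, b) in enumerate(zip(mem, load_file)):
                if a != b:
                    return addr, (a ^ b).bit_length() - 1  # byte address, bit index
            return None

        good = bytes(range(256)) * 4              # a 1 KB load file (4 pages)
        bad = bytearray(good)
        bad[300] ^= 0x10                          # an SEU flips bit 4 of byte 300

        print(page_checksums(bytes(bad)) != page_checksums(good))  # True: detected
        print(locate_bit_error(bytes(bad), good))                  # (300, 4)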

  13. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Shu, L.; Kasami, T.

    1985-01-01

    A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.

  14. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

  15. A novel chaotic stream cipher and its application to palmprint template protection

    NASA Astrophysics Data System (ADS)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-04-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfies the security requirements of cipher design. The stream cipher is then employed to generate cancelable competitive code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and that the equal error rate can achieve 0.02%. The performance of the palmprint template protection scheme proves the good practicability and security of the proposed stream cipher.
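
    A generic chaotic keystream generator is easy to illustrate, here with a logistic map for simplicity (this is NOT the paper's coupled NDF, which produces multiple bits per iteration):

        def chaotic_prbg(seed, n_bits, r=3.99):
            # Iterate a chaotic map and quantize each state to one bit.
            x, bits = seed, []
            for _ in range(n_bits):
                x = r * x * (1.0 - x)
                bits.append(1 if x > 0.5 else 0)
            return bits

        keystream = chaotic_prbg(seed=0.423, n_bits=64)
        print("".join(map(str, keystream)))
        # A stream cipher XORs such a keystream with the plaintext; in the
        # scheme above, the generator would be keyed by the password/token.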

  16. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  17. Recoded and nonrecoded trinary signed-digit adders and multipliers with redundant-bit representations

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.; Alam, Mohammed S.

    1998-07-01

    Highly-efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use of all available adders can be made.

  18. Recoded and nonrecoded trinary signed-digit adders and multipliers with redundant-bit representations.

    PubMed

    Cherri, A K; Alam, M S

    1998-07-10

    Highly-efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use of all available adders can be made.

  19. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, a projection-type microscope has advantages in its wide viewing area, easy zooming function, and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly on the images, we introduced two different parameters to evaluate noise effects for each situation in which the iteration procedure was unsuccessful, and we propose an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise evaluation method is useful for image processing in which background noise cannot be ignored relative to the specimen image.

  20. Introducing soft systems methodology plus (SSM+): why we need it and what it can contribute.

    PubMed

    Braithwaite, Jeffrey; Hindle, Don; Iedema, Rick; Westbrook, Johanna I

    2002-01-01

    There are many complicated and seemingly intractable problems in the health care sector. Past ways to address them have involved political responses, economic restructuring, biomedical and scientific studies, and managerialist or business-oriented tools. Few methods have enabled us to develop a systematic response to problems. Our version of soft systems methodology, SSM+, seems to improve problem solving processes by providing an iterative, staged framework that emphasises collaborative learning and systems redesign involving both technical and cultural fixes.

  1. Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors

    DTIC Science & Technology

    2012-02-29

    filter/detector structure. Since the BP detector itself is unaltered from [1], it can accommodate a system employing channel codes such as LDPC encoding...considered in [1], or can readily be extended to the MIMO case with, for example, space-time coding as in [2,8]. Since our focus is on the design of...simplex method of [15], since it was already available in Matlab, via the "fminsearch" function. 6 Cost surfaces: To visualize the cost surfaces, consider

  2. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.

  3. Fault tolerance in an inner-outer solver: A GVR-enabled case study

    DOE PAGES

    Zhang, Ziming; Chien, Andrew A.; Teranishi, Keita

    2015-04-18

    Resilience is a major challenge for large-scale systems. It is particularly important for iterative linear solvers, since they take much of the time of many scientific applications. We show that single bit flip errors in the Flexible GMRES iterative linear solver can lead to high computational overhead or even failure to converge to the right answer. Informed by these results, we design and evaluate several strategies for fault tolerance in both inner and outer solvers appropriate across a range of error rates. We implement them, extending Trilinos' solver library with the Global View Resilience (GVR) programming model, which provides multi-stream snapshots and multi-version data structures with portable and rich error checking/recovery. Lastly, experimental results validate correct execution with low performance overhead under varied error conditions.

  4. Experimental research of adaptive OFDM and OCT precoding with a high SE for VLLC system

    NASA Astrophysics Data System (ADS)

    Liu, Shuang-ao; He, Jing; Chen, Qinghui; Deng, Rui; Zhou, Zhihua; Chen, Shenghai; Chen, Lin

    2017-09-01

    In this paper, an adaptive orthogonal frequency division multiplexing (OFDM) modulation scheme with 128/64/32/16-quadrature amplitude modulation (QAM) and orthogonal circulant matrix transform (OCT) precoding is proposed and experimentally demonstrated for a visible laser light communication (VLLC) system with a cost-effective 450-nm blue-light laser diode (LD). The performance of OCT precoding is compared with the conventional adaptive Discrete Fourier Transform-spread (DFT-spread) OFDM scheme, the 32-QAM OCT precoding OFDM scheme, the 64-QAM OCT precoding OFDM scheme, and the adaptive OCT precoding OFDM scheme. The experimental results show that OCT precoding can achieve a relatively flat signal-to-noise ratio (SNR) curve and provides a bit error rate (BER) improvement. Furthermore, the BER of the proposed OFDM signal with a raw bit rate of 5.04 Gb/s after 5-m free space transmission is below the 20% soft-decision forward error correction (SD-FEC) threshold of 2.4 × 10^-2, and a spectral efficiency (SE) of 4.2 bit/s/Hz is successfully achieved.

  5. A multi-state magnetic memory dependent on the permeability of Metglas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrie, J. R.; Wieland, K. A.; Timmerwilke, J. M.

    A three-state magnetic memory was developed based on differences in the magnetic permeability of a soft ferromagnetic medium, Metglas 2826MB (Fe40Ni38Mo4B18). By heating bits of a 250 nm thick Metglas film with 70-100 mW of laser power, we were able to tune the local microstructure and, hence, the permeability. Ternary memory states were created by using lower laser power to enhance the initial permeability through localized atomic rearrangement and higher power to reduce the permeability through crystallization. The permeability of the bits was read by detecting variations in an external 32 Oe probe field within 10 μm of the media via a magnetic tunnel junction read head. Compared with data based on remanent magnetization, these multi-permeability bits are far less sensitive to unexpected field and temperature changes. We found that data were not corrupted after exposure to fields of 1 T or temperatures of 423 K, indicating the effectiveness of this multi-state approach for safely storing large amounts of data.

  6. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Astrophysics Data System (ADS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
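
    The quoted rate follows directly from the stated word structure: with 32n data bits and 32 overhead bits per code word,

        R = \frac{32n}{32n + 32} = \frac{n}{n + 1},

    so n = 7 gives the R = 7/8 case and larger n gives the "or greater" rates.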

  7. Application of morphological bit planes in retinal blood vessel extraction.

    PubMed

    Fraz, M M; Basit, A; Barman, S A

    2013-04-01

    The appearance of the retinal blood vessels is an important diagnostic indicator of various clinical disorders of the eye and the body. Retinal blood vessels have been shown to provide evidence, in terms of change in diameter, branching angles, or tortuosity, of ophthalmic disease. This paper reports the development of an automated method for segmentation of blood vessels in retinal images. A unique combination of methods for retinal blood vessel skeleton detection and multidirectional morphological bit plane slicing is presented to extract the blood vessels from color retinal images. The skeleton of the main vessels is extracted by applying directional differential operators and then evaluating the combination of derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the retinal vasculature in ocular fundus images. A multidirectional top-hat operator with rotating structuring elements is used to emphasize the vessels in a particular direction, and information is extracted using bit plane slicing. An iterative region growing method is applied to integrate the main skeleton and the images resulting from bit plane slicing of vessel direction-dependent morphological filters. The approach is tested on two publicly available databases, DRIVE and STARE. The average accuracy achieved by the proposed method is 0.9423 for both databases, with significant values of sensitivity and specificity; the algorithm also outperforms the second human observer in terms of precision of the segmented vessel tree.
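
    Bit plane slicing itself is a one-line operation per plane; a minimal sketch on a synthetic 8-bit image (the method above applies it to the outputs of direction-dependent morphological filters, not to the raw image):

        import numpy as np

        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

        planes = [(img >> k) & 1 for k in range(8)]   # planes[7] is most significant

        # The high planes carry the coarse structure; reconstructing from the
        # top three planes keeps most of the contrast:
        approx = sum(planes[k].astype(np.uint16) << k for k in (5, 6, 7))
        print(planes[7].sum(), approx.max())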

  8. TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory

    NASA Technical Reports Server (NTRS)

    Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.

    2007-01-01

    Initial total ionizing dose (TID) and single event heavy ion test results are presented for an unhardened commercial flash memory fabricated with 63 nm technology. The results show that the parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10^-12 errors/bit-day for the Adams Ten Percent Worst Case environment.

  9. Studies Of Single-Event-Upset Models

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.

    1988-01-01

    This report presents the latest in a series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, the SEU response of a low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) was observed during irradiation by Br and O ions in the ranges of 100 to 240 and 20 to 100 MeV, respectively. Experimental data complete the verification of a computer model used to simulate SEU in this circuit.

  10. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (β = 1), and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm^2 for several scaled SDRAM generations is presented, revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.

  11. Orthobiologics in the Foot and Ankle.

    PubMed

    Temple, H Thomas; Malinin, Theodore I

    2016-12-01

    Many allogeneic biologic materials, by themselves or in combination with cells or cell products, may be transformative in healing or regeneration of musculoskeletal bone and soft tissues. By reconfiguring the size, shape, and methods of tissue preparation to improve deliverability and storage, unique iterations of traditional tissue scaffolds have emerged. These new iterations, combined with new cell technologies, have shaped an exciting platform of regenerative products that are effective and provide a bridge to newer and better methods of providing care for orthopedic foot and ankle patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.

  13. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is considerable at a threshold value of 15, but in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is well suited to optical communication systems.

  14. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert. A.

    2016-01-01

    Purpose: To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods: Selected head examinations (brain, orbits, sinus, maxilla, and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results: Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: -3% to 5%), improving the CNR of low contrast soft tissue targets, and improving the spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion: This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  15. A high data rate universal lattice decoder on FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Huang, Xinming; Kura, Swapna

    2005-06-01

This paper presents the architecture design of a high-data-rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified to reduce the complexity of the closest-lattice-point search. The data dependency of the improved algorithm is examined, and a parallel, pipelined architecture is developed, with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations than the original algorithm. The system prototype of the decoder supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on the FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
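    The closest-lattice-point search that Pohst-style sphere decoding accelerates can be stated as a brute-force minimization of ||y − Hs|| over the symbol alphabet; a toy reference version (ours, not the paper's FPGA design):

    ```python
    import itertools
    import numpy as np

    def closest_lattice_point(H, y, alphabet=(-3, -1, 1, 3)):
        """Exhaustive search for argmin_s ||y - H s||; sphere decoding
        prunes this enumeration whenever the accumulated partial distance
        already exceeds the current sphere radius."""
        best, best_dist = None, np.inf
        for s in itertools.product(alphabet, repeat=H.shape[1]):
            d = np.linalg.norm(y - H @ np.array(s))
            if d < best_dist:
                best, best_dist = np.array(s), d
        return best
    ```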

  16. Development and Evaluation of Math Library Routines for a 1750A Airborne Microcomputer.

    DTIC Science & Technology

    1985-12-04

Since each iteration doubles the number of correct significant digits in the square root, this assures an accuracy of 63.32 bits. (4: 23) The next...X, C1 + C2 represents ln(C) to more than working precision. This method gives extra digits of precision equivalent to the number of extra digits in...will not underflow for |x| < eps. Cody and Waite have suggested that eps = 2^(-t/2), where there are t base-2 digits in the significand. The next step
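    The doubling of correct digits quoted above is the hallmark of the Newton-Raphson square-root iteration; a minimal sketch for positive x (our illustration, not the 1750A library code):

    ```python
    def newton_sqrt(x, iterations=6):
        """Newton's method for sqrt(x), x > 0: each step roughly doubles
        the number of correct significant digits (quadratic convergence)."""
        y = x if x > 1.0 else 1.0  # crude but safe initial guess
        for _ in range(iterations):
            y = 0.5 * (y + x / y)
        return y

    print(newton_sqrt(2.0))  # ~1.4142135623730951
    ```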

  17. A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Moriarty, K. J. M.; Rebbi, C.

    1984-01-01

    New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16(4) lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rates. For 32-bit arithmetic, it is 36.3 microsecond/link for 8 hits per iteration (40.9 microsecond for 10 hits) or 101.5 MFLOPS.

  18. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali

    1997-01-01

    Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.

  19. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

High performance computing platforms are expected to deliver 10(18) floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in an initial guess to reach the intended solution; they may therefore intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitude but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that more than 95% of the soft errors can be corrected at a moderate increase in computational cost.
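    One common flavor of such protection pairs a data structure with a redundant copy and a cheap checksum, so silent corruption is detected and repaired between iterations. A hedged sketch of the idea (the names and the specific scheme are ours, not the paper's):

    ```python
    import numpy as np

    def protect(block):
        """Store a redundant copy plus a checksum alongside the live data."""
        return block.copy(), float(block.sum())

    def check_and_correct(block, backup, checksum):
        """Detect silent corruption via the checksum; restore from backup."""
        if not np.isclose(block.sum(), checksum):
            block[:] = backup
            return True   # a soft error was detected and corrected
        return False
    ```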

  20. A modified two-layer iteration via a boundary point approach to generalized multivalued pseudomonotone mixed variational inequalities.

    PubMed

    Saddeek, Ali Mohamed

    2017-01-01

    Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).

  1. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  2. Design and synthesis of digitally encoded polymers that can be decoded and erased

    NASA Astrophysics Data System (ADS)

    Roy, Raj Kumar; Meszynska, Anna; Laure, Chloé; Charles, Laurence; Verchin, Claire; Lutz, Jean-François

    2015-05-01

    Biopolymers such as DNA store information in their chains using controlled sequences of monomers. Here we describe a non-natural information-containing macromolecule that can store and retrieve digital information. Monodisperse sequence-encoded poly(alkoxyamine amide)s were synthesized using an iterative strategy employing two chemoselective steps: the reaction of a primary amine with an acid anhydride and the radical coupling of a carbon-centred radical with a nitroxide. A binary code was implemented in the polymer chains using three monomers: one nitroxide spacer and two interchangeable anhydrides defined as 0-bit and 1-bit. This methodology allows encryption of any desired sequence in the chains. Moreover, the formed sequences are easy to decode using tandem mass spectrometry. Indeed, these polymers follow predictable fragmentation pathways that can be easily deciphered. Moreover, poly(alkoxyamine amide)s are thermolabile. Thus, the digital information encrypted in the chains can be erased by heating the polymers in the solid state or in solution.

  3. Design and synthesis of digitally encoded polymers that can be decoded and erased.

    PubMed

    Roy, Raj Kumar; Meszynska, Anna; Laure, Chloé; Charles, Laurence; Verchin, Claire; Lutz, Jean-François

    2015-05-26

    Biopolymers such as DNA store information in their chains using controlled sequences of monomers. Here we describe a non-natural information-containing macromolecule that can store and retrieve digital information. Monodisperse sequence-encoded poly(alkoxyamine amide)s were synthesized using an iterative strategy employing two chemoselective steps: the reaction of a primary amine with an acid anhydride and the radical coupling of a carbon-centred radical with a nitroxide. A binary code was implemented in the polymer chains using three monomers: one nitroxide spacer and two interchangeable anhydrides defined as 0-bit and 1-bit. This methodology allows encryption of any desired sequence in the chains. Moreover, the formed sequences are easy to decode using tandem mass spectrometry. Indeed, these polymers follow predictable fragmentation pathways that can be easily deciphered. Moreover, poly(alkoxyamine amide)s are thermolabile. Thus, the digital information encrypted in the chains can be erased by heating the polymers in the solid state or in solution.

  4. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

A decoder was developed that decodes a serially concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a second XAUI 10-Gb/s quad optical fiber interface; otherwise, it decodes the sequence and outputs the information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput of a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate data rate is achieved. The unit is controlled through the 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep-space optical communication link capability, and is unique in its scalable design, which achieves real-time SCPPM decoding at the aggregate data rate.

  5. A Hybrid Multiuser Detector Based on MMSE and AFSA for TDRS System Forward Link

    PubMed Central

    Yin, Zhendong; Liu, Xiaohui

    2014-01-01

This study mainly focuses on multiuser detection in the tracking and data relay satellite (TDRS) system forward link. Minimum mean square error (MMSE) detection is a low-complexity multiuser detection method, but the MMSE detector cannot achieve a satisfactory bit error ratio or near-far resistance, whereas the artificial fish swarm algorithm (AFSA) excels at optimization and can realize global convergence efficiently. Therefore, a hybrid multiuser detector based on MMSE and AFSA (MMSE-AFSA) is proposed in this paper. The results of MMSE and its modified forms are used as the initial values of the artificial fish to accelerate global convergence and reduce the number of iterations for AFSA. The simulation results show that the bit error ratio and near-far resistance performances of the proposed detector are much better than those of MF, DEC, and MMSE, and are quite close to OMD. Furthermore, the proposed MMSE-AFSA detector also has a large system capacity. PMID:24883418
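    The MMSE stage that seeds the swarm admits a compact linear-algebra statement; a hedged sketch for a synchronous CDMA model (R is the signature cross-correlation matrix, A the diagonal amplitude matrix, y the matched-filter output, sigma2 the noise variance; all names are ours, not the paper's):

    ```python
    import numpy as np

    def mmse_detect(R, A, y, sigma2):
        """MMSE multiuser detector: hard decisions from the linear filter
        (R + sigma^2 * (A^2)^-1)^-1 y, usable as initial fish positions."""
        W = np.linalg.inv(R + sigma2 * np.linalg.inv(A @ A))
        return np.sign(W @ y)
    ```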

  6. Figures of Merit for Magnetic Recording Media

    NASA Astrophysics Data System (ADS)

    Skomski, Ralph; Sellmyer, D. J.

    2007-03-01

Since the first nucleation-field calculations for hard-soft nanostructures with multilayered [1] and arbitrary [2] geometries, exchange-spring magnets have attracted much attention in various areas of magnetism, including magnetic recording. Ultrahigh storage densities correspond to the strong-coupling limit, realized on small length scales and described by volume-averaged anisotropies. Second-order perturbation theory yields finite-size corrections that describe a partial decoupling of the phases. Since soft phases reduce the nucleation field, nanostructuring can be used to reduce the coercivity Hc while maintaining the energy barrier EB. However, the ratio EB/Hc is an ill-defined figure of merit, because the comparison with the Stoner-Wohlfarth model requires the introduction of a particle volume, as contrasted to an area. By using elongated particles with a continuous anisotropy gradient, it is possible to reduce the coercivity by a factor scaling as the bit size divided by the domain-wall width of the hard phase. However, with decreasing bit size this effect becomes less pronounced. In the strong-coupling limit, thermal stability yields a maximum storage density of order γ/kBT, where γ is the domain-wall energy of the hard phase. - This research is supported by NSF MRSEC, INSIC, and NCMN. [1] S. Nieber and H. Kronmüller, phys. stat. sol. (b) 153, 367 (1989). [2] R. Skomski and J. M. D. Coey, Phys. Rev. B 48, 15812 (1993).

  7. Flexible high speed codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.

    1992-01-01

The project's objective is to develop an advanced high speed coding technology that provides substantial coding gains with limited bandwidth expansion for several common modulation types. The resulting technique is applicable to several continuous and burst communication environments. Decoding provides a significant gain with hard decisions alone and can utilize soft decision information, when available from the demodulator, to increase the coding gain. The hard decision codec will be implemented using a single application specific integrated circuit (ASIC) chip. It will be capable of coding and decoding, as well as some formatting and synchronization functions, at data rates up to 300 megabits per second (Mb/s). Code rate is a function of the block length and can vary from 7/8 to 15/16. The length of coded bursts can be any multiple of 32 that is greater than or equal to 256 bits. Coding may be switched in or out on a burst-by-burst basis with no change in the throughput delay. Reliability information in the form of 3-bit (8-level) soft decisions can be exploited using applique circuitry around the hard decision codec. This applique circuitry will be discrete logic in the present contract; however, ease of transition to LSI is one of the design guidelines. Discussed here is the selected coding technique. Its application to some communication systems is described. Performance with 4, 8, and 16-ary Phase Shift Keying (PSK) modulation is also presented.

  8. Reconstruction of truncated TCT and SPECT data from a right-angle dual-camera system for myocardial SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.

    1996-12-31

We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms, with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation-compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP-reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation-compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and more accurate than those obtained using FBP. However, the TCT images obtained with support show fewer image artifacts than those without. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest; the BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation-compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.

  9. Automated segmentations of skin, soft-tissue, and skeleton, from torso CT images

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kiryu, Takuji; Hoshi, Hiroaki

    2004-05-01

We have been developing a computer-aided diagnosis (CAD) scheme for automatically recognizing human tissue and organ regions from high-resolution torso CT images. We show some initial results for extracting skin, soft-tissue, and skeleton regions. 139 patient cases of torso CT images (92 male, 47 female; ages 12-88) were used in this study. Each case was imaged with a common protocol (120 kV/320 mA) and covered the whole torso with an isotropic spatial resolution of about 0.63 mm and a density resolution of 12 bits. A gray-level-thresholding-based procedure was applied to separate the human body from the background. Density and distance-to-body-surface features were used to determine the skin and separate soft tissue from the other tissues. A 3D region-growing-based method was used to extract the skeleton. We applied this system to the 139 cases and found that the skin, soft-tissue, and skeleton regions were recognized correctly in 93% of the patient cases. Slice-by-slice evaluation showed the accuracy of the segmentation results to be acceptable. This scheme will be included in CAD systems for detecting and diagnosing abnormal lesions in multi-slice torso CT images.

  10. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    NASA Astrophysics Data System (ADS)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

The capabilities of flat-panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low-contrast soft tissue during interventions in a number of areas, including neurology, cardiac electrophysiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time, large-field-of-view 2D imaging, patient access, and flexible gantry positioning, enabling interventional procedures. However, relative to CT, these C-arm flat-panel systems face additional technical challenges in 3D soft-tissue imaging, including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is important in C-arm 3D imaging to generate a reconstruction that is representative of the object being imaged, with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. For block codes, in contrast, trellis-based decoding long remained an inactive area of research. There are two major reasons for this inactive period. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, many coding theorists believed that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their application to error control in digital communications, and led to a general belief that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes; the goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination, and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder; it then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword; this decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder: the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
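    As a concrete anchor for the trellis-based algorithms surveyed above, a generic Viterbi search over a time-invariant trellis can be sketched as follows (our illustration, not the monograph's notation; the caller supplies the edge list and branch metrics):

    ```python
    def viterbi(branch_metric, transitions, n_states, n_steps):
        """Minimum-metric path through a trellis.
        transitions: edges (from_state, to_state, input_bit), reused each step;
        branch_metric(t, s, s_next): cost of that edge at time t."""
        INF = float("inf")
        metric = [0.0] + [INF] * (n_states - 1)   # start in state 0
        paths = [[] for _ in range(n_states)]
        for t in range(n_steps):
            new_metric = [INF] * n_states
            new_paths = [[] for _ in range(n_states)]
            for (s, s_next, bit) in transitions:
                if metric[s] == INF:
                    continue
                m = metric[s] + branch_metric(t, s, s_next)
                if m < new_metric[s_next]:        # keep the survivor path
                    new_metric[s_next] = m
                    new_paths[s_next] = paths[s] + [bit]
            metric, paths = new_metric, new_paths
        return paths[min(range(n_states), key=lambda s: metric[s])]
    ```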

  12. Microcircuit radiation effects databank

    NASA Technical Reports Server (NTRS)

    1983-01-01

This databank is a collation of radiation test data submitted by many testers and serves as a reference for engineers who are concerned with, and have some knowledge of, the effects of the natural radiation environment on microcircuits. It contains radiation sensitivity results from ground tests and is divided into two sections: Section A lists total dose damage information, and Section B lists single-event-upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup).

  13. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to solve the reconstruction problem from a limited number of projections, which allows the radiation exposure of patients during data acquisition to be reduced. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks of CT. To address them effectively, we adapted the method for sparse linear equations and sparse least squares (LSQR), with soft-threshold filtering (STF), and the fast iterative shrinkage-thresholding algorithm (FISTA) to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
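    FISTA with soft-threshold filtering has a compact generic form; a sketch of the kind of iteration adapted in the paper (A is the system matrix, b the measured projections, lam the sparsity weight, L a Lipschitz constant of the gradient; parameter names are ours, not the authors'):

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def fista(A, b, lam, L, n_iter=100):
        """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with Nesterov momentum."""
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - b)                       # gradient step
            x_new = soft_threshold(z - grad / L, lam / L)  # STF proximal step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum update
            x, t = x_new, t_new
        return x
    ```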

  14. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.

  15. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating-point decoder.
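    The protograph replication mentioned above ("lifting") replaces each base-matrix edge with a Z x Z circulant permutation; a small sketch of the construction (the shift values are arbitrary examples, not the implemented code's):

    ```python
    import numpy as np

    def lift_protograph(base, shifts, Z):
        """Expand an (r, n) 0/1 base matrix into a (Z*r, Z*n) parity-check
        matrix by replacing each 1 with a cyclically shifted identity."""
        r, n = base.shape
        H = np.zeros((r * Z, n * Z), dtype=int)
        for i in range(r):
            for j in range(n):
                if base[i, j]:
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(
                        np.eye(Z, dtype=int), shifts[i, j], axis=1)
        return H
    ```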

  16. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  17. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices

    NASA Astrophysics Data System (ADS)

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-09-01

    Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.

  18. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices

    PubMed Central

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-01-01

    Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body. PMID:27670953

  19. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices.

    PubMed

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-09-27

    Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes' (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.

  20. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

The ITER-oriented JET research program brings new requirements for low-Z impurity monitoring, in particular for Be, the future main-wall material of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and on measurements of continuum radiation emitted from helium plasmas, which offer excellent conditions for absolute photon flux calibration owing to their low level of impurities. It was found that the component-by-component method gives results four times higher than those obtained by the continua method; a better understanding of this discrepancy requires further investigation.

  1. Hardware Implementation of Serially Concatenated PPM Decoder

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop. Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in the forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another, much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the memory required for storing channel likelihood data and the amount of data transferred between the decoder and the receiver, the majority of the channel likelihoods can be discarded, with only the remainder used in operating the decoder. This is accomplished in the receiver by transmitting only the subset of likelihoods that correspond to the time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, selecting a small subset in this manner results in only negligible loss. Other features of the design that reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function, which allows a sum of exponentials to be computed by simple operations involving only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple-clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
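    The max-star function mentioned above is the exact log-domain replacement for a two-term sum of exponentials, max*(a, b) = ln(e^a + e^b); in hardware the correction term is typically a small table lookup. A one-line reference version:

    ```python
    import math

    def max_star(a, b):
        """max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))
    ```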

  2. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay are described. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing the jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
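    For QC-LDPC codes described by an exponent matrix P with lifting size Z, the standard algebraic test for a girth-4 cycle checks whether P[i1][j1] − P[i1][j2] + P[i2][j2] − P[i2][j1] ≡ 0 (mod Z) for some pair of rows and pair of columns; a hedged sketch of that generic test (not the paper's type-I/type-II classification):

    ```python
    from itertools import combinations

    def has_girth4(P, Z):
        """Return True if the lifted QC-LDPC graph contains a 4-cycle.
        Entries of -1 denote absent (all-zero) circulant blocks."""
        rows, cols = len(P), len(P[0])
        for i1, i2 in combinations(range(rows), 2):
            for j1, j2 in combinations(range(cols), 2):
                e = (P[i1][j1], P[i1][j2], P[i2][j2], P[i2][j1])
                if -1 in e:
                    continue
                if (e[0] - e[1] + e[2] - e[3]) % Z == 0:
                    return True
        return False
    ```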

  3. Kinetic analysis of reactions of Si-based epoxy resins by near-infrared spectroscopy, 13C NMR and soft-hard modelling.

    PubMed

    Garrido, Mariano; Larrechi, Maria Soledad; Rius, F Xavier; Mercado, Luis Adolfo; Galià, Marina

    2007-02-05

A combined soft- and hard-modelling strategy was applied to near-infrared spectroscopy data obtained by monitoring the reaction between glycidyloxydimethylphenyl silane, a silicon-based epoxy monomer, and aniline. On the basis of the pure soft-modelling approach and prior chemical knowledge, a kinetic model for the reaction was proposed. Then, multivariate curve resolution-alternating least squares optimization was carried out under a hard constraint that compels the concentration profiles to fulfil the proposed kinetic model at each iteration of the optimization process. In this way, the concentration profiles of each species and the corresponding kinetic rate constants of the reaction, unpublished until now, were obtained. The results were contrasted with 13C NMR. The joint interval test of slope and intercept for detecting bias was not significant (alpha = 5%).

  4. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes that can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-achieving property and excellent performance over noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3SD3400A device from the Spartan-3A DSP family.
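    The min-sum approximation named above simplifies the BP check-node update: each outgoing message takes the minimum magnitude and the sign product of the other incoming messages. A reference sketch (ours, not the article's hardware description):

    ```python
    import numpy as np

    def check_node_update(incoming):
        """Min-sum check-node rule applied to one check node's incoming LLRs."""
        incoming = np.asarray(incoming, dtype=float)
        out = np.empty_like(incoming)
        for k in range(len(incoming)):
            others = np.delete(incoming, k)
            out[k] = np.prod(np.sign(others)) * np.min(np.abs(others))
        return out
    ```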

  5. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
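    The IB tradeoff described above is usually written as minimizing L = I(X;T) − β I(T;Y); the DIB variant replaces I(X;T) with the entropy H(T). A small sketch evaluating the cost from discrete joint distributions (the array layout is our assumption, not the talk's notation):

    ```python
    import numpy as np

    def mutual_information(p_joint):
        """Mutual information from a 2-D joint probability array."""
        px = p_joint.sum(axis=1, keepdims=True)
        py = p_joint.sum(axis=0, keepdims=True)
        mask = p_joint > 0
        return float(np.sum(p_joint[mask] *
                            np.log2(p_joint[mask] / (px @ py)[mask])))

    def ib_cost(p_xt, p_ty, beta):
        """IB objective: compress X into T while staying predictive of Y."""
        return mutual_information(p_xt) - beta * mutual_information(p_ty)
    ```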

  6. Capacity of Pulse-Position Modulation (PPM) on Gaussian and Webb Channels

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.; Hamkins, J.; Pollara, F.

    2000-01-01

This article computes the capacity of various idealized soft-decision channels modeling an optical channel using an avalanche photodiode detector (APD) and pulse-position modulation (PPM). The capacity of this optical channel depends in a complicated way on the physical parameters of the APD and the constraints imposed by the PPM orthogonal signaling set. This article attempts to identify and separate the effects of several fundamental parameters on the capacity of the APD-detected optical PPM channel. First, an overall signal-to-noise ratio (SNR) parameter is defined such that the capacity as a function of a bit-normalized version of this SNR drops precipitously toward zero at quasi-brick-wall limits on bit SNR that are numerically the same as the well-understood brick-wall limits for the standard additive white Gaussian noise (AWGN) channel. A second parameter is used to quantify the effects on capacity of one unique facet of the optical PPM channel (as compared with the standard AWGN channel) that causes the noise variance to be higher in signal slots than in nonsignal slots. This nonuniform noise variance yields interesting capacity effects even when the channel model is AWGN. A third parameter is used to measure the effects on capacity of the difference between an AWGN model and a non-Gaussian model proposed by Webb (see reference in [2]) for approximating the statistics of the APD-detected optical channel. Finally, a fourth parameter is used to quantify the blending of a Webb model with a pure AWGN model to account for thermal noise. Numerical results show that the capacity of M-ary orthogonal signaling on the Webb channel exhibits the same brick-wall Shannon limit, (M ln 2)/(M − 1), as on the AWGN channel (−1.59 dB for large M). Results also compare the capacity obtained by hard- and soft-output channels and indicate that soft-output channels offer a 3-dB advantage.

  7. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, and a corresponding decoding scheme, are proposed. The RA-MLC scheme combines multilevel coded modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations of an intensity-modulation direct-detection optical communication system were carried out in MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate (BER) of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  8. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 (mod 4). The codes are of length 8m, with the convolutional portion of length 8m−2 and the nonsystematic information of length 4m−1. The last two bits are parity checks on the two (4m−1)-length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48,24;12) Code is lowered here to K = 8.

  9. Results from Testing of Two Rotary Percussive Drilling Systems

    NASA Technical Reports Server (NTRS)

    Kriechbaum, Kristopher; Brown, Kyle; Cady, Ian; von der Heydt, Max; Klein, Kerry; Kulczycki, Eric; Okon, Avi

    2010-01-01

The developmental test program for the MSL (Mars Science Laboratory) rotary percussive drill examined the effect of various drill input parameters on the drill penetration rate. Some of the input parameters tested were drill angle with respect to gravity and percussive impact energy. The suite of rocks tested ranged from a high-strength basalt to soft kaolinite clay. We developed a hole-start routine to reduce the high side loads caused by bit walk. The ongoing development test program for the IMSAH (Integrated Mars Sample Acquisition and Handling) rotary percussive corer uses many of the same rocks as the MSL suite; an additional performance parameter is core integrity. The MSL development test drill and the IMSAH test drill use similar hardware to provide rotation and percussion. However, the MSL test drill uses external stabilizers, while the IMSAH test drill does not have external stabilization. In addition, the IMSAH drill is a core drill, while the MSL drill uses a solid powdering bit. Results from the testing of these two related drilling systems are examined.

  10. Designing I*CATch: A Multipurpose, Education-Friendly Construction Kit for Physical and Wearable Computing

    ERIC Educational Resources Information Center

    Ngai, Grace; Chan, Stephen C. F.; Leong, Hong Va; Ng, Vincent T. Y.

    2013-01-01

    This article presents the design and development of i*CATch, a construction kit for physical and wearable computing that was designed to be scalable, plug-and-play, and to provide support for iterative and exploratory learning. It consists of a standardized construction interface that can be adapted for a wide range of soft textiles or electronic…

  11. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

Implementing the sum-product algorithm in an FPGA with an embedded processor invites a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that performs the product computations. Our FPGA-based coprocessor design performs arithmetic with significantly less precision than the standard (e.g., integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is affected both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  12. Private content identification based on soft fingerprinting

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Sviatoslav; Holotyak, Taras; Koval, Oleksiy; Beekhof, Fokko; Farhadzadeh, Farzad

    2011-02-01

In many problems, such as biometrics, multimedia search and retrieval, and recommendation systems requiring privacy-preserving similarity computations and identification, binary features are stored in the public domain or outsourced to third parties, which might raise certain privacy concerns about the original data. To avoid this privacy leak, privacy protection is used. In most cases, privacy protection is uniformly applied to all binary features, resulting in data degradation and a corresponding loss of performance. To avoid this undesirable effect, we propose a new privacy amplification technique that is based on data-hiding principles and benefits from side information about bit reliability, a.k.a. soft fingerprinting. In this paper, we investigate the identification-rate vs. privacy-leak trade-off. The analysis is performed for the case of a perfect match between the side information shared by the encoder and decoder, as well as for the case of partial side information.

  13. Use of the ROC anchor in foot and ankle surgery. A retrospective study.

    PubMed

    Kuwada, G T

    1999-05-01

    A retrospective study was conducted on the use of the ROC (Radial Osteo Compression) soft-tissue anchor in foot and ankle surgery. This article describes how the anchor is deployed, problematic aspects of using the anchor, and complications and success rates associated with the anchor in ankle stabilizations, posterior tibial tendon reconstruction, peroneus brevis tendon reconstruction after fracture of the base of the fifth metatarsal, and detachment and reattachment of the Achilles tendon. The ROC anchor consists of the anchor with nonabsorbable suture attached to the shaft, the deployment handle, and drill bits. The anchor and shaft are snapped into the deployment handle and inserted into the drill hole. Compression of the trigger deploys the anchor into the hole. The ROC anchor was found to be reliable, useful, and relatively easy to deploy, with outcomes similar to those of other soft-tissue anchors.

  14. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    NASA Astrophysics Data System (ADS)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing the worst two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code gains more than 1 dB over the conventional four-ary modulation code.

  15. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits characteristics specific to the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms other multiuser detectors in all cases discussed, while the computation time is polynomial in the number of users.
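    The local improvement procedure can be as simple as greedy single-bit flips on a ±1 hypothesis vector, accepted whenever they increase the OMD likelihood metric; a toy sketch (ours, assuming the caller supplies the metric):

    ```python
    def local_search(b, metric):
        """Flip single bits of the +/-1 vector b while any flip improves
        metric(b); returns a locally optimal detection hypothesis."""
        best = metric(b)
        improved = True
        while improved:
            improved = False
            for k in range(len(b)):
                b[k] = -b[k]
                m = metric(b)
                if m > best:
                    best, improved = m, True
                else:
                    b[k] = -b[k]   # revert the unhelpful flip
        return b
    ```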

  16. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  17. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords, one at a time, using a set of test error patterns that are constructed from the reliability information of the received symbols. When a candidate codeword is generated, it is tested against an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are repeated. The process of testing and searching continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.
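    The test error patterns referenced above are typically formed over the least reliable received positions, Chase-style; a hedged sketch of one way to enumerate them (parameters are illustrative, not the correspondence's construction):

    ```python
    from itertools import combinations

    def test_error_patterns(reliability, n_lrp, max_flips):
        """Yield sets of positions to flip, drawn from the n_lrp least
        reliable symbols, in order of increasing pattern weight."""
        lrp = sorted(range(len(reliability)),
                     key=lambda i: reliability[i])[:n_lrp]
        for w in range(max_flips + 1):
            for subset in combinations(lrp, w):
                yield set(subset)
    ```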

  18. Acceleration of linear stationary iterative processes in multiprocessor computers. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romm, Ya.E.

    1982-05-01

    For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); English translation in Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n×n matrix and b is a real n-vector, with the usual Euclidean norm. Existence and uniqueness of the solution are assumed, i.e., det(E − A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = Fx^(k), k = 0, 1, 2, ..., where the operator F maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, the number of processors is assumed constant, and various values of it are investigated; it is assumed in addition that the processors perform elementary binary arithmetic operations of addition and multiplication, and the time estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k+1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k+1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps executed in a time comparable to the switching time of logic elements. 6 references.
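
    A tiny numerical illustration of the iterative process x^(k+1) = Ax^(k) + b, which converges to the solution of x = Ax + b whenever the spectral radius of A is below 1; the matrix values are made up for the example.

    ```python
    import numpy as np

    A = np.array([[0.1, 0.2],
                  [0.3, 0.1]])
    b = np.array([1.0, 2.0])

    x = np.zeros(2)
    for k in range(50):                       # k + 1 sequential steps in total
        x = A @ x + b

    exact = np.linalg.solve(np.eye(2) - A, b) # (E - A)x = b, det(E - A) != 0
    print(np.allclose(x, exact))              # True
    ```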

  19. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors apply regularization using 3D exemplar registration, together with label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by a shape-based foreground dentine probability acquired by the exemplar registration, as well as an appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, the iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. Results: The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth (incisors and canines), premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and intraoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  20. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

    Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors apply regularization using 3D exemplar registration, together with label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by a shape-based foreground dentine probability acquired by the exemplar registration, as well as an appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, the iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth (incisors and canines), premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and intraoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  1. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem comprising an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm, and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
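
    For orientation, the baseline iteration the paper improves on is plain iterative soft-thresholding (ISTA) for min_x 1/2 ||y - Ax||^2 + λ||Wx||_1. The sketch below uses dense matrices and an orthogonal wavelet transform W for brevity; the paper's edge-correlation prior matrix and automatic threshold are not reproduced.

    ```python
    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista(y, A, W, lam, steps=200):
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            grad = A.T @ (A @ x - y)            # gradient of the data-fidelity term
            z = W @ (x - grad / L)              # move to the wavelet domain
            x = W.T @ soft_threshold(z, lam / L)  # shrink, then transform back
        return x
    ```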

  2. Quasi-perfect FIFO: Synchronous or asynchronous with application in controller design for the UNICON laser memory. [digital memory and buffer storage

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1974-01-01

    The first-in-first-out memory buffer (FIFO) is an elastic digital memory whose main application is in data buffering between devices operating at different rates. Data written into the top is moved autonomously down toward the bottom of the FIFO to the lowest unoccupied location, and data read from the bottom of the FIFO will cause data from the top to move autonomously down toward the bottom. The FIFO is available in MOS LSI asynchronous form with data rates in the 1 MHz region. The FIFO described yields a simple high-speed iterative implementation, either synchronous or asynchronous. Because of this simple iterative structure, the FIFO is expandable in both number of words and bits per word, and it is attractive from the viewpoint of integrated-circuit production. For the synchronous FIFO, a model was built and successfully used in the controller for the UNICON laser memory. For the asynchronous FIFO, a model was built and also successfully used in a high-performance magnetic tape controller.
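
    A behavioral sketch of the ripple-through ("fall-through") operation described above: written data autonomously moves to the lowest unoccupied location, and a read at the bottom lets the remaining words ripple down. The class and method names are illustrative.

    ```python
    class RippleFIFO:
        def __init__(self, depth):
            self.cells = [None] * depth       # index 0 = top, -1 = bottom

        def write(self, word):
            if self.cells[0] is not None:
                raise OverflowError("FIFO full")
            self.cells[0] = word              # data enters at the top...
            self._ripple()                    # ...and falls toward the bottom

        def read(self):
            word = self.cells[-1]
            if word is None:
                raise IndexError("FIFO empty")
            self.cells[-1] = None
            self._ripple()                    # the queue moves down one slot
            return word

        def _ripple(self):
            moved = True
            while moved:                      # each pass drops words one slot
                moved = False
                for i in range(len(self.cells) - 1, 0, -1):
                    if self.cells[i] is None and self.cells[i - 1] is not None:
                        self.cells[i], self.cells[i - 1] = self.cells[i - 1], None
                        moved = True
    ```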

  3. Giant and universal magnetoelectric coupling in soft materials and concomitant ramifications for materials science and biology

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Sharma, Pradeep

    2013-10-01

    Magnetoelectric coupling—the ability of a material to magnetize upon application of an electric field and, conversely, to polarize under the action of a magnetic field—is rare and restricted to a rather small set of exotic hard crystalline materials. Intense research activity has recently ensued on materials development, fundamental scientific issues, and applications related to this phenomenon. This tantalizing property, if present in adequate strength at room temperature, can be used to pave the way for next-generation memory devices such as miniature magnetic random access memories and multiple state memory bits, sensors, energy harvesting, spintronics, among others. In this Rapid Communication, we prove the existence of an overlooked strain mediated nonlinear mechanism that can be used to universally induce the giant magnetoelectric effect in all (sufficiently) soft dielectric materials. For soft polymer foams—which, for instance, may be used in stretchable electronics—we predict room-temperature magnetoelectric coefficients that are comparable to the best known (hard) composite materials created. We also argue, based on a simple quantitative model, that magnetoreception in some biological contexts (e.g., birds) most likely utilizes this very mechanism.

  4. Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem

    NASA Astrophysics Data System (ADS)

    Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad

    2013-12-01

    In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, second-order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of second-order maximum ratio combining (MRC) diversity if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.

  5. Quantum-assisted learning of graphical models with arbitrary pairwise connectivity

    NASA Astrophysics Data System (ADS)

    Realpe-Gómez, John; Benedetti, Marcello; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    Mainstream machine learning techniques rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful machine learning models. Here we show how to surpass this `curse of limited connectivity' bottleneck and illustrate our findings by training probabilistic generative models with arbitrary pairwise connectivity on a real dataset of handwritten digits and two synthetic datasets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding Boltzmann-like distribution. Therefore, the need to infer the effective temperature at each iteration is avoided, speeding up learning, and the effect of noise in the control parameters is mitigated, improving accuracy. This work was supported in part by NASA, AFRL, ODNI, and IARPA.

  6. Searching for memories, Sudoku, implicit check bits, and the iterative use of not-always-correct rapid neural computation.

    PubMed

    Hopfield, J J

    2008-05-01

    The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
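
    As a concrete reference point for the feedback-circuit computations discussed above, here is a minimal classic Hopfield associative memory: Hebbian storage plus asynchronous updates that never increase the energy (Lyapunov) function E(s) = -1/2 s^T W s. The paper's excitatory-inhibitory circuit model is richer than this binary sketch.

    ```python
    import numpy as np

    def store(patterns):
        # Hebbian weight matrix; patterns are +/-1 vectors.
        P = np.array(patterns, dtype=float)
        W = P.T @ P / len(P)
        np.fill_diagonal(W, 0.0)    # no self-connections
        return W

    def recall(W, clue, sweeps=10, seed=0):
        # Asynchronous updates from a (possibly small) clue vector.
        rng = np.random.default_rng(seed)
        s = clue.astype(float).copy()
        for _ in range(sweeps):
            for i in rng.permutation(len(s)):
                s[i] = 1.0 if W[i] @ s >= 0 else -1.0
        return s

    def energy(W, s):
        # Monotonically non-increasing under the recall dynamics.
        return -0.5 * s @ W @ s
    ```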

  7. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
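
    The solver pattern described above, reduced to a toy: a Krylov method (conjugate gradients) whose convergence hinges on the preconditioner. A simple Jacobi (diagonal) preconditioner stands in for the algebraic multigrid preconditioner of the paper; the Trilinos machinery itself is not reproduced.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 1D Poisson problem as a stand-in linear system.
    n = 1000
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Preconditioner: approximate A^{-1} by the inverse of its diagonal.
    d_inv = 1.0 / A.diagonal()
    M = spla.LinearOperator((n, n), matvec=lambda x: d_inv * x)

    x, info = spla.cg(A, b, M=M)
    print(info == 0, np.linalg.norm(A @ x - b))   # converged, small residual
    ```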

  8. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  9. Monte Carlo simulation of particle-induced bit upsets

    NASA Astrophysics Data System (ADS)

    Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric

    2017-09-01

    We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport particles through the device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons, and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.

  10. Dynamic iterative beam hardening correction (DIBHC) in myocardial perfusion imaging using contrast-enhanced computed tomography.

    PubMed

    Stenner, Philip; Schmidt, Bernhard; Allmendinger, Thomas; Flohr, Thomas; Kachelrieß, Marc

    2010-06-01

    In cardiac perfusion examinations with computed tomography (CT), large concentrations of iodine in the ventricle and in the descending aorta cause beam hardening artifacts that can lead to incorrect perfusion parameters. The aim of this study is to reduce these artifacts by performing an iterative correction that accounts for the three materials soft tissue, bone, and iodine. Beam hardening corrections are either implemented as simple precorrections, which cannot account for higher-order beam hardening effects, or as iterative approaches that are based on segmenting the original image into material distribution images. Conventional segmentation algorithms fail to clearly distinguish between iodine and bone. Our new algorithm, DIBHC, calculates the time-dependent iodine distribution by analyzing the voxel changes of a cardiac perfusion examination (typically N ≈ 15 electrocardiogram-correlated scans distributed over a total scan time of up to T ≈ 30 s). These voxel dynamics are due to changes in contrast agent. This prior information makes it possible to precisely distinguish between bone and iodine and is key to DIBHC, where each iteration consists of a multimaterial (soft tissue, bone, iodine) polychromatic forward projection, a raw data comparison, and a filtered backprojection. Simulations with a semi-anthropomorphic dynamic phantom and clinical scans using a dual source CT scanner with 2 × 128 slices, a tube voltage of 100 kV, a tube current-time product of 180 mAs, and a rotation time of 0.28 s have been carried out. The uncorrected images suffer from beam hardening artifacts that appear as dark bands connecting large concentrations of iodine in the ventricle, aorta, and bony structures. The CT values of the affected tissue are usually underestimated by roughly 20 HU, although deviations of up to 61 HU have been observed. For a quantitative evaluation, circular regions of interest have been analyzed. After application of DIBHC, the mean values obtained deviate by only 1 HU for the simulations, and the corrected values show an increase of up to 61 HU for the measurements. One iteration of DIBHC greatly reduces the beam hardening artifacts induced by the contrast agent dynamics (and those due to bone), allowing for an improved assessment of contrast agent uptake in the myocardium, which is essential for determining myocardial perfusion.

  11. Read disturb errors in a CMOS static RAM chip. [radiation hardened for spacecraft]

    NASA Technical Reports Server (NTRS)

    Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.

    1989-01-01

    Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard, single-event-upset-resistant 4 × 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparency of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.

  12. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  13. Nonlinearity-aware 200 Gbit/s DMT transmission for C-band short-reach optical interconnects with a single packaged electro-absorption modulated laser.

    PubMed

    Zhang, Lu; Hong, Xuezhi; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Schatz, Richard; Guo, Changjian; Zhang, Junwei; Nordwall, Fredrik; Engenhardt, Klaus M; Westergren, Urban; Popov, Sergei; Jacobsen, Gunnar; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-01-15

    We experimentally demonstrate the transmission of a 200 Gbit/s discrete multitone (DMT) signal at the soft forward error correction limit in an intensity-modulation direct-detection system with a single C-band packaged distributed feedback laser and traveling-wave electro-absorption modulator (DFB-TWEAM), digital-to-analog converter, and photodiode. The bit- and power-loaded DMT signal is transmitted over 1.6 km of standard single-mode fiber (SSMF) with a net rate of 166.7 Gbit/s, achieving an effective electrical spectral efficiency of 4.93 bit/s/Hz. Meanwhile, net rates of 174.2 Gbit/s and 179.5 Gbit/s are also demonstrated over 0.8 km of SSMF and in an optical back-to-back case, respectively. The characteristics of the packaged DFB-TWEAM are presented. The nonlinearity-aware digital signal processing algorithm for channel equalization is mathematically described; it improves the signal-to-noise ratio by up to 3.5 dB.

  14. Soft-output decoding algorithms in iterative decoding of turbo codes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.

    1996-01-01

    In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding-window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performance of the two algorithms is compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables are proposed, along with two further approximations (linear and threshold) that eliminate the need for lookup tables at the cost of a very small penalty.
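
    The lookup-table, linear, and threshold variants mentioned above all approximate the correction term of the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}), the core operation of log-domain MAP decoding. The constants below are illustrative, not those of the article.

    ```python
    import numpy as np

    def max_star_exact(a, b):
        return max(a, b) + np.log1p(np.exp(-abs(a - b)))

    # Lookup table: quantize |a - b| and precompute the correction term.
    _TABLE_STEP = 0.125
    _TABLE = np.log1p(np.exp(-np.arange(0.0, 8.0, _TABLE_STEP)))

    def max_star_lut(a, b):
        idx = min(int(abs(a - b) / _TABLE_STEP), len(_TABLE) - 1)
        return max(a, b) + _TABLE[idx]

    def max_star_linear(a, b, slope=0.25, clip=0.7):
        # Linear approximation: a ramp that clips to zero for large |a - b|.
        return max(a, b) + max(0.0, clip - slope * abs(a - b))

    def max_star_threshold(a, b, thresh=2.0, const=0.4):
        # Threshold approximation: a constant below the threshold, else 0.
        return max(a, b) + (const if abs(a - b) < thresh else 0.0)
    ```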

  15. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    PubMed

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

    This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical k-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm, which seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  16. Impact of jammer side information on the performance of anti-jam systems

    NASA Astrophysics Data System (ADS)

    Lim, Samuel

    1992-03-01

    The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.

  17. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  18. Redesign and Rehost of the BIG STICK Nuclear Wargame Simulation

    DTIC Science & Technology

    1988-12-01

    described by Pressman [16]. The 4GT software development approach consists of four iterative phases: the requirements gathering phase, the design strategy... 2. BIG STICK Instructions and Planning Guidance. Air Command and Staff College, Air University, Maxwell AFB AL, 1987. Unpublished Manual. 3. Barry W... Software Engineering Notes, 7:29-32, April 1982. 17. Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill Book

  19. Soft Clustering Criterion Functions for Partitional Document Clustering

    DTIC Science & Technology

    2004-05-26

    in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between... it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets

  20. Mitigation of crosstalk based on CSO-ICA in free space orbital angular momentum multiplexing systems

    NASA Astrophysics Data System (ADS)

    Xing, Dengke; Liu, Jianfei; Zeng, Xiangye; Lu, Jia; Yi, Ziyao

    2018-09-01

    Orbital angular momentum (OAM) multiplexing has attracted considerable attention and research in recent years because of its great spectral efficiency, and many OAM systems over free-space channels have been demonstrated. However, due to atmospheric turbulence, the power of OAM beams diffuses to beams with neighboring topological charges, and inter-mode crosstalk emerges in these systems, rendering the system unusable in severe cases. In this paper, we introduce independent component analysis (ICA), a popular method of signal separation, to mitigate inter-mode crosstalk effects; furthermore, to address the fixed iteration speed of the traditional ICA algorithm, we propose a joint algorithm, CSO-ICA, which improves the process of solving the separation matrix by taking advantage of the fast convergence rate and high convergence precision of chicken swarm optimization (CSO). The optimal separation matrix is obtained by adjusting the step size according to the previous iteration in CSO-ICA. Simulation results indicate that the proposed algorithm performs well in inter-mode crosstalk mitigation: the optical signal-to-noise ratio (OSNR) requirement of the received signals (OAM+2, OAM+4, OAM+6, OAM+8) is reduced by about 3.2 dB at a bit error ratio (BER) of 3.8 × 10-3. Meanwhile, convergence is much faster than with the traditional ICA algorithm, reducing the number of iterations by about an order of magnitude.

  1. New adaptive statistical iterative reconstruction ASiR-V: Assessment of noise performance in comparison to ASiR.

    PubMed

    De Marco, Paolo; Origgi, Daniela

    2018-03-01

    To assess the noise characteristics of the new adaptive statistical iterative reconstruction (ASiR-V) in comparison to ASiR. A water phantom was acquired with common clinical scanning parameters at five different levels of CTDIvol. Images were reconstructed with different kernels (STD, SOFT, and BONE), different IR levels (40%, 60%, and 100%), and different slice thicknesses (ST) (0.625 and 2.5 mm), both for ASiR-V and ASiR. Noise properties were investigated and the noise power spectrum (NPS) was evaluated. ASiR-V significantly reduced noise relative to FBP: noise reduction was in the range 23%-60% for the 0.625 mm ST and 12%-64% for the 2.5 mm ST. Above 2 mGy, noise reduction for ASiR-V had no dependence on dose. Noise reduction for ASiR-V depends on ST, being greater for the STD and SOFT kernels at 2.5 mm. For the STD kernel, ASiR-V gives greater noise reduction than ASiR for both STs. For the SOFT kernel, results vary according to dose and ST, while for the BONE kernel ASiR-V shows less noise reduction. The NPS for the CT Revolution has dose-dependent behavior at lower doses. The NPS for ASiR-V and ASiR is similar, showing a shift toward lower frequencies as the IR level increases for the STD and SOFT kernels. The NPS differs between ASiR-V and ASiR with the BONE kernel. The NPS for ASiR-V appears to be ST dependent, having a shift toward lower frequencies for the 2.5 mm ST. ASiR-V showed greater noise reduction than ASiR for the STD and SOFT kernels, while keeping the same NPS. For the BONE kernel, ASiR-V presents a completely different behavior, with less noise reduction and a modified NPS. Noise properties of ASiR-V are dependent on the reconstruction slice thickness. The noise properties of ASiR-V suggest the need for further measurements and efforts to establish new CT protocols to optimize clinical imaging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  2. Gaseous electron multiplier-based soft x-ray plasma diagnostics development: Preliminary tests at ASDEX Upgrade.

    PubMed

    Chernyshova, M; Malinowski, K; Czarski, T; Wojeński, A; Vezinet, D; Poźniak, K T; Kasprowicz, G; Mazon, D; Jardin, A; Herrmann, A; Kowalska-Strzęciwilk, E; Krawczyk, R; Kolasiński, P; Zabołotny, W; Zienkiewicz, P

    2016-11-01

    A Gaseous Electron Multiplier (GEM)-based detector is being developed for soft X-ray diagnostics on tokamaks. Its main goal is to facilitate transport studies of impurities like tungsten. Such studies are very relevant to ITER, where the excessive accumulation of impurities in the plasma core should be avoided. This contribution provides details of the preliminary tests at ASDEX Upgrade (AUG) with a focus on the most important aspects for detector operation in harsh radiation environment. It was shown that both spatially and spectrally resolved data could be collected, in a reasonable agreement with other AUG diagnostics. Contributions to the GEM signal include also hard X-rays, gammas, and neutrons. First simulations of the effect of high-energy photons have helped understanding these contributions.

  3. Gaseous electron multiplier-based soft x-ray plasma diagnostics development: Preliminary tests at ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Chernyshova, M.; Malinowski, K.; Czarski, T.; Wojeński, A.; Vezinet, D.; Poźniak, K. T.; Kasprowicz, G.; Mazon, D.; Jardin, A.; Herrmann, A.; Kowalska-Strzeciwilk, E.; Krawczyk, R.; Kolasiński, P.; Zabołotny, W.; Zienkiewicz, P.

    2016-11-01

    A Gaseous Electron Multiplier (GEM)-based detector is being developed for soft X-ray diagnostics on tokamaks. Its main goal is to facilitate transport studies of impurities like tungsten. Such studies are very relevant to ITER, where the excessive accumulation of impurities in the plasma core should be avoided. This contribution provides details of the preliminary tests at ASDEX Upgrade (AUG) with a focus on the most important aspects for detector operation in harsh radiation environment. It was shown that both spatially and spectrally resolved data could be collected, in a reasonable agreement with other AUG diagnostics. Contributions to the GEM signal include also hard X-rays, gammas, and neutrons. First simulations of the effect of high-energy photons have helped understanding these contributions.

  4. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in daily living activities. The GUI was designed around a portable, low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker plus soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three different conditions (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
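
    The ITR figure quoted above is conventionally computed with the Wolpaw formula; the sketch below assumes that definition (the paper's exact computation may differ). Here n is the number of selectable commands, p the selection accuracy, and t_sel the time per selection in seconds.

    ```python
    import math

    def wolpaw_itr(n, p, t_sel):
        # Bits per selection, then scaled to bits per minute.
        bits = math.log2(n)
        if 0.0 < p < 1.0:
            bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * 60.0 / t_sel

    # Example: 9 commands at 95% accuracy, one selection every 4.5 s.
    print(round(wolpaw_itr(9, 0.95, 4.5), 2))
    ```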

  5. Toward Modular Soft Robotics: Proprioceptive Curvature Sensing and Sliding-Mode Control of Soft Bidirectional Bending Modules.

    PubMed

    Luo, Ming; Skorina, Erik H; Tao, Weijia; Chen, Fuchen; Ozel, Selim; Sun, Yinan; Onal, Cagdas D

    2017-06-01

    Real-world environments are complex, unstructured, and often fragile. Soft robotics offers a solution for robots to safely interact with the environment and human coworkers, but suffers from a host of challenges in sensing and control of continuously deformable bodies. To overcome these challenges, this article considers a modular soft robotic architecture that offers proprioceptive sensing of pressure-operated bending actuation modules. We present integrated custom magnetic curvature sensors embedded in the neutral axis of bidirectional bending actuators. We describe our recent advances in the design and fabrication of these modules to improve the reliability of proprioceptive curvature feedback over our prior work. In particular, we study the effect of dimensional parameters on improving the linearity of curvature measurements. In addition, we present a sliding-mode controller formulation that drives the binary solenoid valve states directly, giving the control system the ability to hold the actuator steady without continuous pressurization and depressurization. In comparison to other methods, this control approach does not rely on pulse width modulation and hence offers superior dynamic performance (i.e., faster response rates). Our experimental results indicate that the proposed soft robotic modules offer a large range of bending angles with monotonic and more linear embedded curvature measurements, and that the direct sliding-mode control system exhibits improved bandwidth and a notable reduction in binary valve actuation operations compared to our earlier iterative sliding-mode controller.
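
    The direct valve-drive logic lends itself to a compact sketch. The rule below is a generic sliding-mode law with a dead band, assuming a constant curvature setpoint; the gains, names, and three-state valve command are illustrative, not the authors' controller.

    ```python
    def valve_command(kappa_ref, kappa, kappa_dot, lam=2.0, band=0.05):
        # Sliding variable s = de/dt + lam * e for curvature error e.
        e = kappa_ref - kappa
        s = -kappa_dot + lam * e
        if s > band:
            return "pressurize"   # open the inlet solenoid valve
        if s < -band:
            return "vent"         # open the exhaust solenoid valve
        return "hold"             # both valves closed: steady state, no PWM
    ```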

  6. Fractal nematic colloids

    NASA Astrophysics Data System (ADS)

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometers to nanometers. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter.

  7. A method for the dynamic and thermal stress analysis of space shuttle surface insulation

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1975-01-01

    The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.

  8. Asymmetric soft-error resistant memory

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)

    1991-01-01

    A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.

  9. Space vehicle Viterbi decoder. [data converters, algorithms]

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s is presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.

  10. Experiments with a small behaviour controlled planetary rover

    NASA Technical Reports Server (NTRS)

    Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John

    1993-01-01

    A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly fewer computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight-bit processor and uses about ten kilobytes of memory.

  11. A service for the application of data quality information to NASA earth science satellite records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Xing, Z.; Fry, C.; Khalsa, S. J. S.; Huang, T.; Chen, G.; Chin, T. M.; Alarcon, C.

    2016-12-01

    A recurring demand in working with satellite-based earth science data records is the need to apply data quality information. Such quality information is often contained within the data files as an array of "flags", but can also be represented by more complex quality descriptions such as combinations of bit flags, or even other ancillary variables that can be applied as thresholds to the geophysical variable of interest. For example, with Level 2 granules from the Group for High Resolution Sea Surface Temperature (GHRSST) project, up to 6 independent variables could be used to screen the sea surface temperature measurements on a pixel-by-pixel basis. Quality screening of Level 3 data from the Soil Moisture Active Passive (SMAP) instrument can become even more complex, involving 161 unique bit states or conditions a user can screen for. The application of quality information is often a laborious process for the user until they understand the implications of all the flags and bit conditions, and it requires iterative approaches using custom software. The Virtual Quality Screening Service, a NASA ACCESS project, is addressing these issues and concerns. The project has developed an infrastructure to expose, apply, and extract quality screening information, building on known and proven NASA components for data extraction and subset-by-value, data discovery, and exposure of granule-based quality information to the user. Further sharing of results through well-defined URLs and web service specifications has also been implemented. The presentation will focus on an overall description of the technologies and informatics principles employed by the project. Examples of implementations of the end-to-end web service for quality screening with GHRSST and SMAP granules will be demonstrated.
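
    Pixel-level screening against a combination of bit flags reduces to a bitwise mask, as in the sketch below. The flag names and bit positions are invented for illustration; real GHRSST or SMAP products define their own bit tables.

    ```python
    import numpy as np

    # Hypothetical quality bits for a sea surface temperature granule.
    CLOUD     = 1 << 0
    RAIN      = 1 << 1
    HIGH_WIND = 1 << 3

    def screen(values, flags, bad_bits=CLOUD | RAIN | HIGH_WIND):
        # Mask out every pixel where any of the selected quality bits is set.
        keep = (flags & bad_bits) == 0
        return np.where(keep, values, np.nan)

    sst   = np.array([290.1, 291.4, 289.7, 292.0])
    flags = np.array([0b0000, 0b0001, 0b1000, 0b0000])
    print(screen(sst, flags))   # the CLOUD and HIGH_WIND pixels become NaN
    ```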

  12. Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.

    PubMed

    Cho, Soobum; Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can cause degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. First, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system.
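
    For reference, the quantity the DSI scheme targets is the PAPR of the time-domain OFDM symbol: peak instantaneous power divided by average power. A minimal computation, with random QPSK subcarriers standing in for an LTE resource grid:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_sc = 256
    # Frequency-domain QPSK symbols on every subcarrier.
    X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_sc)
    x = np.fft.ifft(X) * np.sqrt(n_sc)        # time-domain OFDM symbol

    power = np.abs(x) ** 2
    papr_db = 10 * np.log10(power.max() / power.mean())
    print(f"PAPR = {papr_db:.2f} dB")
    ```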

  13. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    O'Rourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
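
    The constrained optimization loop described above can be sketched as alternating gradient and projection steps. The sketch below assumes a whole-image orthonormal DCT and a Huber-type MRF penalty; the paper's exact non-Gaussian MRF model and quantizer geometry are not reproduced, and lo/hi (per-coefficient quantization cell bounds) are assumed given by the decoder.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def huber_grad(d, t=1.0):
        return np.clip(d, -t, t)               # derivative of a Huber penalty

    def mrf_grad(x):
        # Gradient of the sum of Huber penalties over horizontal and
        # vertical pixel-difference cliques.
        g = np.zeros_like(x)
        dv = huber_grad(np.diff(x, axis=0))
        dh = huber_grad(np.diff(x, axis=1))
        g[1:, :] += dv; g[:-1, :] -= dv
        g[:, 1:] += dh; g[:, :-1] -= dh
        return g

    def decompress(lo, hi, steps=50, step=0.1):
        # Start from the cell centroids (the conventional reconstruction).
        c = (lo + hi) / 2.0
        x = idctn(c, norm="ortho")
        for _ in range(steps):
            x = x - step * mrf_grad(x)                    # fit the MRF model
            c = np.clip(dctn(x, norm="ortho"), lo, hi)    # project into cells
            x = idctn(c, norm="ortho")
        return x
    ```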

  14. Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink

    PubMed Central

    Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can cause degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. First, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system. PMID:24883376

  15. Inductive electronegativity scale. Iterative calculation of inductive partial charges.

    PubMed

    Cherkasov, Artem

    2003-01-01

    A number of novel QSAR descriptors have been introduced on the basis of previously elaborated models for steric and inductive effects. The developed "inductive" parameters include absolute and effective electronegativity, atomic partial charges, and local and global chemical hardness and softness. Being based on traditional inductive and steric substituent constants, these 3D descriptors provide valuable insight into intramolecular steric and electronic interactions and can find broad application in structure-activity studies. A possible interpretation of the physical meaning of the inductive descriptors has been suggested by considering a neutral molecule as an electrical capacitor formed by charged atomic spheres. This approximation relates the inductive chemical softness and hardness of bound atom(s) to the total area of the facings of the electrical capacitor formed by the atom(s) and the rest of the molecule. The derived full electronegativity equalization scheme allows iterative calculation of inductive partial charges on the basis of atomic electronegativities, covalent radii, and intramolecular distances. A range of inductive descriptors has been computed for a variety of organic compounds. The calculated inductive charges in the studied molecules have been validated against experimental C-1s electron core binding energies and molecular dipole moments. Several semiempirical chemical rules, such as the arithmetic-mean rule for equalized electronegativity, the principle of maximum hardness, and the principle of hardness borrowing, can be explicitly illustrated in the framework of the developed approach.
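
    As a point of reference for how electronegativity equalization yields partial charges, here is a minimal sketch of the classical EEM linear system (equal effective electronegativities subject to charge conservation). This is a generic textbook formulation, not the paper's specific inductive scheme, and the per-atom chi/eta values and unit conventions in the demo are illustrative assumptions.

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Electronegativity equalization: solve for charges q such that all
    effective electronegativities are equal and sum(q) = total_charge."""
    n = len(chi)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    a = np.zeros((n + 1, n + 1))
    with np.errstate(divide="ignore"):
        a[:n, :n] = np.where(np.eye(n, dtype=bool), 2.0 * np.asarray(eta), 1.0 / d)
    a[:n, n] = -1.0           # common molecular potential (Lagrange multiplier)
    a[n, :n] = 1.0            # charge-conservation row
    b = np.concatenate([-np.asarray(chi, dtype=float), [total_charge]])
    return np.linalg.solve(a, b)[:n]

# hypothetical diatomic demo (values illustrative, atomic units)
coords = np.array([[0.0, 0.0, 0.0], [2.1, 0.0, 0.0]])
q = eem_charges(chi=[2.6, 3.2], eta=[4.0, 5.0], coords=coords)
print(q)   # charges sum to zero; the more electronegative atom goes negative
```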

  16. Nonuniform update for sparse target recovery in fluorescence molecular tomography accelerated by ordered subsets.

    PubMed

    Zhu, Dianwen; Li, Changqing

    2014-12-01

    Fluorescence molecular tomography (FMT) is a promising imaging modality and has been actively studied in the past two decades since it can locate a specific tumor position three-dimensionally in small animals. However, it remains a challenging task to obtain fast, robust and accurate reconstructions of the fluorescent probe distribution in small animals due to the large computational burden, the noisy measurements and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method in combination with L1 regularization and an ordered subsets technique (NUMOS) to take care of the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost the convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and a phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart by obtaining a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subsets (OS) technique provided an additional 5-fold and 3-fold speed-up for the simulation and phantom experiments, respectively, without degrading image quality. When compared with popular L1 algorithms such as the iterative soft-thresholding algorithm (ISTA) and the fast iterative soft-thresholding algorithm (FISTA), NUMOS also outperforms them by obtaining a better image in a much shorter time.
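
    For reference, the uniform-update baseline that NUMOS is compared against is plain ISTA: a gradient step with a single global step size 1/L followed by soft thresholding. A minimal numpy sketch follows; the operator A, data b, and regularization weight lam are placeholders, and NUMOS would replace the uniform 1/L step with a per-pixel preconditioner.

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the smooth term
        z = x - g / L                      # uniform gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]   # sparse target
x_hat = ista(A, A @ x_true, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovers the sparse support
```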

  17. Sparse-view proton computed tomography using modulated proton beams.

    PubMed

    Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong

    2015-02-01

    Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to imaging approaches based on a trajectory-tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The geometric straight-ray model assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. To improve the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density; the bone, for example, was reconstructed within 1% error. EM-based algorithms produced increased image noise and RMSE as the iteration count reached about 20, while the POCS-based algorithms showed monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the regions of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although the images still need to be improved for practical applications to treatment planning, proton CT imaging by use of modulated beams in sparse-view sampling has demonstrated its feasibility.

  18. Notice of Violation of IEEE Publication Principles: Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    NASA Astrophysics Data System (ADS)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism, and error correction, are exploited to establish the radiation-hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected using the redundant relationship of the residues at a correction module allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of the RRNS by physically separating the operation channels of different moduli. The case studies show at least an order-of-magnitude decrease in the soft error rate (SER) compared to non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10^{-12} to 10^{-17} when the number of processing steps in the datapath is 10^{6}. The proposed scheme can even achieve lower area and latency overheads than a design without radiation hardening, since RRNS can reduce the operational complexity of the datapath.
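
    To make the RRNS error-detection idea concrete, the sketch below encodes an integer as residues modulo a set of pairwise-coprime moduli, injects a single bit flip into one residue channel, and detects the upset by checking the redundant residues for consistency. The moduli are illustrative assumptions; a full RRNS decoder would additionally locate and correct the faulty channel by decoding over subsets of residues.

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder reconstruction for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

info_mod, red_mod = [3, 5, 7], [11, 13]   # information + redundant moduli
x = 97
residues = [x % m for m in info_mod + red_mod]   # RRNS codeword
residues[1] ^= 0b10                              # SEU: bit flip in one channel

est = crt(residues[:3], info_mod)                # decode information residues
consistent = all(est % m == r for r, m in zip(residues[3:], red_mod))
print(f"decoded {est}, consistent: {consistent}")  # inconsistency flags the upset
```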

  19. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
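
    For the simplest case covered by the general LLR formula mentioned above, BPSK over AWGN, the code-bit LLR reduces to the familiar closed form LLR = 2y/sigma^2. A small numpy sketch (the bit mapping, noise level, and block length are arbitrary choices, not the article's simulation setup):

```python
import numpy as np

def bpsk_llr(y, noise_var):
    """Code-bit LLRs for BPSK (bit 0 -> +1, bit 1 -> -1) over AWGN:
    LLR = log P(y|b=0) / P(y|b=1) = 2*y / sigma^2."""
    return 2.0 * y / noise_var

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 8)
sigma2 = 0.1
y = (1 - 2 * bits) + rng.normal(0.0, np.sqrt(sigma2), bits.size)  # noisy symbols
llr = bpsk_llr(y, sigma2)
hard = (llr < 0).astype(int)   # hard decision: sign of the LLR
print(bits, hard)              # at this SNR the hard decisions match the bits
```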

  20. Differential dynamic microscopy microrheology of soft materials: A tracking-free determination of the frequency-dependent loss and storage moduli

    NASA Astrophysics Data System (ADS)

    Edera, Paolo; Bergamini, Davide; Trappe, Véronique; Giavazzi, Fabio; Cerbino, Roberto

    2017-12-01

    Particle-tracking microrheology (PT-μr) exploits the thermal motion of embedded particles to probe the local mechanical properties of soft materials. Despite its appealing conceptual simplicity, PT-μr requires calibration procedures and operating assumptions that constitute a practical barrier to its wider application. Here we demonstrate differential dynamic microscopy microrheology (DDM-μr), a tracking-free approach based on the multiscale, temporal correlation study of the image intensity fluctuations that are observed in microscopy experiments as a consequence of the translational and rotational motion of the tracers. We show that the mechanical moduli of an arbitrary sample are determined correctly over a wide frequency range provided that the standard DDM analysis is reinforced with an iterative, self-consistent procedure that fully exploits the multiscale information made available by DDM. Our approach to DDM-μr does not require any prior calibration, is in agreement with both traditional rheology and diffusing wave spectroscopy microrheology, and works in conditions where PT-μr fails, thus providing an operationally simple, calibration-free probe of soft materials.

  1. Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.

    PubMed

    Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C

    2015-05-13

    This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects as a real octopus arm does. Despite the simple design, the soft conical-shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network that learns the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and the forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an accuracy of 8% average relative error with respect to the total arm length.
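
    The mapping the paper learns, from desired tip position to cable forces, is a standard feed-forward regression. A minimal numpy sketch of the forward pass is below; the layer sizes are arbitrary and the weights are random placeholders, whereas in the paper they would come from supervised training on samples collected from the physical arm.

```python
import numpy as np

def mlp_forward(tip_xyz, w1, b1, w2, b2):
    """One hidden tanh layer: desired tip position (x, y, z) -> cable forces."""
    h = np.tanh(tip_xyz @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
n_hidden, n_cables = 16, 4                # layer sizes are arbitrary choices
# placeholder weights; in practice they come from supervised training on
# (tip position, cable force) pairs sampled from the physical manipulator
w1, b1 = rng.normal(size=(3, n_hidden)), np.zeros(n_hidden)
w2, b2 = rng.normal(size=(n_hidden, n_cables)), np.zeros(n_cables)

forces = mlp_forward(np.array([0.05, 0.02, 0.10]), w1, b1, w2, b2)
print(forces)   # four cable tensions for the requested tip position (untrained)
```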

  2. Stress concentration in periodically rough Hertzian contact: Hertz to soft-flat-punch transition

    PubMed Central

    Raphaël, E.; Léger, L.; Restagno, F.; Poulard, C.

    2016-01-01

    We report on the elastic contact between a spherical lens and a patterned substrate, composed of a hexagonal lattice of cylindrical pillars. The stress field and the size of the contact area are obtained by means of numerical methods: a superposition method of discrete pressure elements and an iterative bisection-like method. For small indentations, a transition from a Hertzian to a soft-flat-punch behaviour is observed when the surface fraction of the substrate that is covered by the pillars is increased. In particular, we present a master curve defined by two dimensionless parameters, which allows one to predict the stress at the centre of the contact region in terms of the surface fraction occupied by pillars. The transition between the limiting contact regimes, Hertzian and soft-flat-punch, is well described by a rational function. Additionally, a simple model to describe the Boussinesq–Cerruti-like contact between the lens and a single elastic pillar, which takes into account the pillar geometry and the elastic properties of the two bodies, is presented. PMID:27713659

  3. Vocal-fold collision mass as a differentiator between registers in the low-pitch range.

    PubMed

    Vilkman, E; Alku, P; Laukkanen, A M

    1995-03-01

    Register shift between the chest and falsetto register is generally studied in the higher-than-speaking pitch range. However, a similar difference can also be produced at speaking pitch level. The shift from breathy "falsetto" phonation to normal chest voice phonation was studied in normal female (pitch range 170-180 Hz) and male (pitch range 94-110 Hz) subjects. The phonations gliding from falsetto to normal chest voice were analyzed using iterative adaptive inverse filtering and electroglottography. Both trained and untrained, as well as female and male subjects, were able to produce an abrupt register shift from soft falsetto to soft chest register phonation. The differences between male and female speakers in the glottal flow waveforms were smaller than expected. The register shift is interpreted in terms of a "critical mass" concept of chest register phonation.

  4. Gaseous electron multiplier-based soft x-ray plasma diagnostics development: Preliminary tests at ASDEX Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chernyshova, M., E-mail: maryna.chernyshova@ipplm.pl; Malinowski, K.; Czarski, T.

    2016-11-15

    A Gaseous Electron Multiplier (GEM)-based detector is being developed for soft X-ray diagnostics on tokamaks. Its main goal is to facilitate transport studies of impurities like tungsten. Such studies are very relevant to ITER, where excessive accumulation of impurities in the plasma core must be avoided. This contribution provides details of the preliminary tests at ASDEX Upgrade (AUG), with a focus on the aspects most important for detector operation in a harsh radiation environment. It was shown that both spatially and spectrally resolved data could be collected, in reasonable agreement with other AUG diagnostics. Contributions to the GEM signal also include hard X-rays, gammas, and neutrons. First simulations of the effect of high-energy photons have helped in understanding these contributions.

  5. Fractal nematic colloids

    PubMed Central

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity, where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter. PMID:28117325

  6. An iterative hyperelastic parameters reconstruction for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Samani, Abbas

    2008-03-01

    In breast elastography, breast tissues usually undergo large compressions, resulting in significant geometric and structural changes and, consequently, nonlinear mechanical behavior. In this study, an elastography technique is presented in which parameters characterizing the nonlinear behavior of tissue are reconstructed. Such parameters can be used for tumor tissue classification. To model the nonlinear behavior, tissues are treated as hyperelastic materials. The proposed technique uses a constrained iterative inversion method to reconstruct the tissue hyperelastic parameters. The reconstruction technique uses a nonlinear finite element (FE) model for solving the forward problem. In this research, we applied the Yeoh and polynomial models to describe the tissue hyperelasticity. To mimic the breast geometry, we used a computational phantom comprising a hemisphere connected to a cylinder. This phantom consists of two types of soft tissue, mimicking adipose and fibroglandular tissue, and a tumor. Simulation results show the feasibility of the proposed method in reconstructing the hyperelastic parameters of the tumor tissue.
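
    The Yeoh model mentioned above expresses the strain-energy density as a polynomial in the first strain invariant, W = sum_i c_i (I1 - 3)^i, and the inverse problem searches for the c_i that reproduce the measured response. A small sketch of the forward constitutive evaluation for incompressible uniaxial stretch follows; the parameter values are hypothetical, and the paper's reconstruction wraps a full FE solver around such a forward model.

```python
import numpy as np

def yeoh_energy(stretch, c):
    """Yeoh strain-energy density W = sum_i c_i (I1 - 3)^i for an incompressible
    uniaxial stretch: principal stretches (L, 1/sqrt(L), 1/sqrt(L))."""
    i1 = stretch ** 2 + 2.0 / stretch          # first strain invariant
    return sum(ci * (i1 - 3.0) ** (k + 1) for k, ci in enumerate(c))

def uniaxial_stress(stretch, c, h=1e-6):
    """Nominal stress dW/d(stretch) via central difference."""
    return (yeoh_energy(stretch + h, c) - yeoh_energy(stretch - h, c)) / (2 * h)

c_tissue = [2.0e3, 1.0e3, 0.5e3]   # hypothetical Pa-scale Yeoh parameters
for lam in (1.05, 1.10, 1.20):
    print(lam, uniaxial_stress(lam, c_tissue))  # stress stiffens with stretch
```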

  7. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
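
    The phase update at the heart of such code-aided schemes is the ML-style estimate theta = arg(sum_k r_k * conj(E[s_k])), where the soft symbols E[s_k] come from the decoder. The sketch below substitutes a simple tanh soft decision for the MAP decoder output, so it is a stand-in for the decoder interaction, with an arbitrary block length and noise level. Note that for BPSK the estimate is only defined modulo pi; removing that residual ambiguity is exactly what the code-aided ambiguity resolution in the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta = 0.4
s = rng.choice(np.array([1.0 + 0j, -1.0 + 0j]), 64)        # coded BPSK symbols
sigma2 = 0.01
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=64) + 1j * rng.normal(size=64))
r = s * np.exp(1j * true_theta) + noise                     # rotated observation

theta = 0.0
for _ in range(5):   # iterate between (stand-in) decoding and phase estimation
    # stand-in for MAP decoder output: soft symbols E[s_k] = tanh(2 y_k / sigma2)
    y = np.real(r * np.exp(-1j * theta))
    soft = np.tanh(2.0 * y / sigma2).astype(complex)
    theta = np.angle(np.sum(r * np.conj(soft)))             # ML-style update
print(true_theta, theta)   # converges near the true rotation (modulo pi)
```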

  8. GEM detectors development for radiation environment: neutron tests and simulations

    NASA Astrophysics Data System (ADS)

    Chernyshova, Maryna; Jednoróg, Sławomir; Malinowski, Karol; Czarski, Tomasz; Ziółkowski, Adam; Bieńkowska, Barbara; Prokopowicz, Rafał; Łaszyńska, Ewa; Kowalska-Strzeciwilk, Ewa; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Wojeński, Andrzej; Krawczyk, Rafał D.; Linczuk, Paweł; Potrykus, Paweł; Bajdel, Barcel

    2016-09-01

    One of the requests from the ongoing ITER-Like Wall Project is to have diagnostics for Soft X-Ray (SXR) monitoring in a tokamak. Such diagnostics should be focused on tungsten emission measurements, as increased attention is currently being paid to tungsten because it has become the main candidate plasma-facing material for ITER and future fusion reactors. In addition, such diagnostics should be able to withstand the harsh radiation environment of a tokamak during its operation. The presented work is related to the development of such diagnostics based on Gas Electron Multiplier (GEM) technology. More specifically, the influence of neutron radiation on the performance of GEM detectors is studied both experimentally and through computer simulations. The neutron-induced radioactivity (after exposure to the neutron source) was found to be insignificant compared with the impact of other secondary neutron-reaction products (during the exposure).

  9. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P < .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013. Online supplemental material is available for this article. PMID:24091359

  10. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  11. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
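
    The error-propagation idea common to both PRESAGE records above can be modeled in a few lines: carry the array address forward across iterations (rather than recomputing it from scratch each time) so that a corrupted address register keeps flowing and can be checked once, against a closed-form value, at the loop exit. The Python below is only a conceptual model; PRESAGE itself is a compiler transformation over low-level address arithmetic, and the function and variable names are made up for illustration.

```python
def sum_strided(a, base, stride, n):
    """Carry the address forward (addr += stride) so a bit-flip in the address
    register keeps flowing, then check it once at the loop exit."""
    addr = base
    total = 0
    for _ in range(n):
        total += a[addr]
        addr += stride
    if addr != base + n * stride:        # detector at the loop exit point
        raise RuntimeError("address corruption detected")
    return total

a = list(range(100))
print(sum_strided(a, base=0, stride=2, n=50))   # sums the even-indexed entries
```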

  12. PAPR reduction in CO-OFDM systems using IPTS and modified clipping and filtering

    NASA Astrophysics Data System (ADS)

    Tong, Zheng-rong; Hu, Ya-nong; Zhang, Wei-hua

    2018-05-01

    Aiming at the problem of the peak-to-average power ratio (PAPR) in coherent optical orthogonal frequency division multiplexing (CO-OFDM), a hybrid PAPR reduction technique for the CO-OFDM system, combining an iterative partial transmit sequence (IPTS) scheme with modified clipping and filtering (MCF), is proposed. The simulation results show that, at a complementary cumulative distribution function (CCDF) of 10^{-4}, the PAPR of the proposed scheme is reduced by 1.86 dB and 2.13 dB compared with the IPTS and CF schemes, respectively. Meanwhile, at a bit error rate (BER) of 10^{-3}, the optical signal-to-noise ratio (OSNR) is improved by 1.57 dB and 0.66 dB compared with the CF and IPTS-CF schemes, respectively.
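
    A bare-bones version of the clipping-and-filtering half of the scheme is sketched below: clip the time-domain envelope at a set ratio of the RMS level, then zero the out-of-band bins in the frequency domain. This is the classical CF operation, not the paper's modified variant, and the FFT size, clipping ratio, and band layout are arbitrary assumptions.

```python
import numpy as np

def papr_db(s):
    p = np.abs(s) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip_and_filter(x, clip_ratio, n_data):
    """One classical clipping-and-filtering pass: limit the envelope to
    clip_ratio times the RMS level, then keep only the in-band bins."""
    a = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))       # clipping level
    scale = np.minimum(1.0, a / np.maximum(np.abs(x), 1e-12))
    spec = np.fft.fft(x * scale)      # clipped signal in the frequency domain
    spec[n_data:] = 0.0               # filter out the out-of-band distortion
    return np.fft.ifft(spec)

rng = np.random.default_rng(3)
n_fft, n_data = 256, 128
bins = np.zeros(n_fft, dtype=complex)
bins[:n_data] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_data)
x = np.fft.ifft(bins)
y = clip_and_filter(x, clip_ratio=1.4, n_data=n_data)
print(f"{papr_db(x):.2f} dB -> {papr_db(y):.2f} dB")   # PAPR drops after CF
```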

  13. Drilling electrode for real-time measurement of electrical impedance in bone tissues.

    PubMed

    Dai, Yu; Xue, Yuan; Zhang, Jianxun

    2014-03-01

    In order to prevent possible damage to soft tissues, reliable monitoring methods are required to provide valuable information on the condition of the bone being cut. This paper describes the design of an electrical-impedance-sensing drill developed to estimate the relative position between the drill and the bone being drilled. The two-electrode method is applied to continuously measure the electrical impedance during the drill feeding movement: two copper wire brushes conduct electricity to the rotating drill, so that the drill itself serves as one electrode, while a needle inserted into the soft tissues adjacent to the bone being drilled acts as the other electrode. Considering that the recorded electrical impedance is correlated with the insertion depth of the drill, we theoretically calculate the electrode-tissue contact impedance and prove that the rate of impedance change varies considerably when the drill bit crosses the boundary between two different bone tissues. Therefore, the rate of impedance change is used to determine whether the tip of the drill is located in cortical bone, in cancellous bone, or in cortical bone near a boundary with soft tissue. In vitro experiments on porcine thoracic spines were performed to demonstrate the feasibility of the impedance-sensing drill. The experimental results indicate that the drill, used with the proposed data-processing method, can provide accurate and reliable breakthrough detection in the bone-drilling process.
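
    The decision rule, flagging a layer transition when the rate of impedance change with depth exceeds a threshold, is easy to prototype. The sketch below uses synthetic, made-up impedance numbers purely to illustrate the thresholding; in the real system the threshold would be derived from the contact-impedance model.

```python
import numpy as np

def detect_breakthrough(depth_mm, impedance_ohm, rate_threshold):
    """Flag the first sample where |dZ/dd| exceeds a threshold, signalling a
    transition between bone layers (e.g., cortical -> cancellous)."""
    rate = np.gradient(impedance_ohm, depth_mm)
    idx = np.argmax(np.abs(rate) > rate_threshold)
    return idx if np.abs(rate[idx]) > rate_threshold else None

# synthetic profile: slow drift in cortical bone, sharp change at 3 mm
depth = np.linspace(0.0, 6.0, 61)
z = np.where(depth < 3.0, 5000 - 100 * depth, 2600 - 600 * (depth - 3.0))
i = detect_breakthrough(depth, z, rate_threshold=300.0)
print(None if i is None else f"boundary near {depth[i]:.1f} mm")
```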

  14. Iterative metal artefact reduction in CT: can dedicated algorithms improve image quality after spinal instrumentation?

    PubMed

    Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J

    2017-05-01

    To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT images of 24 patients acquired between March 2015 and July 2016 were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentation, and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures, including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP (p<0.0001). Results of the subjective image analysis showed that both iMAR algorithms improved the visualisation of soft-tissue structures (median iMAR-Algo1=3, interquartile range [IQR] 1.5-3; iMAR-Algo2=4, IQR 3.5-4) and bone structures (iMAR-Algo1=3, IQR 3-4; iMAR-Algo2=4, IQR 4-5) compared with WFBP (soft tissue: median 2, IQR 0.5-2; bone structures: median 2, IQR 1-3; p<0.0001). Compared with iMAR-Algo1, objective artefact reduction and subjective visualisation of soft-tissue and bone structures were improved with iMAR-Algo2 (p<0.0001). Both iMAR algorithms reduced artefacts compared with WFBP; however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2010-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  16. Advanced Geothermal Turbodrill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. C. Maurer

    2000-05-01

    Approximately 50% of the cost of a new geothermal power plant is in the wells that must be drilled. Compared to the majority of oil and gas wells, geothermal wells are more difficult and costly to drill for several reasons. First, most U.S. geothermal resources consist of hot, hard crystalline rock formations which drill much more slowly than the relatively soft sedimentary formations associated with most oil and gas production. Second, high downhole temperatures can greatly shorten equipment life or preclude the use of some technologies altogether. Third, producing viable levels of electricity from geothermal fields requires the use of large-diameter bores and a high degree of fluid communication, both of which increase drilling and completion costs. Optimizing fluid communication often requires creation of a directional well to intersect the best and largest number of fractures capable of producing hot geothermal fluids. Moineau motor stators made with elastomers cannot operate at geothermal temperatures, so they are limited to the upper portion of the hole. To overcome these limitations, Maurer Engineering Inc. (MEI) has developed a turbodrill that does not use elastomers and therefore can operate at geothermal temperatures. This new turbodrill uses a special gear assembly to reduce the output speed, thus allowing a larger range of bit types, especially tri-cone roller bits, which are the bits of choice for drilling hard crystalline formations. The Advanced Geothermal Turbodrill (AGT) represents a significant improvement for drilling geothermal wells and has the potential to significantly reduce drilling costs while increasing production, thereby making geothermal energy less expensive and better able to compete with fossil fuels. The final field test of the AGT will prepare the tool for successful commercialization.

  17. Mechanical specific energy versus depth of cut in rock cutting and drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yaneng; Zhang, Wu; Gamwo, Isaac

    The relationship between Mechanical Specific Energy (MSE) and the Rate of Penetration (ROP), or equivalently the depth of cut per revolution, provides an important measure for strategizing a drilling operation. This study explores how MSE evolves with depth of cut, and presents a concerted effort that encompasses analytical, computational and experimental approaches. A simple model for the relationship between MSE and cutting depth is first derived with consideration of the wear progression of a circular cutter. This is an extension of Detournay and Defourny's phenomenological cutting model. Wear is modeled as a flat contact area at the bottom of a cutter, referred to as a wear flat, and in the past that wear flat has often been considered fixed during cutting. During a drilling operation by a full bit that consists of multiple circular cutters, however, the wear flat length may increase because of the various wear mechanisms involved. The wear progression of cutters generally results in reduced efficiency, with either increased MSE or decreased ROP. An accurate estimate of the removed rock volume is also found to be important for the evaluation of MSE. The derived model is compared with experimental results from a single circular cutter, for cutting soft rock under ambient pressure with the actual depth measured by a micrometer, and for cutting high-strength rock under high pressure with the actual cutting area measured by a confocal microscope. Lastly, the model is employed to interpret the evolution of MSE with depth of cut for a full drilling bit under confining pressure. The general form of the developed model's equation is found to describe the experimental data well and can be applied to interpret drilling data for a full bit.

  18. Mechanical specific energy versus depth of cut in rock cutting and drilling

    DOE PAGES

    Zhou, Yaneng; Zhang, Wu; Gamwo, Isaac; ...

    2017-12-07

    The relationship between Mechanical Specific Energy (MSE) and the Rate of Penetration (ROP), or equivalently the depth of cut per revolution, provides an important measure for strategizing a drilling operation. This study explores how MSE evolves with depth of cut, and presents a concerted effort that encompasses analytical, computational and experimental approaches. A simple model for the relationship between MSE and cutting depth is first derived with consideration of the wear progression of a circular cutter. This is an extension of Detournay and Defourny's phenomenological cutting model. Wear is modeled as a flat contact area at the bottom of a cutter, referred to as a wear flat, and in the past that wear flat has often been considered fixed during cutting. During a drilling operation by a full bit that consists of multiple circular cutters, however, the wear flat length may increase because of the various wear mechanisms involved. The wear progression of cutters generally results in reduced efficiency, with either increased MSE or decreased ROP. An accurate estimate of the removed rock volume is also found to be important for the evaluation of MSE. The derived model is compared with experimental results from a single circular cutter, for cutting soft rock under ambient pressure with the actual depth measured by a micrometer, and for cutting high-strength rock under high pressure with the actual cutting area measured by a confocal microscope. Lastly, the model is employed to interpret the evolution of MSE with depth of cut for a full drilling bit under confining pressure. The general form of the developed model's equation is found to describe the experimental data well and can be applied to interpret drilling data for a full bit.
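
    As background for both records above, the classical Teale definition of MSE, thrust work plus rotary work per unit volume of rock removed, can be computed directly; the extended circular-cutter wear model in the papers builds on this quantity. The sketch below uses the textbook formula, not the papers' extension, and all operating-point numbers are hypothetical.

```python
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_m_per_hr, bit_diam_m):
    """Teale-style MSE in Pa (J per m^3 of rock removed):
    MSE = WOB/A + 2*pi*N*T / (A*ROP), with N in rev/s and ROP in m/s."""
    area = math.pi * (bit_diam_m / 2.0) ** 2
    n_rev_s = rpm / 60.0
    rop_m_s = rop_m_per_hr / 3600.0
    return wob_n / area + (2.0 * math.pi * n_rev_s * torque_nm) / (area * rop_m_s)

# hypothetical operating points: a deeper cut per revolution lowers MSE
for rop in (2.0, 4.0, 8.0):   # m/h
    mse = mechanical_specific_energy(5e4, 500.0, rpm=60, rop_m_per_hr=rop,
                                     bit_diam_m=0.2)
    print(f"ROP {rop} m/h -> MSE {mse / 1e6:.0f} MPa")
```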

  19. An improved real time superresolution FPGA system

    NASA Astrophysics Data System (ADS)

    Lakshmi Narasimha, Pramod; Mudigoudar, Basavaraj; Yue, Zhanfeng; Topiwala, Pankaj

    2009-05-01

    In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and of low quality due to motion, transmission bottlenecks, etc., and postprocessing can enhance it. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high-quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient-based technique is used to register the frames at the sub-pixel level. Once we obtain the high-resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and then migrated to hardware, achieving 320x240 -> 1280x960 at about 30 fps, a 16X superresolution in total pixels. Various input parameters, such as the size of the input image, the enlarging factor, and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives us a fine balance between the number of bits and performance. The proposed system is robust and highly efficient, and we have shown the performance improvement of the hardware superresolution over the software version (C code).
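
    The iterative back-projection step described above repeatedly simulates the low-resolution observation from the current high-resolution estimate and back-projects the residual. A single-frame toy version with a box-average imaging model is sketched below; the actual system fuses multiple sub-pixel-registered frames, and the imaging model here is an assumption.

```python
import numpy as np

def downsample(x, f):
    """Box-average downsampling by factor f (a crude imaging model)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def iterative_backprojection(low, f, n_iter=20, beta=1.0):
    """Classic IBP: refine the high-res estimate by back-projecting the
    residual between the observed and simulated low-res images."""
    high = np.kron(low, np.ones((f, f)))              # initial upsampled guess
    for _ in range(n_iter):
        err = low - downsample(high, f)               # low-res residual
        high += beta * np.kron(err, np.ones((f, f)))  # back-project residual
    return high

rng = np.random.default_rng(4)
truth = rng.random((32, 32))
low = downsample(truth, 4)
est = iterative_backprojection(low, 4)
print(np.abs(downsample(est, 4) - low).max())   # residual shrinks toward zero
```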

  20. FPGA-Based, Self-Checking, Fault-Tolerant Computers

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2004-01-01

    A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade system performance and require support from external hardware and software. In comparison with other fault-tolerant computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system. The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local-memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.

  1. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  2. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  3. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
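
    For orientation, the unaccelerated building block referred to in all three records above is FISTA: ISTA plus a Nesterov momentum step, which yields the O(1/k^2) ("quadratic") convergence rate mentioned in the abstracts. A generic numpy sketch follows; the papers' variants replace the plain gradient step with an OS-SART subproblem, which this sketch does not model, and A, b, and lam are placeholders.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    ISTA plus a momentum (Nesterov) extrapolation on the iterates."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        z = y - A.T @ (A @ y - b) / L      # gradient step at the extrapolate
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[3, 50, 90]] = [2.0, -1.0, 1.0]
print(np.flatnonzero(np.abs(fista(A, A @ x_true, lam=0.1)) > 0.1))
```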

  4. Swarm intelligence inspired shills and the evolution of cooperation.

    PubMed

    Duan, Haibin; Sun, Changhao

    2014-06-09

    Many hostile scenarios exist in real-life situations, where cooperation is disfavored and the collective behavior needs intervention for system efficiency improvement. Towards this end, the framework of soft control provides a powerful tool by introducing controllable agents called shills, who are allowed to follow well-designed updating rules for varying missions. Inspired by the swarm intelligence emerging from flocks of birds, we explore here the dependence of the evolution of cooperation on soft control by an evolutionary iterated prisoner's dilemma (IPD) game staged on square lattices, where the shills adopt a particle swarm optimization (PSO) mechanism for strategy updating. We demonstrate not only that cooperation can be promoted by shills effectively seeking out potentially better strategies and spreading them to others, but also that the frequency of cooperation can be arbitrarily controlled by choosing appropriate parameter settings. Moreover, we show that adding more shills does not contribute to further cooperation promotion, while assigning higher weights to the collective knowledge for strategy updating proves an efficient way to induce cooperative behavior. Our research provides insights into cooperation evolution in the presence of PSO-inspired shills, and we hope it will be inspirational for future studies focusing on swarm-intelligence-based soft control.
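
    The shills' strategy update follows the standard PSO recursion: each velocity blends inertia, attraction to the agent's personal best, and attraction to the swarm's global best (the "collective knowledge" whose weight the paper varies). The sketch below substitutes a simple quadratic payoff for the accumulated IPD payoff on the lattice, so the fitness function and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_shills, dim = 8, 2                       # strategy parameters per shill
x = rng.random((n_shills, dim))            # current strategies in [0, 1]
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.full(n_shills, -np.inf)

def payoff(s):
    """Stand-in fitness; the accumulated IPD payoff would go here."""
    return -np.sum((s - 0.7) ** 2, axis=1)

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
for _ in range(50):
    val = payoff(x)
    better = val > pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmax(pbest_val)]    # collective knowledge
    r1, r2 = rng.random((2, n_shills, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
print(x.mean(axis=0))   # strategies cluster near the payoff optimum (0.7)
```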

  5. Nanotechnology regulation: a study in claims making.

    PubMed

    Malloy, Timothy F

    2011-01-25

    There appears to be consensus on the notion that the hazards of nanotechnology are a social problem in need of resolution, but much dispute remains over what that resolution should be. There are a variety of potential policy tools for tackling this challenge, including conventional direct regulation, self-regulation, tort liability, financial guarantees, and more. The literature in this area is replete with proposals embracing one or more of these tools, typically using conventional regulation as a foil in which its inadequacy is presented as justification for a new proposed approach. At its core, the existing literature raises a critical question: What is the most effective role of government as regulator in these circumstances? This article explores that question by focusing upon two policy approaches in particular: conventional regulation and self-regulation, often described as hard law and soft law, respectively. Drawing from the sociology of social problems, the article examines the soft law construction of the nanotechnology problem and the associated solutions, with emphasis on the claims-making strategies used. In particular, it critically examines the rhetoric and underlying grounds for the soft law approach. It also sets out the grounds and framework for an alternative construction and solution: the concept of iterative regulation.

  6. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
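
    The estimator alternates two proximal operations: soft-thresholding for sparsity and an eigenvalue projection for positive definiteness. The sketch below shows one pass of each step on a sample covariance matrix; the full method iterates them inside an ADMM loop, and the threshold and eigenvalue floor used here are arbitrary assumptions.

```python
import numpy as np

def soft_threshold_offdiag(S, lam):
    """Soft-threshold the off-diagonal entries, keeping the diagonal intact."""
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def project_eigenvalues(S, floor):
    """Project a symmetric matrix onto {matrices with eigenvalues >= floor}."""
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, floor)) @ V.T   # V diag(max(w, floor)) V^T

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 10))
S = np.cov(X, rowvar=False)
est = project_eigenvalues(soft_threshold_offdiag(S, lam=0.1), floor=1e-3)
print(np.linalg.eigvalsh(est).min())   # >= floor: sparse and positive definite
```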

  7. Analysis of computer images in the presence of metals

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Ingacheva, Anastasia; Prun, Victor; Nikolaev, Dmitry; Chukalina, Marina; Ferrero, Claudio; Asadchikov, Victor

    2018-04-01

    Artifacts caused by intensely absorbing inclusions are encountered in computed tomography via polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction in the presence of high-Z inclusions, we previously proposed, and tested with synthetic data, an iterative technique with a soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up of the Institute of Crystallography FSRC "Crystallography and Photonics" RAS, in which tomographic scans were successfully made of a temporary tooth without an inclusion and with a Pb inclusion.

  8. Proton irradiation effects on advanced digital and microwave III-V components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hash, G.L.; Schwank, J.R.; Shaneyfelt, M.R.

    1994-09-01

    A wide range of advanced III-V components suitable for use in high-speed satellite communication systems were evaluated for displacement damage and single-event effects in high-energy, high-fluence proton environments. Transistors and integrated circuits (both digital and MMIC) were irradiated with protons at energies from 41 to 197 MeV and at fluences from 10^{10} to 2×10^{14} protons/cm^2. Large soft-error rates were measured for digital GaAs MESFET (3×10^{-5} errors/bit-day) and heterojunction bipolar circuits (10^{-5} errors/bit-day). No transient signals were detected from MMIC circuits. The largest degradation in transistor response caused by displacement damage was observed for 1.0-μm depletion- and enhancement-mode MESFET transistors. Shorter gate length MESFET transistors and HEMT transistors exhibited less displacement-induced damage. These results show that memory-intensive GaAs digital circuits may result in significant system degradation due to single-event upset in natural and man-made space environments. However, displacement damage effects should not be a limiting factor for fluence levels up to 10^{14} protons/cm^2 [equivalent to total doses in excess of 10 Mrad(GaAs)].

  9. Proton irradiation effects on advanced digital and microwave III-V components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hash, G.L.; Schwank, J.R.; Shaneyfelt, M.R.

    1994-12-01

    A wide range of advanced III-V components suitable for use in high-speed satellite communication systems were evaluated for displacement damage and single-event effects in high-energy, high-fluence proton environments. Transistors and integrated circuits (both digital and MMIC) were irradiated with protons at energies from 41 to 197 MeV and at fluences from 10^10 to 2 × 10^14 protons/cm^2. Large soft-error rates were measured for digital GaAs MESFET (3 × 10^-5 errors/bit-day) and heterojunction bipolar circuits (10^-5 errors/bit-day). No transient signals were detected from MMIC circuits. The largest degradation in transistor response caused by displacement damage was observed for 1.0-μm depletion- and enhancement-mode MESFET transistors. Shorter gate length MESFET transistors and HEMT transistors exhibited less displacement-induced damage. These results show that memory-intensive GaAs digital circuits may result in significant system degradation due to single-event upset in natural and man-made space environments. However, displacement damage effects should not be a limiting factor for fluence levels up to 10^14 protons/cm^2 [equivalent to total doses in excess of 10 Mrad(GaAs)].

  10. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality, thus making them ideal for media-based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates, thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  11. Considerations, constraints and strategies for drilling on Mars

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Cooper, G.

    2006-04-01

    The environmental conditions on Mars - low temperature, low pressure, uncertainty in the nature of the formations to be penetrated, and the possibility of encountering ice - imply that a successful drilling system will have to be able to cope with a wide range of conditions. Systems using continuous drill pipe or wireline each offer attractive features as well as disadvantages, and the preferred choice may depend on the target depth. The drill bit will have to cope with a range of terrain, and we offer some suggestions for making a bit that will be able to drill in both hard and soft formations, and also be able to resist choking if it encounters ice or ice-bound materials. Since it will not be possible to use a liquid to remove the drilled cuttings on Mars, the cuttings removal system will probably use some form of auger, although it may be possible to use continuous or intermittent gas blasts. The sublimation of ice resulting from the heat of drilling in ice-containing formations may help in removing the cuttings, particularly as they are expected to be very fine as a result of the low power available for drilling. Drilling into ice-bound soils was also found to be akin to drilling into ice-bound sandstones.

  12. Improving the efficiency of quantum hash function by dense coding of coin operators in discrete-time quantum walk

    NASA Astrophysics Data System (ADS)

    Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin

    2018-03-01

    Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., the I interaction and the π-phase interaction, are introduced, and the choice between the I and π-phase interactions at each iteration depends on a message bit. In this paper, we propose an efficient QHF by dense coding of coin operators in a discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled or better, and only one particle is needed, with no two-particle interactions, so that quantum resources are saved. It also points the way to applying the dense coding technique to quantum cryptographic protocols, especially applications with restricted quantum resources.
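
    The following toy sketch illustrates only the dense-coding idea, not the authors' protocol or Li et al.'s two-particle construction: a one-particle discrete-time walk on a cycle in which each pair of message bits selects one of four coin operators, so one walk step absorbs two bits. The specific coin set, cycle size, and digest discretization are all illustrative assumptions.

        import numpy as np

        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

        def phase_coin(theta):
            # Hadamard followed by a relative phase on the |1> coin state.
            return H @ np.diag([1.0, np.exp(1j * theta)])

        COINS = [H, phase_coin(np.pi / 2), phase_coin(np.pi), phase_coin(3 * np.pi / 2)]

        def qwalk_hash(bits, n_pos=64):
            # psi[x, c]: amplitude at position x (on a cycle) with coin state c.
            psi = np.zeros((n_pos, 2), dtype=complex)
            psi[0, 0] = 1.0
            if len(bits) % 2:
                bits = list(bits) + [0]                  # pad to an even length
            for b0, b1 in zip(bits[::2], bits[1::2]):
                C = COINS[2 * b0 + b1]                   # two message bits -> one coin
                psi = psi @ C.T                          # coin step at every position
                psi = np.stack([np.roll(psi[:, 0], -1),  # coin 0 moves left
                                np.roll(psi[:, 1], +1)], # coin 1 moves right
                               axis=1)
            prob = (np.abs(psi) ** 2).sum(axis=1)        # final position distribution
            return bytes((prob * 255 / prob.max()).astype(np.uint8))

        print(qwalk_hash([1, 0, 1, 1, 0, 0]).hex())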

  13. Soft errors in commercial off-the-shelf static random access memories

    NASA Astrophysics Data System (ADS)

    Dilillo, L.; Tsiligiannis, G.; Gupta, V.; Bosser, A.; Saigne, F.; Wrobel, F.

    2017-01-01

    This article reviews state-of-the-art techniques for evaluating the effect of radiation on static random access memory (SRAM). We detail irradiation test techniques and results from irradiation experiments with several types of particles. Two commercial SRAMs, in 90 and 65 nm technology nodes, are considered as case studies. Besides the basic static and dynamic test modes, advanced stimuli for the irradiation tests are introduced, as well as statistical post-processing techniques allowing for deeper analysis of the correlations between bit-flip cross-sections and design/architectural characteristics of the memory device. Further insight is provided on the response of irradiated stacked-layer devices and on the use of characterized SRAM devices as particle detectors.
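
    For reference, the per-bit upset cross-section that such bit-flip measurements feed into is simply the upset count normalized by fluence and bit count; the numbers below are illustrative, not taken from the article.

        def seu_cross_section(n_upsets, fluence_cm2, n_bits):
            # Per-bit single-event-upset cross-section, sigma = N / (Phi * bits),
            # in cm^2 per bit.
            return n_upsets / (fluence_cm2 * n_bits)

        # Illustrative numbers only: 120 bit flips observed in an 8-Mbit SRAM
        # after a fluence of 1e10 particles/cm^2.
        print(f"{seu_cross_section(120, 1e10, 8 * 2**20):.2e} cm^2/bit")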

  14. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error-control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. When transmitting data obtained by any lossless compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, the decoder should provide the a posteriori probability (APP) of each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP of each decoded bit. However, recent results on iterative decoding of 'turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach to data compression using dynamic programming.
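
    As background on the Rice algorithm mentioned above, here is a minimal encoder/decoder sketch for nonnegative integers. The fixed parameter k and the bit-list representation are illustrative simplifications; the extended Rice algorithm's preprocessing and adaptive parameter selection are omitted.

        def rice_encode(values, k):
            # Rice code: quotient v >> k in unary (q ones, then a zero),
            # remainder as k fixed bits, MSB first.
            out = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                out += [1] * q + [0]
                out += [(r >> (k - 1 - i)) & 1 for i in range(k)]
            return out

        def rice_decode(bits, k, n_values):
            vals, i = [], 0
            for _ in range(n_values):
                q = 0
                while bits[i]:
                    q, i = q + 1, i + 1
                i += 1                                   # skip the terminating zero
                r = 0
                for _ in range(k):
                    r, i = (r << 1) | bits[i], i + 1
                vals.append((q << k) | r)
            return vals

        assert rice_decode(rice_encode([3, 0, 9], 2), 2, 3) == [3, 0, 9]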

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement, if not impossible. In this case, we may wish to trade error performance for a reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a sub-optimum decoding algorithm for linear block codes. The decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) codeword.
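
    A hedged sketch of features (1) and (2) in isolation, with the low-weight trellis search omitted: candidates are generated one at a time by flipping subsets of the least-reliable positions (a Chase-style generator is used here as a stand-in for the chapter's method), and each valid codeword is scored by the correlation metric. The is_codeword membership test is an assumed user-supplied predicate, and a true optimality test would stop early once a sufficient condition certifies the ML codeword.

        import itertools
        import numpy as np

        def chase_candidates(llr, hard_dec, max_flips=2, n_least=4):
            # Feature (1): generate candidate words one at a time, ordered by
            # flipping subsets of the least-reliable positions.
            order = np.argsort(np.abs(llr))[:n_least]
            for size in range(max_flips + 1):
                for subset in itertools.combinations(order, size):
                    cand = hard_dec.copy()
                    cand[list(subset)] ^= 1
                    yield cand

        def decode(llr, is_codeword):
            # Candidates are scored by the correlation metric sum((1-2c)*llr);
            # the sufficient optimality condition of feature (2) is elided.
            llr = np.asarray(llr, dtype=float)
            hard = (llr < 0).astype(int)
            best, best_metric = None, -np.inf
            for cand in chase_candidates(llr, hard):
                if is_codeword(cand):
                    m = np.sum((1 - 2 * cand) * llr)
                    if m > best_metric:
                        best, best_metric = cand, m
            return best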

  16. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    PubMed

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
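
    A minimal sketch of the core operation using PyWavelets, with the universal threshold as a stand-in; the paper's sequential, iterative filtering scheme and its specific wavelet and threshold choices are assumptions here, not reproduced from the article.

        import numpy as np
        import pywt

        def wavelet_soft_denoise(spectrum, wavelet="db4", level=4):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            # Noise estimate from the finest detail band, then the
            # universal threshold sigma * sqrt(2 * log N).
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            t = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
            denoised = [coeffs[0]] + [pywt.threshold(c, t, mode="soft")
                                      for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[:len(spectrum)]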

  17. Anisotropic elastic moduli reconstruction in transversely isotropic model using MRE

    NASA Astrophysics Data System (ADS)

    Song, Jiah; In Kwon, Oh; Seo, Jin Keun

    2012-11-01

    Magnetic resonance elastography (MRE) is an elastic tissue property imaging modality in which the phase-contrast based MRI imaging technique is used to measure internal displacement induced by a harmonically oscillating mechanical vibration. MRE has made rapid technological progress in the past decade and has now reached the stage of clinical use. Most of the research outcomes are based on the assumption of isotropy. Since soft tissues like skeletal muscles show anisotropic behavior, the MRE technique should be extended to anisotropic elastic property imaging. This paper considers reconstruction in a transversely isotropic model, which is the simplest case of anisotropy, and develops a new non-iterative reconstruction method for visualizing the elastic moduli distribution. This new method is based on an explicit representation formula using the Newtonian potential of measured displacement. Hence, the proposed method does not require iterations since it directly recovers the anisotropic elastic moduli. We perform numerical simulations in order to demonstrate the feasibility of the proposed method in recovering a two-dimensional anisotropic tensor.

  18. A randomized controlled trial to assess the pain associated with the debond of orthodontic fixed appliances

    PubMed Central

    Mangnall, Louise A R; Dietrich, Thomas; Scholey, John M

    2013-01-01

    Objective: To determine patient experience of pain during treatment with fixed orthodontic appliances, expectations of pain during debond and whether biting on a soft acrylic wafer during debond decreases pain experience. Design: Multicentre randomized controlled trial. Setting: Three UK hospital based orthodontic departments: Mid-Staffordshire NHS Foundation Trust, Birmingham Dental Hospital and University Hospital of North Staffordshire. Materials and methods: Ninety patients were randomly allocated to either the control (n = 45) or wafer group (n = 45). A visual analogue scale-based questionnaire was completed pre-debond to determine pain experience during treatment and expectations of pain during debond. The appliances were debonded and those in the wafer group bit on a soft acrylic wafer. A second questionnaire was completed post-debond to assess the pain experienced. Results: Biting on an acrylic wafer significantly reduced the pain experienced when debonding the posterior teeth (P≤0·05). Thirty-nine per cent found the lower anterior teeth the most painful. The expected pain was significantly greater than that actually experienced (P≤0·0001). Greater pain during treatment correlated with increased expectations and increased actually experienced pain (P≤0·0001). Conclusions: Biting on a soft acrylic wafer during debond of the posterior teeth reduces the pain experienced. The lower anterior teeth are the most painful. The pain expected is significantly greater than actually experienced. Patients who had greater pain during treatment expected and experienced greater pain at debond. PMID:24009318

  19. Ultrasonic model and system for measurement of corneal biomechanical properties and validation on phantoms.

    PubMed

    Liu, Jun; He, Xiaoyin; Pan, Xueliang; Roberts, Cynthia J

    2007-01-01

    Non-invasive measurement of biomechanical properties of corneas may provide important information for ocular disease management and therapeutic procedures. An ultrasonic non-destructive evaluation method with a wave propagation model was developed to determine corneal biomechanical properties in vivo. In this study, we tested the feasibility of the approach in differentiating the mechanical properties of soft contact lenses as corneal phantoms. Three material types of soft contact lenses (six samples in each group) were measured using a broadband ultrasound transducer. The ultrasonic reflections from the contact lenses were recorded by a 500 MHz/8-bit digitizer, and displayed and processed by a PC. A reference signal was recorded to compute the normalized power spectra using the fast Fourier transform. An inverse algorithm based on least-squares minimization was used to reconstruct three parameters of the contact lenses: density, thickness, and the elastic constants λ + 2μ. The thickness of each sample was verified using an electronic thickness gauge, and the averaged density for each type of lens was verified using Archimedes' principle and the manufacturer's report. Our results demonstrated that the ultrasonic system was able to differentiate the elastic properties of the three types of soft contact lenses with statistical significance (P-value < 0.001). The reconstructed thicknesses and densities agreed well with the independent measurements. Our studies on corneal phantoms indicated that the ultrasonic system was sensitive and accurate in measuring the material properties of cornea-like structures. It is important to optimize the system for in vivo measurements.

  20. EFFECT OF ANNEALING TEMPERATURE ON THE STRUCTURE AND AC MAGNETIC PROPERTIES OF Fe73Cu1Nb3.5-xVxSi13.5B9 (x = 1.0, 1.5, 2.0) NANOCRYSTALLINE SOFT MAGNETIC ALLOYS

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Huang, Ping; Wang, Yuxin; Yan, Biao

    2013-07-01

    In this paper, Nb was partially replaced by V in Finemet-type Fe73Cu1Nb3.5-xVxSi13.5B9 (x = 1, 1.5, 2) alloys, and the effect of annealing temperature on the microstructure and AC magnetic properties of the samples is studied. The annealing temperature strongly affects the grain size of the bcc α-Fe phase. When the annealing temperature is between 540 and 560°C, the samples have better AC magnetic properties than samples annealed at other temperatures; the optimized annealing temperature of the studied samples is around 560°C. The coercivity and iron loss of the V2 sample are a little higher than those of the V1 and V1.5 alloys, while the amplitude permeability of the V2 alloy is larger than that of V1 and V1.5, indicating that the V content has a strong influence on the magnetic properties of nanocrystalline soft magnetic alloys.

  1. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139

  2. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  3. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.

  4. Investigation on iterative multiuser detection physical layer network coding in two-way relay free-space optical links with turbulences and pointing errors.

    PubMed

    Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee

    2016-11-20

    Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulences and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.

  5. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure while still representing LDPC codes. Based on density evolution for LDPC codes, we show through examples of ARA codes that for maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained, with thresholds that stay close to the channel capacity thresholds uniformly. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.

  6. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    Electronic memory system of 32-bit words corrects bit errors caused by some common types of failure - even the failure of an entire 4-bit-wide random-access-memory (RAM) chip. It detects the failure of two such chips, so the user is warned that the output of the memory may contain errors. The system includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words; each DRAM thus contributes only 1 bit to each 8-bit word.
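
    A toy sketch of why that layout confines damage; the exact mapping below is an assumption inferred from the description, not the flight design. With each chip contributing one bit per word, losing a whole chip costs each word at most one bit, which a per-word single-error-correcting code can repair.

        # Eight 4-bit-wide DRAM chips feed four parallel 8-bit words.  One
        # plausible mapping matching the description: bit b of word w is
        # stored in chip b, using that chip's w-th internal bit.
        N_CHIPS, N_WORDS = 8, 4

        def bits_lost_per_word(failed_chip):
            # layout[w][b] = chip holding bit b of word w
            layout = [[b for b in range(N_CHIPS)] for _ in range(N_WORDS)]
            return [sum(chip == failed_chip for chip in word) for word in layout]

        # A whole-chip failure corrupts at most one bit in each 8-bit word.
        print(bits_lost_per_word(3))   # -> [1, 1, 1, 1]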

  7. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2017-10-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  8. A time dependent difference theory for sound propagation in ducts with flow. [characteristic of inlet and exhaust ducts of turbofan engines

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1979-01-01

    A time dependent numerical solution of the linearized continuity and momentum equation was developed for sound propagation in a two dimensional straight hard or soft wall duct with a sheared mean flow. The time dependent governing acoustic difference equations and boundary conditions were developed along with a numerical determination of the maximum stable time increments. A harmonic noise source radiating into a quiescent duct was analyzed. This explicit iteration method then calculated stepwise in real time to obtain the transient as well as the steady state solution of the acoustic field. Example calculations were presented for sound propagation in hard and soft wall ducts, with no flow and plug flow. Although the problem with sheared flow was formulated and programmed, sample calculations were not examined. The time dependent finite difference analysis was found to be superior to the steady state finite difference and finite element techniques because of shorter solution times and the elimination of large matrix storage requirements.
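
    To make the flavor of the method concrete, here is a minimal one-dimensional analogue: an explicit time march of the linearized continuity and momentum equations on a staggered grid, with a quiescent medium, hard-walled ends, and a harmonic source, run until the transient dies out. The grid sizes, source frequency, and CFL factor are illustrative assumptions; the paper's two-dimensional sheared-flow formulation is not reproduced.

        import numpy as np

        # 1-D linearized acoustics on a staggered grid:
        #   dp/dt = -rho * c^2 * du/dx,   du/dt = -(1/rho) * dp/dx
        nx, n_steps = 200, 4000
        rho, c, dx = 1.2, 340.0, 0.01
        dt = 0.5 * dx / c                        # inside the explicit stability limit
        freq = 1000.0                            # harmonic source frequency (Hz)
        p = np.zeros(nx)                         # pressure at cell centers
        u = np.zeros(nx + 1)                     # velocity at cell faces

        for n in range(n_steps):
            u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])   # u[0]=u[-1]=0: hard walls
            p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
            p[0] = np.sin(2.0 * np.pi * freq * n * dt)      # source drives the inlet

        # After the transient dies out, p holds the steady-state harmonic field.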

  9. All-digital GPS receiver mechanization

    NASA Astrophysics Data System (ADS)

    Ould, P. C.; van Wechel, R. J.

    The paper describes the all-digital baseband correlation processing of GPS signals, which is characterized by (1) a potential for improved antijamming performance, (2) fast acquisition by a digital matched filter, (3) reduction of adjustment, (4) increased system reliability, and (5) provision of a basis for the realization of a high degree of VLSI potential for the development of small economical GPS sets. The basic technical approach consists of a broadband fix-tuned RF converter followed by a digitizer; digital-matched-filter acquisition section; phase- and delay-lock tracking via baseband digital correlation; software acquisition logic and loop filter implementation; and all-digital implementation of the feedback numerical controlled oscillators and code generator. Broadband in-phase and quadrature tracking is performed by an arctangent angle detector followed by a phase-unwrapping algorithm that eliminates false locks induced by sampling and data bit transitions, and yields a wide pull-in frequency range approaching one-fourth of the loop iteration frequency.
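
    The detector stage described here reduces, in NumPy terms, to an arctangent followed by unwrapping; this short sketch shows only that stage, not the tracking loops and numerical oscillators around it.

        import numpy as np

        def detect_phase(i_samples, q_samples):
            # Arctangent angle detector followed by phase unwrapping; unwrapping
            # removes the 2*pi ambiguities that would otherwise cause false locks.
            wrapped = np.arctan2(q_samples, i_samples)    # in (-pi, pi]
            return np.unwrap(wrapped)                     # continuous phase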

  10. Fast and accurate read-out of interferometric optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Bartholsen, Ingebrigt; Hjelme, Dag R.

    2016-03-01

    We present results from an evaluation of phase and frequency estimation algorithms for read-out instrumentation of interferometric sensors. Tests on interrogating a micro Fabry-Perot sensor, made of a semi-spherical stimuli-responsive hydrogel immobilized on a single-mode fiber end face, show that an iterative quadrature demodulation technique (IQDT) implemented on a 32-bit microcontroller unit can achieve an absolute length accuracy of ±50 nm and a length-change accuracy of ±3 nm using an 80 nm SLED source and a grating spectrometer for interrogation. The mean absolute error of the frequency estimator is a factor of 3 larger than the theoretical lower bound for a maximum likelihood estimator; the corresponding factor for the phase estimator is 1.3. The computation time of the IQDT algorithm is reduced by a factor of 1000 compared to the full QDT for the same accuracy requirement.

  11. Frame Synchronization Without Attached Sync Markers

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2011-01-01

    We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs also would simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
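
    A hedged sketch of the offset search: in place of running the decoder for a few iterations, the metric below simply counts satisfied parity checks of the hard decisions at each candidate offset, which captures the same "code structure in the received symbols" idea. The parity-check matrix H and the LLR stream are assumed inputs.

        import numpy as np

        def sync_metric(hard_bits, H):
            # Fraction of satisfied parity checks -- a stand-in for the metric a
            # real implementation would form after a few decoder iterations.
            return np.mean((H @ hard_bits) % 2 == 0)

        def find_frame_offset(llr_stream, H, frame_len):
            # Slide over every candidate offset and keep the one where the
            # received symbols look most like a codeword.
            hard = (llr_stream < 0).astype(int)
            scores = [sync_metric(hard[off:off + frame_len], H)
                      for off in range(len(hard) - frame_len + 1)]
            return int(np.argmax(scores))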

  12. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and direct feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  13. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the promising candidates due to their large coding gain performance and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.

  14. Parameter identification of hyperelastic material properties of the heel pad based on an analytical contact mechanics model of a spherical indentation.

    PubMed

    Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi

    2017-01-01

    Accurate identification of the material properties of the plantar soft tissue is important for computer-aided analysis of foot pathologies and design of therapeutic footwear interventions based on subject-specific models of the foot. However, parameter identification of the hyperelastic material properties of plantar soft tissues usually requires an inverse finite element analysis due to the lack of a practical contact model of the indentation test. In the present study, we derive an analytical contact model of a spherical indentation test in order to directly estimate the material properties of the plantar soft tissue. Force-displacement curves of the heel pads are obtained through an indentation experiment. The experimental data are fit to the analytical stress-strain solution of the spherical indentation in order to obtain the parameters. A spherical indentation approach successfully predicted the non-linear material properties of the heel pad without iterative finite element calculation. The force-displacement curve obtained in the present study was found to be situated lower than those identified in previous studies. The proposed framework for identifying the hyperelastic material parameters may facilitate the development of subject-specific FE modeling of the foot for possible clinical and ergonomic applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
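
    In the same spirit, direct parameter identification from indentation data is a one-line curve fit, though here a linear-elastic Hertz contact model stands in for the paper's hyperelastic analytical solution. The sphere radius, the synthetic force-displacement data, and the noise level are all illustrative assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def hertz_force(d, e_eff, radius=0.005):
            # Hertz spherical indentation: F = (4/3) * E_eff * sqrt(R) * d^1.5
            return (4.0 / 3.0) * e_eff * np.sqrt(radius) * d**1.5

        # Synthetic stand-in for measured force-displacement data (illustrative).
        depth = np.linspace(1e-4, 2e-3, 20)
        force = hertz_force(depth, 25e3) * (1 + 0.02 * np.random.randn(depth.size))

        # With p0 of length one, only e_eff is fitted; radius keeps its default.
        (e_fit,), _ = curve_fit(hertz_force, depth, force, p0=[1e4])
        print(f"E_eff ~ {e_fit:.0f} Pa")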

  15. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  16. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  17. Effect of PDC bit design and confining pressure on bit-balling tendencies while drilling shale using water base mud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hariharan, P.R.; Azar, J.J.

    1996-09-01

    A good majority of all oilwell drilling occurs in shale and other clay-bearing rocks. In light of the relatively few studies conducted, the problem of bit-balling in PDC bits while drilling shale has been addressed, with the primary intention of attempting to quantify the degree of balling as well as to investigate the influence of bit design and confining pressures. A series of full-scale laboratory drilling tests under simulated downhole conditions were conducted utilizing seven different PDC bits in Catoosa shale. Test results have indicated that the non-dimensional parameter R_d [(bit torque)/((weight-on-bit)·(bit diameter))] is a good indicator of the degree of bit-balling and that it correlates well with specific energy. Furthermore, test results have shown bit profile and bit hydraulic design to be key parameters of bit design that dictate the tendency of balling in shales under a given set of operating conditions. A bladed bit was noticed to ball less compared to a ribbed or open-faced bit. Likewise, related to bit profile, test results have indicated that the parabolic profile has a lesser tendency to ball compared to round and flat profiles. The tendency of PDC bits to ball was noticed to increase with increasing confining pressures for the set of drilling conditions used.

  18. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
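
    The cross-channel joint-sparsity term corresponds to row-wise (group) soft-thresholding of the stacked wavelet coefficients. A minimal sketch of just that proximal step follows; the SPIRiT consistency projection and the wavelet transforms around it are omitted, and the matrix layout is an illustrative assumption.

        import numpy as np

        def joint_soft_threshold(W, lam):
            # W: (n_coefficients, n_channels) stacked wavelet coefficients.
            # Shrink the l2 norm of each cross-channel row: the proximal
            # operator of the l2,1 (joint sparsity) penalty.
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
            return W * scale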

  19. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529

  20. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequency and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g. only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that motivates spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case, as well as the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability, as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.

  1. Acquisition and Retaining Granular Samples via a Rotating Coring Bit

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart

    2013-01-01

    This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated: a granular sample that enters the spinning bit adheres to, and compacts itself against, the internal wall of the bit. The bit can be specially designed to increase the effectiveness of regolith capture while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during rotation. The bit can be designed with an internal flute that directs the regolith upward inside the bit; the teeth and the flute can be combined in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth: when turning one way, the teeth guide the regolith outward of the bit, and when turning in the opposite direction, they guide the regolith inward into the bit's internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining a granular sample; acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into soil contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit; even with the drill turned horizontally, the acquired soil remained inside the bit. The basic theory behind retaining unconsolidated mass by the centrifugal forces of the bit follows from noting that, in order for the sample to stay inside the interior of the bit, the frictional force must be greater than the weight of the sample (quantified in the sketch below). The bit can be designed with an internal sleeve to serve as a container for granular samples. This tube-shaped component can be extracted upon completion of the sampling; the bottom can be capped by placing the bit onto a cork-like component, and upon removal of the internal tube, the top section can be sealed. The novel features of this device are:
    - A mechanism for acquiring and retaining granular samples using a coring bit without a closed door.
    - An acquisition bit with internal structure, such as a waffle pattern for compartmentalizing or a helical internal flute, to propel the sample inside the bit and help in acquiring and retaining granular samples.
    - A bit with an internal spiral into which the various particles wedge.
    - A design that provides a method of testing the frictional properties of the granular samples and potentially segregating particles based on size and density; a controlled acceleration or deceleration may be used to drop the least-frictional particles or to eventually shear the unconsolidated material near the bit center.
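
    The stated retention condition (wall friction exceeding sample weight) gives a minimum spin rate directly; the friction coefficient and internal radius below are assumed for illustration, not taken from the tests.

        import numpy as np

        # Retention condition: mu * m * omega^2 * r >= m * g
        #   =>  omega >= sqrt(g / (mu * r))
        def min_rpm(mu, r, g=9.81):
            omega = np.sqrt(g / (mu * r))          # rad/s
            return omega * 60.0 / (2.0 * np.pi)

        # Illustrative values (assumed): friction coefficient 0.5 and a
        # 20 mm internal radius.
        print(f"{min_rpm(0.5, 0.02):.0f} RPM")     # ~299 RPM, comfortably below
                                                   # the 600-700 RPM used in tests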

  2. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gate-keeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. BH correction (BHC) methods require either energy-sensitive CT, not widely available, or typically a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back-projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12±2 HU to 1±1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48±6 HU to 1±5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28±6 HU to less than 4±4 HU at peak enhancement. Results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.

  3. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm3 uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated after a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while keeping the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542

  4. A Concurrent Smalltalk Compiler for the Message-Driven Processor

    DTIC Science & Technology

    1988-05-01

    ;;; Returns an integer with bits from low-bit (inclusive) to high-bit (exclusive) set.
    ;;; Low-bit defaults to zero.
    (defmacro brange (high-bit &optional low-bit)
      (list ...))
    ...
    (setq bits (b+ bits (if (>= n1 n2)
                            (brange (1+ n1) n2)
                            (brange (1+ n2) n1))))
    (error "Bad bmap range: ~S" num)
    ...
    (let ((vlive (b- (inst-vlive last) (brange first-context-slot-num)))
          (next (inst-next last)))
      (if (bempty vlive) (delete-module module inst ...

  5. Effects of plastic bits on the condition and behaviour of captive-reared pheasants.

    PubMed

    Butler, D A; Davis, C

    2010-03-27

    Between 2005 and 2007, data were collected from game farms across England and Wales to examine the effects of the use of bits on the physiological condition and behaviour of pheasants. On each site, two pheasant pens kept in the same conditions were randomly allocated to either use bits or not. The behaviour and physiological conditions of pheasants in each treatment pen were assessed on the day of bitting and weekly thereafter until release. Detailed records of feed usage, medications and mortality were also kept. Bits halved the number of acts of bird-on-bird pecking, but they doubled the incidence of headshaking and scratching. Bits caused nostril inflammation and bill deformities in some birds, particularly after seven weeks of age. In all weeks after bitting, feather condition was poorer in non-bitted pheasants than in those fitted with bits. Less than 3 per cent of bitted birds had damaged skin, but in the non-bitted pens this figure increased over time to 23 per cent four weeks later. Feed use and mortality did not differ between bitted and non-bitted birds.

  6. New PDC bit design reduces vibrational problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mensa-Wilmot, G.; Alexander, W.L.

    1995-05-22

    A new polycrystalline diamond compact (PDC) bit design combines cutter layout, load balancing, unsymmetrical blades and gauge pads, and spiraled blades to reduce problematic vibrations without limiting drilling efficiency. Stabilization improves drilling efficiency and also improves dull characteristics for PDC bits. Some PDC bit designs mitigate one vibrational mode (such as bit whirl) through drilling parameter manipulation yet cause or excite another vibrational mode (such as slip-stick). An alternative vibration-reducing concept which places no limitations on the operational environment of a PDC bit has been developed to ensure optimization of the bit's available mechanical energy. The paper discusses bit stabilization, vibration reduction, vibration prevention, cutter arrangement, load balancing, blade layout, spiraled blades, and bit design.

  7. Three Dimensional P-Wave Velocity Structure Beneath Eastern Turkey by Local Earthquake Tomography (LET) Method

    NASA Astrophysics Data System (ADS)

    Teoman, U. M.; Turkelli, N.; Gok, R.

    2005-12-01

    Recently, the crustal structure and tectonic evolution of the Eastern Turkey region were extensively studied in the context of the Eastern Turkey Seismic Experiment (ETSE) from late 1999 to August 2001. Collision of the Arabian and Eurasian plates has been occurring along the East Anatolian Fault Zone (EAFZ) and the Bitlis Suture, which made Eastern Turkey an ideal platform for scientific research. High quality local earthquake data from the ETSE seismic network were used in order to determine the 3-D P-wave velocity structure of the upper crust of Eastern Turkey. Within the 32-station network, 524 well-locatable earthquakes with azimuthal gaps < 200° and number of P-wave observations > 8 (corresponding to 6842 P-phase readings) were selected from the initial data set and simultaneously inverted. A 1-D reference velocity model was derived by an iterative 1-D velocity inversion including the updated hypocenters and the station delays. The following 3-D tomographic inversion was iteratively performed with the SIMULPS14 algorithm in a "damped least-squares" sense using the appropriate ray tracing technique, model parametrization and control parameters. As far as resolution is concerned, S waves were not included in this study due to strong attenuation, an insufficient number of S-phase readings and higher picking errors with respect to P phases. Several tests with synthetic data were conducted to assess the solution quality, suggesting that the velocity structure is well resolved down to ~17 km. Overall, the resulting 3-D P-wave velocity model led to more reliable hypocenter determinations, indicated by reduced event scattering and a significant reduction of 50% in both variance and residual (rms) values. With the improved velocity model, average location errors did not exceed ~1.5 km horizontally and ~4 km vertically. Tomographic images revealed the presence of lateral velocity variations in Eastern Turkey. The existence of relatively low velocity zones (5.6 < Vp < 6.0 km/sec) along most of the vertical profiles possibly indicates the influence of major tectonic structures such as the North Anatolian Fault Zone (NAFZ), the East Anatolian Fault Zone (EAFZ) and the Bitlis thrust belt correlated with the seismicity. Low velocity anomalies extend deeper along the EAFZ, down to ~15 km, compared to a depth of ~10 km along the NAFZ. The Arabian plate is generally marked by relatively higher velocities (Vp > 6.2 km/sec) in the 10-15 km depth range.

  8. CT dose reduction using Automatic Exposure Control and iterative reconstruction: A chest paediatric phantoms study.

    PubMed

    Greffier, Joël; Pereira, Fabricio; Macri, Francesco; Beregi, Jean-Paul; Larbi, Ahmed

    2016-04-01

    To evaluate the impact of Automatic Exposure Control (AEC) on radiation dose and image quality in paediatric chest scans (MDCT), with or without iterative reconstruction (IR). Three anthropomorphic phantoms representing children aged one, five and 10 years were scanned using the AEC system (CARE Dose 4D) with five modulation strength options. For each phantom, six acquisitions were carried out: one with fixed mAs (without AEC) and five with different modulation strengths. Raw data were reconstructed with Filtered Back Projection (FBP) and with two distinct levels of IR using soft and strong kernels. Dose reduction and image quality indices (Noise, SNR, CNR) were measured in lung and soft tissues. The Noise Power Spectrum (NPS) was evaluated with a Catphan 600 phantom. The use of AEC produced a significant dose reduction (p<0.01) for all phantom sizes. According to the modulation strength applied, the dose delivered was reduced by 43% to 91%. This led to significantly increased noise (p<0.01) and reduced SNR and CNR (p<0.01). However, IR was able to improve these indices. The use of AEC/IR preserved image quality indices at a lower delivered dose. Doses were reduced by 39% to 58% for the one-year-old phantom, by 46% to 63% for the five-year-old phantom, and by 58% to 74% for the 10-year-old phantom. In addition, AEC/IR changed the patterns of the NPS curves in amplitude and in spatial frequency. In paediatric chest MDCT, the use of AEC with IR allows a significant dose reduction while maintaining constant image quality indices. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
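
    For readers unfamiliar with the image-quality indices used in this record, here is a hedged Python sketch of one common way SNR and CNR are computed from ROI pixel statistics; the study's exact ROI placement and definitions are not given in the abstract:

        import numpy as np

        def snr_cnr(roi, background):
            # SNR: mean signal over noise (standard deviation) within an ROI.
            # CNR: contrast between ROI and background over background noise.
            snr = roi.mean() / roi.std()
            cnr = abs(roi.mean() - background.mean()) / background.std()
            return snr, cnr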

  9. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.

  10. Critique of a Hughes shuttle Ku-band data sampler/bit synchronizer

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.

    1980-01-01

    An alternative bit synchronizer proposed for shuttle was analyzed in a noise-free environment by considering the basic operation of the loop via timing diagrams and by linearizing the bit synchronizer as an equivalent, continuous, phase-locked loop (PLL). The loop is composed of a high-frequency phase-frequency detector, which is capable of detecting both phase and frequency errors and is used to track the clock, and a bit transition detector, which attempts to track the transitions of the data bits. It was determined that the basic approach was a good design which, with proper implementation of the accumulator, up/down counter and logic, should provide accurate mid-bit sampling with symmetric bits. However, when bit asymmetry occurs, the bit synchronizer can lock up with a large timing error, yet be quasi-stable (timing will not change unless the clock and bit sequence drift). This will result in incorrectly detecting some bits.

  11. Bit-1 is an essential regulator of myogenic differentiation

    PubMed Central

    Griffiths, Genevieve S.; Doe, Jinger; Jijiwa, Mayumi; Van Ry, Pam; Cruz, Vivian; de la Vega, Michelle; Ramos, Joe W.; Burkin, Dean J.; Matter, Michelle L.

    2015-01-01

    Muscle differentiation requires a complex signaling cascade that leads to the production of multinucleated myofibers. Genes regulating the intrinsic mitochondrial apoptotic pathway also function in controlling cell differentiation. How such signaling pathways are regulated during differentiation is not fully understood. Bit-1 (also known as PTRH2) mutations in humans cause infantile-onset multisystem disease with muscle weakness. We demonstrate here that Bit-1 controls skeletal myogenesis through a caspase-mediated signaling pathway. Bit-1-null mice exhibit a myopathy with hypotrophic myofibers. Bit-1-null myoblasts prematurely express muscle-specific proteins. Similarly, knockdown of Bit-1 expression in C2C12 myoblasts promotes early differentiation, whereas overexpression delays differentiation. In wild-type mice, Bit-1 levels increase during differentiation. Bit-1-null myoblasts exhibited increased levels of caspase 9 and caspase 3 without increased apoptosis. Bit-1 re-expression partially rescued differentiation. In Bit-1-null muscle, Bcl-2 levels are reduced, suggesting that Bcl-2-mediated inhibition of caspase 9 and caspase 3 is decreased. Bcl-2 re-expression rescued Bit-1-mediated early differentiation in Bit-1-null myoblasts and C2C12 cells with knockdown of Bit-1 expression. These results support an unanticipated yet essential role for Bit-1 in controlling myogenesis through regulation of Bcl-2. PMID:25770104

  12. Biomechanics of the soft-palate in sleep apnea patients with polycystic ovarian syndrome.

    PubMed

    Subramaniam, Dhananjay Radhakrishnan; Arens, Raanan; Wagshul, Mark E; Sin, Sanghun; Wootton, David M; Gutmark, Ephraim J

    2018-05-17

    Highly compliant tissue supporting the pharynx and low muscle tone enhance the possibility of upper airway occlusion in children with obstructive sleep apnea (OSA). The present study describes subject-specific computational modeling of flow-induced velopharyngeal narrowing in a female child with polycystic ovarian syndrome (PCOS) with OSA and a non-OSA control. Anatomically accurate three-dimensional geometries of the upper airway and soft-palate were reconstructed for both subjects using magnetic resonance (MR) images. A fluid-structure interaction (FSI) shape registration analysis was performed using subject-specific values of flow rate to iteratively compute the biomechanical properties of the soft-palate. The optimized shear modulus for the control was 38 percent higher than the corresponding value for the OSA patient. The proposed computational FSI model was then employed for planning surgical treatment for the apneic subject. A virtual surgery comprising a combined adenoidectomy, palatoplasty and genioglossus advancement was performed to estimate the resulting post-operative patterns of airflow and tissue displacement. Maximum flow velocity and velopharyngeal resistance decreased by 80 percent and 66 percent respectively following surgery. Post-operative flow-induced forces on the anterior and posterior faces of the soft-palate were equilibrated and the resulting magnitude of tissue displacement was 63 percent lower compared to the pre-operative case. Results from this pilot study indicate that FSI computational modeling can be employed to characterize the mechanical properties of pharyngeal tissue and evaluate the effectiveness of various upper airway surgeries prior to their application. Copyright © 2018. Published by Elsevier Ltd.

  13. Pressure driven digital logic in PDMS based microfluidic devices fabricated by multilayer soft lithography.

    PubMed

    Devaraju, Naga Sai Gopi K; Unger, Marc A

    2012-11-21

    Advances in microfluidics now allow an unprecedented level of parallelization and integration of biochemical reactions. However, one challenge still faced by the field has been the complexity and cost of the control hardware: one external pressure signal has been required for each independently actuated set of valves on chip. Using a simple post-modification to the multilayer soft lithography fabrication process, we present a new implementation of digital fluidic logic fully analogous to electronic logic with significant performance advances over the previous implementations. We demonstrate a novel normally closed static gain valve capable of modulating pressure signals in a fashion analogous to an electronic transistor. We utilize these valves to build complex fluidic logic circuits capable of arbitrary control of flows by processing binary input signals (pressure (1) and atmosphere (0)). We demonstrate logic gates and devices including NOT, NAND and NOR gates, bi-stable flip-flops, gated flip-flops (latches), oscillators, self-driven peristaltic pumps, delay flip-flops, and a 12-bit shift register built using static gain valves. This fluidic logic shows cascade-ability, feedback, programmability, bi-stability, and autonomous control capability. This implementation of fluidic logic yields significantly smaller devices, higher clock rates, simple designs, easy fabrication, and integration into MSL microfluidics.
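
    To illustrate the claim that the valve logic is fully analogous to electronic logic, a toy Python sketch of a bi-stable flip-flop (SR latch) built from a NOR primitive, with pressure encoded as 1 and atmosphere as 0; the wiring is the standard textbook latch, not the paper's valve circuit, and the valve-level physics is of course not modelled:

        def nor(a, b):
            # A normally closed gain valve acting as a NOR gate: output high
            # only when both inputs are at atmosphere (0).
            return int(not (a or b))

        def sr_latch(s, r, q=0):
            # Cross-coupled NOR gates; iterate a few passes so the state settles.
            q_bar = nor(s, q)
            for _ in range(4):
                q = nor(r, q_bar)
                q_bar = nor(s, q)
            return q

        # sr_latch(1, 0) -> 1 (set); sr_latch(0, 1, q=1) -> 0 (reset);
        # sr_latch(0, 0, q=1) -> 1 (hold)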

  14. Soft Real-Time PID Control on a VME Computer

    NASA Technical Reports Server (NTRS)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating in a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
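
    A hedged Python sketch of the loop structure the abstract describes: each sampling period reads the current position, evaluates the ideal position from a profile, and computes a voltage command with a PID routine. The names, callbacks and gains below are illustrative stand-ins, not the flight code:

        import time

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def step(self, error):
                # Proportional + integral + derivative terms on position error.
                self.integral += error * self.dt
                deriv = (error - self.prev_err) / self.dt
                self.prev_err = error
                return self.kp * error + self.ki * self.integral + self.kd * deriv

        def control_loop(read_position, profile, write_voltage, dt=125e-6):
            # read_position, profile and write_voltage are hypothetical stand-ins
            # for the LPB driver routine, position profile, and D/A board call.
            pid = PID(kp=1.0, ki=0.1, kd=0.01, dt=dt)
            t = 0.0
            while True:
                error = profile(t) - read_position()
                write_voltage(pid.step(error))
                t += dt
                time.sleep(dt)  # the real system uses an 8 kHz real-time tick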

  15. Transferring experience labs for production engineering students to universities in newly industrialized countries

    NASA Astrophysics Data System (ADS)

    Leiden, A.; Posselt, G.; Bhakar, V.; Singh, R.; Sangwan, K. S.; Herrmann, C.

    2018-01-01

    The Indian economy is one of the fastest growing economies in the world and the demand for skilled engineers is increasing. Subsequently, the Indian education sector is growing to provide the necessary number of skilled engineers. Current Indian engineering graduates have a broad theoretical background but lack methodological, soft and practical skills. To bridge this gap, the experience lab ideas from the engineering education at “Die Lernfabrik” (learning factory) of the Technische Universität Braunschweig (TU Braunschweig) are transferred to the Birla Institute of Technology and Science in Pilani (BITS Pilani), India. This Lernfabrik successfully strengthened the methodological, soft and practical skills of the TU Braunschweig production-engineering graduates. The target group is discrete manufacturing education with a focus on energy and resource efficiency as well as cyber physical production systems. As the requirements of industry and academia in India differ from those in Germany, the transfer of the experience lab to the Indian education system needs special attention to realize a successful transfer project. This publication provides a unique approach to systematically transferring the Learning Factory educational concept from a specific university environment to a different environment in a newly industrialized country. A bilateral, university-driven practice partnership between the two universities creates a lighthouse project for the Indian university environment.

  16. Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.

    PubMed

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2011-12-01

    The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
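
    A hedged Python sketch of the comparison pipeline this record describes, assuming boolean numpy arrays for iris codes, occlusion masks and fragile-bit maps; the FBD formula and fusion weighting below are plausible readings, not the paper's exact definitions:

        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            # Fractional Hamming distance over bits unmasked in both codes.
            usable = mask_a & mask_b
            disagree = (code_a ^ code_b) & usable
            return disagree.sum() / max(usable.sum(), 1)

        def fragile_bit_distance(fragile_a, fragile_b):
            # One plausible reading of FBD: how little the two fragile-bit
            # maps coincide (0 when identical, 1 when disjoint).
            union = (fragile_a | fragile_b).sum()
            overlap = (fragile_a & fragile_b).sum()
            return 1.0 - overlap / max(union, 1)

        def fused_score(hd, fbd, alpha=0.5):
            # Simple convex-combination score fusion; the paper's exact
            # fusion rule and weighting are not reproduced here.
            return alpha * hd + (1 - alpha) * fbd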

  17. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
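
    The game mechanics suggest a simple exercise; a hedged Python sketch of encoding a word to binary while applying an XOR-style encryption key (the game's actual key scheme is not specified in this record):

        def encode(word, key):
            # Turn each character into 8 bits, XOR-ing with a repeating key byte.
            bits = []
            for i, ch in enumerate(word):
                enciphered = ord(ch) ^ ord(key[i % len(key)])
                bits.append(format(enciphered, '08b'))
            return ' '.join(bits)

        def decode(bitstring, key):
            chars = bitstring.split()
            return ''.join(chr(int(b, 2) ^ ord(key[i % len(key)]))
                           for i, b in enumerate(chars))

        # decode(encode('BIT', 'k'), 'k') -> 'BIT'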

  18. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process. Selecting a bit is an important issue in planning and designing a well, simply because the bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed and trained on several wells using drilling bit records from offset wells. In this project, two models are developed with the ANN: one to find the predicted IADC bit code and one to find the predicted ROP. Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, at the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
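
    A hedged sketch of the three-stage workflow using scikit-learn MLPs as stand-ins for the back-propagation ANN; the feature set, shapes and placeholder data below are assumptions, since the record does not specify them:

        import numpy as np
        from sklearn.neural_network import MLPClassifier, MLPRegressor

        # X: offset-well drilling records (e.g. depth, WOB, RPM, lithology codes);
        # random placeholder data stands in for the field records.
        X = np.random.rand(200, 6)
        iadc_codes = np.random.randint(0, 5, size=200)  # placeholder IADC classes
        rop = np.random.rand(200) * 30                  # placeholder ROP values

        # Stage 1: predict the IADC bit code from the field data.
        bit_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, iadc_codes)
        pred_code = bit_model.predict(X)

        # Stage 2: predict ROP using the Stage-1 IADC code as an extra feature.
        rop_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(
            np.column_stack([X, pred_code]), rop)
        pred_rop = rop_model.predict(np.column_stack([X, pred_code]))

        # Stage 3: feed the predicted ROP back to re-predict the IADC bit code.
        code_model2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(
            np.column_stack([X, pred_rop]), iadc_codes)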

  19. Investigation of PDC bit failure base on stick-slip vibration analysis of drilling string system plus drill bit

    NASA Astrophysics Data System (ADS)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. So the study of PDC bit failure based on stick-slip vibration analysis is crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools due to the more severe stick-slip vibration. Moreover, reducing WOB (weight on bit) and improving driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
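
    A toy Python integration of a single-DOF torsional stick-slip model, to make the stick and slip phases concrete; the paper's model has 4 DOF with distinct friction and cutting terms for the PDC bit, and every parameter below is an illustrative assumption:

        import numpy as np

        def stick_slip(J=400.0, k=500.0, c=50.0, omega_top=10.0,
                       T_stat=8e3, T_kin=5e3, dt=1e-3, steps=20000):
            # phi: bit rotation; the top drive turns at constant speed omega_top.
            phi, omega, speeds = 0.0, 0.0, []
            for i in range(steps):
                twist_torque = k * (omega_top * i * dt - phi) + c * (omega_top - omega)
                if abs(omega) < 1e-3 and abs(twist_torque) < T_stat:
                    omega = 0.0   # stick phase: static friction holds the bit
                else:
                    drive = twist_torque - np.sign(omega if omega else twist_torque) * T_kin
                    omega += drive / J * dt   # slip phase: kinetic friction resists
                phi += omega * dt
                speeds.append(omega)
            return np.array(speeds)   # oscillating bit speed exhibits stick-slip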

  20. Flap-Lag-Torsion Stability in Forward Flight

    NASA Technical Reports Server (NTRS)

    Panda, B.; Chopra, I.

    1985-01-01

    The aeroelastic stability of a three-degree-of-freedom flap-lag-torsion blade in forward flight is examined. Quasisteady aerodynamics with a dynamic inflow model is used. The nonlinear, time-dependent periodic blade response is calculated using an iterative procedure based on Floquet theory. The periodic perturbation equations are solved for stability using Floquet transition matrix theory as well as a constant coefficient approximation in the fixed reference frame. Results are presented for both stiff-inplane and soft-inplane blade configurations. The effects of several parameters on blade stability are examined, including structural coupling, pitch-flap and pitch-lag coupling, torsion stiffness, steady inflow distribution, dynamic inflow, the blade response solution and the constant coefficient approximation.

  1. Motion and Emotional Behavior Design for Pet Robot Dog

    NASA Astrophysics Data System (ADS)

    Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang

    A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform some emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit processor 8051, and an embedded soft-core processor NIOS) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish the walking motion. A behavior system is built on the implemented pet robot so that it is able to choose a suitable behavior for different environmental situations. From the practical test, we can see that the implemented pet robot dog can engage in some emotional interaction with humans.

  2. Boring apparatus capable of boring straight holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, C.R.

    The invention relates to a rock boring assembly for producing a straight hole, for use in a drill string above a pilot boring bit of predetermined diameter smaller than the desired final hole size. The boring assembly comprises a small conical boring bit and a larger conical boring bit mounted on the lower and upper ends of an elongated spacer, respectively, the major effective cutting diameter of each conical boring bit being at least 10% greater than the minor effective cutting diameter of the respective bit. The spacer has a cross-section resistant to bending and spaces the conical boring bits apart a distance at least 5 times the major cutting diameter of the small conical boring bit, thereby spacing the pivot points provided by the two conical boring bits to limit bodily angular deflection of the assembly and providing a substantial moment arm to resist lateral forces applied to the assembly by the pilot bit and drill string. The spacing between the conical bits is less than about 20 times the major cutting diameter of the lower conical boring bit to enable the spacer to act as a bend-resistant beam to resist angular deflection of the axis of either of the conical boring bits relative to the other when it receives uneven lateral force due to non-uniformity of cutting conditions about the circumference of the bit. Advantageously, the boring bits also are self-advancing and feature skewed rollers. 7 claims.

  3. On the performance of joint iterative detection and decoding in coherent optical channels with laser frequency fluctuations

    NASA Astrophysics Data System (ADS)

    Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.

    2015-08-01

    The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10-15. Our study shows that JIDD with a pilot rate ⩽ 5 % compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.

  4. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.

  5. A Compression Algorithm for Field Programmable Gate Arrays in the Space Environment

    DTIC Science & Technology

    2011-12-01

    [Excerpt garbled in extraction from the source PDF. The recoverable text concerns Equation (V.3), which is implemented with a string of XOR gates and Bit Basher blocks (Figure 31): the Bit Basher blocks separate each 35-bit value into 35 one-bit values, and the string of XOR gates combines them; the equation itself did not survive extraction.]

  6. Drag bit construction

    DOEpatents

    Hood, M.

    1986-02-11

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face. 4 figs.

  7. Drag bit construction

    DOEpatents

    Hood, Michael

    1986-01-01

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face.

  8. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm-1 which was increased to 1.2 mm-1 by SDIR, at half maximum.

  9. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm-1 which was increased to 1.2 mm-1 by SDIR, at half maximum.
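
    SDIR is solved with a split Bregman method; as a much simpler stand-in that still shows the "deblur while you reconstruct" idea, here is a hedged Python sketch of gradient descent on a ||K*x - b||^2 + lambda*TV(x) objective with a measured blur kernel K. This is not the authors' algorithm, and the CT system model is replaced by a plain convolution:

        import numpy as np
        from scipy.signal import fftconvolve

        def tv_grad(x, eps=1e-8):
            # Gradient of a smoothed isotropic total-variation penalty.
            gx = np.diff(x, axis=0, append=x[-1:, :])
            gy = np.diff(x, axis=1, append=x[:, -1:])
            norm = np.sqrt(gx**2 + gy**2 + eps)
            div_x = np.diff(gx / norm, axis=0, prepend=(gx / norm)[:1, :])
            div_y = np.diff(gy / norm, axis=1, prepend=(gy / norm)[:1, :])
            return -(div_x + div_y)

        def deblur_recon(meas, kernel, n_iter=200, step=0.5, lam=0.01):
            # Minimise ||K*x - meas||^2 + lam*TV(x) by gradient descent, with the
            # blur kernel measured e.g. from an MTF phantom slice.
            x = meas.copy()
            k_flip = kernel[::-1, ::-1]   # adjoint of the convolution operator
            for _ in range(n_iter):
                resid = fftconvolve(x, kernel, mode="same") - meas
                grad = 2 * fftconvolve(resid, k_flip, mode="same") + lam * tv_grad(x)
                x = np.clip(x - step * grad, 0.0, None)
            return x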

  10. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches are given in tables.
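
    For contrast with the direct TAQMV solution, a hedged Python sketch of the iterative baseline: Gauss-Newton least squares on the overdetermined TOA equations. The altitude soft-constraint row and per-measurement noise weighting are omitted for brevity, and all names are illustrative:

        import numpy as np

        C = 299792458.0   # speed of light, m/s

        def toa_gauss_newton(anchors, toas, n_iter=20):
            # Solve toas[i] ~= t0 + ||p - anchors[i]|| / C for p (3-vector) and t0.
            x = np.append(anchors.mean(axis=0), toas.min())   # crude initial guess
            for _ in range(n_iter):
                diff = x[:3] - anchors
                d = np.linalg.norm(diff, axis=1)
                resid = toas - (x[3] + d / C)
                J = np.hstack([diff / (C * d[:, None]), np.ones((len(toas), 1))])
                dx, *_ = np.linalg.lstsq(J, resid, rcond=None)
                x += dx
            return x[:3], x[3]   # estimated position and emission time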

  11. Interior tomography from differential phase contrast data via Hilbert transform based on spline functions

    NASA Astrophysics Data System (ADS)

    Yang, Qingsong; Cong, Wenxiang; Wang, Ge

    2016-10-01

    X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle to this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation, which is a new type of the interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing and iterative reconstruction, the difficulty of interior reconstruction from DPC data lies in the fact that the implementation of the system matrix requires a differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.

  12. PDC-bit performance under simulated borehole conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, E.E.; Azar, J.J.

    1993-09-01

    Laboratory drilling tests were used to investigate the effects of pressure on polycrystalline-diamond-compact (PDC) drill-bit performance. Catoosa shale core samples were drilled with PDC and roller-cone bits at up to 1,750-psi confining pressure. All tests were conducted in a controlled environment with a full-scale laboratory drilling system. Test results indicate that, under similar operating conditions, increases in confining pressure reduce PDC-bit performance as much as or more than conventional-rock-bit performance. Specific energy calculations indicate that a combination of rock strength, chip hold-down, and bit balling may have reduced performance. Quantifying the degree to which pressure reduces PDC-bit performance will help researchers interpret test results and improve bit designs and will help drilling engineers run PDC bits more effectively in the field.

  13. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  14. Smart built-in test

    NASA Astrophysics Data System (ADS)

    Richards, Dale W.

    1990-03-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  15. All gates lead to smoking: the 'gateway theory', e-cigarettes and the remaking of nicotine.

    PubMed

    Bell, Kirsten; Keane, Helen

    2014-10-01

    The idea that drug use in 'softer' forms leads to 'harder' drug use lies at the heart of the gateway theory, one of the most influential models of drug use of the twentieth century. Although hotly contested, the notion of the 'gateway drug' continues to rear its head in discussions of drug use--most recently in the context of electronic cigarettes. Based on a critical reading of a range of texts, including scholarly literature and media reports, we explore the history and gestation of the gateway theory, highlighting the ways in which intersections between academic, media and popular accounts actively produced the concept. Arguing that the theory has been critical in maintaining the distinction between 'soft' and 'hard' drugs, we turn to its distinctive iteration in the context of debates about e-cigarettes. We show that the notion of the 'gateway' has been transformed from a descriptive to a predictive model, one in which nicotine is constituted as simultaneously 'soft' and 'hard'--as both relatively innocuous and incontrovertibly harmful. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Hey! A Flea Bit Me!

    MedlinePlus


  17. Hey! A Louse Bit Me!

    MedlinePlus


  18. PDC bits: What's needed to meet tomorrow's challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, T.M.; Sinor, L.A.

    1994-12-31

    When polycrystalline diamond compact (PDC) bits were introduced in the mid-1970s, they showed tantalizingly high penetration rates in laboratory drilling tests. Single cutter tests indicated that they had the potential to drill very hard rocks. Unfortunately, 20 years later we're still striving to reach the potential that these bits seem to have. Many problems have been overcome, and PDC bits have offered capabilities not possible with roller cone bits. PDC bits provide the most economical bit choice in many areas, but their limited durability has hampered their application in many other areas.

  19. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function about pixel value differences. We assume that the material similarities of pixels remains in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on electron density map compared with the conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and accurate electron density maps as conventional two-full-scan DECT.

  20. Validation of 3D documentation of palatal soft tissue shape, color, and irregularity with intraoral scanning.

    PubMed

    Deferm, Julie T; Schreurs, Ruud; Baan, Frank; Bruggink, Robin; Merkx, Matthijs A W; Xi, Tong; Bergé, Stefaan J; Maal, Thomas J J

    2018-04-01

    The purpose of this study was to assess the feasibility of 3D intraoral scanning for documentation of palatal soft tissue by evaluating the accuracy of shape, color, and curvature. Intraoral scans of ten participants' upper dentition and palate were acquired with the TRIOS® 3D intraoral scanner by two observers. Conventional impressions were taken and digitized as a gold standard. The resulting surface models were aligned using an Iterative Closest Point approach. The absolute distance measurements between the intraoral models and the digitized impression were used to quantify the trueness and precision of intraoral scanning. The mean color of the palatal soft tissue was extracted in HSV (hue, saturation, value) format to establish the color precision. Finally, the mean curvature of the surface models was calculated and used for surface irregularity. Mean average distance error between the conventional impression models and the intraoral models was 0.02 ± 0.07 mm (p = 0.30). Mean interobserver color difference was - 0.08 ± 1.49° (p = 0.864), 0.28 ± 0.78% (p = 0.286), and 0.30 ± 1.14% (p = 0.426) for respectively hue, saturation, and value. The interobserver differences for overall and maximum surface irregularity were 0.01 ± 0.03 and 0.00 ± 0.05 mm. This study supports the hypothesis that the intraoral scan can perform a 3D documentation of palatal soft tissue in terms of shape, color, and curvature. An intraoral scanner can be an objective tool, adjunctive to the clinical examination of the palatal tissue.
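
    The alignment step named in this record, the Iterative Closest Point approach, can be sketched in a few lines of Python (rigid registration via nearest neighbours and the Kabsch SVD fit); this is the generic textbook version, not the study's implementation:

        import numpy as np
        from scipy.spatial import cKDTree

        def best_fit_transform(src, dst):
            # Kabsch: rigid rotation R and translation t minimizing ||R@src + t - dst||.
            c_src, c_dst = src.mean(0), dst.mean(0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:   # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, c_dst - R @ c_src

        def icp(src, dst, n_iter=50, tol=1e-7):
            # Align point cloud src to dst; returns the aligned copy of src.
            tree = cKDTree(dst)
            cur, prev_err = src.copy(), np.inf
            for _ in range(n_iter):
                dists, idx = tree.query(cur)
                R, t = best_fit_transform(cur, dst[idx])
                cur = cur @ R.T + t
                err = dists.mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return cur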

  1. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve a result similar to that of a 16 bit HEVC codec.
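
    The byte-plane mapping described here is straightforward to sketch; a hedged numpy version of splitting a 16 bit depth image into MSB and LSB 8-bit images for the two 8-bit codecs, and merging them back after decoding:

        import numpy as np

        def split_16bit(img16):
            # Most- and least-significant byte planes of a uint16 image.
            msb = (img16 >> 8).astype(np.uint8)
            lsb = (img16 & 0xFF).astype(np.uint8)
            return msb, lsb

        def merge_16bit(msb, lsb):
            # Recombine after the two 8-bit codecs decode their planes.
            return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

    In the lossy setting the record studies, the two planes are compressed at different qualities, so the merge reconstructs an approximation of the original 16 bit image.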

  2. Performance test of different 3.5 mm drill bits and consequences for orthopaedic surgery.

    PubMed

    Clement, Hans; Zopf, Christoph; Brandner, Markus; Tesch, Norbert P; Vallant, Rudolf; Puchwein, Paul

    2015-12-01

    Drilling of bones in orthopaedic and trauma surgery is a common procedure. There are as yet no recommendations as to which drill bits or coatings should be preferred, or when to change a used drill bit. In preliminary studies, typical "drilling patterns" of surgeons, in terms of applied spindle speed and feeding force, were recorded. Different feeding forces were tested and abrasion was analysed using magnification and a scanning electron microscope (SEM). The acquired data were used for programming a friction stir welding machine (FSWM). Four drill bits (a default AISI 440A, an HSS, an AISI 440B and a zirconium-oxide drill bit) were analysed for abrasive wear after 20/40/60 machine-guided and hand-driven drilled holes. Additionally, different drill coatings [diamond-like carbon/graphitic (DLC), titanium nitride/carbide (Ti-N)] were tested. The mean feeding force applied by surgeons was 45 ± 15.6 Newton (N). HSS bits were still usable after 51 drill holes. Both coated AISI 440A bits showed considerable breakouts of the main cutting edge after 20 hand-driven drilled holes. The coated HSS bit showed very low abrasive wear. The non-coated AISI 440B bit had a durability similar to the HSS bits. The ZrO2 dental drill bit excelled its competitors (no considerable abrasive wear at >100 holes). If the default AISI 440A drill bit cannot be checked under 20-30× magnification after surgery, it should be replaced after 20 hand-driven drilled holes. Low-price coated HSS bits could be a powerful alternative.

  3. Bit-1 Mediates Integrin-dependent Cell Survival through Activation of the NFκB Pathway*

    PubMed Central

    Griffiths, Genevieve S.; Grundl, Melanie; Leychenko, Anna; Reiter, Silke; Young-Robbins, Shirley S.; Sulzmaier, Florian J.; Caliva, Maisel J.; Ramos, Joe W.; Matter, Michelle L.

    2011-01-01

    Loss of properly regulated cell death and cell survival pathways can contribute to the development of cancer and cancer metastasis. Cell survival signals are modulated by many different receptors, including integrins. Bit-1 is an effector of anoikis (cell death due to loss of attachment) in suspended cells. The anoikis function of Bit-1 can be counteracted by integrin-mediated cell attachment. Here, we explored integrin regulation of Bit-1 in adherent cells. We show that knockdown of endogenous Bit-1 in adherent cells decreased cell survival and re-expression of Bit-1 abrogated this effect. Furthermore, reduction of Bit-1 promoted both staurosporine and serum-deprivation induced apoptosis. Indeed knockdown of Bit-1 in these cells led to increased apoptosis as determined by caspase-3 activation and positive TUNEL staining. Bit-1 expression protected cells from apoptosis by increasing phospho-IκB levels and subsequently bcl-2 gene transcription. Protection from apoptosis under serum-free conditions correlated with bcl-2 transcription and Bcl-2 protein expression. Finally, Bit-1-mediated regulation of bcl-2 was dependent on focal adhesion kinase, PI3K, and AKT. Thus, we have elucidated an integrin-controlled pathway in which Bit-1 is, in part, responsible for the survival effects of cell-ECM interactions. PMID:21383007

  4. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
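
    A hedged Python sketch of the quantization core described here: alternately shaving (zeroing) and setting (one-filling) trailing mantissa bits of consecutive float32 values. The NCO implementation additionally maps a requested number of decimal digits to mantissa bits, which is omitted:

        import numpy as np

        def bit_groom(data, keep_bits):
            # keep_bits: explicit float32 mantissa bits to retain (of 23).
            f = np.ascontiguousarray(data, dtype=np.float32).copy()
            u = f.view(np.uint32).ravel()
            drop = max(23 - keep_bits, 0)
            set_mask = np.uint32((1 << drop) - 1)
            shave_mask = np.uint32(0xFFFFFFFF) ^ set_mask
            u[0::2] &= shave_mask   # shave: quantize toward zero (low bias)
            u[1::2] |= set_mask     # set: quantize away from zero (high bias)
            return f                # alternating biases roughly cancel in means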

  5. Causes of wear of PDC bits and ways of improving their wear resistance

    NASA Astrophysics Data System (ADS)

    Timonin, VV; Smolentsev, AS; Shakhtorin, I. O.; Polushin, NI; Laptev, AI; Kushkhabiev, AS

    2017-02-01

    The paper covers the basic factors that influence PDC bit efficiency and illustrates feasible ways of eliminating the negative effects. The wash fluid flow in a standard bit is modeled, the resulting pattern of bit washing is analyzed, and recommendations are made on modifying the PDC bit design.

  6. Next-to-minimal SOFTSUSY

    NASA Astrophysics Data System (ADS)

    Allanach, B. C.; Athron, P.; Tunstall, Lewis C.; Voigt, A.; Williams, A. G.

    2014-09-01

    We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used.
    Catalogue identifier: ADPM_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPM_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 154886
    No. of bytes in distributed program, including test data, etc.: 1870890
    Distribution format: tar.gz
    Programming language: C++, Fortran.
    Computer: Personal computer.
    Operating system: Tested on Linux 3.x.
    Word size: 64 bits
    Classification: 11.1, 11.6.
    Does the new version supersede the previous version?: Yes
    Catalogue identifier of previous version: ADPM_v3_0
    Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 785
    Nature of problem: Calculating the supersymmetric particle spectrum and mixing parameters in the next-to-minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with boundary conditions on supersymmetry breaking parameters, as well as with the weak-scale boundary conditions on gauge couplings, Yukawa couplings and the Higgs potential parameters.
    Solution method: Nested iterative algorithm and numerical minimisation of the Higgs potential.
    Reasons for new version: Major extension to include the next-to-minimal supersymmetric standard model.
    Summary of revisions: Added additional supersymmetric and supersymmetry breaking parameters associated with the additional gauge singlet. Electroweak symmetry breaking conditions are significantly changed in the next-to-minimal mode, and some sparticle mixing changes. An interface to NMSSMTools has also been included. Some of the object structure has also changed, and the command line interface has been made more user friendly.
    Restrictions: SOFTSUSY will provide a solution only in the perturbative regime and it assumes that all couplings of the model are real (i.e. CP-conserving). If the parameter point under investigation is non-physical for some reason (for example because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message.
    Running time: A few seconds per parameter point.

  7. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  8. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.

  9. An analog method of cross-talk compensation for a RGB wavelength division multiplexed optical link

    NASA Astrophysics Data System (ADS)

    Chisholm, George; Leveneur, Jérôme; Futter, John; Kennedy, John

    2018-06-01

    Pulse-width modulation (PWM) over optical fiber can be a very advantageous data transmission approach when an electrically isolated data link is required. The use of wavelength division multiplexing allows multiple data streams to be sent through a single fiber independently. The present investigation aims to demonstrate a novel approach to reduce cross-talk in a three-channel RGB optical link without the need for complex optical componentry. An op-amp circuit is developed to reduce the cross-talk so that the resolution of the PWM data is preserved. An iterative Monte-Carlo simulation approach is used to optimize the op-amp circuit. The approach is developed for a set of three PWM Hall effect magnetometers with 12-bit resolution and 128 Hz sampling rate. We show that, in these conditions, the loss of resolution due to cross-talk is prevented. We also show that the cross-talk compensation allows the RGB PWM link to outperform other transmission schemes.

  10. Error Control Techniques for Satellite and Space Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1996-01-01

    In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10(exp -5) at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10(exp -5).

  11. Image processing on the image with pixel noise bits removed

    NASA Astrophysics Data System (ADS)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
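
    As a sketch of the signal/noise bit separation (the bit depth and the number of noise bits below are arbitrary, and scipy's generic Sobel filter stands in for the paper's unspecified implementation):

      import numpy as np
      from scipy import ndimage

      def remove_noise_bits(img, noise_bits):
          # Zero the low-order (noise) bits, keeping only the signal bits.
          return img & ~np.uint16((1 << noise_bits) - 1)

      img = (np.random.rand(256, 256) * 4096).astype(np.uint16)  # toy 12-bit image
      cleaned = remove_noise_bits(img, noise_bits=3)
      edges = ndimage.sobel(cleaned.astype(float), axis=0)       # Sobel edge detection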

  12. Drilling and Caching Architecture for the Mars2020 Mission

    NASA Astrophysics Data System (ADS)

    Zacny, K.

    2013-12-01

    We present a Sample Acquisition and Caching (SAC) architecture for the Mars2020 mission and detail how the architecture meets the sampling requirements described in the Mars2020 Science Definition Team (SDT) report. The architecture uses a 'One Bit per Core' approach. Having a dedicated bit for each rock core allows a reduction in the number of core transfer steps and actuators, and this reduces overall mission risk. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. Added advantages are faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). To enable replacement of core samples, the drill bits are based on the BigTooth bit design. The BigTooth bit cuts a core diameter slightly smaller than the imaginary hole inscribed by the inner surfaces of the bits. Hence the rock core can be ejected much more easily along the gravity vector. The architecture also has three additional types of bits that allow analysis of rocks. The Rock Abrasion and Brushing Bit (RABBit) allows brushing and grinding of rocks in the same way as the Rock Abrasion Tool does on MER. The PreView bit allows viewing and analysis of rock core surfaces. The Powder and Regolith Acquisition Bit (PRABit) captures regolith and rock powder either for in situ analysis or sample return. PRABit also allows sieving. The architecture can be viewed here: http://www.youtube.com/watch?v=_-hOO4-zDtE

  13. Drill bit assembly for releasably retaining a drill bit cutter

    DOEpatents

    Glowka, David A.; Raymond, David W.

    2002-01-01

    A drill bit assembly is provided for releasably retaining a polycrystalline diamond compact drill bit cutter. Two adjacent cavities formed in a drill bit body house, respectively, the disc-shaped drill bit cutter and a wedge-shaped cutter lock element with a removable fastener. The cutter lock element engages one flat surface of the cutter to retain the cutter in its cavity. The drill bit assembly thus enables the cutter to be locked against axial and/or rotational movement while still providing for easy removal of a worn or damaged cutter. The ability to adjust and replace cutters in the field reduces the effect of wear, helps maintain performance and improves drilling efficiency.

  14. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to substantially reduce the bit error rate. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and of regular LDPC counterparts. Our simulations show that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.

  15. Dimensions of vehicle sounds perception.

    PubMed

    Wagner, Verena; Kallus, K Wolfgang; Foehl, Ulrich

    2017-10-01

    Vehicle sounds play an important role in customer satisfaction and can constitute a differentiating factor between brands. With an online survey of 1762 German and American customers, the requirement characteristics of high-quality vehicle sounds were determined. On the basis of these characteristics, a requirement profile was generated for every analyzed sound. These profiles were investigated in a second study with 78 customers using real vehicles. The assessment results of the vehicle sounds can be represented using the dimensions "timbre", "loudness", and "roughness/sharpness". The comparison of the requirement profiles and the assessment results shows that the sounds perceived as pleasant and high-quality correspond more often to the requirement profile. High-quality sounds are characterized by the fact that they are rather gentle, soft and reserved, rich, a bit dark and not too rough. For those sounds which are assessed worse by the customers, recommendations for improvements can be derived. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Using Optimal Dependency-Trees for Combinatorial Optimization: Learning the Structure of the Search Space.

    DTIC Science & Technology

    1997-01-01

    create a dependency tree containing an optimum set of n-1 first-order dependencies. To do this, first, we select an arbitrary bit Xroot to place at the root: -Set the root to an arbitrary bit Xroot. -For all other bits Xi, set bestMatchingBitInTree[Xi] to Xroot. -While not all bits have been
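
    The excerpt above is a truncated search snippet of the tree-construction procedure. A hedged Prim-style sketch of that construction, assuming pairwise empirical mutual information as the edge weight (as the bestMatchingBitInTree bookkeeping suggests), might look like:

      import numpy as np

      def mutual_information(samples, i, j):
          # Empirical mutual information between binary variables i and j.
          mi = 0.0
          for a in (0, 1):
              for b in (0, 1):
                  p_ab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                  p_a = np.mean(samples[:, i] == a)
                  p_b = np.mean(samples[:, j] == b)
                  if p_ab > 0:
                      mi += p_ab * np.log(p_ab / (p_a * p_b))
          return mi

      def build_dependency_tree(samples):
          # Grow the tree from an arbitrary root, repeatedly attaching the
          # out-of-tree bit with the highest mutual information to its best
          # match already in the tree. Returns parent[] defining the tree.
          n = samples.shape[1]
          root = 0
          in_tree = {root}
          parent = {root: None}
          best_match = {i: root for i in range(n) if i != root}
          while len(in_tree) < n:
              cand = max((i for i in range(n) if i not in in_tree),
                         key=lambda i: mutual_information(samples, i, best_match[i]))
              parent[cand] = best_match[cand]
              in_tree.add(cand)
              for i in range(n):
                  if i not in in_tree and (mutual_information(samples, i, cand) >
                                           mutual_information(samples, i, best_match[i])):
                      best_match[i] = cand
          return parent

    Each iteration adds the strongest remaining first-order dependency, so the result is a maximum-weight spanning tree over the bits.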

  17. Antiwhirl PDC bits increased penetration rates in Alberta drilling. [Polycrystalline Diamond Compact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrosky, D.; Osmak, G.

    1993-07-05

    The antiwhirl PDC bits and an inhibitive mud system contributed to the quicker drilling of the time-sensitive shales. The hole washouts in the intermediate section were dramatically reduced, resulting in better intermediate casing cement jobs. Also, the use of antirotation PDC-drillable cementing plugs eliminated the need to drill out plugs and float equipment with a steel tooth bit and then trip for the PDC bit. By using an antiwhirl PDC bit, at least one trip was eliminated in the intermediate section. Offset data indicated that two to six conventional bits would have been required to drill the intermediate hole interval. The PDC bit was rebuildable and therefore rerunnable even after being used on five wells. In each instance, the cost of replacing chipped cutters was less than the cost of a new insert roller cone bit. The paper describes the antiwhirl bits; the development of the bits; their application in a clastic sequence, a carbonate sequence, and the Shekilie oil field; the improvement in the rate of penetration; the selection of bottom hole assemblies; washout problems; and drill-out characteristics.

  18. Evaluations of bit sleeve and twisted-body bit designs for controlling roof bolter dust

    PubMed Central

    Beck, T.W.

    2015-01-01

    Drilling into coal mine roof strata to install roof bolts has the potential to release substantial quantities of respirable dust. Due to the proximity of drill holes to the breathing zone of roof bolting personnel, dust escaping the holes and avoiding capture by the dust collection system poses a potential respiratory health risk. Controls are available to complement the typical dry vacuum collection system and minimize harmful exposures during the initial phase of drilling. This paper examines the use of a bit sleeve in combination with a dust-hog-type bit to improve dust extraction during the critical initial phase of drilling. A twisted-body drill bit is also evaluated to determine the quantity of dust liberated in comparison with the dust-hog-type bit. Based on the results of our laboratory tests, the bit sleeve may reduce dust emissions by one-half during the initial phase of drilling, before the drill bit is fully enclosed by the drill hole. Because collaring is responsible for the largest dust liberation, overall dust emission can also be substantially reduced. The use of a twisted-body bit offers minimal improvement in dust capture compared with the commonly used dust-hog-type bit. PMID:26257435

  19. Sample Acquisition and Caching architecture for the Mars Sample Return mission

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Chu, P.; Cohen, J.; Paulsen, G.; Craft, J.; Szwarc, T.

    This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for the three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, we gave the reduction of mission risk greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase to the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. An added advantage is faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of a 1) Rotary-Percussive Core Drill, 2) Bit Storage Carousel, 3) Cache, 4) Robotic Arm, and 5) Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.

  20. Heat Generation During Bone Drilling: A Comparison Between Industrial and Orthopaedic Drill Bits.

    PubMed

    Hein, Christopher; Inceoglu, Serkan; Juma, David; Zuckerman, Lee

    2017-02-01

    Cortical bone drilling for preparation of screw placement is common in multiple surgical fields. The heat generated while drilling may reach thresholds high enough to cause osteonecrosis. This can compromise implant stability. Orthopaedic drill bits are several orders of magnitude more expensive than their similarly sized, publicly available industrial counterparts. We hypothesize that an industrial bit will generate less heat during drilling, and that the bits will not generate more heat after multiple cortical passes. We compared four 4.0-mm orthopaedic drill bits and one 3.97-mm industrial drill bit. Three bits of each type were drilled into porcine femoral cortices 20 times. The temperature of the bone was measured with thermocouple transducers. The heat generated during the first 5 drill cycles for each bit was compared to the last 5 cycles. These data were analyzed with analysis of covariance. The industrial drill bit generated the smallest mean increase in temperature (2.8 ± 0.29°C), P < 0.0001. No significant difference was identified comparing the first 5 cortices drilled to the last 5 cortices drilled for each bit. The P-values are as follows: Bosch (P = 0.73), Emerge (P = 0.09), Smith & Nephew (P = 0.08), Stryker (P = 0.086), and Synthes (P = 0.16). The industrial bit generated less heat during drilling than its orthopaedic counterparts. The bits maintained their performance after 20 drill cycles. Consideration should be given by manufacturers to design differences that may contribute to a more efficient cutting bit. Further investigation into the reuse of these drill bits may be warranted, as our data suggest their efficiency is maintained after multiple uses.

  1. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered images. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
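
    The paper designs proper source and channel codes; the toy fragment below illustrates only the central point that known erasure locations (supplied by the check bits) make lost reference bits recoverable. A single XOR parity block per group stands in for a real channel code.

      import numpy as np

      rng = np.random.default_rng(0)
      blocks = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)  # reference bits
      parity = np.bitwise_xor.reduce(blocks, axis=0)             # parity bits

      erased = 2                 # tampered block index, located via check bits
      received = blocks.copy()
      received[erased] = 0       # contents lost to tampering

      survivors = [received[i] for i in range(4) if i != erased]
      recovered = np.bitwise_xor.reduce(np.array(survivors + [parity]), axis=0)
      assert np.array_equal(recovered, blocks[erased])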

  2. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  3. Results of neutron irradiation of GEM detector for plasma radiation detection

    NASA Astrophysics Data System (ADS)

    Jednorog, S.; Bienkowska, B.; Chernyshova, M.; Łaszynska, E.; Prokopowicz, R.; Ziołkowski, A.

    2015-09-01

    Detection devices dedicated to plasma monitoring will be exposed to massive fluxes of neutrons, photons, and other radiation produced by fusion reactions and by the interaction of their products with the plasma itself or its surroundings. As a result, the metallic components of the detection module become activated and themselves become a source of radiation. Moreover, electronic components may change their electronic properties. The prototype GEM detector constructed for monitoring soft X-ray radiation in ITER-oriented tokamaks was used for plasma monitoring during an experimental campaign on the ASDEX Upgrade tokamak. Afterwards, it became a source of gamma radiation owing to neutron activation. The present work describes the activation of the detector under laboratory conditions.

  4. Theoretical and subjective bit assignments in transform picture coding

    NASA Technical Reports Server (NTRS)

    Jones, H. W., Jr.

    1977-01-01

    It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
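
    The well-known rule referred to here gives each coefficient the average rate plus half the base-2 logarithm of its variance relative to the geometric mean of all variances. A small sketch (the function name is ours; in practice negative assignments are clipped and the budget redistributed):

      import numpy as np

      def bit_assignment(variances, avg_bits):
          # b_k = avg_bits + 0.5 * log2(var_k / geometric_mean(var))
          variances = np.asarray(variances, dtype=float)
          geo_mean = np.exp(np.mean(np.log(variances)))
          return avg_bits + 0.5 * np.log2(variances / geo_mean)

      print(bit_assignment([16.0, 4.0, 1.0, 0.25], avg_bits=2))  # [3.5 2.5 1.5 0.5]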

  5. Effect of bit wear on hammer drill handle vibration and productivity.

    PubMed

    Antonucci, Andrea; Barr, Alan; Martin, Bernard; Rempel, David

    2017-08-01

    The use of large electric hammer drills exposes construction workers to high levels of hand vibration that may lead to hand-arm vibration syndrome and other musculoskeletal disorders. The aim of this laboratory study was to investigate the effect of bit wear on drill handle vibration and drilling productivity (e.g., drilling time per hole). A laboratory test bench system was used with an 8.3 kg electric hammer drill and a 1.9 cm concrete bit (a typical drill and bit used in commercial construction). The system automatically advanced the active drill into aged concrete block under feed force control to a depth of 7.6 cm while handle vibration was measured according to ISO standards (ISO 5349 and 28927). Bits were worn to 4 levels by consecutive hole drilling to 4 cumulative drilling depths: 0; 1,900; 5,700; and 7,600 cm. Z-axis handle vibration increased significantly (p < 0.05) from 4.8 to 5.1 m/s2 (ISO weighted) and from 42.7 to 47.6 m/s2 (unweighted) when comparing a new bit to a bit worn to 1,900 cm of cumulative drilling depth. Handle vibration did not increase further with bits worn beyond 1,900 cm of cumulative drilling depth. Neither x- nor y-axis handle vibration was affected by bit wear. The time to drill a hole increased by 58% for the bit with 5,700 cm of cumulative drilling depth compared to a new bit. Bit wear led to a small but significant increase in both ISO weighted and unweighted z-axis handle vibration. Perhaps more important, bit wear had a large effect on productivity. The effect on productivity will influence a worker's allowable daily drilling time if exposure to drill handle vibration is near the ACGIH Threshold Limit Value. Construction contractors should implement a bit replacement program based on these findings.

  6. BitTorious volunteer: server-side extensions for centrally-managed volunteer storage in BitTorrent swarms.

    PubMed

    Lee, Preston V; Dinu, Valentin

    2015-11-04

    Our publication of the BitTorious portal [1] demonstrated the ability to create a privatized distributed data warehouse of sufficient magnitude for real-world bioinformatics studies using minimal changes to the standard BitTorrent tracker protocol. In this second phase, we release a new server-side specification to accept anonymous philanthropic storage donations from the general public, wherein a small portion of each user's local disk may be used for archival of scientific data. We have implemented the server-side announcement and control portions of this BitTorrent extension in v3.0.0 of the BitTorious portal, upon which compatible clients may be built. Automated test cases for the BitTorious Volunteer extensions have been added to the portal's v3.0.0 release, supporting validation of the "peer affinity" concept and announcement protocol introduced by this specification. Additionally, a separate reference implementation of affinity calculation has been provided in C++ for informaticians wishing to integrate it into libtorrent-based projects. The BitTorrent "affinity" extensions as provided in the BitTorious portal reference implementation allow data publishers to crowdsource the extreme storage prerequisites for research in "big data" fields. With sufficient awareness and adoption of BitTorious Volunteer-based clients by the general public, the BitTorious portal may be able to provide peta-scale storage resources to the scientific community at relatively insignificant financial cost.

  7. PDC bit hydraulics design, profile are key to reducing balling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hariharan, P.R.; Azar, J.J.

    1996-12-09

    Polycrystalline diamond compact (PDC) bits with a parabolic profile and bladed hydraulic design have a lesser tendency to ball during drilling of reactive shales. PDC bits with ribbed or open-face hydraulic designs and those with flat or rounded profiles tended to ball more often in the bit balling experiments conducted. Experimental work also indicates that PDC hydraulic design seems to have a greater influence on bit balling tendency compared to bit profile design. There are five main factors that affect bit balling: formation type, drilling fluid, drilling hydraulics, bit design, and confining pressures. An equation for specific energy showed that it could be used to describe the efficiency of the drilling process by examining the amount of energy spent in drilling a unit volume of rock. This concept of specific energy has been used herein to correlate with the parameter Rd, a parameter that quantifies the degree of balling.

  8. Remote drill bit loader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokos, J.A.

    1996-12-31

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. In typical remote drilling operations, whether in hot cells or water pits, drill bits have been held using a collet or end mill type holder with set screws. In either case, loading or changing a drill bit required the use of master-slave manipulators to position the bits and tighten the collet or set screws. This requirement eliminated many otherwise useful work areas because they were not equipped with slaves, particularly in water pits.

  9. Magnetic nanostructuring and overcoming Brown's paradox to realize extraordinary high-temperature energy products

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Balamurugan; Mukherjee, Pinaki; Skomski, Ralph; Manchanda, Priyanka; Das, Bhaskar; Sellmyer, David J.

    2014-09-01

    Nanoscience has been one of the outstanding driving forces in technology recently, arguably more so in magnetism than in any other branch of science and technology. Due to nanoscale bit size, a single computer hard disk is now able to store the text of 3,000,000 average-size books, and today's high-performance permanent magnets, found in hybrid cars, wind turbines, and disk drives, are nanostructured to a large degree. The nanostructures ideally are designed from Co- and Fe-rich building blocks without critical rare-earth elements, and often are required to exhibit high coercivity and magnetization at elevated temperatures of typically up to 180 °C for many important permanent-magnet applications. Here we achieve this goal in exchange-coupled hard-soft composite films by effective nanostructuring of high-anisotropy HfCo7 nanoparticles with a high-magnetization Fe65Co35 phase. An analysis based on a model structure shows that the soft-phase addition improves the performance of the hard-magnetic material by mitigating Brown's paradox in magnetism, a substantial reduction of coercivity from the anisotropy field. The nanostructures exhibit a high room-temperature energy product of about 20.3 MGOe (161.5 kJ/m3), which is a record for a rare-earth- or Pt-free magnetic material, and retain values as high as 17.1 MGOe (136.1 kJ/m3) at 180 °C.

  10. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.

  11. Remote drill bit loader

    DOEpatents

    Dokos, J.A.

    1997-12-30

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. 5 figs.

  12. Region-of-interest determination and bit-rate conversion for H.264 video transcoding

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan

    2013-12-01

    This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard to fit the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find the models of video bit-rate conversion. The transcoded video will conform to the target bit-rate by re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding for low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
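
    The paper fits its own conversion models; the fragment below shows only the mechanics of such a curve-fitting step, with a hypothetical inverse-law rate model and made-up measurements.

      import numpy as np

      qp = np.array([10.0, 20.0, 30.0, 40.0])               # quantization parameters
      rate_kbps = np.array([2400.0, 1150.0, 760.0, 590.0])  # hypothetical measured rates

      A = np.column_stack([1.0 / qp, np.ones_like(qp)])     # assume rate ~ a/QP + b
      (a, b), *_ = np.linalg.lstsq(A, rate_kbps, rcond=None)
      qp_for_target = a / (900.0 - b)                       # invert for a 900 kbps target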

  13. Remote drill bit loader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokos, James A.

    A drill bit loader for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned.

  14. Remote drill bit loader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokos, J.A.

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. 5 figs.

  15. Graded bit patterned magnetic arrays fabricated via angled low-energy He ion irradiation.

    PubMed

    Chang, L V; Nasruallah, A; Ruchhoeft, P; Khizroev, S; Litvinov, D

    2012-07-11

    A bit patterned magnetic array based on Co/Pd magnetic multilayers with a binary perpendicular magnetic anisotropy distribution was fabricated. The binary anisotropy distribution was attained through angled helium ion irradiation of a bit edge using hydrogen silsesquioxane (HSQ) resist as an ion stopping layer to protect the rest of the bit. The viability of this technique was explored numerically and evaluated through magnetic measurements of the prepared bit patterned magnetic array. The resulting graded bit patterned magnetic array showed a 35% reduction in coercivity and a 9% narrowing of the standard deviation of the switching field.

  16. Design and implementation of low complexity wake-up receiver for underwater acoustic sensor networks

    NASA Astrophysics Data System (ADS)

    Yue, Ming

    This thesis designs a low-complexity dual pseudorandom noise (PN) scheme for identity (ID) detection and coarse frame synchronization. The two PN sequences for a node are identical and are separated by a gap of specified length, which serves as the ID of the sensor node. The dual PN sequences are short but capable of combating severe underwater acoustic (UWA) multipath fading channels that exhibit time-varying impulse responses of up to 100 taps. Receiver ID detection is implemented on an MSP430F5529 microcontroller by calculating the correlation between the two segments of the PN sequence at the specified separation gap. When the gap length is matched, the correlator outputs a peak that triggers the wake-up enable. The time index of the correlator peak is used for coarse synchronization of the data frame. The correlator is implemented by an iterative algorithm that uses only one multiplication and two additions for each input sample regardless of the length of the PN sequence, thus achieving low computational complexity. The real-time processing requirement is also met via direct memory access (DMA) and two circular buffers that accelerate data transfer between the peripherals and the memory. The proposed dual PN detection scheme has been successfully tested on simulated fading channels and real-world measured channels. The results show that, in long multipath channels with more than 60 taps, the proposed scheme achieves a high detection rate and a low false alarm rate using maximal-length sequences as short as 31 to 127 bits; it is therefore suitable as a low-power wake-up receiver. Future research will integrate the wake-up receiver with digital signal processors (DSPs) for payload detection.
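
    The one-multiplication, two-addition update is a sliding-window trick: each new sample contributes one fresh product with the sample one gap earlier, and the product that falls out of the window is subtracted. A sketch for real-valued samples (the DMA and circular-buffer details of the MSP430 implementation are simplified away):

      import numpy as np

      def gap_correlator(gap, length):
          # Streaming sum of x[n] * x[n - gap] over the last length samples.
          delay = np.zeros(gap)        # holds the previous gap samples
          products = np.zeros(length)  # holds the products inside the window
          state = {"sum": 0.0, "n": 0}

          def step(x):
              i = state["n"]
              p = x * delay[i % gap]                    # one multiplication
              state["sum"] += p - products[i % length]  # two additions
              products[i % length] = p
              delay[i % gap] = x
              state["n"] += 1
              return state["sum"]

          return step

    The running sum peaks when the received signal contains two identical PN segments separated by exactly the configured gap, which is what identifies the node ID and marks the coarse frame position.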

  17. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  18. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    PubMed

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
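
    The sparse-to-dense TPS interpolation step can be reproduced with scipy (assuming scipy 1.7 or later for RBFInterpolator); the landmark positions and displacements below are synthetic.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(1)
      landmarks = rng.uniform(0, 32, size=(50, 3))      # matched landmark points
      displacements = rng.normal(0, 1.5, size=(50, 3))  # their correspondences

      # Thin-plate-spline interpolation of the sparse displacements...
      tps = RBFInterpolator(landmarks, displacements, kernel='thin_plate_spline')
      # ...evaluated on a dense voxel grid to obtain the deformation field.
      grid = np.stack(np.meshgrid(*[np.arange(32.0)] * 3, indexing='ij'), axis=-1)
      dense_field = tps(grid.reshape(-1, 3)).reshape(32, 32, 32, 3)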

  19. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
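
    The analytically available posterior is the standard Gaussian-process regression form; the sketch below shows that closed form under Gaussian noise, omitting the line-integral forward model and the non-stationary kernel construction specific to the tomography setting.

      import numpy as np

      def gp_posterior(K, K_star, K_starstar, y, noise_var):
          # Posterior mean and covariance of a GP observed with Gaussian noise:
          # K is the prior covariance at the measurement points, K_star the
          # covariance between prediction and measurement points.
          A = K + noise_var * np.eye(len(y))
          L = np.linalg.cholesky(A)
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
          mean = K_star @ alpha
          v = np.linalg.solve(L, K_star.T)
          cov = K_starstar - v.T @ v
          return mean, cov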

  20. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  1. Accurate tissue characterization in low-dose CT imaging with pure iterative reconstruction.

    PubMed

    Murphy, Kevin P; McLaughlin, Patrick D; Twomey, Maria; Chan, Vincent E; Moloney, Fiachra; Fung, Adrian J; Chan, Faimee E; Kao, Tafline; O'Neill, Siobhan B; Watson, Benjamin; O'Connor, Owen J; Maher, Michael M

    2017-04-01

    We assess the ability of low-dose hybrid iterative reconstruction (IR) and 'pure' model-based IR (MBIR) images to maintain accurate Hounsfield unit (HU)-determined tissue characterization. Standard-protocol (SP) and low-dose modified-protocol (MP) CTs were contemporaneously acquired in 34 Crohn's disease patients referred for CT. SP image reconstruction was via the manufacturer's recommendations (60% FBP, filtered back projection; 40% ASiR, Adaptive Statistical iterative Reconstruction; SP-ASiR40). MP data sets underwent four reconstructions (100% FBP; 40% ASiR; 70% ASiR; MBIR). Three observers measured tissue volumes using HU thresholds for fat, soft tissue and bone/contrast on each data set. Analysis was via SPSS. Inter-observer agreement was strong for 1530 datapoints (rs > 0.9). MP-MBIR tissue volume measurement was superior to the other MP reconstructions and closely correlated with the reference SP-ASiR40 images for all tissue types. MP-MBIR superiority was most marked for fat volume calculation: close SP-ASiR40 and MP-MBIR Bland-Altman plot correlation was seen, with the lowest average difference (336 cm3) compared with the other MP reconstructions. Hounsfield unit-determined tissue volume calculations from MP-MBIR images resulted in values comparable to SP-ASiR40 calculations and superior to those from MP-ASiR images. The accuracy of estimating tissue volumes (e.g. fat) using segmentation software on low-dose CT images appears optimal when images are reconstructed with pure IR. © 2016 The Royal Australian and New Zealand College of Radiologists.
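
    The HU-threshold volume measurement itself is a simple counting operation. In the sketch below the HU windows are common textbook ranges, not necessarily those used in the study.

      import numpy as np

      def tissue_volumes(hu, voxel_cm3):
          # Count voxels inside each HU window and scale by voxel volume.
          fat = np.count_nonzero((hu >= -190) & (hu <= -30)) * voxel_cm3
          soft = np.count_nonzero((hu > -30) & (hu < 150)) * voxel_cm3
          bone_or_contrast = np.count_nonzero(hu >= 150) * voxel_cm3
          return fat, soft, bone_or_contrast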

  2. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging in SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. However, in agreement with previously reported results, we also observe in our system that the real-world 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit image data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range, which can manifest itself as an increase in image artefacts due to strong Fresnel reflection.
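
    Numerically truncating 14-bit samples to 8 bits amounts to dropping the six least significant bits, for example:

      import numpy as np

      raw14 = np.random.randint(0, 2 ** 14, size=1024).astype(np.uint16)  # 14-bit data
      trunc8 = (raw14 >> 6).astype(np.uint8)  # keep the 8 most significant bits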

  3. Unimodular sequence design under frequency hopping communication compatibility requirements

    NASA Astrophysics Data System (ADS)

    Ge, Peng; Cui, Guolong; Kong, Lingjiang; Yang, Jianyu

    2016-12-01

    The integrated design of radar and anonymous communication has drawn more attention recently, since wireless communication systems seek enhanced security and reliability. Given a frequency hopping (FH) communication system, an effective way to realize the integrated design is to meet the spectrum compatibility requirement between the two systems. The paper presents a unimodular sequence design technique that optimizes both the spectrum compatibility and the peak sidelobe level (PSL) of the auto-correlation function (ACF). The spectrum compatibility requirement enables anonymous communication for the FH system and provides it with a low probability of intercept (LPI), since the spectrum of the FH system is hidden in that of the radar system. The proposed algorithm, named the generalized fitting template (GFT) technique, converts the sequence design problem into an iterative fitting process. In this process, the power spectral density (PSD) and PSL behaviors of the generated sequences progressively fit both a PSD and a PSL template. The two templates are established from the spectrum compatibility requirement and the expected PSL. As noted, to ensure communication security and reliability, the spectrum compatibility requirement is given higher priority in the GFT algorithm, which adjusts the weight between the two terms adaptively during the iteration. The simulation results are analyzed in terms of bit error rate (BER), PSD, PSL, and signal-to-interference ratio (SIR) for both the radar and FH systems. The performance of GFT is compared with the SCAN, CAN, FRE, CYC, and MAT algorithms in the above aspects, which shows its good effectiveness.
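
    The GFT algorithm itself is not reproduced here, but the flavor of fitting a unimodular sequence to a spectral template can be sketched as alternating projections between the desired spectral magnitude and the unit-modulus constraint (a generic CAN-style iteration, without GFT's adaptive weighting between the PSD and PSL templates):

      import numpy as np

      def fit_spectrum_template(template_mag, n_iter=200, seed=0):
          # Alternate between enforcing the desired spectral magnitude in the
          # frequency domain and restoring unit modulus in the time domain.
          rng = np.random.default_rng(seed)
          n = len(template_mag)
          s = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # random unimodular start
          for _ in range(n_iter):
              S = np.fft.fft(s)
              S = template_mag * np.exp(1j * np.angle(S))  # fit the PSD template
              s = np.fft.ifft(S)
              s = np.exp(1j * np.angle(s))                 # restore unit modulus
          return s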

  4. PDC bits break ground with advanced vibration mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-10-01

    Advancements in PDC bit technology have resulted in the identification and characterization of different types of vibrational modes that historically have limited PDC bit performance. As a result, concepts have been developed that prevent the initiation of vibration and also mitigate its damaging effects once it occurs. This vibration-reducing concept ensures more efficient use of the energy available to a PDC bit, thereby improving its performance. This improved understanding of the complex forces affecting bit performance is driving bit customization for specific drilling programs.

  5. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the specified Manchester encoded bits. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  6. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
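
    A minimal sketch of the flag-buffering idea: a simplified LZSS encoder that emits byte-aligned literals and two-byte pointers, accumulates the match/literal flags in a separate buffer, and appends the packed flags after the token stream. The token format, window size, and the omission of framing metadata (e.g., a token count for the decompressor) are simplifications, not the patent's exact layout:

      def compress_lzss_flagbuffer(data, window=4095, min_len=3, max_len=18):
          """Simplified LZSS where match/literal flag bits are buffered
          separately and appended after the token stream, so literals and
          pointers can later be read without per-token bit manipulation."""
          out, flags, i = bytearray(), [], 0
          while i < len(data):
              best_len, best_off = 0, 0
              for j in range(max(0, i - window), i):     # naive match search
                  l = 0
                  while (l < max_len and i + l < len(data)
                         and data[j + l] == data[i + l]):
                      l += 1
                  if l > best_len:
                      best_len, best_off = l, i - j
              if best_len >= min_len:
                  flags.append(1)                        # pointer token:
                  out += bytes([(best_off >> 4) & 0xFF,  # 12-bit offset,
                                ((best_off & 0xF) << 4)  # 4-bit length
                                | (best_len - min_len)])
                  i += best_len
              else:
                  flags.append(0)                        # literal token
                  out.append(data[i])
                  i += 1
          packed = bytearray()                           # pack buffered flags
          for k in range(0, len(flags), 8):
              chunk, byte = flags[k:k + 8], 0
              for b in chunk:
                  byte = (byte << 1) | b
              packed.append((byte << (8 - len(chunk))) & 0xFF)
          return bytes(out) + bytes(packed)              # flags appended last

      blob = compress_lzss_flagbuffer(b"abcabcabcabcXYZ")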

  7. A new thermal model for bone drilling with applications to orthopaedic surgery.

    PubMed

    Lee, JuEun; Rabin, Yoed; Ozdoganlar, O Burak

    2011-12-01

    This paper presents a new thermal model for bone drilling with applications to orthopaedic surgery. The new model combines a unique heat-balance equation for the system of the drill bit and the chip stream, an ordinary heat diffusion equation for the bone, and heat generation at the drill tip, arising from the cutting process and friction. Modeling of the drill bit-chip stream system assumes an axial temperature distribution and a lumped heat capacity effect in the transverse cross-section. The new model is solved numerically using a tailor-made finite-difference scheme for the drill bit-chip stream system, coupled with a classic finite-difference method for the bone. The theoretical investigation addresses the significance of heat transfer between the drill bit and the bone, heat convection from the drill bit to the surroundings, and the effect of the initial temperature of the drill bit on the developing thermal field. Using the new model, a parametric study on the effects of machining conditions and drill-bit geometries on the resulting temperature field in the bone and the drill bit is presented. Results of this study indicate that: (1) the maximum temperature in the bone decreases with increased chip flow; (2) the transient temperature distribution is strongly influenced by the initial temperature; (3) the continued cooling (irrigation) of the drill bit reduces the maximum temperature even when the tip is distant from the cooled portion of the drill bit; and (4) the maximum temperature increases with increasing spindle speed, increasing feed rate, decreasing drill-bit diameter, increasing point angle, and decreasing helix angle. The model is expected to be useful in determination of optimum drilling conditions and drill-bit geometries. Copyright © 2011. Published by Elsevier Ltd.
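
    As a rough illustration of the finite-difference treatment, the sketch below integrates a 1-D fin-type heat equation along the bit, with an imposed heat flux at the tip and convection losses along the shank. It is far simpler than the paper's coupled drill bit/chip stream/bone model, and every parameter value is illustrative:

      import numpy as np

      # Minimal 1-D explicit finite-difference model of axial conduction in a
      # drill bit: heat input at the tip, convection to the surroundings.
      L, n = 0.05, 51                  # bit length [m], grid points
      dx = L / (n - 1)
      alpha = 1.2e-5                   # thermal diffusivity of steel [m^2/s]
      k = 45.0                         # conductivity [W/m/K]
      h, T_inf = 50.0, 20.0            # convection coeff [W/m^2/K], ambient [C]
      q_tip = 2.0e5                    # heat flux into the tip [W/m^2]
      perim_over_area = 400.0          # perimeter / cross-section [1/m]

      dt = 0.4 * dx**2 / alpha         # explicit stability: dt <= dx^2/(2 alpha)
      T = np.full(n, 25.0)             # initial bit temperature [C]
      for _ in range(int(5.0 / dt)):   # simulate 5 s of drilling
          Tn = T.copy()
          Tn[1:-1] = (T[1:-1]
                      + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
                      - alpha * dt * h * perim_over_area / k * (T[1:-1] - T_inf))
          Tn[0] = Tn[1] + q_tip * dx / k        # imposed flux at the cutting tip
          Tn[-1] = Tn[-2]                       # insulated shank end
          T = Tn
      print(f"peak bit temperature after 5 s: {T.max():.1f} C")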

  8. Method to manufacture bit patterned magnetic recording media

    DOEpatents

    Raeymaekers, Bart; Sinha, Dipen N

    2014-05-13

    A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.

  9. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    Front-matter excerpt (table of contents and acronym list): sections include "4.7.5 Summary of False BIT Alarms (FBA)" and "4.7.6 Smart BIT" (reference: RADC-TR-85-198), where "Smart" BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory; acronyms include PCB (Printed Circuit Board), PGA (Pin Grid Array), PLA (Programmable Logic Array), PLD (Programmable Logic Device), PN (Pseudo-Random Number), and PREDICT (Probabilistic Estimation of ...).

  10. The Behavioral Intervention Technology Model: An Integrated Conceptual and Technological Framework for eHealth and mHealth Interventions

    PubMed Central

    Schueller, Stephen M; Montague, Enid; Burns, Michelle Nicole; Rashidi, Parisa

    2014-01-01

    A growing number of investigators have commented on the lack of models to inform the design of behavioral intervention technologies (BITs). BITs, which include a subset of mHealth and eHealth interventions, employ a broad range of technologies, such as mobile phones, the Web, and sensors, to support users in changing behaviors and cognitions related to health, mental health, and wellness. We propose a model that conceptually defines BITs, from the clinical aim to the technological delivery framework. The BIT model defines both the conceptual and technological architecture of a BIT. Conceptually, a BIT model should answer the questions why, what, how (conceptual and technical), and when. While BITs generally have a larger treatment goal, such goals generally consist of smaller intervention aims (the "why") such as promotion or reduction of specific behaviors, and behavior change strategies (the conceptual "how"), such as education, goal setting, and monitoring. Behavior change strategies are instantiated with specific intervention components or “elements” (the "what"). The characteristics of intervention elements may be further defined or modified (the technical "how") to meet the needs, capabilities, and preferences of a user. Finally, many BITs require specification of a workflow that defines when an intervention component will be delivered. The BIT model includes a technological framework (BIT-Tech) that can integrate and implement the intervention elements, characteristics, and workflow to deliver the entire BIT to users over time. This implementation may be either predefined or include adaptive systems that can tailor the intervention based on data from the user and the user’s environment. The BIT model provides a step towards formalizing the translation of developer aims into intervention components, larger treatments, and methods of delivery in a manner that supports research and communication between investigators on how to design, develop, and deploy BITs. PMID:24905070

  11. The behavioral intervention technology model: an integrated conceptual and technological framework for eHealth and mHealth interventions.

    PubMed

    Mohr, David C; Schueller, Stephen M; Montague, Enid; Burns, Michelle Nicole; Rashidi, Parisa

    2014-06-05

    A growing number of investigators have commented on the lack of models to inform the design of behavioral intervention technologies (BITs). BITs, which include a subset of mHealth and eHealth interventions, employ a broad range of technologies, such as mobile phones, the Web, and sensors, to support users in changing behaviors and cognitions related to health, mental health, and wellness. We propose a model that conceptually defines BITs, from the clinical aim to the technological delivery framework. The BIT model defines both the conceptual and technological architecture of a BIT. Conceptually, a BIT model should answer the questions why, what, how (conceptual and technical), and when. While BITs generally have a larger treatment goal, such goals generally consist of smaller intervention aims (the "why") such as promotion or reduction of specific behaviors, and behavior change strategies (the conceptual "how"), such as education, goal setting, and monitoring. Behavior change strategies are instantiated with specific intervention components or "elements" (the "what"). The characteristics of intervention elements may be further defined or modified (the technical "how") to meet the needs, capabilities, and preferences of a user. Finally, many BITs require specification of a workflow that defines when an intervention component will be delivered. The BIT model includes a technological framework (BIT-Tech) that can integrate and implement the intervention elements, characteristics, and workflow to deliver the entire BIT to users over time. This implementation may be either predefined or include adaptive systems that can tailor the intervention based on data from the user and the user's environment. The BIT model provides a step towards formalizing the translation of developer aims into intervention components, larger treatments, and methods of delivery in a manner that supports research and communication between investigators on how to design, develop, and deploy BITs.

  12. Digital Ratiometer

    NASA Technical Reports Server (NTRS)

    Beer, R.

    1985-01-01

    Small, low-cost comparator with 24-bit precision yields ratio signal from pair of analog or digital input signals. Arithmetic logic chips (bit-slice) sample two 24-bit analog-to-digital converters approximately once every millisecond and accumulate the results in two 24-bit registers. Approach readily modified to arbitrary precision.

  13. Technology Development and Field Trials of EGS Drilling Systems at Chocolate Mountain

    DOE Data Explorer

    Steven Knudsen

    2012-01-01

    Polycrystalline diamond compact (PDC) bits are routinely used in the oil and gas industry for drilling medium to hard rock but have not been adopted for geothermal drilling, largely due to past reliability issues and higher purchase costs. The Sandia Geothermal Research Department has recently completed a field demonstration of the applicability of advanced synthetic diamond drill bits to production geothermal drilling. Two commercially available PDC bits were tested in a geothermal drilling program in the Chocolate Mountains in Southern California. These bits drilled the granitic formations with significantly better rate of penetration (ROP) and bit life than the roller cone bit with which they were compared. Drilling records and bit performance data, along with the associated drilling cost savings, are presented herein. The drilling trials demonstrate that PDC bit technology has matured sufficiently for application to geothermal drilling. This will be especially beneficial for the development of Enhanced Geothermal Systems, whereby resources can be accessed anywhere within the continental US by drilling to deep, hot resources in hard, basement rock formations.

  14. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when the angles of the mmWave channel are quantized in the spatial domain; "off-grid" refers to the likely case that angles do not lie exactly on the quantization grid, which causes power leakage and severely degrades CE performance. A new model is first proposed to formulate the off-grid problem: it divides each continuously distributed angle into a quantized discrete part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to iteratively refine the estimates of the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS), which iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations show that the proposed method greatly enhances the angle detection resolution.
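
    The integral-plus-fractional decomposition at the heart of the off-grid model can be illustrated in a few lines: quantize the spatial frequency u = sin(theta) onto a uniform grid and keep the residual offset. The grid convention and function names are assumptions for illustration:

      import numpy as np

      def split_angle(theta, n_grid):
          """Split a continuous spatial angle into an integral grid part and a
          fractional off-grid offset, in the spirit of the off-grid model.

          The spatial frequency u = sin(theta) is quantized onto n_grid points
          in [-1, 1); the residual offset is what a 'turbo' stage would refine.
          """
          u = np.sin(theta)
          step = 2.0 / n_grid
          idx = int(np.round((u + 1.0) / step)) % n_grid   # integral grid angle
          offset = u - (-1.0 + idx * step)                 # fractional offset
          return idx, offset

      idx, off = split_angle(np.deg2rad(37.3), n_grid=64)
      print(idx, off)    # grid bin plus the leakage-causing residual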

  15. A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials.

    PubMed

    Prabhu, Rajkumar; Whittington, Wilburn R; Patnaik, Sourav S; Mao, Yuxiong; Begonia, Mark T; Williams, Lakiesha N; Liao, Jun; Horstemeyer, M F

    2015-05-18

    This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experiment and FEA strain gage values and first invariant of stresses were in good agreement.
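
    For reference, the classical one-dimensional SHPB (Kolsky) relations commonly used to convert the measured reflected and transmitted strains into specimen strain rate, strain, and stress are shown below; the paper's viscoelastic striker and three-dimensional (bulging) specimen require the coupled FE/ISV treatment on top of these. The symbols are the standard ones, assumed here:

      % Classical 1-D SHPB (Kolsky) relations; A_b, E_b, c_b are the bar's
      % cross-section, modulus, and wave speed, A_s and L_s the specimen's
      % cross-section and length (standard symbols, assumed here).
      \begin{aligned}
        \dot{\varepsilon}_s(t) &= -\frac{2 c_b}{L_s}\,\varepsilon_r(t),\\
        \varepsilon_s(t)       &= -\frac{2 c_b}{L_s}\int_0^t \varepsilon_r(\tau)\,\mathrm{d}\tau,\\
        \sigma_s(t)            &= \frac{A_b}{A_s}\,E_b\,\varepsilon_t(t).
      \end{aligned}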

  16. A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials

    PubMed Central

    Prabhu, Rajkumar; Whittington, Wilburn R.; Patnaik, Sourav S.; Mao, Yuxiong; Begonia, Mark T.; Williams, Lakiesha N.; Liao, Jun; Horstemeyer, M. F.

    2015-01-01

    This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experiment and FEA strain gage values and first invariant of stresses were in good agreement. PMID:26067742

  17. Equine Welfare during Exercise: An Evaluation of Breathing, Breathlessness and Bridles

    PubMed Central

    Mellor, David J.; Beausoleil, Ngaio J.

    2017-01-01

    Simple Summary Horses have superior athletic capabilities due largely to their exceptional cardiorespiratory responses during exercise. This has particular relevance to horses’ potential to experience breathlessness, especially when their athletic performance is reduced by impaired respiratory function. Breathlessness, incorporating three types of unpleasant experiences, has been noted as of significant animal welfare concern in other mammals. However, the potential for breathlessness to occur in horses as usually ridden wearing bitted bridles has not yet been evaluated in detail. Accordingly, key physiological responses to exercise and the consequences of impaired respiratory function are outlined. Then the physiological control of breathing and the generation of the aversive experiences of breathlessness are explained. Finally, the potential for horses with unimpaired and impaired respiratory function to experience the different types of breathlessness is evaluated. This information provides a basis for considering the circumstances in which breathlessness may have significant negative welfare impacts on horses as currently ridden wearing bitted bridles. Potential beneficial impacts on respiratory function of using bitless bridles are then discussed with emphasis on the underlying mechanisms and their relevance to breathlessness. It is noted that direct comparisons of cardiorespiratory responses to exercise in horses wearing bitless and bitted bridles are not available and it is recommended that such studies be undertaken. Abstract Horses engaged in strenuous exercise display physiological responses that approach the upper functional limits of key organ systems, in particular their cardiorespiratory systems. Maximum athletic performance is therefore vulnerable to factors that diminish these functional capacities, and such impairment might also lead to horses experiencing unpleasant respiratory sensations, i.e., breathlessness. The aim of this review is to use existing literature on equine cardiorespiratory physiology and athletic performance to evaluate the potential for various types of breathlessness to occur in exercising horses. In addition, we investigate the influence of management factors such as rein and bit use and of respiratory pathology on the likelihood and intensity of equine breathlessness occurring during exercise. In ridden horses, rein use that reduces the jowl angle, sometimes markedly, and conditions that partially obstruct the nasopharynx and/or larynx, impair airflow in the upper respiratory tract and lead to increased flow resistance. The associated upper airway pressure changes, transmitted to the lower airways, may have pathophysiological sequelae in the alveolae, which, in their turn, may increase airflow resistance in the lower airways and impede respiratory gas exchange. Other sequelae include decreases in respiratory minute volume and worsening of the hypoxaemia, hypercapnia and acidaemia commonly observed in healthy horses during strenuous exercise. These and other factors are implicated in the potential for ridden horses to experience three forms of breathlessness—”unpleasant respiratory effort”, “air hunger” and “chest tightness”—which arise when there is a mismatch between a heightened ventilatory drive and the adequacy of the respiratory response. It is not known to what extent, if at all, such mismatches would occur in strenuously exercising horses unhampered by low jowl angles or by pathophysiological changes at any level of the respiratory tract. 
However, different combinations of the three types of breathlessness seem much more likely to occur when pathophysiological conditions significantly reduce maximal athletic performance. Finally, most horses exhibit clear behavioural evidence of aversion to a bit in their mouths, varying from the bit being a mild irritant to very painful. This in itself is a significant animal welfare issue that should be addressed. A further major point is the potential for bits to disrupt the maintenance of negative pressure in the oropharynx, which apparently acts to prevent the soft palate from rising and obstructing the nasopharynx. The untoward respiratory outcomes and poor athletic performance due to this and other obstructions are well established, and suggest the potential for affected animals to experience significant intensities of breathlessness. Bitless bridle use may reduce or eliminate such effects. However, direct comparisons of the cardiorespiratory dynamics and the extent of any respiratory pathophysiology in horses wearing bitted and bitless bridles have not been conducted. Such studies would be helpful in confirming, or otherwise, the claimed potential benefits of bitless bridle use. PMID:28587125

  18. The Anoikis Effector Bit1 Inhibits EMT through Attenuation of TLE1-Mediated Repression of E-Cadherin in Lung Cancer Cells

    PubMed Central

    Yao, Xin; Pham, Tri; Temple, Brandi; Gray, Selena; Cannon, Cornita; Chen, Renwei; Abdel-Mageed, Asim B.; Biliran, Hector

    2016-01-01

    The mitochondrial Bcl-2 inhibitor of transcription 1 (Bit1) protein is part of an anoikis-regulating pathway that is selectively dependent on integrins. We previously demonstrated that the caspase-independent apoptotic effector Bit1 exerts tumor suppressive function in lung cancer in part by inhibiting anoikis resistance and anchorage-independent growth in vitro and tumorigenicity in vivo. Herein we show a novel function of Bit1 as an inhibitor of cell migration and epithelial–mesenchymal transition (EMT) in the human lung adenocarcinoma A549 cell line. Suppression of endogenous Bit1 expression via siRNA and shRNA strategies promoted mesenchymal phenotypes, including enhanced fibroblastoid morphology and cell migratory potential with concomitant downregulation of the epithelial marker E-cadherin expression. Conversely, ectopic Bit1 expression in A549 cells promoted epithelial transition characterized by cuboidal-like epithelial cell phenotype, reduced cell motility, and upregulated E-cadherin expression. Specific downregulation of E-cadherin in Bit1-transfected cells was sufficient to block Bit1-mediated inhibition of cell motility while forced expression of E-cadherin alone attenuated the enhanced migration of Bit1 knockdown cells, indicating that E-cadherin is a downstream target of Bit1 in regulating cell motility. Furthermore, quantitative real-time PCR and reporter analyses revealed that Bit1 upregulates E-cadherin expression at the transcriptional level through the transcriptional regulator Amino-terminal Enhancer of Split (AES) protein. Importantly, the Bit1/AES pathway induction of E-cadherin expression involves inhibition of the TLE1-mediated repression of E-cadherin, by decreasing TLE1 corepressor occupancy at the E-cadherin promoter as revealed by chromatin immunoprecipitation assays. Consistent with its EMT inhibitory function, exogenous Bit1 expression significantly suppressed the formation of lung metastases of A549 cells in an in vivo experimental metastasis model. Taken together, our studies indicate Bit1 is an inhibitor of EMT and metastasis in lung cancer and hence can serve as a molecular target in curbing lung cancer aggressiveness. PMID:27655370

  19. Effects of size on three-cone bit performance in laboratory drilled shale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, A.D.; DiBona, B.G.; Sandstrom, J.L.

    1982-09-01

    The effects of size on the performance of 3-cone bits were measured during laboratory drilling tests in shale at simulated downhole conditions. Four Reed HP-SM 3-cone bits with diameters of 6 1/2, 7 7/8, 9 1/2 and 11 inches were used to drill Mancos shale with water-based mud. The tests were conducted at constant borehole pressure, two conditions of hydraulic horsepower per square inch of bit area, three conditions of rotary speed and four conditions of weight-on-bit per inch of bit diameter. The resulting penetration rates and torques were measured. Statistical techniques were used to analyze the data.

  20. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel, so it is meaningful to study how to improve it. This work presents a novel LSB-based steganography using reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is treated as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded into the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bits from the secret information. Under this transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences approach almost 50%. Experimental results confirm that the proposed scheme performs well and outperforms the previous schemes in the literature in terms of embedding capacity.
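
    A classical-bit sketch of the embedding rule described above (the paper's scheme operates on quantum image states; channel and bit ordering beyond what the abstract states are assumptions):

      def gray(n):
          """Reflected Gray code of an integer."""
          return n ^ (n >> 1)

      def embed_segment(r, g, b, seg):
          """Embed one 4-bit secret segment into an (R, G, B) pixel.

          Bit 3 of the segment goes into the second LSB of the B channel; the
          remaining three bits are Gray-coded and written into the LSBs of
          R, G, B. Ordering details are an assumption for illustration.
          """
          b = (b & ~0b10) | (((seg >> 3) & 1) << 1)   # second LSB of blue
          g3 = gray(seg & 0b111)                       # Gray-code low 3 bits
          r = (r & ~1) | ((g3 >> 2) & 1)
          g = (g & ~1) | ((g3 >> 1) & 1)
          b = (b & ~1) | (g3 & 1)
          return r, g, b

      print(embed_segment(200, 120, 77, 0b1011))   # -> (200, 121, 78)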

  1. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The difficulty is that most of the samples, assuming the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper examines the quantization of Gaussian samples in the very low SNR regime from an information-theoretic point of view. We consider two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the resulting bit strings. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond two, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significance level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by considering the bits jointly that we achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to approach this limit; hence, non-binary codes are essential to achieve acceptable performance.
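
    A Monte Carlo sketch of the two-bit quantization analysis: generate correlated Gaussians at -3 dB SNR, quantize each into a sign-like bit and a threshold-dependent magnitude bit, and estimate the per-bit mutual information empirically. The threshold value and bit labeling here are illustrative and are not tuned to reproduce the paper's exact figures:

      import numpy as np

      def mi_bits(a, b):
          """Empirical mutual information (bits) between two binary strings."""
          p = np.histogram2d(a, b, bins=2)[0]
          p /= p.sum()
          px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
          nz = p > 0
          return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(1)
      n, snr_db = 200_000, -3.0
      x = rng.standard_normal(n)                          # Alice's samples
      y = x + rng.standard_normal(n) * 10 ** (-snr_db / 20)  # Bob's noisy copy

      t = 0.8                                             # assumed threshold
      mag_a, mag_b = (np.abs(x) > t).astype(int), (np.abs(y) > t).astype(int)
      sgn_a, sgn_b = (x > 0).astype(int), (y > 0).astype(int)

      print("I(sign_A; sign_B) ~", round(mi_bits(sgn_a, sgn_b), 4))
      print("I(mag_A;  mag_B)  ~", round(mi_bits(mag_a, mag_b), 4))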

  2. Hey! A Tick Bit Me!

    MedlinePlus


  3. Core drill's bit is replaceable without withdrawal of drill stem - A concept

    NASA Technical Reports Server (NTRS)

    Rushing, F. C.; Simon, A. B.

    1970-01-01

    The drill bit is divided into several sectors. When collapsed, its outside diameter is small enough that it can be forced down the drill stem; when it reaches the bottom, the sectors are forced outward to form a cutting bit. A dulled bit is retracted by reversing this procedure.

  4. Lathe tool bit and holder for machining fiberglass materials

    NASA Technical Reports Server (NTRS)

    Winn, L. E. (Inventor)

    1972-01-01

    A lathe tool and holder combination for machining resin impregnated fiberglass cloth laminates is described. The tool holder and tool bit combination is designed to accommodate a conventional carbide-tipped, round shank router bit as the cutting medium, and provides an infinite number of cutting angles in order to produce a true and smooth surface in the fiberglass material workpiece with every pass of the tool bit. The technique utilizes damaged router bits which ordinarily would be discarded.

  5. High speed, real-time, camera bandwidth converter

    DOEpatents

    Bower, Dan E; Bloom, David A; Curry, James R

    2014-10-21

    Image data from a CMOS sensor with 10-bit resolution is reformatted in real time so that it can stream through communications equipment designed to transport data with 8-bit resolution. By reformatting the data, the 10-bit image data is transmitted in real time, without a frame delay, through the 8-bit communication equipment.
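
    The patent does not spell out the wire format, but one straightforward way to carry 10-bit pixels over an 8-bit transport without losing resolution is to pack four pixels into five bytes. The sketch below is an assumed format for illustration:

      import numpy as np

      def pack_10to8(pixels):
          """Pack 10-bit pixels into a byte stream (4 pixels -> 5 bytes).

          One straightforward packing that preserves full 10-bit resolution;
          the patent's actual wire format is not specified here.
          """
          pixels = np.asarray(pixels, dtype=np.uint16)
          assert pixels.size % 4 == 0
          out = bytearray()
          for a, b, c, d in pixels.reshape(-1, 4):
              v = (int(a) << 30) | (int(b) << 20) | (int(c) << 10) | int(d)
              out += v.to_bytes(5, "big")
          return bytes(out)

      def unpack_8to10(stream):
          pixels = []
          for k in range(0, len(stream), 5):
              v = int.from_bytes(stream[k:k + 5], "big")
              pixels += [(v >> 30) & 0x3FF, (v >> 20) & 0x3FF,
                         (v >> 10) & 0x3FF, v & 0x3FF]
          return np.array(pixels, dtype=np.uint16)

      raw = np.array([1023, 0, 512, 7], dtype=np.uint16)
      assert (unpack_8to10(pack_10to8(raw)) == raw).all()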

  6. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.

  7. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
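
    The mask-bit trick described above can be illustrated in a few lines: with sign and exponent bits prestored so the word represents 2^23, OR-ing a 14-bit sample into the mantissa yields the valid float 8388608.0 + n, and a single subtraction recovers the value. The specific constant is one standard choice, assumed here, for realizing the patent's "mask bits concatenated with each stored 14-bit data value":

      import numpy as np

      # Prestored mask: sign = 0, biased exponent = 150 (2^23 scaling).
      # OR-ing a value n < 2^23 into the mantissa gives the float 8388608.0 + n.
      MASK = np.uint32(0x4B000000)       # bit pattern of float32 8388608.0

      data14 = np.array([0, 1, 9000, 16383], dtype=np.uint32)  # 14-bit samples
      words = (MASK | data14).view(np.float32)                 # valid float32s
      values = words - np.float32(8388608.0)                   # recover samples
      print(values)                       # -> [0.0, 1.0, 9000.0, 16383.0]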

  8. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.

  9. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA2O2). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bit, and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12 to 8 bit lookup table into the display controllers occurs at 20 MBytes/second (.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be easily manipulated with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.
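
    A sketch of the 12-to-8-bit window/level lookup table such a display pipeline streams data through (parameter names and the linear ramp are assumptions; the workstation's actual table contents are not specified):

      import numpy as np

      def window_level_lut(level, window, in_bits=12):
          """Build a 12-bit -> 8-bit lookup table for brightness/contrast.

          `level` is the center of the displayed intensity window and `window`
          its width, both in raw 12-bit units; values outside clip to 0 or 255.
          """
          x = np.arange(2 ** in_bits, dtype=np.float32)
          lo = level - window / 2.0
          y = (x - lo) * (255.0 / window)
          return np.clip(y, 0, 255).astype(np.uint8)

      lut = window_level_lut(level=2048, window=1024)
      image12 = np.random.default_rng(0).integers(0, 4096, (2048, 1536))
      image8 = lut[image12]        # table lookup, as in the display pipeline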

  10. Hey! A Mosquito Bit Me! (For Kids)

    MedlinePlus


  11. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
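
    A minimal dense-matrix rendering of the Soft-Impute iteration (the paper's scalable version exploits sparse-plus-low-rank structure to avoid full SVDs; this sketch does not):

      import numpy as np

      def soft_impute(X, observed, lam, iters=100, tol=1e-4):
          """Minimal dense Soft-Impute: iteratively replace missing entries
          with values from a soft-thresholded SVD."""
          Z = np.where(observed, X, 0.0)
          for _ in range(iters):
              filled = np.where(observed, X, Z)          # impute missing entries
              U, s, Vt = np.linalg.svd(filled, full_matrices=False)
              s_thr = np.maximum(s - lam, 0.0)           # soft-threshold spectrum
              Z_new = (U * s_thr) @ Vt
              if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
                  return Z_new
              Z = Z_new
          return Z

      # Tiny demo: recover a rank-2 matrix from 60% of its entries.
      rng = np.random.default_rng(0)
      M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
      mask = rng.random(M.shape) < 0.6
      M_hat = soft_impute(M, mask, lam=1.0)
      print(np.abs(M_hat - M)[~mask].mean())   # error on the unobserved entries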

  12. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  13. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    PubMed

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
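
    In symbols (chosen here, not quoted from the paper), the described model takes each voxel's attenuation coefficient as its unknown density times a density-weighted mix of known mass attenuation coefficients, inside the usual polyenergetic forward model:

      % Voxel attenuation model and polyenergetic forward model described in
      % the abstract; the symbols (rho_j, f_k, m_k, a_ij, I_i) are chosen here.
      \mu_j(E) = \rho_j \sum_{k=1}^{K} f_k(\rho_j)\, m_k(E),
      \qquad
      \bar{y}_i = \int I_i(E) \exp\!\Big(-\sum_{j} a_{ij}\, \mu_j(E)\Big)\,\mathrm{d}E .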

  14. A Study of a Standard BIT Circuit.

    DTIC Science & Technology

    1977-02-01

    Front-matter excerpt (table of contents): sections include "Recommended BIT Approaches for QED Modules and Application of the Analytic Measures", "4.1 Built-In-Test for Memory Class Modules" (4.1.1 Random Access ...), "4.1.5.5 Critical Parameters", "4.1.5.6 QED Module Test Equipment Requirements", "4.1.6 Application of Analytic Measures to the ...", "4.2.8 Microprocessor BIT Techniques", "4.2.9 Application of Analytic Measures to the Recommended BIT Approaches", and "4.2.10 Process Class BIT by Partial ...".

  15. New PDC bit optimizes drilling performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besson, A.; Gudulec, P. le; Delwiche, R.

    1996-05-01

    The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new-generation PDC bit tripled the rate of penetration (ROP) and increased the potential footage per bit fivefold. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control PDC cutter quality; use of an advanced cutter layout defined by 3D software; use of cutter face design code for optimized cleaning and cooling; and mastering of vibration-reduction features, including spiraled blades.

  16. A comparison of orthogonal transformations for digital speech processing.

    NASA Technical Reports Server (NTRS)

    Campanella, S. J.; Robinson, G. S.

    1971-01-01

    Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals. To rate their effectiveness in accomplishing this goal, the quantizing error (or noise) resulting from each transformation method at various bit rates is computed and compared with that for conventional companded PCM processing. Based on this comparison, it is found that Karhunen-Loeve provides a reduction in bit rate of 13.5 kbits/s, Fourier 10 kbits/s, and Hadamard 7.5 kbits/s as compared with the bit rate required for companded PCM. These bit-rate reductions are shown to be somewhat independent of the transmission bit rate.

  17. Neighborhood comparison operator

    NASA Technical Reports Server (NTRS)

    Gennery, Donald B. (Inventor)

    1987-01-01

    Digital values in a moving window are compared by an operator having nine comparators (18) connected to line buffers (16) for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table (20) to provide 14 bits of information, including 2 bits which control a selector (22) to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logic OR of all neighboring pixels.

  18. Modular high speed counter employing edge-triggered code

    DOEpatents

    Vanstraelen, Guy F.

    1993-06-29

    A high speed modular counter (100) utilizing a novel counting method in which the first bit changes with the frequency of the driving clock, and changes in the higher order bits are initiated one clock pulse after a "0" to "1" transition of the next lower order bit. This allows all carries to be known one clock period in advance of a bit change. The present counter is modular and utilizes two types of standard counter cells. A first counter cell determines the zero bit. The second counter cell determines any other higher order bit. Additional second counter cells are added to the counter to accommodate any count length without affecting speed.

  19. Modular high speed counter employing edge-triggered code

    DOEpatents

    Vanstraelen, G.F.

    1993-06-29

    A high speed modular counter (100) utilizing a novel counting method in which the first bit changes with the frequency of the driving clock, and changes in the higher order bits are initiated one clock pulse after a "0" to "1" transition of the next lower order bit. This allows all carries to be known one clock period in advance of a bit change. The present counter is modular and utilizes two types of standard counter cells. A first counter cell determines the zero bit. The second counter cell determines any other higher order bit. Additional second counter cells are added to the counter to accommodate any count length without affecting speed.

  20. Serial data correlator/code translator

    NASA Technical Reports Server (NTRS)

    Morgan, L. E. (Inventor)

    1982-01-01

    A system is described for analyzing asynchronous signals containing bits of information and ensuring the validity of those signals by sampling each bit of information a plurality of times and feeding the sampled pieces of bits into a sequence controller. The sequence controller has a plurality of maps or programs through which the sampled pieces of bits are stepped so as to identify the particular bit of information and determine the validity and phase of the bit. The step in which the sequence controller is clocked is controlled by a storage register. A data decoder decodes the information fed out of the storage register and feeds such information to shift registers for storage.

  1. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. The first is projection using fixed-point arithmetic, which removes the floating point operations and reduces the processing time by operating only on integers. The second is replacement of the division inherent in projection with multiplication by the inverse; since computing the inverse exactly would itself require an iterative operation, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic one. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, it improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
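
    The division-free projection idea can be illustrated with an integer-only reciprocal: look up 1/x in a small table and linearly interpolate, all in fixed point. The Q16 format, table size, and interval [1, 2) below are assumptions for illustration; the thesis's quadratic variant would add one more term to the interpolation:

      FRAC = 16                      # Q16 fixed-point format
      ONE = 1 << FRAC

      # Inverse table over [1, 2): 1/x sampled at 33 points, stored in Q16.
      N = 32
      TABLE = [(ONE * ONE) // (ONE + (i << (FRAC - 5))) for i in range(N + 1)]

      def recip_q16(x):
          """Approximate 1/x in Q16 for Q16 x in [1, 2), using table lookup
          plus linear interpolation with integer operations only, standing in
          for the LUT-based linear inverse approximation."""
          frac = x - ONE                       # fractional part (16 bits)
          i = frac >> (FRAC - 5)               # table index (top 5 bits)
          r = frac & ((1 << (FRAC - 5)) - 1)   # remainder (low 11 bits)
          y0, y1 = TABLE[i], TABLE[i + 1]
          return y0 + (((y1 - y0) * r) >> (FRAC - 5))

      x = int(1.37 * ONE)
      print(recip_q16(x) / ONE, 1 / 1.37)      # ~0.72997 vs 0.729927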

  2. Four channel Laser Firing Unit using laser diodes

    NASA Technical Reports Server (NTRS)

    Rosner, David, Sr.; Spomer, Edwin, Sr.

    1994-01-01

    This paper describes the accomplishments and status of PS/EDD's (Pacific Scientific/Energy Dynamics Division) internal research and development effort to prototype and demonstrate a practical four channel laser firing unit (LFU) that uses laser diodes to initiate pyrotechnic events. The LFU individually initiates four ordnance devices using the energy from four diode lasers carried over fiber optics. The LFU demonstrates end-to-end optical built-in test (BIT) capabilities. Both Single Fiber Reflective BIT and Dual Fiber Reflective BIT approaches are discussed, and reflection loss data are presented. This paper includes detailed discussions of the advantages and disadvantages of both BIT approaches, all-fire and no-fire levels, and BIT detection levels. The following topics are also addressed: electronic control and BIT circuits, fiber optic sizing and distribution, and an electromechanical shutter type safe/arm device. This paper shows the viability of laser diode initiation systems and single fiber BIT for military applications.

  3. Note: optical receiver system for 152-channel magnetoencephalography.

    PubMed

    Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong

    2014-11-01

    An optical receiver system comprising 13 serial-data restore/synchronizer modules and a single module combiner converted optical 32-bit serial data into 32-bit synchronous parallel data for a computer to acquire 152-channel magnetoencephalography (MEG) signals. Each serial-data restore/synchronizer module identified the 32 channel-voltage bits within the 48-bit streaming serial data and then consecutively reproduced the 32-bit serial data 13 times, operating on a synchronous clock. After selecting a single one of the 13 reproduced data streams in each module, the module combiner converted it into 32-bit parallel data, which were carried to a 32-port digital input board in a computer. When the receiver system, together with optical transmitters, was applied to 152-channel superconducting quantum interference device sensors, the MEG system maintained a field noise level of 3 fT/√Hz @ 100 Hz at a sample rate of 1 kSample/s per channel.

  4. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically, with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the time-delay signature of the round-trip feedback. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
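
    A sketch of the described post-processing on digitized samples: self-difference against a delayed copy, then keep the m least-significant bits of each difference. The stand-in input and parameter values are illustrative only:

      import numpy as np

      def rbg_postprocess(samples8, delay=1, m_bits=5):
          """Self-differencing + LSB selection for random bit generation from
          a digitized chaotic waveform (parameters illustrative).

          Subtract a delayed copy modulo 256 to whiten slow structure, then
          keep the m least-significant bits of each difference.
          """
          s = np.asarray(samples8, dtype=np.uint8)
          diff = (s[delay:] - s[:-delay]) & 0xFF          # wraparound difference
          lsbs = diff & ((1 << m_bits) - 1)               # keep m LSBs per sample
          bits = ((lsbs[:, None] >> np.arange(m_bits)) & 1).ravel()
          return bits

      chaos = np.random.default_rng(0).integers(0, 256, 10_000)  # stand-in signal
      bits = rbg_postprocess(chaos, m_bits=5)
      print(bits[:16], bits.mean())        # bit stream and its empirical bias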

  5. Purpose-built PDC bit successfully drills 7-in liner equipment and formation: An integrated solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puennel, J.G.A.; Huppertz, A.; Huizing, J.

    1996-12-31

    Historically, drilling out the 7-in. liner equipment has been a time-consuming operation with a limited success ratio, and its success is highly dependent on the type of drill bit employed. Tungsten carbide mills and mill-tooth rock bits required from 7.5 to 11.5 hours, respectively, to drill the pack-off bushings, landing collar, shoe track, and shoe, and penetration rates dropped dramatically when drilling the float equipment. While conventional PDC bits have drilled the liner equipment successfully (averaging 9.7 hours), severe bit damage invariably prevented them from continuing to drill the formation at cost-effective penetration rates. This paper describes the integrated development and application of an IADC M433 class PDC bit designed specifically to drill out the 7-in. liner equipment and continue drilling the formation at satisfactory penetration rates. The development was the result of a joint investigation in which the operator and the bit/liner manufacturers shared their expertise in solving a drilling problem. The heavy-set bit was developed following drill-off tests conducted to investigate the drillability of the 7-in. liner equipment. Key features of the new bit and its application onshore in the Netherlands are presented and analyzed.

  6. Next generation PET data acquisition architectures

    NASA Astrophysics Data System (ADS)

    Jones, W. F.; Reed, J. H.; Everman, J. L.; Young, J. W.; Seese, R. D.

    1997-06-01

    New architectures for higher performance data acquisition in PET are proposed. Improvements are demanded primarily by three areas of advancing PET state of the art. First, larger detector arrays such as the Hammersmith ECAT® EXACT HR++ exceed the addressing capacity of 32 bit coincidence event words. Second, better scintillators (LSO) make depth-of-interaction (DOI) and time-of-flight (TOF) operation more practical. Third, fully optimized single photon attenuation correction requires higher rates of data collection. New technologies which enable the proposed third generation Real Time Sorter (RTS III) include: (1) 80 Mbyte/sec Fibre Channel RAID disk systems, (2) PowerPC on both VMEbus and PCI Local bus, and (3) quadruple interleaved DRAM controller designs. Data acquisition flexibility is enhanced through a wider 64 bit coincidence event word. PET methodology support includes DOI (6 bits), TOF (6 bits), multiple energy windows (6 bits), 512×512 sinogram indexes (18 bits), and 256 crystal rings (16 bits). Throughput of 10 M events/sec is expected for list-mode data collection as well as both on-line and replay histogramming. Fully efficient list-mode storage for each PET application is provided by real-time bit packing of only the active event word bits. Real-time circuits provide DOI rebinning.

  7. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm2 with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW at a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273
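
    As a rough sketch of the XOR-based edge detection idea (the sensor does this in hardware on single-bit pixel outputs; the neighbour choice and threshold below are assumptions for illustration):

        import numpy as np

        def single_bit_edges(image, threshold=128):
            b = (image >= threshold).astype(np.uint8)  # 1-bit pixel values
            edges = np.zeros_like(b)
            edges[:, 1:] |= b[:, 1:] ^ b[:, :-1]       # XOR with left neighbour
            edges[1:, :] |= b[1:, :] ^ b[:-1, :]       # XOR with upper neighbour
            return edges                               # 1 where the 1-bit value changes

    An XOR gate fires exactly where adjacent single-bit values differ, which is why it doubles as an edge detector on binarized data.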

  8. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW at a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.

  9. Simple proof of the impossibility of bit commitment in generalized probabilistic theories using cone programming

    NASA Astrophysics Data System (ADS)

    Sikora, Jamie; Selby, John

    2018-04-01

    Bit commitment is a fundamental cryptographic task, in which Alice commits a bit to Bob such that she cannot later change the value of the bit, while, simultaneously, the bit is hidden from Bob. It is known that ideal bit commitment is impossible within quantum theory. In this work, we show that it is also impossible in generalized probabilistic theories (under a small set of assumptions) by presenting a quantitative trade-off between Alice's and Bob's cheating probabilities. Our proof relies crucially on a formulation of cheating strategies as cone programs, a natural generalization of semidefinite programs. In fact, using the generality of this technique, we prove that this result holds for the more general task of integer commitment.

  10. Multi-Bit Quantum Private Query

    NASA Astrophysics Data System (ADS)

    Shi, Wei-Xu; Liu, Xing-Tong; Wang, Jian; Tang, Chao-Jing

    2015-09-01

    Most of the existing Quantum Private Queries (QPQ) protocols provide only single-bit queries service, thus have to be repeated several times when more bits are retrieved. Wei et al.'s scheme for block queries requires a high-dimension quantum key distribution system to sustain, which is still restricted in the laboratory. Here, based on Markus Jakobi et al.'s single-bit QPQ protocol, we propose a multi-bit quantum private query protocol, in which the user can get access to several bits within one single query. We also extend the proposed protocol to block queries, using a binary matrix to guard database security. Analysis in this paper shows that our protocol has better communication complexity, implementability and can achieve a considerable level of security.

  11. Neighborhood comparison operator

    NASA Technical Reports Server (NTRS)

    Gennery, D. B. (Inventor)

    1985-01-01

    Digital values in a moving window are compared by an operator having nine comparators connected to line buffers for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output plus 2 bits indicating odd-even pixel/line information about the central pixel addresses a lookup table to provide 14 bits of information, including 2 bits which control a selector to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logical OR of all nine pixels through a circuit that implements a very wide OR gate.
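
    A software sketch of the comparator array may help; the bit ordering and the 9-bit lookup-table address below are illustrative assumptions, not the patent's exact wiring:

        import numpy as np

        def neighborhood_compare(img, threshold, compare_to_center=True):
            h, w = img.shape
            out = np.zeros((h - 2, w - 2), dtype=np.uint16)
            center = img[1:-1, 1:-1]
            # Bit 0: the central pixel is always compared with the threshold.
            out |= (center >= threshold).astype(np.uint16)
            # One program-control bit selects the reference for the 8 neighbours.
            ref = center if compare_to_center else threshold
            bit = 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                    out |= (neigh >= ref).astype(np.uint16) << bit
                    bit += 1
            return out  # 9 comparator bits, usable as a lookup-table address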

  12. Ultrasonic rotary-hammer drill

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph (Inventor); Badescu, Mircea (Inventor); Sherrit, Stewart (Inventor); Kassab, Steve (Inventor); Bao, Xiaoqi (Inventor)

    2010-01-01

    A mechanism for drilling or coring by a combination of sonic hammering and rotation. The drill includes a hammering section with a set of preload weights mounted atop a hammering actuator and an axial passage through the hammering section. In addition, a rotary section includes a motor coupled to a drive shaft that traverses the axial passage through the hammering section. A drill bit is coupled to the drive shaft for drilling by a combination of sonic hammering and rotation. The drill bit includes a fluted shaft leading to a distal crown cutter with teeth. The bit penetrates sampled media by repeated hammering action. In addition, the bit is rotated. As it rotates the fluted bit carries powdered cuttings helically upward along the side of the bit to the surface.

  13. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, David D.; Napolitano, Jr., Leonard M.; Redinbo, G. Robert; Shreeve, William O.

    1994-01-01

    An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented.

  14. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, D.D.; Napolitano, L.M. Jr.; Redinbo, G.R.; Shreeve, W.O.

    1994-03-01

    An internally fault-tolerant data error detection and correction integrated circuit device and a method of operating same are described. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented. 8 figures.

  15. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect on pixel values than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
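
    The abstract does not spell out the proposed 16-plane representation, but the Fibonacci scheme it is compared against is standard and easy to sketch. In the Zeckendorf form below, each pixel value gets one "virtual bit" per Fibonacci weight, with no two consecutive weights used:

        def fib_weights(max_value=255):
            # Fibonacci weights 1, 2, 3, 5, ...; twelve planes cover 0..255.
            w = [1, 2]
            while w[-1] + w[-2] <= max_value:
                w.append(w[-1] + w[-2])
            return w

        def fib_planes(value, weights):
            # Greedy Zeckendorf decomposition: a unique representation with no
            # two consecutive Fibonacci numbers, one virtual bit per plane.
            bits = [0] * len(weights)
            for i in range(len(weights) - 1, -1, -1):
                if weights[i] <= value:
                    bits[i] = 1
                    value -= weights[i]
            return bits  # least-significant plane first

        w = fib_weights()
        print(w)                   # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
        print(fib_planes(200, w))  # 200 = 144 + 55 + 1

    Because the higher planes carry smaller weights than in the binary system, flipping a virtual bit there perturbs the pixel value less, which is the quality/capacity trade-off the paper evaluates.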

  16. Seismic Data Preparation Procedures

    DTIC Science & Technology

    1977-10-20

    H. Swindell, and D. Sun. [Garbled front matter omitted: table of contents, abstract and acknowledgments pages.] ... followed by 4 bits of zero padding; Bit 0 = 1 if instrument 1 is bad; Bit 1 = 1 if instrument 2 is bad; Bit 2 = 1 if instrument 3 is bad.

  17. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    PubMed

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    Chaotic external-cavity semiconductor lasers (ECLs) are a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos, generated by optical heterodyning of two ECLs, as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.

  18. Use of One Time Pad Algorithm for Bit Plane Security Improvement

    NASA Astrophysics Data System (ADS)

    Suhardi; Suwilo, Saib; Budhiarti Nababan, Erna

    2017-12-01

    BPCS (Bit-Plane Complexity Segmentation) is a steganography technique that exploits the fact that human vision cannot perceive changes in complex binary patterns in an image. The technique inserts a message by replacing high-complexity, noise-like regions of the bit-planes with bits of the secret message. Because the message bits are stored at precisely known locations, the message can be extracted easily by rearranging the characters previously stored in the noise-like regions of the image, so the secret message is easily discovered by others. In this research, the process of replacing bit-plane content with message bits is modified using the One Time Pad cryptography technique, which aims to increase the security of the bit plane. In the tests performed, combining the One Time Pad cryptographic algorithm with the BPCS steganography technique works well for inserting messages into the vessel image, although insertion into low-dimensional images is poor. The original image and the stego image look identical, and the method produces a good-quality image with a mean PSNR above 30 dB when a large-dimensional image is used as the cover.

  19. On the improvement of Wiener attack on RSA with small private exponent.

    PubMed

    Wu, Mu-En; Chen, Chien-Ming; Lin, Yue-Hsun; Sun, Hung-Min

    2014-01-01

    The RSA system is based on the hardness of the integer factorization problem (IFP). Given an RSA modulus N = pq, it is difficult to determine the prime factors p and q efficiently. One of the most famous short exponent attacks on RSA is the Wiener attack. In 1997, Verheul and van Tilborg used an exhaustive search to extend the boundary of the Wiener attack. Their result shows that the cost of exhaustive search is 2r + 8 bits when extending Wiener's boundary by r bits. In this paper, we first reduce the cost of exhaustive search from 2r + 8 bits to 2r + 2 bits. Then, we propose a method named EPF. With EPF, the cost of exhaustive search is further reduced to 2r - 6 bits when we extend Wiener's boundary by r bits. This means that our result is 2^14 times faster than Verheul and van Tilborg's result. Besides, the security boundary is extended by 7 bits.
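
    The underlying Wiener attack is compact: expand e/N as a continued fraction and test each convergent k/d as a candidate for the secret pair. A minimal sketch with toy parameters (the function names and the textbook instance are ours, not the paper's):

        import math

        def convergents(n, d):
            # Continued-fraction convergents h/k of n/d.
            cf = []
            while d:
                q, r = divmod(n, d)
                cf.append(q)
                n, d = d, r
            h0, h1, k0, k1 = 1, cf[0], 0, 1
            yield h1, k1
            for a in cf[1:]:
                h0, h1 = h1, a * h1 + h0
                k0, k1 = k1, a * k1 + k0
                yield h1, k1

        def wiener_attack(e, N):
            # Each convergent k/d of e/N is tested against e*d = 1 + k*phi(N);
            # a valid phi exposes p + q and hence the factors of N.
            for k, d in convergents(e, N):
                if k == 0 or (e * d - 1) % k:
                    continue
                phi = (e * d - 1) // k
                s = N - phi + 1              # p + q
                disc = s * s - 4 * N         # (p - q)^2
                t = math.isqrt(disc) if disc >= 0 else -1
                if t >= 0 and t * t == disc:
                    return (s + t) // 2, (s - t) // 2
            return None

        # Classic small-exponent instance (d = 5): N = 379 * 239.
        print(wiener_attack(17993, 90581))   # -> (379, 239)

    The exhaustive-search extensions discussed in the paper widen the set of candidate (k, d) pairs around these convergents; the r-bit boundary extension prices exactly that extra search.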

  20. The application of advanced PDC concepts proves effective in south Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlem, J.S.; Baxter, R.L.; Dunn, K.E.

    1996-12-01

    Over the years, a variety of problems with polycrystalline diamond compact (PDC) bit design and application have been documented, with bit whirl being identified as the cause of many inherent problems. The goal of most PDC manufacturers, and the subject of this paper, is development of a better-performing, whirl-resistant PDC bit design. Similarly, the goal for most operators is the lower cost resulting from effective application of such bits. Toward those ends, a cooperative development effort between operators and a manufacturer was undertaken to apply advanced concepts effectively to the design, manufacture, and application of a new series of PDC bits in south Texas. Adoption of design concepts, such as force-balanced cutting structures, asymmetric blade layouts, spiral blade designs, and tracking cutter arrangements, proved effective in countering the destructive effects of bit whirl, and allowed PDC bits to be used in harder formations. Summaries of both operational and economic performance confirm the success of the undertaking.

  1. The investigation of the influence of thermomechanical treatment of the material of rotary cutter bit toolholders on its hardness

    NASA Astrophysics Data System (ADS)

    Chupin, S. A.; Bolobov, V. I.

    2017-02-01

    The causes of failure of the tangential rotary cutter bits of the road header during the driving of stone drifts in rocks of medium strength are analyzed in this article. It was revealed that the most typical cause of failure of cutter bits is premature wear of the toolholder (body) of the cutter bit. It is well known that the most effective way to improve the wear resistance is to increase hardness. The influence of the thermomechanical treatment of the material of the cutter bit toolholder on its hardness is studied. It was established that the thermomechanical treatment of the cutter bit toolholder material results in the increase of its hardness. It was found that the increase of material hardness is proportional to the increase of material strain intensity during thermomechanical treatment. It was concluded that the use of thermomechanical treatment can lead to the increase of both the hardness and wear resistance of the cutter bit material.

  2. On the Improvement of Wiener Attack on RSA with Small Private Exponent

    PubMed Central

    Chen, Chien-Ming; Lin, Yue-Hsun

    2014-01-01

    The RSA system is based on the hardness of the integer factorization problem (IFP). Given an RSA modulus N = pq, it is difficult to determine the prime factors p and q efficiently. One of the most famous short exponent attacks on RSA is the Wiener attack. In 1997, Verheul and van Tilborg used an exhaustive search to extend the boundary of the Wiener attack. Their result shows that the cost of exhaustive search is 2r + 8 bits when extending Wiener's boundary by r bits. In this paper, we first reduce the cost of exhaustive search from 2r + 8 bits to 2r + 2 bits. Then, we propose a method named EPF. With EPF, the cost of exhaustive search is further reduced to 2r − 6 bits when we extend Wiener's boundary by r bits. This means that our result is 2^14 times faster than Verheul and van Tilborg's result. Besides, the security boundary is extended by 7 bits. PMID:24982974

  3. Spectral CT of the extremities with a silicon strip photon counting detector

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the minimum sampling of voxels from 56% to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for (50 mg/mL) bone and <8% for (5 mg/mL) iodine with strong regularization. For smaller inserts, errors of 20-40% were observed and motivate improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.

  4. Limb muscle sound speed estimation by ultrasound computed tomography excluding receivers in bone shadow

    NASA Astrophysics Data System (ADS)

    Qu, Xiaolei; Azuma, Takashi; Lin, Hongxiang; Takeuchi, Hideki; Itani, Kazunori; Tamano, Satoshi; Takagi, Shu; Sakuma, Ichiro

    2017-03-01

    Sarcopenia is the degenerative loss of skeletal muscle ability associated with aging. One cause is the increasing adipose ratio of muscle, which can be estimated from the speed of sound (SOS), since the SOSs of muscle and adipose tissue differ (by about 7%). For SOS imaging, the conventional bent-ray method iteratively finds ray paths and corrects SOS along them by travel-time. However, the iteration is difficult to converge for soft tissue with bone inside because of the large speed variation. In this study, the bent-ray method is modified to produce SOS images of limb muscle with bone inside. The modified method includes three steps. First, travel-time is picked up by a proposed Akaike Information Criterion (AIC) with energy term (AICE) method. The energy term is employed for detecting and discarding transmissive waves that pass through bone (low-energy waves). This results in failed reconstruction for bone, but makes the iteration converge and gives correct SOS for skeletal muscle. Second, ray paths are traced using Fermat's principle. Finally, the simultaneous algebraic reconstruction technique (SART) is employed to correct SOS along ray paths, excluding paths with low-energy waves that may pass through bone. The simulation evaluation was implemented with the k-Wave toolbox using a model of the upper arm. As a result, the SOS of muscle was 1572.0 ± 7.3 m/s, close to the model value of 1567.0 m/s. For in vivo evaluation, a ring transducer prototype was employed to scan cross sections of the lower arm and leg of a healthy volunteer, and the skeletal muscle SOSs were 1564.0 ± 14.8 m/s and 1564.1 ± 18.0 m/s, respectively.

  5. Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties

    NASA Astrophysics Data System (ADS)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with cluster-based multinomial penalty for direct iterative coordinate decent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms of spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly a factor of two compared with weighted least-squares and filtered backprojection reconstructions of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminishes enthusiasm for exploring future application of the mixture penalty.

  6. Turbodrills and innovative PDC bits economically drilled hard formations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boudreaux, R.C.; Massey, K.

    1994-03-28

    The use of turbodrills and polycrystalline diamond compact (PDC) bits with an innovative, tracking cutting structure has improved drilling economics in medium and hard formations in the Gulf of Mexico. Field results have confirmed that turbodrilling with trackset PDC bits reduced drilling costs, compared to offset wells. The combination of turbodrills and trackset bits has been used successfully in a broad range of applications and with various drilling parameters. Formations ranging from medium shales to hard, abrasive sands have been successfully and economically drilled. The tools have been used in both water-based and oil-based muds. Additionally, the turbodrill and trackset PDC bit combination has been stable on directional drilling applications. The locking effect of the cutting structure helps keep the bit on course.

  7. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
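
    The row-generation step lends itself to a short sketch. This is a simplified reading of the construction (the subgroup-parity constraint on two columns of P is not enforced here, and the function names are ours):

        from itertools import combinations

        def build_P(n, m):
            # Rows are distinct m-bit vectors with an odd number (>= 3) of ones,
            # taken in increasing weight until each of the n data bits has a row.
            rows, weight = [], 3
            while len(rows) < n:
                if weight > m:
                    raise ValueError("m is too small for n distinct rows")
                for cols in combinations(range(m), weight):
                    rows.append([1 if i in cols else 0 for i in range(m)])
                    if len(rows) == n:
                        break
                weight += 2  # keep the weight odd
            return rows

        def ecc_bits(data_bits, P):
            # Multiplying the data by P over GF(2) yields the m redundant bits.
            m = len(P[0])
            return [sum(d & row[j] for d, row in zip(data_bits, P)) % 2
                    for j in range(m)]

        P = build_P(n=8, m=8)
        print(ecc_bits([1, 0, 1, 1, 0, 0, 1, 0], P))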

  8. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

    Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting the 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process involves the intra-operative acquisition of tissue surfaces. It can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allow more accurate deformation identification and facilitate the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated the disparity refinement procedure: (1) increases the number of reconstructed points by up to 43 % and (2) does not affect the accuracy of the 3D reconstructions significantly. Both methods give results that compare favorably with the state-of-the-art methods. The computational time constrains their applicability in real time, but it can be greatly improved by using a GPU implementation.

  9. Concept for a beryllium divertor with in-situ plasma spray surface regeneration

    NASA Astrophysics Data System (ADS)

    Smith, M. F.; Watson, R. D.; McGrath, R. T.; Croessmann, C. D.; Whitley, J. B.; Causey, R. A.

    1990-04-01

    Two serious problems with the use of graphite tiles on the ITER divertor are the limited lifetime due to erosion and the difficulty of replacing broken tiles inside the machine. Beryllium is proposed as an alternative low-Z armor material because the plasma spray process can be used to make in-situ repairs of eroded or damaged surfaces. Recent advances in plasma spray technology have produced beryllium coatings of 98% density with a 95% deposition efficiency and strong adhesion to the substrate. With existing technology, the entire active region of the ITER divertor surface could be coated with 2 mm of beryllium in less than 15 h using four small plasma spray guns. Beryllium also has other potential advantages over graphite, e.g., efficient gettering of oxygen, ten times less tritium inventory, reduced problems of transient fueling from D/T exchange and release, no runaway erosion cascades from self-sputtering, better adhesion of redeposited material, as well as higher strength, ductility, and fracture toughness than graphite. A 2-D finite element stress analysis was performed on a 3 mm thick Be tile brazed to an OFHC soft-copper saddle block, which was brazed to a high-strength copper tube. Peak stresses remained 50% below the ultimate strength for both brazing and in-service thermal stresses.

  10. CT coronary angiography: impact of adapted statistical iterative reconstruction (ASIR) on coronary stenosis and plaque composition analysis.

    PubMed

    Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-03-01

    To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis as well as on stenosis quantification in high definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0 %) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100 % (no FBP). Plaque analysis (volume, components and stenosis degree) was performed using a previously validated automated software. Mean values for minimal diameter and minimal area as well as degree of stenosis did not change significantly using different ASIR reconstructions. There was virtually no impact of reconstruction algorithms on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified component). However, with increasing ASIR contribution, the percentage of plaque volume component between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which has been developed for noise reduction in latest high resolution CCTA scans, can be used reliably without interfering with the plaque analysis and stenosis severity assessment.

  11. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1977-01-01

    One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
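
    A minimal one-dimensional ADM codec may make the idea concrete; the step-size bounds and growth factor below are illustrative assumptions, not the values used in the study:

        import numpy as np

        def adm_step(b, prev_b, step, d_min, d_max, grow):
            # Step size grows while successive bits agree, shrinks on a reversal.
            return min(step * grow, d_max) if b == prev_b else max(step / grow, d_min)

        def adm_encode(x, d_min=1.0, d_max=16.0, grow=1.5):
            bits, est, step, prev = [], 0.0, d_min, 1
            for s in x:
                b = 1 if s >= est else 0            # one bit per sample
                step = adm_step(b, prev, step, d_min, d_max, grow)
                est += step if b else -step         # tracked estimate
                bits.append(b)
                prev = b
            return bits

        def adm_decode(bits, d_min=1.0, d_max=16.0, grow=1.5):
            out, est, step, prev = [], 0.0, d_min, 1
            for b in bits:                          # mirror the encoder state
                step = adm_step(b, prev, step, d_min, d_max, grow)
                est += step if b else -step
                out.append(est)
                prev = b
            return np.array(out)

        x = 50 * np.sin(np.linspace(0, 4 * np.pi, 200))
        y = adm_decode(adm_encode(x))
        print(float(np.mean((x - y) ** 2)))         # reconstruction MSE

    At one bit per sample this is the 1 bit/pixel operating point; the two-dimensional and interframe variants extend the prediction to neighboring pixels and frames.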

  12. Percussive Augmenter of Rotary Drills for Operating as a Rotary-Hammer Drill

    NASA Technical Reports Server (NTRS)

    Aldrich, Jack Barron (Inventor); Bar-Cohen, Yoseph (Inventor); Sherrit, Stewart (Inventor); Badescu, Mircea (Inventor); Bao, Xiaoqi (Inventor); Scott, James Samson (Inventor)

    2014-01-01

    A percussive augmenter bit includes a connection shaft for mounting the bit onto a rotary drill. In a first modality, an actuator percussively drives the bit, and an electric slip-ring provides power to the actuator while being rotated by the drill. Hammering action from the actuator and rotation from the drill are applied directly to material being drilled. In a second modality, a percussive augmenter includes an actuator that operates as a hammering mechanism that drives a free mass into the bit creating stress pulses that fracture material that is in contact with the bit.

  13. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  14. Zn-BTC MOFs with active metal sites synthesized via a structure-directing approach for highly efficient carbon conversion.

    PubMed

    Huang, Xianqiang; Chen, Yifa; Lin, Zhengguo; Ren, Xiaoqian; Song, Yuna; Xu, Zhenzhu; Dong, Xinmei; Li, Xingguo; Hu, Changwen; Wang, Bo

    2014-03-11

    Three zinc-trimesic acid (Zn-BTC) MOFs, BIT-101, BIT-102 and BIT-103, have been synthesized via a structure-directing strategy. Interestingly, BIT-102 and BIT-103 exhibit extraordinary catalytic performance (up to 100% conversion and 95.2% selectivity) in the cycloaddition of CO2 under solvent- and halogen-free conditions without any additives or co-catalysts.

  15. Field-Deployable Video Cloud Solution

    DTIC Science & Technology

    2016-03-01

    [Garbled front-matter fragments: table-of-contents entries "Shipboard Server or Video Cloud System" and "4G LTE and Wi-Fi"; acronym-list entries: local area network; LED, light emitting diode; Li-ion, lithium ion; LTE, long term evolution; Mbps, mega-bits per second; MBps, mega-bytes per second.] ... restrictions on distribution. File size is dependent on both bit rate and content length. Bit rate is a value measured in bits per second (bps) and is

  16. An interlaboratory study of TEX86 and BIT analysis of sediments, extracts, and standard mixtures

    NASA Astrophysics Data System (ADS)

    Schouten, Stefan; Hopmans, Ellen C.; Rosell-Melé, Antoni; Pearson, Ann; Adam, Pierre; Bauersachs, Thorsten; Bard, Edouard; Bernasconi, Stefano M.; Bianchi, Thomas S.; Brocks, Jochen J.; Carlson, Laura Truxal; Castañeda, Isla S.; Derenne, Sylvie; Selver, Ayça. Doǧrul; Dutta, Koushik; Eglinton, Timothy; Fosse, Celine; Galy, Valier; Grice, Kliti; Hinrichs, Kai-Uwe; Huang, Yongsong; Huguet, Arnaud; Huguet, Carme; Hurley, Sarah; Ingalls, Anitra; Jia, Guodong; Keely, Brendan; Knappy, Chris; Kondo, Miyuki; Krishnan, Srinath; Lincoln, Sara; Lipp, Julius; Mangelsdorf, Kai; Martínez-García, Alfredo; Ménot, Guillemette; Mets, Anchelique; Mollenhauer, Gesine; Ohkouchi, Naohiko; Ossebaar, Jort; Pagani, Mark; Pancost, Richard D.; Pearson, Emma J.; Peterse, Francien; Reichart, Gert-Jan; Schaeffer, Philippe; Schmitt, Gaby; Schwark, Lorenz; Shah, Sunita R.; Smith, Richard W.; Smittenberg, Rienk H.; Summons, Roger E.; Takano, Yoshinori; Talbot, Helen M.; Taylor, Kyle W. R.; Tarozo, Rafael; Uchida, Masao; van Dongen, Bart E.; Van Mooy, Benjamin A. S.; Wang, Jinxiang; Warren, Courtney; Weijers, Johan W. H.; Werne, Josef P.; Woltering, Martijn; Xie, Shucheng; Yamamoto, Masanobu; Yang, Huan; Zhang, Chuanlun L.; Zhang, Yige; Zhao, Meixun; Damsté, Jaap S. Sinninghe

    2013-12-01

    Two commonly used proxies based on the distribution of glycerol dialkyl glycerol tetraethers (GDGTs) are the TEX86 (TetraEther indeX of 86 carbon atoms) paleothermometer for sea surface temperature reconstructions and the BIT (Branched Isoprenoid Tetraether) index for reconstructing soil organic matter input to the ocean. An initial round-robin study of two sediment extracts, in which 15 laboratories participated, showed relatively consistent TEX86 values (reproducibility ±3-4°C when translated to temperature) but a large spread in BIT measurements (reproducibility ±0.41 on a scale of 0-1). Here we report results of a second round-robin study with 35 laboratories in which three sediments, one sediment extract, and two mixtures of pure, isolated GDGTs were analyzed. The results for TEX86 and BIT index showed improvement compared to the previous round-robin study. The reproducibility, indicating interlaboratory variation, of TEX86 values ranged from 1.3 to 3.0°C when translated to temperature. These results are similar to those of other temperature proxies used in paleoceanography. Comparison of the results obtained from one of the three sediments showed that TEX86 and BIT indices are not significantly affected by interlaboratory differences in sediment extraction techniques. BIT values of the sediments and extracts were at the extremes of the index with values close to 0 or 1, and showed good reproducibility (ranging from 0.013 to 0.042). However, the measured BIT values for the two GDGT mixtures, with known molar ratios of crenarchaeol and branched GDGTs, had intermediate BIT values and showed poor reproducibility and a large overestimation of the "true" (i.e., molar-based) BIT index. The latter is likely due to, among other factors, the higher mass spectrometric response of branched GDGTs compared to crenarchaeol, which also varies among mass spectrometers. Correction for this different mass spectrometric response showed a considerable improvement in the reproducibility of BIT index measurements among laboratories, as well as a substantially improved estimation of molar-based BIT values. This suggests that standard mixtures should be used in order to obtain consistent, and molar-based, BIT values.
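
    For reference, the BIT index is conventionally computed from the peak areas of the three branched GDGTs (I, II, III) and crenarchaeol; a response-factor correction of the kind discussed above can be folded in directly. The sketch below is ours (the formula is the standard definition, and the factor value is laboratory-specific):

        def bit_index(gdgt_I, gdgt_II, gdgt_III, cren, branched_response=1.0):
            # BIT = (I + II + III) / (I + II + III + Cren); dividing the branched
            # peak areas by a measured response factor approximates molar ratios.
            branched = (gdgt_I + gdgt_II + gdgt_III) / branched_response
            return branched / (branched + cren)

        print(bit_index(10.0, 5.0, 2.0, 100.0))                         # area-based
        print(bit_index(10.0, 5.0, 2.0, 100.0, branched_response=2.0))  # corrected

    Because branched GDGTs ionize more efficiently than crenarchaeol on many instruments, the uncorrected area-based value overestimates the molar-based index at intermediate values, which is exactly the discrepancy the round-robin study observed.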

  17. Mars Rover Step Toward Possible Resumption of Drilling

    NASA Image and Video Library

    2017-10-23

    NASA's Curiosity Mars rover conducted a test on Oct. 17, 2017, as part of the rover team's development of a new way to use the rover's drill. This image from Curiosity's front Hazard Avoidance Camera (Hazcam) shows the drill's bit touching the ground during an assessment of measurements by a sensor on the rover's robotic arm. Curiosity used its drill to acquire sample material from Martian rocks 15 times from 2013 to 2016. In December 2016, the drill's feed mechanism stopped working reliably. During the test shown in this image, the rover touched the drill bit to the ground for the first time in 10 months. The image has been adjusted to brighten shaded areas so that the bit is more evident. The date was the 1,848th Martian day, or sol, of Curiosity's work on Mars. In drill use prior to December 2016, two contact posts -- the stabilizers on either side of the bit -- were placed on the target rock while the bit was in a withdrawn position. Then the motorized feed mechanism within the drill extended the bit forward, and the bit's rotation and percussion actions penetrated the rock. A promising alternative now under development and testing -- called feed-extended drilling -- uses motion of the robotic arm to directly advance the extended bit into a rock. In this image, the bit is touching the ground but the stabilizers are not. In the Sol 1848 activity, Curiosity pressed the drill bit downward, and then applied smaller sideways forces while taking measurements with a force/torque sensor on the arm. The objective was to gain understanding about how readings from the sensor can be used during drilling to adjust for any sideways pressure that might risk the bit becoming stuck in a rock. While rover-team engineers are working on an alternative drilling method, the mission continues to examine sites on Mount Sharp, Mars, with other tools. https://photojournal.jpl.nasa.gov/catalog/PIA22063

  18. Development of a Piezoelectric Rotary Hammer Drill

    NASA Technical Reports Server (NTRS)

    Domm, Lukas N.

    2011-01-01

    The Piezoelectric Rotary Hammer Drill is designed to core through rock using a combination of rotation and high frequency hammering powered by a single piezoelectric actuator. It is designed as a low axial preload, low mass, and low power device for sample acquisition on future missions to extraterrestrial bodies. The purpose of this internship is to develop and test a prototype of the Piezoelectric Rotary Hammer Drill in order to verify the use of a horn with helical or angled cuts as a hammering and torque inducing mechanism. Through an iterative design process using models in ANSYS Finite Element software and a Mason's Equivalent Circuit model in MATLAB, a horn design was chosen for fabrication based on the predicted horn tip motion, electromechanical coupling, and neutral plane location. The design was then machined and a test bed assembled. The completed prototype has proven that a single piezoelectric actuator can be used to produce both rotation and hammering in a drill string through the use of a torque inducing horn. Final data results include bit rotation produced versus input power, and best drilling rate achieved with the prototype.

  19. A Submillimeter Resolution PET Prototype Evaluated With an 18F Inkjet Printed Phantom

    NASA Astrophysics Data System (ADS)

    Schneider, Florian R.; Hohberg, Melanie; Mann, Alexander B.; Paul, Stephan; Ziegler, Sibylle I.

    2015-10-01

    This work presents a submillimeter resolution PET (Positron Emission Tomography) scanner prototype based on SiPM/MPPC arrays (Silicon Photomultiplier/Multi Pixel Photon Counter). Onto each active area a 1 ×1 ×20 mm3 LYSO (Lutetium-Yttrium-Oxyorthosilicate) scintillator crystal is coupled one-to-one. Two detector modules facing each other in a distance of 10.0 cm have been set up with in total 64 channels that are digitized by SADCs (Sampling Analog to Digital Converters) with 80 MHz, 10 bit resolution and FPGA (Field Programmable Gate Array) based extraction of energy and time information. Since standard phantoms are not sufficient for testing submillimeter resolution at which positron range is an issue, a 18F inkjet printed phantom has been used to explore the limit in spatial resolution. The phantom could be successfully reconstructed with an iterative MLEM (Maximum Likelihood Expectation Maximization) and an analytically calculated system matrix based on the DRF (Detector Response Function) model. The system yields a coincidence time resolution of 4.8 ns FWHM, an energy resolution of 20%-30% FWHM and a spatial resolution of 0.8 mm.

  20. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) code is proposed based on the finite field multiplicative group, which offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over an additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. So it is more suitable for optical communication systems.
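
    A common way to realize a multiplicative-group construction is an exponent matrix over GF(q), with each entry expanded into a circulant permutation matrix. The sketch below illustrates the idea only; the parameters a, b, q and the block dimensions are our assumptions, since the paper's exact values are not given in the abstract:

        import numpy as np

        def qc_ldpc_parity(q, rows, cols, a, b):
            # Exponent e(i, j) = a^i * b^j mod q over the multiplicative group
            # of GF(q); each exponent becomes a (q-1)x(q-1) circulant
            # permutation matrix, so H is quasi-cyclic by construction.
            z = q - 1
            def cpm(shift):
                return np.roll(np.eye(z, dtype=np.uint8), shift % z, axis=1)
            return np.block([[cpm(pow(a, i, q) * pow(b, j, q) % q)
                              for j in range(cols)] for i in range(rows)])

        H = qc_ldpc_parity(q=13, rows=3, cols=6, a=3, b=2)
        print(H.shape, int(H.sum(axis=0)[0]))  # (36, 72), column weight 3

    Changing the number of block rows and columns, or the expansion size q - 1, is what gives the flexible code-length and code-rate adjustment the paper highlights.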

  1. Design and implementation of the modified signed digit multiplication routine on a ternary optical computer.

    PubMed

    Xu, Qun; Wang, Xianchao; Xu, Chao

    2017-06-01

    Multiplication on traditional electronic computers suffers from limited calculating accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Also, its parallel algorithm and optimization techniques are studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates the data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the user. Finally, the routine is verified by simulation experiments, and the results are fully consistent with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and fewer calculating delays.
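
    The carry-free adder at the core of such a routine can be illustrated in a few lines. This is a standard two-pass MSD addition sketch (the usual T/W transform rules), not the authors' optical implementation:

        def msd_add(a, b):
            # MSD digits are in {-1, 0, 1}, least-significant first. Two transform
            # passes guarantee the final digit-wise sum never overflows, so every
            # output digit could be produced in parallel (as on optical hardware).
            n = max(len(a), len(b)) + 3
            a = a + [0] * (n - len(a))
            b = b + [0] * (n - len(b))
            t, w = [0] * n, [0] * n
            for i in range(n - 1):          # pass 1: a + b = 2*T + W
                s = a[i] + b[i]
                if s >= 1:
                    t[i + 1], w[i] = 1, s - 2
                elif s <= -1:
                    t[i + 1], w[i] = -1, s + 2
            t2, w2 = [0] * n, [0] * n
            for i in range(n - 1):          # pass 2: only |T + W| = 2 carries
                s = t[i] + w[i]
                if s == 2:
                    t2[i + 1] = 1
                elif s == -2:
                    t2[i + 1] = -1
                else:
                    w2[i] = s
            return [t2[i] + w2[i] for i in range(n)]  # digits stay in {-1, 0, 1}

        def msd_to_int(digits):
            return sum(v << i for i, v in enumerate(digits))

        five = [1, 0, -1, 1]                    # 1 - 4 + 8 = 5
        print(msd_to_int(msd_add(five, five)))  # 10

    Because no digit position ever waits on a carry from below, all positions can be transformed simultaneously, which is what makes the scheme attractive for parallel optical hardware.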

  2. Evaluation of Clipping Based Iterative PAPR Reduction Techniques for FBMC Systems

    PubMed Central

    Kollár, Zsolt

    2014-01-01

    This paper investigates filter bank multicarrier (FBMC), a multicarrier modulation technique exhibiting an extremely low adjacent channel leakage ratio (ACLR) compared to the conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. A large dynamic range resulting in a high peak-to-average power ratio (PAPR) is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain. Spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on the reduction of the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented on transmitter oriented techniques employing baseband clipping, which can maintain the system performance with a desired bit error rate (BER). PMID:24558338
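
    A baseline iterative clipping-and-filtering loop of the kind such papers build on can be sketched as follows. The oversampling factor, clipping ratio, and iteration count are illustrative, and a generic multicarrier signal stands in for FBMC (whose filter-bank details are omitted):

        import numpy as np

        def papr_db(x):
            p = np.abs(x) ** 2
            return 10 * np.log10(p.max() / p.mean())

        def clip_and_filter(x, in_band, cr_db=6.0, iterations=4):
            # Clip the envelope, then zero the out-of-band bins to contain the
            # spectral regrowth; filtering restores some peaks, hence iterate.
            for _ in range(iterations):
                a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)
                mag = np.abs(x)
                x = np.where(mag > a, x * (a / np.maximum(mag, 1e-12)), x)
                X = np.fft.fft(x)
                X[~in_band] = 0
                x = np.fft.ifft(X)
            return x

        rng = np.random.default_rng(1)
        sym = (rng.choice([-1, 1], 512) + 1j * rng.choice([-1, 1], 512)) / np.sqrt(2)
        X = np.zeros(1024, dtype=complex)
        X[:256], X[-256:] = sym[:256], sym[256:]    # 2x oversampled subcarriers
        in_band = np.zeros(1024, dtype=bool)
        in_band[:256] = in_band[-256:] = True       # occupied bins only
        x = np.fft.ifft(X)
        print(papr_db(x), papr_db(clip_and_filter(x, in_band)))

    The tension the paper studies is visible here: harder clipping lowers PAPR but injects distortion (raising BER), while the filtering step that protects the neighboring bands partially regrows the peaks.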

  3. Automated segmentation of three-dimensional MR brain images

    NASA Astrophysics Data System (ADS)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account variable intensities of MR brain images corresponding to various image acquisition conditions. In segmentation process, we iteratively apply 2D morphological operations and masking for the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, final 3D brain regions are obtained by applying OR operation for segmentation results of three planes. Finally we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 3D MR brain image sets with 8-bit gray-scale. Experiment results show the proposed algorithm is fast, and provides robust and satisfactory results.
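
    A drastically simplified sketch of the three-plane morphological pipeline is shown below; the threshold and structuring choices are placeholders, and the paper's adaptive thresholding and spinal-cord restoration steps are not reproduced:

        import numpy as np
        from scipy import ndimage

        def segment_volume(vol, n_iter=3):
            def per_plane(stack):
                out = np.zeros(stack.shape, dtype=bool)
                for k in range(stack.shape[0]):
                    mask = stack[k] > stack[k].mean()   # crude adaptive threshold
                    mask = ndimage.binary_opening(mask, iterations=n_iter)
                    out[k] = ndimage.binary_closing(mask, iterations=n_iter)
                return out
            sag = per_plane(vol)                                        # sagittal
            cor = per_plane(vol.transpose(1, 0, 2)).transpose(1, 0, 2)  # coronal
            axi = per_plane(vol.transpose(2, 0, 1)).transpose(1, 2, 0)  # axial
            return sag | cor | axi    # OR of the three plane-wise results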

  4. Downregulation of Bit1 expression promotes growth, anoikis resistance, and transformation of immortalized human bronchial epithelial cells via Erk activation-dependent suppression of E-cadherin.

    PubMed

    Yao, Xin; Gray, Selena; Pham, Tri; Delgardo, Mychael; Nguyen, An; Do, Stephen; Ireland, Shubha Kale; Chen, Renwei; Abdel-Mageed, Asim B; Biliran, Hector

    2018-01-01

    The mitochondrial Bit1 protein exerts tumor-suppressive function in NSCLC through induction of anoikis and inhibition of EMT. Having this dual tumor suppressive effect, its downregulation in the established human lung adenocarcinoma A549 cell line resulted in potentiation of tumorigenicity and metastasis in vivo. However, the exact role of Bit1 in regulating malignant growth and transformation of human lung epithelial cells, which are origin of most forms of human lung cancers, has not been examined. To this end, we have downregulated the endogenous Bit1 expression in the immortalized non-tumorigenic human bronchial epithelial BEAS-2B cells. Knockdown of Bit1 enhanced the growth and anoikis insensitivity of BEAS-2B cells. In line with their acquired anoikis resistance, the Bit1 knockdown BEAS-2B cells exhibited enhanced anchorage-independent growth in vitro but failed to form tumors in vivo. The loss of Bit1-induced transformed phenotypes was in part attributable to the repression of E-cadherin expression since forced exogenous E-cadherin expression attenuated the malignant phenotypes of the Bit1 knockdown cells. Importantly, we show that the loss of Bit1 expression in BEAS-2B cells resulted in increased Erk activation, which functions upstream to promote TLE1-mediated transcriptional repression of E-cadherin. These collective findings indicate that loss of Bit1 expression contributes to the acquisition of malignant phenotype of human lung epithelial cells via Erk activation-induced suppression of E-cadherin expression. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Morphology of powders of tungsten carbide used in wear-resistant coatings and deposition on the PDC drill bits

    NASA Astrophysics Data System (ADS)

    Zakharova, E. S.; Markova, I. Yu; Maslov, A. L.; Polushin, N. I.; Laptev, A. I.

    2017-05-01

    Modern drill bits suffer high abrasive wear in the area of contact with the rock and the removed sludge. Currently, these bits have a protective layer on the bit body, which consists of a metal matrix with inclusions of carbide particles. Research into the matrix of this coating and the wear-resistant particles is a prerequisite for the design and production of drill bits. In this work, a comprehensive investigation was made of various carbide powders of the Relit grades (tungsten carbide produced by Ltd “ROSNAMIS”), which are used as wear-resistant particles in the coating of the drill bit body. The morphology and phase composition of the chosen powders, as well as the influence of particle shape on the prospects for their application in wear-resistant coatings, are presented in this work.

  6. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Cheng, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.

  7. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects the image coding method, from CSI-based modified JPEG and standard JPEG, under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  8. Reconfigurable data path processor

    NASA Technical Reports Server (NTRS)

    Donohoe, Gregory (Inventor)

    2005-01-01

    A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously having an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output; each processor is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of the first processable value; a second set may be generated from a second processing operation. An arithmetic-status-bit multiplexer selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating them.
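
    The conditional-multiplexer behavior described above can be modeled in a few lines. The following Python sketch is a simplified software analogue (the reduction of the masked status bits to a one-bit command is an assumption; the abstract does not specify it):

```python
def conditional_mux(first_input, second_input, status_bits, mask):
    # Process the arithmetic status bits through the logical mask to form
    # the output control command; a nonzero result selects the first input.
    output_control = status_bits & mask
    return first_input if output_control else second_input

# Example: a hypothetical status word whose bit 2 is a zero flag.
print(conditional_mux(42, 7, status_bits=0b0100, mask=0b0100))  # -> 42
print(conditional_mux(42, 7, status_bits=0b0001, mask=0b0100))  # -> 7
```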

  9. Development of a Tool Condition Monitoring System for Impregnated Diamond Bits in Rock Drilling Applications

    NASA Astrophysics Data System (ADS)

    Perez, Santiago; Karakus, Murat; Pellet, Frederic

    2017-05-01

    The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, the mechanism can be disrupted by blunting of the bit, which ultimately leads to less than optimal drilling performance. For this reason, this paper investigates the applicability of artificial-intelligence-based techniques for monitoring the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bits while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then used to create two approaches for predicting the wear state of the bits: one based on the combination of the aforementioned variables and another on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, with the subset composed of AErms and tob standing out due to its high classification rates and fewer input variables.
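
    As a rough illustration of the classification stage (not the authors' code or data), the sketch below trains a k-nearest-neighbour classifier on the two-feature subset highlighted in the abstract, AErms and tob, using synthetic placeholder measurements:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic (AErms, tob) samples in arbitrary units; 0 = sharp, 1 = blunt.
X_sharp = rng.normal([0.2, 5.0], 0.05, size=(50, 2))
X_blunt = rng.normal([0.5, 8.0], 0.05, size=(50, 2))
X = np.vstack([X_sharp, X_blunt])
y = np.array([0] * 50 + [1] * 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([[0.45, 7.5]]))  # -> [1], i.e. classified as blunt
```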

  10. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP).

    PubMed

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimulus-encoding method that encodes the SSVEP signal using frequency-shift keying (FSK) is developed. In this method, each stimulus is controlled by an FSK signal containing three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2," respectively. Unlike common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit-stream form, while "Bit 2" indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n − 1 (where n is the length of the bit command) stimuli while keeping the ITR the same. The method is suitable for implementing stimuli on a monitor, for cases where the frequencies and phases available to code stimuli are limited, and for portable BCI devices that are not capable of performing complex calculations.
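
    A minimal sketch of the trinary encoding idea follows (the three frequencies are placeholders, not the paper's values): the stimulus identifier is spelled out with the "Bit 0"/"Bit 1" frequencies and terminated with the "Bit 2" frequency.

```python
def tfsk_encode(stimulus_id, n_bits, f0=6.0, f1=7.0, f2=8.0):
    # f0/f1 carry the identifier bits; f2 marks the end of the encoding.
    # Frequencies in Hz are illustrative assumptions.
    bits = format(stimulus_id, f"0{n_bits}b")
    return [f1 if b == "1" else f0 for b in bits] + [f2]

print(tfsk_encode(5, 3))  # -> [7.0, 6.0, 7.0, 8.0] for identifier 101
```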

  11. Development and testing of a Mudjet-augmented PDC bit.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Alan; Chahine, Georges; Raymond, David Wayne

    2006-01-01

    This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.

  12. Hydromechanical drilling device

    DOEpatents

    Summers, David A.

    1978-01-01

    A hydromechanical drilling tool which combines a high pressure water jet drill with a conventional roller cone type of drilling bit. The high pressure jet serves as a tap drill for cutting a relatively small diameter hole in advance of the conventional bit. Auxiliary laterally projecting jets also serve to partially cut rock and to remove debris from in front of the bit teeth thereby reducing significantly the thrust loading for driving the bit.

  13. IRIG Serial Time Code Formats

    DTIC Science & Technology

    2016-08-01

    The time codes contain control functions (CFs) that are reserved for encoding various control, identification, and other special-purpose functions, with a set of CF bits for the encoding of these functions. The control bits may be programmed in any manner, and the code recycles yearly. There are 18 CFs between position identifiers P6 and P8, and any CF bit or combination of bits can be programmed.

  14. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security is analyzed. In both the transmitter and receiver, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing step is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and the random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
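
    The confusion and diffusion stages can be sketched as follows (a toy model: a NumPy PRNG seeded with an integer stands in for the physical random bits from the CCSRL system, and the preprocessing stage is omitted):

```python
import numpy as np

def encrypt(img, seed):
    rng = np.random.default_rng(seed)  # placeholder for the laser entropy source
    flat = img.ravel()
    perm = rng.permutation(flat.size)          # control matrix: new pixel positions
    confused = flat[perm]                      # confusion: permute pixel positions
    keystream = rng.integers(0, 256, flat.size, dtype=np.uint8)
    cipher = confused ^ keystream              # diffusion: change pixel values
    return cipher.reshape(img.shape), perm, keystream

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher, perm, keystream = encrypt(img, seed=1234)
```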

  15. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    NASA Astrophysics Data System (ADS)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous and correct bits, and hence may still carry useful information; the receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant error-correction scheme in which two received copies are XORed to locate the erroneous bits; the packet is then corrected by inverting the bits identified as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication, with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC cannot correct double-bit errors that occur in the same bit location of the erroneous copies of a packet. A hybrid technique is proposed to exploit the advantages of both APC and PC while removing the limitations of each. The application of the proposed APC-PC scheme to the Gilbert two-state channel model has been studied. Simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
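
    The PC step is concrete enough for a small sketch (assumptions: hard-decision bit lists, an integrity check such as a CRC supplied as a callback, and brute-force search over the differing positions, which is practical only when few bits differ):

```python
from itertools import product

def packet_combine(copy1, copy2, is_valid):
    # XOR of the two copies marks the positions where they disagree;
    # try every inversion pattern on those positions until the check passes.
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for choice in product([0, 1], repeat=len(diff)):
        candidate = list(copy1)
        for pos, bit in zip(diff, choice):
            candidate[pos] = bit
        if is_valid(candidate):
            return candidate
    return None  # e.g. identical double-bit errors leave no disagreement to flag
```

    The final comment also reflects PC's stated limitation: an error hitting the same bit position in both copies produces no disagreement, which is the case the APC-PC hybrid targets.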

  16. FPGA implementation of bit controller in double-tick architecture

    NASA Astrophysics Data System (ADS)

    Kobylecki, Michał; Kania, Dariusz

    2017-11-01

    This paper presents a comparison of two original architectures of programmable bit controllers built on FPGAs. Programmable logic controllers (which include, among other things, programmable bit controllers) built on FPGAs provide an efficient alternative to microprocessor-based controllers, which are expensive and often too slow. The presented and compared methods allow the efficient implementation of any bit-control algorithm written in the Ladder Diagram language in a programmable logic device in accordance with IEC 61131-3. In both cases, we compare the effect of the applied architecture on the performance of executing the same bit-control program in relation to its size.

  17. Conditions for the optical wireless links bit error ratio determination

    NASA Astrophysics Data System (ADS)

    Kvíčala, Radek

    2017-11-01

    To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be characterized by the bit error ratio (BER) of the optical beam, which expresses the fraction of transmitted bits that are received in error. In practice, BER measurement runs into the problem of determining the integration time (measuring time). A bit error ratio tester (BERT) has been developed for measuring and recording the BER of OWL. The accessible literature mentions a 1-second integration time for 64 kbps radio links; however, this integration time cannot be used here because of the singular character of coherent beam propagation.

  18. Grinding tool for making hemispherical bores in hard materials

    DOEpatents

    Duran, E.L.

    1985-04-03

    A grinding tool for forming hemispherical bores in hard materials such as boron carbide. The tool comprises a hemicircular grinding bit, formed of a metal bond diamond matrix, which is mounted transversely on one end of a tubular tool shaft. The bit includes a spherically curved outer edge surface which is the active grinding surface of the tool. Two coolant fluid ports on opposite sides of the bit enable introduction of coolant fluid through the bore of the tool shaft so as to be emitted adjacent the opposite sides of the grinding bit, thereby providing optimum cooling of both the workpiece and the bit.

  19. Alternative technological development for RF hybridization

    NASA Astrophysics Data System (ADS)

    Antônio Finardi, Célio; da Fontoura Ponchet, André; Battesini Adamo, Cristina; Flacker, Alexander; Cotrin Teixeira, Ricardo; Panepucci, Roberto Ricardo

    2017-03-01

    The paper presents a technological solution for a high-frequency packaging platform evaluated up to 40 GHz. The main purpose of this development was to define an alternative hybrid technology that is more flexible and faster to prototype than thin-film or multichip-module (MCM-D) approaches. The alternative technology also shows adequate performance for high-bit-rate solutions integrating optical and electronic blocks. The approach consists of a soft substrate (laminate material), plating processes (electroless Ni-P/Au, electrolytic Au) and lithographic patterning. Grounded coplanar waveguide was used for the microwave structures, with excellent ground-plane connections due to easy via-hole implementation. We present results of high-frequency packaging of important RF blocks, such as an integrated broadband bias-T, transimpedance amplifier ICs and silicon photonics optical modulators. The paper demonstrates a solution for high-frequency hybridization that can be implemented with standard substrates, designed in any shape with large numbers of metalized via holes, and compatible with usual assembly techniques.

  20. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  1. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  2. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  3. Double acting bit holder

    DOEpatents

    Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.

    1994-01-01

    A double acting bit holder that permits bits held in it to be resharpened during cutting action, increasing energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to the cutter head of an excavation machine and having an integral extension with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base, having a pin shaft integrally extending from it that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate the shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret, allowing front, mid and back positions of the bit during cutting to reduce the creation of small chips and to resharpen the bit during excavation use.

  4. Low latency counter event indication

    DOEpatents

    Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY

    2008-09-16

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
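
    The counting and arming behavior described in the abstract can be modeled directly (a software sketch with illustrative widths and threshold, not the patented hardware):

```python
class HybridCounter:
    def __init__(self, low_bits=8, interrupt_threshold=3):
        self.low, self.high = 0, 0          # low-order hardware bits, high-order word
        self.low_max = (1 << low_bits) - 1
        self.threshold = interrupt_threshold
        self.armed = False                  # the "interrupt arm" bit

    def count(self):
        if self.low == self.low_max:        # overflow of the low-order bits
            self.low = 0
            if self.armed:
                print("interrupt fired")    # roll-over after arming fires it
            self.high += 1                  # control device increments high word
            self.armed = (self.high == self.threshold)
        else:
            self.low += 1

c = HybridCounter(low_bits=2, interrupt_threshold=1)
for _ in range(12):
    c.count()  # fires once: arm at the first overflow, fire at the second
```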

  5. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-08-24

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.

  6. Development of signal processing system of avalanche photo diode for space observations by Astro-H

    NASA Astrophysics Data System (ADS)

    Ohno, M.; Goto, K.; Hanabata, Y.; Takahashi, H.; Fukazawa, Y.; Yoshino, M.; Saito, T.; Nakamori, T.; Kataoka, J.; Sasano, M.; Torii, S.; Uchiyama, H.; Nakazawa, K.; Watanabe, S.; Kokubun, M.; Ohta, M.; Sato, T.; Takahashi, T.; Tajima, H.

    2013-01-01

    Astro-H is the sixth Japanese X-ray space observatory, to be launched in 2014. Two of the onboard instruments of Astro-H, the Hard X-ray Imager and the Soft Gamma-ray Detector, are surrounded by a large number of big bismuth germanate (Bi4Ge3O12; BGO) scintillators. An optimum readout system for the scintillation light from these BGOs is essential to reduce background and achieve high performance for the main detectors, because most gamma-rays arriving from outside the field of view of the main detectors, or from radio-isotopes produced inside them by activation, can be eliminated by the anti-coincidence technique using the BGO signals. We apply avalanche photodiodes (APDs) as light sensors for these BGO detectors, since their compactness and high quantum efficiency make it easy to design such a large BGO detector system. For the signal processing from the APDs, a digital filter and other trigger logic on a field-programmable gate array (FPGA) are used instead of discrete analog circuits, owing to the limited circuit implementation area on the spacecraft. For efficient observations, we have to achieve as low an anti-coincidence threshold as possible by means of the digital filtering. In addition, the anti-coincidence signals must be sent to the main detector within 5 μs, in time to veto the A-D conversion. Considering this requirement and the constraint on FPGA logic size, we adopt two types of filter: an 8-tap filter with only 2-bit-precision coefficients and a 16-tap filter with 8-bit-precision coefficients. The former, simpler filter provides the anti-coincidence signal quickly in orbit, while the latter is used for detailed analysis after the data are downlinked.

  7. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from a specified probability distribution at a rate of 10 MHz. Using Boolean logic, the circuit implements a pseudorandom-number-generating algorithm. The circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying the specified nonuniform probability distribution are generated by processing the uniformly distributed outputs of the eight 12-bit pseudorandom-number generators through a "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
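
    The principle of shaping uniform random words into a target distribution can be sketched with an inverse-CDF table lookup standing in for the chip's flip-flop/comparator/memory pipeline (a functional analogue only, not the circuit's algorithm):

```python
import numpy as np

def nonuniform_8bit(probs, n, seed=0):
    # probs: desired probability for each of the 256 output values.
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(probs)
    uniform = rng.integers(0, 4096, size=n) / 4096.0  # model 12-bit uniform sources
    return np.searchsorted(cdf, uniform).astype(np.uint8)

# Example: a ramp distribution over 0..255 (larger values more likely).
p = np.arange(256, dtype=float)
p /= p.sum()
print(nonuniform_8bit(p, 5))
```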

  8. Fastener Recess Evaluation

    DTIC Science & Technology

    1978-04-01

    Only fragments of this report survive in the record. The Torx recess suffered most under the test methods adopted (as would any similar recess, such as the internal hex). The surviving table of contents covers undersized bits, a worn-bit test, a stock bit and screw comparison, ribbed bits, and field data observations including torque values. Organizations involved include the National Tool Center, Apex Machine and Tool Co., Phillips International Co., Hi-Shear Corp., General Dynamics Corp., and the Defense Logistics Agency.

  9. Meteor burst communications for LPI applications

    NASA Astrophysics Data System (ADS)

    Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.

    A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
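
    The core FAVR relation is simple: with bit energy Eb = P/R held constant, the rate must track the received power. A sketch (units and numbers purely illustrative):

```python
import math

def favr_bit_rate(received_power_w, target_bit_energy_j):
    # Keep Eb = P / R constant, so the per-bit error probability stays constant.
    return received_power_w / target_bit_energy_j

# As a meteor burst decays exponentially, the commanded rate follows it.
for t_ms in (0, 50, 100):
    p = 1e-9 * math.exp(-t_ms / 50.0)              # hypothetical received power (W)
    print(t_ms, round(favr_bit_rate(p, 1e-14)))    # bit rate in bit/s
```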

  10. Short Note on Complexity of Multi-Value Byzantine Agreement

    DTIC Science & Technology

    2010-07-27

    ...which leads to nBl/D bits over the whole algorithm. Broadcasts in the extended step: in the extended step, every node broadcasts D bits, giving nDB bits. The total number of bits is

    $(n-1)l + n(n-1)(k + D/k)l/D + nBl/D + nDBt(t+1) = (n-1)l + O(n^2 kl/D + n^2 l/k + nBl/D + n^3 BD)$  (Eqs. 4-5)

    Notice that the broadcast algorithm of...

  11. Modulation and synchronization technique for MF-TDMA system

    NASA Technical Reports Server (NTRS)

    Faris, Faris; Inukai, Thomas; Sayegh, Soheil

    1994-01-01

    This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.

  12. Long sequence correlation coprocessor

    NASA Astrophysics Data System (ADS)

    Gage, Douglas W.

    1994-09-01

    A long sequence correlation coprocessor (LSCC) accelerates the bitwise correlation of arbitrarily long digital sequences by calculating in parallel the correlation score for 16, for example, adjacent bit alignments between two binary sequences. The LSCC integrated circuit is incorporated into a computer system with memory storage buffers and a separate general purpose computer processor which serves as its controller. Each of the LSCC's set of sequential counters simultaneously tallies a separate correlation coefficient. During each LSCC clock cycle, counter enable logic associated with each counter compares one bit of a first sequence with one bit of a second sequence to increment the counter if the bits are the same. A shift register assures that the same bit of the first sequence is simultaneously compared to different bits of the second sequence, so that the different counters simultaneously calculate correlation coefficients representing different alignments of the two sequences.
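
    Functionally, the parallel tally reduces to the following (a behavioral Python sketch of the counters, not the integrated circuit; the window of 16 alignments matches the example in the abstract):

```python
def lscc_scores(seq_a, seq_b, window=16):
    # One counter per alignment: increment when the compared bits match.
    scores = [0] * window
    for shift in range(window):
        for i in range(len(seq_a)):
            if i + shift < len(seq_b) and seq_a[i] == seq_b[i + shift]:
                scores[shift] += 1
    return scores

a = [1, 0, 1, 1]
b = [0, 0, 1, 0, 1, 1] + [0] * 14
scores = lscc_scores(a, b)
print(scores.index(max(scores)))  # best alignment shift -> 2
```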

  13. Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Iskenderian, Theodore; Sherrit, Stewart; Bao, Xiaoqi; Linderman, Randel

    2010-01-01

    Long tool bits are undergoing development that can be stowed compactly until used as rock- or ground-penetrating probes actuated by ultrasonic/sonic mechanisms. These bits are designed to be folded or rolled into compact form for transport to exploration sites, where they are to be connected to their ultrasonic/sonic actuation mechanisms and unfolded or unrolled to their full lengths for penetrating ground or rock to relatively large depths. These bits can be designed to acquire rock or soil samples and/or to be equipped with sensors for measuring properties of rock or soil in situ. These bits can also be designed to be withdrawn from the ground, restowed, and transported for reuse at different exploration sites. Apparatuses based on the concept of a probe actuated by an ultrasonic/sonic mechanism have been described in numerous prior NASA Tech Briefs articles, the most recent and relevant being "Ultrasonic/Sonic Impacting Penetrators" (NPO-41666), NASA Tech Briefs, Vol. 32, No. 4 (April 2008), page 58. All of those apparatuses are variations on the basic theme of the earliest ones, denoted ultrasonic/sonic drill corers (USDCs). To recapitulate: An apparatus of this type includes a lightweight, low-power, piezoelectrically driven actuator in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that the axial force needed to make the tool bit advance into soil, rock, or another material of interest is much smaller than in ordinary twist drilling, ordinary hammering, or ordinary steady pushing. Examples of properties that could be measured by use of an instrumented tool bit include electrical conductivity, permittivity, magnetic field, magnetic permeability, temperature, and any other properties that can be measured by fiber-optic sensors. The problem of instrumenting a probe of this type is simplified, relative to the problem of attaching electrodes in a rotating drill bit, in two ways: (1) Unlike a rotating drill bit, a bit of this type does not have flutes, which would compound the problem of ensuring contact between sensors and the side wall of a hole; and (2) there is no need for slip rings for electrical contact between sensor electronic circuitry and external circuitry because, unlike a rotating drill, a tool bit of this type is not rotated continuously during operation. One design for a tool bit of the present type is a segmented bit with a segmented, hinged support structure (see figure). The bit and its ultrasonic/sonic actuator are supported by a slider/guiding fixture, and its displacement and preload are controlled by a motor. For deployment from the folded configuration, a spring-loaded mechanism rotates the lower segment about the hinges, causing the lower segment to become axially aligned with the upper segment. A latching mechanism then locks the segments of the bit and the corresponding segments of the slider/guiding fixture. Then the entire resulting assembly is maneuvered into position for drilling into the ground.
Another design provides for a bit comprising multiple tubular segments with an inner alignment string, similar to a foldable tent pole comprising multiple tubular segments with an inner elastic cable connecting the two ends. At the beginning of deployment, all segments except the first (lowermost) one remain folded, and the ultrasonic/sonic actuator is clamped to the top of the lowermost segment and used to drive this segment into the ground. When the first segment has penetrated to a specified depth, the second segment is connected to the upper end of the first segment to form a longer rigid tubular bit, and the actuator is moved to the upper end of the second segment. The process as described thus far is repeated, adding segments until the desired depth of penetration has been attained. Yet other designs provide for bits in the form of bistable circular- or rectangular-cross-section tubes that can be stowed compactly like rolls of flat tape and become rigidified upon extension to full length, in a manner partly similar to that of a common steel tape measure. Albeit not marketed for use in tool bits, a bistable reeled composite product that transforms itself from a flat coil to a rigid tube of circular cross section when unrolled is commercially available under the trade name RolaTube™ and serves as a model for the further development of tool bits of this subtype.

  14. On optimal soft-decision demodulation. [in digital communication system

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1976-01-01

    A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.

  15. A boosted negative bit-line SRAM with write-assisted cell in 45 nm CMOS technology

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Vipul; Kumar, Pradeep; Pandey, Neeta; Pandey, Sujata

    2018-02-01

    A new 11T SRAM cell with write-assist is proposed to improve operation at low supply voltage. In this technique, a negative bit-line voltage is applied to one of the write bit-lines while a boosted voltage is applied to the other; the proposed 11T cell uses transmission-gate access. The supply voltage to one of the inverters is interrupted to weaken the feedback. The improved write capability is attributed to simultaneously strengthened write-access devices and a weakened cell feedback loop. The amount of boosting required for write-performance improvement is also reduced by the feedback weakening, addressing the persistent problems of half-selected cells and of reduced reliability of access devices that affect other boosted and negative bit-line techniques. The proposed design improves write time by 79% and 63% with respect to the LP10T and WRE8T cells, respectively, and is 52% slower than the 6T cell. The write margin of the proposed cell is improved by about 4×, 2.4× and 5.37× compared to WRE8T, LP10T and 6T, respectively. The proposed cell with boosted negative bit-line (BNBL) provides 47%, 31%, and 68.4% improvement in write margin with respect to no write-assist, negative bit-line (NBL) and boosted bit-line (BBL) write-assist, respectively. In addition, a new sensing circuit with a replica bit-line is proposed to give more precise timing for applying the boosted voltages. All simulations are done in TSMC 45 nm CMOS technology.

  16. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP)

    PubMed Central

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimulus-encoding method that encodes the SSVEP signal using frequency-shift keying (FSK) is developed. In this method, each stimulus is controlled by an FSK signal containing three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2," respectively. Unlike common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit-stream form, while "Bit 2" indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n − 1 (where n is the length of the bit command) stimuli while keeping the ITR the same. The method is suitable for implementing stimuli on a monitor, for cases where the frequencies and phases available to code stimuli are limited, and for portable BCI devices that are not capable of performing complex calculations. PMID:28626393

  17. Estimation of the safe use concentrations of the preservative 1,2-benzisothiazolin-3-one (BIT) in consumer cleaning products and sunscreens.

    PubMed

    Novick, Rachel M; Nelson, Mindy L; Unice, Kenneth M; Keenan, James J; Paustenbach, Dennis J

    2013-06-01

    1,2-Benzisothiazolin-3-one (BIT; CAS # 2634-33-5) is a preservative used in consumer products. Dermal exposure to BIT at sufficient dose and duration can produce skin sensitization and allergic contact dermatitis in animals and susceptible humans. The purpose of this study is to derive a maximal concentration of BIT in various consumer products that would result in exposures below the No Expected Sensitization Induction Level (NESIL), a dose below which skin sensitization should not occur. A screening-level exposure estimate was performed for several product-use scenarios with sunscreen, laundry detergent, dish soap, and spray cleaner. We calculated that BIT concentrations below 0.0075%, 0.035%, 0.035%, and 0.021% in sunscreen, laundry detergent, dish soap, and spray cleaner, respectively, are unlikely to induce skin sensitization. We completed a pilot study consisting of bulk sample analysis of one representative product from each category labelled as containing BIT, and found BIT concentrations of 0.0009% and 0.0027% for sunscreen and dish soap, respectively. BIT was not detected in the laundry detergent and spray cleaner products above the limit of detection of 0.0006%. Based on publicly available data on product formulations and our results, we established that cleaning products and sunscreens likely contain BIT at concentrations similar to or less than our calculated maximal safe concentrations, and that exposures are unlikely to induce skin sensitization in most users. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. FastBit: Interactively Searching Massive Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.

  19. Optical domain analog to digital conversion methods and apparatus

    DOEpatents

    Vawter, Gregory A

    2014-05-13

    Methods and apparatus for optical analog to digital conversion are disclosed. An optical signal is converted by mapping the optical analog signal onto a wavelength modulated optical beam, passing the mapped beam through interferometers to generate analog bit representation signals, and converting the analog bit representation signals into an optical digital signal. A photodiode receives an optical analog signal, a wavelength modulated laser coupled to the photodiode maps the optical analog signal to a wavelength modulated optical beam, interferometers produce an analog bit representation signal from the mapped wavelength modulated optical beam, and sample and threshold circuits corresponding to the interferometers produce a digital bit signal from the analog bit representation signal.

  20. Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.

    2003-01-01

    Two types of single-event effects - bit errors and single-event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst depending on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single-event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single-event functional interrupts occurred when the laser was focused on the second-rank latch containing the data specifying the state of each switch in the 33x17 matrix.

  1. Family centered brief intensive treatment: a pilot study of an outpatient treatment for acute suicidal ideation.

    PubMed

    Anastasia, Trena T; Humphries-Wadsworth, Terresa; Pepper, Carolyn M; Pearson, Timothy M

    2015-02-01

    Family Centered Brief Intensive Treatment (FC BIT), a hospital diversion treatment program for individuals with acute suicidal ideation, was developed to treat suicidal clients and their families. Individuals who met criteria for hospitalization were treated as outpatients using FC BIT (n = 19) or an intensive outpatient treatment without the family component (IOP; n = 24). Clients receiving FC BIT identified family members or supportive others to participate in therapy. FC BIT clients had significantly greater improvement at the end of treatment compared to IOP clients on measures of depression, hopelessness, and suicidality. Further research is needed to test the efficacy of FC BIT. © 2014 The American Association of Suicidology.

  2. High speed, very large (8 megabyte) first in/first out buffer memory (FIFO)

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1989-01-01

    A fast FIFO (First In First Out) memory buffer capable of storing data at rates of 100 megabytes per second. The invention includes a data packer which concatenates small bit data words into large bit data words, a memory array having individual data storage addresses adapted to store the large bit data words, a data unpacker into which large bit data words from the array can be read and reconstructed into small bit data words, and a controller to control and keep track of the individual data storage addresses in the memory array into which data from the packer is being written and data to the unpacker is being read.
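
    The packer's job, concatenating small words into wide memory words, looks like this in miniature (widths are illustrative; the patent does not bind them to 8 and 32 bits):

```python
def pack_words(words, in_bits=8, out_bits=32):
    # Concatenate out_bits // in_bits small words into each large word,
    # first word in the most significant position.
    per_word = out_bits // in_bits
    packed = []
    for i in range(0, len(words), per_word):
        value = 0
        for w in words[i:i + per_word]:
            value = (value << in_bits) | w
        packed.append(value)
    return packed

print(pack_words([0x12, 0x34, 0x56, 0x78]))  # -> [305419896] == [0x12345678]
```

    The unpacker performs the inverse operation, shifting and masking each wide word back into the original small words.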

  3. "Push back" technique: A simple method to remove broken drill bit from the proximal femur.

    PubMed

    Chouhan, Devendra K; Sharma, Siddhartha

    2015-11-18

    Broken drill bits can be difficult to remove from the proximal femur and may necessitate additional surgical exploration or special instrumentation. We present a simple technique to remove a broken drill bit that does not require any special instrumentation and can be accomplished through the existing incision. This technique is useful for those cases where the length of the broken drill bit is greater than the diameter of the bone.

  4. Secret Bit Transmission Using a Random Deal of Cards

    DTIC Science & Technology

    1990-05-01

    The conversation between sender and receiver is public and is heard by all. A correct protocol always succeeds in transmitting the secret bit, and the other player(s), who receive the remaining cards and are assumed to have unlimited computing power, gain no information whatsoever about the value of the secret bit. In other words, their probability of correctly guessing the secret bit is exactly the same after listening to a run of the protocol as it was before.

  5. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Because data transmission predominantly uses bytes in multiples of 8 bits, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits used for error detection) in each block are multiples of 8 bits.
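
    For concreteness, here is a generic bitwise CRC-8 sketch (the report surveys CRCs in general; this particular polynomial and the zero initial value are assumptions for illustration). Appending the computed check byte makes the CRC of the extended block zero, which is the property used for error detection:

```python
def crc8(data: bytes, poly=0x07, init=0x00):
    # Bitwise CRC over GF(2), most significant bit first.
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

frame = b"example"
check = crc8(frame)
assert crc8(frame + bytes([check])) == 0  # receiver's check on an error-free block
```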

  6. Hash Bit Selection for Nearest Neighbor Search.

    PubMed

    Liu, Xianglong; He, Junfeng; Chang, Shih-Fu

    2017-11-01

    To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and complementarity as the selection criteria, which can be carefully tailored for hashing performance in different tasks. The bit selection solution is then discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationships among hash bits to approximate the high-order independence property, and formulate the problem as an efficient quadratic program that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework achieves superior performance over both naive selection methods and state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.

  7. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain the bit edges and NH-code phases. In order to realize bit synchronization for weak BeiDou signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay set to a multiple of the bit period to remove the influence of the NH code. Secondly, maximum-likelihood detection is used to improve the detection probability for weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz; it removes the effect of the BeiDou NH code effectively and weakens the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real-data tests are conducted. The proposed algorithm is suitable for bit synchronization of weak BeiDou signals with large frequency deviation.
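
    A much simplified sketch of the bit-edge search follows (a transition-histogram variant with a one-sample differential; the paper's algorithm uses a differential delay of a whole bit period and a maximum-likelihood detector, so this only conveys the flavor):

```python
import numpy as np

def find_bit_edge(prompt, bit_len=20):
    # prompt: complex 1 ms prompt-correlator outputs (NH code assumed wiped off).
    # The differential product cancels most of the frequency-offset rotation;
    # a sign flip in its real part can only occur at a bit edge.
    diff = prompt[1:] * np.conj(prompt[:-1])
    votes = np.zeros(bit_len)
    for n in np.nonzero(diff.real < 0)[0]:
        votes[(n + 1) % bit_len] += 1
    return int(np.argmax(votes))

rng = np.random.default_rng(1)
bits = np.repeat(rng.choice([-1, 1], 60), 20)          # 60 bits of 20 ms each
signal = np.roll(bits, 7) * np.exp(1j * 0.05 * np.arange(bits.size))
print(find_bit_edge(signal))  # -> 7, the simulated bit-edge offset
```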

  8. On a gas electron multiplier based synthetic diagnostic for soft x-ray tomography on WEST with focus on impurity transport studies

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-08-01

    The tokamak WEST aims at testing ITER divertor high heat flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma facing components can pollute the plasma core by radiation cooling in the soft x-ray (SXR) range, which is detrimental for the energy confinement and plasma stability. SXR diagnostics give valuable information to monitor impurities and study their transport. The WEST SXR diagnostic is composed of two new cameras based on the Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction thanks to a synthetic diagnostic recently developed and coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario, the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz, and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.

  9. WASP8 Download

    EPA Pesticide Factsheets

    All of the WASP Installers are listed below. There is a 64 Bit Windows Installer, 64 Bit Mac OS X (Yosemite or Higher), 64 Bit Linux (Built on Ubuntu). You will need to have knowledge on how to install software on your target operating system.

  10. Spin-Valve and Spin-Tunneling Devices: Read Heads, MRAMs, Field Sensors

    NASA Astrophysics Data System (ADS)

    Freitas, P. P.

    Hard disk magnetic data storage is increasing steadily in terms of units sold, with 144 million drives sold in 1998 (107 million for desktops, 18 million for portables, and 19 million for enterprise drives), corresponding to a total business of US$34 billion [1]. The growing need for storage coming from new PC operating systems, Internet applications, and a foreseen explosion of applications connected to consumer electronics (digital TV, video, digital cameras, GPS systems, etc.) keeps the magnetics community actively looking for new solutions concerning media, heads, tribology, and system electronics. Current state-of-the-art disk drives (January 2000), using dual inductive-write, magnetoresistive-read (MR) integrated heads, reach areal densities of 15 to 23 bit/μm2, capable of putting a full 20 GB on one platter (a 2-hour film occupies 10 GB). Densities beyond 80 bit/μm2 have already been demonstrated in the laboratory (Fujitsu 87 bit/μm2 at Intermag 2000; Hitachi 81 bit/μm2; Read-Rite 78 bit/μm2; Seagate 70 bit/μm2, the last three demonstrated in the first 6 months of 2000; IBM had demonstrated 56 bit/μm2 already at the end of 1999). At densities near 60 bit/μm2, the linear bit size is ~43 nm, and the width of the written tracks is ~0.23 μm. Areal density in commercial drives is increasing steadily at a rate of nearly 100% per year [1], and consumer products above 60 bit/μm2 are expected by 2002. These remarkable achievements are only possible through a stream of technological innovations in media [2], write heads [3], read heads [4], and system electronics [5]. In this chapter, recent advances in spin-valve materials and spin-valve sensor architectures, low-resistance tunnel junctions, and tunnel-junction head architectures will be addressed.

  11. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean-up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
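
    The pass count stated above is easy to verify with a one-liner (illustrative only; the values follow directly from the abstract's description):

```python
def tier1_coding_passes(M):
    # One clean-up pass for the MSB plane, three passes for each of the
    # remaining M - 1 planes: 1 + 3 * (M - 1) = 3M - 2.
    return 3 * M - 2

print(tier1_coding_passes(8))  # -> 22 passes for 8 bit planes
```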

  12. Numerical simulation study on the optimization design of the crown shape of PDC drill bit.

    PubMed

    Ju, Pei; Wang, Zhenquan; Zhai, Yinghu; Su, Dongyu; Zhang, Yunchi; Cao, Zhaohui

    The design of the bit crown is an important part of polycrystalline diamond compact (PDC) bit design. Although much prior research has addressed the design principles of the PDC bit crown, the relationship between rock-breaking energy consumption and crown shape has not been studied systematically, and the mathematical models used in design have been over-simplified. To analyze the relation between rock-breaking energy consumption and bit crown shape quantitatively, this paper proposes taking the per-revolution specific rock-breaking work as the objective function and analyzes, by means of an explicit dynamic finite element method, how rock properties, inner cone angle, and outer cone arc radius affect it (a screening sketch follows below). Results show that the relationship between per-revolution specific rock-breaking work and the radius of gyration is similar for rocks with different properties, and that decreasing the inner cone angle or the outer cone arc radius reduces rock-breaking energy consumption. Hydraulic structure and processing technology should, of course, also be considered in the optimization design of the PDC bit crown.
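
    The screening loop implied by the abstract can be sketched as follows. The definition of the objective (energy consumed per revolution divided by rock volume broken per revolution) and the function run_fem_case, a hypothetical stand-in for one explicit dynamic finite element run, are assumptions for illustration, not the paper's actual model:

      # Hedged sketch: grid-search the crown parameters for the minimum
      # "per-revolution specific rock-breaking work".
      import itertools

      def run_fem_case(inner_cone_angle_deg: float, outer_arc_radius_mm: float):
          # Hypothetical placeholder for an FEM run; returns
          # (energy per revolution in J, rock volume broken per revolution in mm^3).
          energy = 50.0 + 0.4 * inner_cone_angle_deg + 0.2 * outer_arc_radius_mm
          volume = 1200.0
          return energy, volume

      best = None
      for angle, radius in itertools.product(range(10, 31, 5), range(40, 101, 20)):
          energy, volume = run_fem_case(angle, radius)
          specific_work = energy / volume  # J/mm^3, per revolution
          if best is None or specific_work < best[0]:
              best = (specific_work, angle, radius)

      print(f"minimum specific work {best[0]:.4f} J/mm^3 "
            f"at inner cone angle {best[1]} deg, outer arc radius {best[2]} mm")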

  13. Design and testing of coring bits on drilling lunar rock simulant

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan

    2017-02-01

    Coring bits are widely used in sampling celestial bodies, and their drilling behavior directly affects both the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), a key component of the sampling tool that breaks lunar rock during the lunar soil sampling process. We establish an interaction model between the drill bit and rock at small cutting depth and identify the two parameters of the LRCB with the greatest influence on drilling loads: the forward and outward rake angles. We then screen these parameters with the aim of minimizing the weight on bit (WOB). Verification of the drilling load performance of the optimized LRCB shows that the higher the penetration per revolution (PPR), the larger the drilling loads. In addition, we perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. Simulation and test results are broadly consistent on coring efficiency, and in simulation the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit. This work proposes a method for designing coring bits for subsequent extraterrestrial exploration.

  14. Computer-Aided Design for Built-In-Test (CADBIT) - BIT Library. Volume 2

    DTIC Science & Technology

    1989-10-01

    Excerpts from BIT element data sheets in the long-tutorial category, covering BIT techniques such as on-board ROM, multiple-input shift register (MISR), and utilizing redundancy. Each sheet records BIT technique attributes by data type (text, list, table, graphic, equations).

  15. Wear and performance: An experimental study on PDC bits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, O.; Azar, J.J.

    1997-07-01

    Real-time drilling data, gathered under full-scale conditions, was analyzed to determine the influence of cutter dullness on PDC-bit rate of penetration. It was found that while drilling in shale, the cutters' wearflat area was not a controlling factor on rate of penetration; however, when drilling in limestone, wearflat area significantly influenced PDC bit penetration performance. Similarly, the presence of diamond lips on PDC cutters was found to be unimportant while drilling in shale, but it greatly enhanced bit performance when drilling in limestone.

  16. SEM Analysis Techniques for LSI Microcircuits. Volume 2

    DTIC Science & Technology

    1980-08-01

    RADC-TR-80-250, Vol II (of two), Final Technical Report, August 1980: SEM Analysis Techniques for LSI Microcircuits. Volume II covers a 1024-bit static RAM, a 4096-bit dynamic RAM (Si-gate MOS), a 4096-bit dynamic RAM (I2L bipolar), and a summary.

  17. Compositional Verification with Abstraction, Learning, and SAT Solving

    DTIC Science & Technology

    2015-05-01

    The tool supports propositional logic, linear arithmetic, and bit-vectors (currently via bit-blasting). The front-end is based on an existing tool called UFO [8], which converts C programs to the Horn-SMT format and encodes their safety properties; the Horn-SMT encoding uses only the theory of linear rational arithmetic. All experiments were carried out on an Intel Core 2 Quad.
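
    Bit-blasting, mentioned in the excerpt, reduces a bit-vector constraint to purely propositional clauses, one Boolean variable per bit. A minimal sketch (not the tool's actual encoding) for the constraint z = x & y over w-bit vectors, emitting DIMACS-style clauses:

      def blast_and(w: int):
          """Bit-blast z = x & y over w-bit vectors into CNF clauses.
          Variables are numbered DIMACS-style: x_i -> i+1, y_i -> w+i+1,
          z_i -> 2w+i+1; a negative literal denotes negation."""
          clauses = []
          for i in range(w):
              x, y, z = i + 1, w + i + 1, 2 * w + i + 1
              # z_i <-> (x_i AND y_i), expressed as three clauses:
              clauses.append([-z, x])      # z_i implies x_i
              clauses.append([-z, y])      # z_i implies y_i
              clauses.append([z, -x, -y])  # x_i AND y_i implies z_i
          return clauses

      print(blast_and(2))  # six clauses for a 2-bit AND constraint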

  18. Personal supercomputing by using transputer and Intel 80860 in plasma engineering

    NASA Astrophysics Data System (ADS)

    Ido, S.; Aoki, K.; Ishine, M.; Kubota, M.

    1992-09-01

    A transputer (T800) or a 64-bit RISC Intel 80860 (i860) added to a personal computer can be used as an accelerator. When 32-bit T800s in a parallel system or 64-bit i860s are used, scientific calculations run several tens of times faster than on commonly used 32-bit personal computers or UNIX workstations. Benchmark tests and examples of physical simulations using T800s and the i860 are reported.

  19. Guidance Material for Mode S-Specific Protocol Application Avionics

    DTIC Science & Technology

    2007-06-04

    the high-order 28 bits of each register are used to specify the configuration state of uplink MSP channels, while the low-order 28 bits of each...TID field contains the 24-bit Mode S address of the threat (when the threat is Mode S equipped). The low-order 2 bits of the TID field are cleared. If...the register should be sufficient to ensure that the maximum latency of each data value is not exceeded. (Note: If all five of the status fields in the
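
    As a minimal illustration of the bit layout described in the excerpt (56-bit registers split into 28-bit halves, and a TID field whose low-order 2 bits are cleared), assuming nothing beyond the bit positions quoted above:

      MASK_28 = (1 << 28) - 1

      def split_register(reg56: int):
          """Split a 56-bit Mode S register into its high- and low-order 28-bit halves."""
          return (reg56 >> 28) & MASK_28, reg56 & MASK_28

      def clear_tid_low_bits(tid: int) -> int:
          """Clear the low-order 2 bits of a TID field."""
          return tid & ~0b11

      hi, lo = split_register(0xABCDEF0123456)
      assert clear_tid_low_bits(0b10111) == 0b10100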

  20. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit, and (2) hardware support for reasonably fast double-precision arithmetic.
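
    The 1 + 11 + 52 layout recommended here is exactly the IEEE 754 double-precision format; a minimal Python sketch of unpacking those fields from a 64-bit value:

      import struct

      def double_fields(x: float):
          """Unpack an IEEE 754 double into its (sign, exponent, mantissa)
          bit fields: 1 sign bit, 11 exponent bits, 52 mantissa bits."""
          bits = struct.unpack(">Q", struct.pack(">d", x))[0]
          sign = bits >> 63
          exponent = (bits >> 52) & 0x7FF
          mantissa = bits & ((1 << 52) - 1)
          return sign, exponent, mantissa

      # -1.0 has sign 1, biased exponent 1023 (i.e., 2^0), zero mantissa.
      assert double_fields(-1.0) == (1, 1023, 0)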
