Science.gov

Sample records for optimization-based bit allocation

  1. A bit allocation method for sparse source coding.

    PubMed

    Kaaniche, Mounir; Fraysse, Aurélia; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2014-01-01

    In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well-adapted for image data, which have a sparse representation in a wavelet basis. In this paper, we propose new approximations of the entropy and the distortion functions using piecewise affine and exponential forms, respectively. Because of these approximations, bit allocation is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.
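
    The paper's convex formulation for Bernoulli-generalized Gaussian sources is not reproduced here, but the flavor of rate-distortion-optimal subband bit allocation can be illustrated with the classical high-rate Gaussian approximation, in which each subband receives bits in proportion to the logarithm of its variance. The Python sketch below is a generic illustration under that simpler assumption; the subband variances and the bit budget are hypothetical.

      import numpy as np

      def highrate_bit_allocation(variances, total_bits):
          """Allocate bits across subbands under the classical high-rate
          Gaussian model: b_k = b_mean + 0.5*log2(var_k / geometric_mean).
          Subbands that would receive negative bits are dropped and the
          budget is re-spread over the remaining ones."""
          variances = np.asarray(variances, dtype=float)
          n = len(variances)
          active = np.ones(n, dtype=bool)
          bits = np.zeros(n)
          while True:
              v = variances[active]
              b_mean = total_bits / active.sum()
              geo_mean = np.exp(np.mean(np.log(v)))
              b = b_mean + 0.5 * np.log2(v / geo_mean)
              if (b >= 0).all():
                  bits[active] = b
                  return bits
              idx = np.where(active)[0]
              active[idx[b < 0]] = False

      # Example: four wavelet subbands with decreasing variance, 8 bits in total.
      print(highrate_bit_allocation([4.0, 1.0, 0.25, 0.0625], 8.0))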

  2. A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems

    NASA Astrophysics Data System (ADS)

    Zhu, Li-Ping; Yao, Yan; Zhou, Shi-Dong; Dong, Shi-Wei

    2007-12-01

    A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial bit distribution obtained by equal power assignment, the proposed algorithm employs a multistage bit-rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the bit distribution at the target rate is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
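
    For contrast with the multistage scheme above, the conventional greedy bit-loading baseline that the abstract compares against (Hughes-Hartogs-style loading) can be sketched as follows: one bit at a time is granted to the subchannel whose incremental power cost is smallest, until the target rate is met. This is a generic sketch with hypothetical gain-to-noise ratios and SNR gap, not the authors' heuristic.

      import heapq

      def greedy_bit_loading(gain_to_noise, target_bits, gamma=1.0):
          """Greedy loading: the incremental power needed to go from b to b+1
          bits on subchannel k is gamma * (2**(b+1) - 2**b) / g_k."""
          bits = [0] * len(gain_to_noise)
          # Priority queue of (incremental power cost, subchannel index).
          heap = [(gamma * 1.0 / g, k) for k, g in enumerate(gain_to_noise)]
          heapq.heapify(heap)
          total_power = 0.0
          for _ in range(target_bits):
              dp, k = heapq.heappop(heap)
              bits[k] += 1
              total_power += dp
              b, g = bits[k], gain_to_noise[k]
              heapq.heappush(heap, (gamma * (2**(b + 1) - 2**b) / g, k))
          return bits, total_power

      # Example: four subchannels with hypothetical gain-to-noise ratios, 10 bits.
      print(greedy_bit_loading([8.0, 4.0, 2.0, 1.0], target_bits=10))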

  3. Bit-rate allocation for multiple video streams using a pricing-based mechanism.

    PubMed

    Tiwari, Mayank; Groves, Theodore; Cosman, Pamela

    2011-11-01

    We consider the problem of bit-rate allocation for multiple video users sharing a common transmission channel. Previously, the overall quality of multiple users was improved by exploiting relative video complexity. Users with high-complexity video benefit at the expense of video quality reduction for other users with simpler videos. The quality of all users can be improved by collectively allocating the bit rate in a centralized fashion, which requires sharing video information with a central controller. In this paper, we present an informationally decentralized bit-rate allocation for multiple users where a user only needs to report his demand to an allocator. Each user separately calculates his bit-rate demand based on his video complexity and the bit-rate price, where the bit-rate price is announced by the allocator. The allocator adjusts the bit-rate price for the next period based on the bit rate demanded by the users and the total available bit-rate supply. Simulation results show that all users improve their quality by the pricing-based decentralized bit-rate allocation method compared with their allocation when acting individually. The results of our proposed method are comparable to those of centralized bit-rate allocation.
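
    A minimal sketch of the pricing loop described above, under the hypothetical assumption that each user's quality grows logarithmically with rate, so a user facing price p demands r_i = a_i / p, where a_i reflects that user's video complexity. The update rule, constants, and complexity weights are illustrative, not the authors' mechanism.

      def pricing_allocation(complexities, supply, steps=200, lr=0.1):
          """Decentralized rate allocation by price adjustment: users compute
          their own demands locally, the allocator only sees total demand and
          nudges the announced price toward balancing demand with supply."""
          price = 0.5                                      # initial announced price
          for _ in range(steps):
              demands = [a / price for a in complexities]  # each user acts locally
              excess = sum(demands) - supply               # allocator observes totals
              price *= 1.0 + lr * excess / supply          # raise price if over-demanded
          return demands, price

      # Example: three users with hypothetical complexity weights sharing 6 Mb/s.
      rates, price = pricing_allocation([3.0, 2.0, 1.0], supply=6.0)
      print([round(r, 3) for r in rates], round(price, 3))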

  4. Adaptive differential pulse-code modulation with adaptive bit allocation

    NASA Astrophysics Data System (ADS)

    Frangoulis, E. D.; Yoshida, K.; Turner, L. F.

    1984-08-01

    Studies have been conducted on the possibility of obtaining good-quality speech at data rates in the range of 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM). At 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.

  5. Proposed first-generation WSQ bit allocation procedure

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-09-08

    The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work, a specific formula was provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands. An explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival-quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.
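
    The report's exact formulas are not reproduced here, but the mechanism it describes (relative bin widths fixed by subband statistics, with one global constant q chosen so that the user-set parameter r yields a predetermined amount of compression) can be sketched roughly as follows. The high-rate Gaussian entropy model and the bisection search are assumptions made for illustration, not the WSQ specification.

      import math

      def modeled_rate(variances, weights, q):
          """High-rate model: a uniform quantizer with bin width Q_k = q / w_k on a
          Gaussian subband has entropy roughly 0.5*log2(2*pi*e*var_k) - log2(Q_k),
          clipped at zero; return the mean rate in bits per sample."""
          total = 0.0
          for var, w in zip(variances, weights):
              Qk = q / w
              h = 0.5 * math.log2(2 * math.pi * math.e * var) - math.log2(Qk)
              total += max(h, 0.0)
          return total / len(variances)

      def solve_q(variances, weights, r, lo=1e-6, hi=1e6):
          """Bisect (in log scale) for the global constant q giving target rate r."""
          for _ in range(100):
              mid = math.sqrt(lo * hi)
              if modeled_rate(variances, weights, mid) > r:
                  lo = mid            # bins too fine: increase q to coarsen them
              else:
                  hi = mid
          return math.sqrt(lo * hi)

      # Hypothetical subband variances and relative weights, 0.75 bit/sample target.
      print(round(solve_q([9.0, 4.0, 1.0, 0.25], [1.0, 1.0, 0.5, 0.5], r=0.75), 4))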

  6. S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation

    PubMed Central

    2014-01-01

    Background: Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast-growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and the aim of this work. Methods: This paper presents an algorithm for the data compression of S-EMG signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform for spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign shorter digital word lengths to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shapes were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root, and rotated hyperbolic tangent. Results: The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in the scientific literature are shown. Conclusions: The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure. The performance comparisons of the proposed S-EMG data
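
    As a rough illustration of the decreasing-shape idea above, the sketch below assigns word lengths that decay exponentially from the coarsest wavelet subband to the finest. The decay constant, number of subbands, and maximum word length are hypothetical; the paper's actual shapes and quantizer details are not reproduced.

      import math

      def exponential_bit_shape(n_subbands, max_bits, decay=0.5):
          """Integer word length per wavelet subband, decreasing exponentially
          from low to high frequency: b_k ~ max_bits * exp(-decay * k)."""
          return [max(1, round(max_bits * math.exp(-decay * k)))
                  for k in range(n_subbands)]

      # Example: 6 subbands, 12-bit words for the coarsest (approximation) band.
      print(exponential_bit_shape(6, 12))   # -> [12, 7, 4, 3, 2, 1]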

  7. Bit allocation algorithm with novel view synthesis distortion model for multiview video plus depth coding.

    PubMed

    Chung, Tae-Young; Sim, Jae-Young; Kim, Chang-Su

    2014-08-01

    An efficient bit allocation algorithm based on a novel view synthesis distortion model is proposed for the rate-distortion optimized coding of multiview video plus depth sequences in this paper. We decompose an input frame into nonedge blocks and edge blocks. For each nonedge block, we linearly approximate its texture and disparity values, and derive a view synthesis distortion model, which quantifies the impacts of the texture and depth distortions on the qualities of synthesized virtual views. On the other hand, for each edge block, we use its texture and disparity gradients for the distortion model. In addition, we formulate a bit-rate allocation problem in terms of the quantization parameters for texture and depth data. By solving the problem, we can optimally divide a limited bit budget between the texture and depth data, in order to maximize the qualities of synthesized virtual views, as well as those of encoded real views. Experimental results demonstrate that the proposed algorithm yields the average PSNR gains of 1.98 and 2.04 dB in two-view and three-view scenarios, respectively, as compared with a benchmark conventional algorithm.

  8. Message-passing algorithm for two-dimensional dependent bit allocation

    NASA Astrophysics Data System (ADS)

    Sagetong, Phoom; Ortega, Antonio

    2003-05-01

    We address the bit allocation problem in scenarios where there exist two-dimensional (2D) dependencies in the bit allocation, i.e., where the allocation involves a 2D set of coding units (e.g., DCT blocks in standard MPEG coding) and where the rate-distortion (RD) characteristics of each coding unit depend on one or more of the other coding units. These coding units can be located anywhere in 2D space. As an example we consider MPEG-4 intra-coding where, in order to further reduce the redundancy between coefficients, both the DC and certain of the AC coefficients of each block are predicted from the corresponding coefficients in either the previous block in the same line (to the left) or the one above the current block. To find the optimal solution may be a time-consuming problem, given that the RD characteristics of each block depend on those of the neighbors. Greedy search approaches are popular due to their low complexity and low memory consumption, but they may be far from optimal due to the dependencies in the coding. In this work, we propose an iterative message-passing technique to solve 2D dependent bit allocation problems. This technique is based on (i) Soft-in/Soft-out (SISO) algorithms first used in the context of Turbo codes, (ii) a grid model, and (iii) Lagrangian optimization techniques. In order to solve this problem our approach is to iteratively compute the soft information of a current DCT block (intrinsic information) and pass the soft decision (extrinsic information) to other nearby DCT block(s). Since the computational complexity is also dominated by the data generation phase, i.e., in the Rate-Distortion (RD) data population process, we introduce an approximation method to eliminate the need to generate the entire set of RD points. Experimental studies reveal that the system that uses the proposed message-passing algorithm is able to outperform the greedy search approach by 0.57 dB on average. We also show that the proposed algorithm requires

  9. Bit and Power Allocation in Constrained Multicarrier Systems: The Single-User Case

    NASA Astrophysics Data System (ADS)

    Papandreou, Nikolaos; Antonakopoulos, Theodore

    2007-12-01

    Multicarrier modulation is a powerful transmission technique that provides improved performance in various communication fields. A fundamental topic of multicarrier communication systems is the bit and power loading, which is addressed in this article as a constrained multivariable nonlinear optimization problem. In particular, we present the main classes of loading problems, namely, rate maximization and margin maximization, and we discuss their optimal solutions for the single-user case. Initially, the classical water-filling solution subject to a total power constraint is presented using the Lagrange multipliers optimization approach. Next, the peak-power constraint is included and the concept of cup-limited waterfilling is introduced. The loading problem is also addressed subject to the integer-bit restriction and the optimal discrete solution is examined using combinatorial optimization methods. Furthermore, we investigate the duality conditions of the rate maximization and margin maximization problems and we highlight various ideas for low-complexity loading algorithms. This article surveys and reviews existing results on resource allocation in constrained multicarrier systems and presents new trends in this area.
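
    The classical water-filling solution surveyed above admits a compact numerical sketch: given subchannel gain-to-noise ratios and a total power budget, the water level is found by bisection and each subchannel receives the power by which that level exceeds its inverse gain. This is a textbook illustration, not the article's derivation; the gains and the power budget are hypothetical.

      def water_filling(gain_to_noise, total_power, iters=100):
          """Rate-maximizing power allocation p_k = max(0, mu - 1/g_k), with the
          water level mu chosen by bisection so that sum(p_k) = total_power."""
          lo, hi = 0.0, total_power + max(1.0 / g for g in gain_to_noise)
          for _ in range(iters):
              mu = 0.5 * (lo + hi)
              used = sum(max(0.0, mu - 1.0 / g) for g in gain_to_noise)
              if used > total_power:
                  hi = mu
              else:
                  lo = mu
          return [max(0.0, mu - 1.0 / g) for g in gain_to_noise]

      # Example: four subchannels with hypothetical gain-to-noise ratios.
      print([round(p, 3) for p in water_filling([4.0, 2.0, 1.0, 0.25], 2.0)])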

  10. Regional bit allocation and rate distortion optimization for multiview depth video coding with view synthesis distortion model.

    PubMed

    Zhang, Yun; Kwong, Sam; Xu, Long; Hu, Sudeng; Jiang, Gangyi; Kuo, C-C Jay

    2013-09-01

    In this paper, we propose a view synthesis distortion model (VSDM) that establishes the relationship between depth distortion and view synthesis distortion for regions with different characteristics: the color texture area corresponding depth (CTAD) region and the color smooth area corresponding depth (CSAD) region. With this VSDM, we propose regional bit allocation (RBA) and rate distortion optimization (RDO) algorithms for multiview depth video coding (MDVC) that allocate more bits to CTAD regions for rendering quality and fewer bits to CSAD regions for compression efficiency. Experimental results show that the proposed VSDM-based RBA and RDO can improve the coding efficiency significantly for the test sequences. In addition, the proposed overall MDVC algorithm, which integrates the VSDM-based RBA and RDO, achieves 9.99% and 14.51% bit rate reductions on average at high and low bit rates, respectively, and improves virtual view image quality by 0.22 and 0.24 dB on average at high and low bit rates, respectively, when compared with the original joint multiview video coding model. The RD performance comparisons using five different metrics also validate the effectiveness of the proposed overall algorithm. In addition, the proposed algorithms can be applied to both INTRA and INTER frames.

  11. Optimality-based modeling of nitrogen allocation and photoacclimation in photosynthesis

    NASA Astrophysics Data System (ADS)

    Armstrong, Robert A.

    2006-03-01

    quota to produce maximum photosynthetic rate. This new nitrogen-limitation function permits derivation of a steady-state optimality-based relationship between chlorophyll:carbon ratios and nitrogen:carbon ratios; the predictions of this new model are shown to be at least as good as predictions based on the "chlorophyll a synthesis regulation term" of Geider et al. [Geider, R.J., MacIntyre, H.L., Kana, T.M., 1996. A dynamic model of photoadaptation in phytoplankton. Limnology and Oceanography 41, 1-15; Geider, R.J., MacIntyre, H.L., Kana, T.M., 1998. A dynamic regulatory model of phytoplanktonic acclimation to light, nutrients, and temperature. Limnology and Oceanography 43, 679-694]. The Laws and Bannister [Laws, E.A., Bannister, T.T., 1980. Nutrient- and light-limited growth of T. fluviatilis in continuous culture, with implications for phytoplankton growth in the ocean. Limnology and Oceanography 25, 457-473] data suggest that the relationship between chlorophyll:carbon ratio and nitrogen cell quota is independent of nitrogen source (nitrate vs. ammonium) for nitrogen-limited cells. Finally, a full set of parameters for the Laws and Bannister [Laws, E.A., Bannister, T.T., 1980. Nutrient- and light-limited growth of T. fluviatilis in continuous culture, with implications for phytoplankton growth in the ocean. Limnology and Oceanography 25, 457-473] data set is estimated and used to predict chlorophyll:carbon and nitrogen:carbon ratios as functions of growth rate. This improved conceptualization of nitrogen:carbon and chlorophyll:carbon relationships in photosynthesis should provide a robust theoretical underpinning for a new generation of models of multiple-nutrient limitation.

  12. Drag bit construction

    DOEpatents

    Hood, M.

    1986-02-11

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face. 4 figs.

  13. Drag bit construction

    DOEpatents

    Hood, Michael

    1986-01-01

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face.

  14. Remote drill bit loader

    SciTech Connect

    Dokos, James A.

    1997-01-01

    A drill bit loader for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned.

  15. Remote drill bit loader

    DOEpatents

    Dokos, J.A.

    1997-12-30

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. 5 figs.

  16. Double acting bit holder

    DOEpatents

    Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.

    1994-01-01

    A double acting bit holder that permits bits held in it to be resharpened during cutting action to increase energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to a cutter head of an excavation machine and having an integral extension therefrom with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base having a pin shaft integrally extending therefrom that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate a shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret, allowing front, mid and back positions of the bit during cutting to lessen the creation of small chips and to resharpen the bit during excavation use.

  17. Installation of MCNP on 64-bit parallel computers

    SciTech Connect

    Meginnis, A.B.; Hendricks, J.S.; McKinney, G.W.

    1995-09-01

    The Monte Carlo radiation transport code MCNP has been successfully ported to two 64-bit workstations, the SGI and DEC Alpha. We found the biggest problem for installation on these machines to be Fortran and C mismatches in argument passing. Correction of these mismatches enabled, for the first time, dynamic memory allocation on 64-bit workstations. Although the 64-bit hardware is faster because 8 bytes are processed at a time rather than 4, we found no speed advantage in true 64-bit coding versus implicit double precision when porting an existing code to the 64-bit workstation architecture. We did find that PVM multitasking is very successful and represents a significant performance enhancement for scientific workstations.

  18. Practical Relativistic Bit Commitment.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Wehner, S; Zbinden, H

    2015-07-17

    Bit commitment is a fundamental cryptographic primitive in which Alice wishes to commit a secret bit to Bob. Perfectly secure bit commitment between two mistrustful parties is impossible through an asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob each split into several agents exchanging classical information at times and locations suitably chosen to satisfy specific relativistic constraints. In this Letter we first revisit a previously proposed scheme [C. Crépeau et al., Lect. Notes Comput. Sci. 7073, 407 (2011)] that realizes bit commitment using only classical communication. We prove that the protocol is secure against quantum adversaries for a duration limited by the light-speed communication time between the locations of the agents. We then propose a novel multiround scheme based on finite-field arithmetic that extends the commitment time beyond this limit, and we prove its security against classical attacks. Finally, we present an implementation of these protocols using dedicated hardware and we demonstrate a 2 ms-long bit commitment over a distance of 131 km. By positioning the agents on antipodal points on the surface of Earth, the commitment time could possibly be extended to 212 ms.

  19. Flexible bit: A new anti-vibration PDC bit concept

    SciTech Connect

    Defourny, P.; Abbassian, F.

    1995-12-31

    This paper introduces the novel concept of a "flexible" polycrystalline diamond compact (PDC) bit, and its capability to reduce detrimental vibration associated with drag bits. The tilt flexibility, introduced at the bit, decouples the dynamic motion of the bottom hole assembly (BHA) from that of the bit, thus providing a dynamically more stable bit. The paper describes the details of a prototype 8-1/2 inch flexible bit design together with laboratory experiments and field tests which verify the concept.

  20. Diamond-Cutter Drill Bits

    SciTech Connect

    1995-11-01

    Diamond-cutter drill bits cut through tough rock quicker, reducing the cost of drilling for energy resources. The U.S. Department of Energy (DOE) contributed markedly to the geothermal, oil, and gas industries through the development of the advanced polycrystalline diamond compact (PDC) drill bit. Introduced in the 1970s by General Electric Company (GE), the PDC bit uses thin, diamond layers bonded to t

  1. Resource Allocation.

    ERIC Educational Resources Information Center

    Stennett, R. G.

    A resource allocation formula employed in London, Ontario elementary schools, as well as supporting data on the method, are provided in this report. Attempts to improve on the traditional methods of resource allocation in London's schools were based on two principles: (1) that need for a particular service could and should be determined…

  2. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    Electronic memory system of 32-bit words corrects bit errors caused by some common types of failures - even failure of an entire 4-bit-wide random-access-memory (RAM) chip. Detects failure of two such chips, so the user is warned that the output of the memory may contain errors. Includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM contributes only 1 bit to each 8-bit word.
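
    The interleaving described above can be illustrated with a small sketch: if each of the eight 4-bit DRAMs contributes one bit to each of the four 8-bit words, a whole-chip failure corrupts at most one bit per word, which a per-word single-error-correcting code can then repair. The mapping below is a hypothetical illustration of such an arrangement, not the actual memory wiring.

      # Map (chip, bit-within-chip) -> (word, bit-position-in-word) so that every
      # 4-bit-wide DRAM supplies exactly one bit to each of the four 8-bit words.
      N_CHIPS, BITS_PER_CHIP, N_WORDS = 8, 4, 4

      def placement(chip, bit_in_chip):
          word = bit_in_chip      # each of the chip's 4 bits lands in a different word
          position = chip         # the chip index fixes which of 8 positions it fills
          return word, position

      # A whole-chip failure touches each word in at most one bit position:
      failed_chip = 3
      touched = {}
      for b in range(BITS_PER_CHIP):
          word, pos = placement(failed_chip, b)
          touched.setdefault(word, []).append(pos)
      print(touched)   # {0: [3], 1: [3], 2: [3], 3: [3]} -- one bit per word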

  3. Positional information, in bits

    PubMed Central

    Dubuis, Julien O.; Tkačik, Gašper; Wieschaus, Eric F.; Gregor, Thomas; Bialek, William

    2013-01-01

    Cells in a developing embryo have no direct way of “measuring” their physical position. Through a variety of processes, however, the expression levels of multiple genes come to be correlated with position, and these expression levels thus form a code for “positional information.” We show how to measure this information, in bits, using the gap genes in the Drosophila embryo as an example. Individual genes carry nearly two bits of information, twice as much as would be expected if the expression patterns consisted only of on/off domains separated by sharp boundaries. Taken together, four gap genes carry enough information to define a cell’s location with an error bar of along the anterior/posterior axis of the embryo. This precision is nearly enough for each cell to have a unique identity, which is the maximum information the system can use, and is nearly constant along the length of the embryo. We argue that this constancy is a signature of optimality in the transmission of information from primary morphogen inputs to the output of the gap gene network. PMID:24089448
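
    The quantity measured here is the mutual information between position x and expression level g; for orientation (this is the standard definition, not a formula quoted from the paper), in the continuous case it reads

      I(x; g) = \int dx\, P(x) \int dg\, P(g \mid x)\, \log_2 \frac{P(g \mid x)}{P_g(g)} \quad \text{bits}, \qquad P_g(g) = \int dx\, P(x)\, P(g \mid x).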

  4. Drilling bits optimized for the Paris basin

    SciTech Connect

    Vennin, H.C. (Pouyastruc)

    1989-07-31

    Paris basin wells have been successfully drilled using steel-body bits with stud-type cutters. These bits offer the possibility of optimizing the bit-face based on the strata to be drilled, as well as allowing replacement of worn cutters. This article discusses: bit manufacturing; bit repair; optimizing bits; hydraulics.

  5. Drill bit assembly for releasably retaining a drill bit cutter

    DOEpatents

    Glowka, David A.; Raymond, David W.

    2002-01-01

    A drill bit assembly is provided for releasably retaining a polycrystalline diamond compact drill bit cutter. Two adjacent cavities formed in a drill bit body house, respectively, the disc-shaped drill bit cutter and a wedge-shaped cutter lock element with a removable fastener. The cutter lock element engages one flat surface of the cutter to retain the cutter in its cavity. The drill bit assembly thus enables the cutter to be locked against axial and/or rotational movement while still providing for easy removal of a worn or damaged cutter. The ability to adjust and replace cutters in the field reduces the effect of wear, helps maintain performance and improves drilling efficiency.

  6. Experimental unconditionally secure bit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adan; Pan, Jian-Wei

    2014-03-01

    Quantum physics allows unconditionally secure communication between parties that trust each other. However, when they do not trust each other, such as in bit commitment, quantum physics alone is not enough to guarantee security. Only when it is combined with relativistic causality constraints does unconditionally secure bit commitment become feasible. Here we experimentally implement a quantum bit commitment with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. Bits are successfully committed with a cheating probability of less than 5.68×10⁻². This provides an experimental proof of unconditionally secure bit commitment and demonstrates the feasibility of relativistic quantum communication.

  7. Positional Information, in bits

    NASA Astrophysics Data System (ADS)

    Dubuis, Julien; Bialek, William; Wieschaus, Eric; Gregor, Thomas

    2010-03-01

    Pattern formation in early embryonic development provides an important testing ground for ideas about the structure and dynamics of genetic regulatory networks. Spatial variations in the concentration of particular transcription factors act as "morphogens," driving more complex patterns of gene expression that in turn define cell fates, which must be appropriate to the physical location of the cells in the embryo. Thus, in these networks, the regulation of gene expression serves to transmit and process "positional information." Here, using the early Drosophila embryo as a model system, we measure the amount of positional information carried by a group of four genes (the gap genes Hunchback, Krüppel, Giant and Knirps) that respond directly to the primary maternal morphogen gradients. We find that the information carried by individual gap genes is much larger than one bit, so that their spatial patterns provide much more than the location of an "expression boundary." Preliminary data indicate that, taken together, these genes provide enough information to specify the location of every row of cells along the embryo's anterior-posterior axis.

  8. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in a cognitive radio system, and has a faster convergence speed.
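
    A minimal sketch of the chaotic ingredient named above: two logistic maps with symmetric linear coupling, whose trajectories can be used to seed or perturb a genetic algorithm's population. The coupling form and constants are hypothetical; the paper's fitness function and GA operators are not reproduced.

      def coupled_logistic(x0=0.3, y0=0.7, r=3.99, eps=0.1, steps=5):
          """Two logistic maps f(z) = r*z*(1-z) with symmetric coupling:
          x' = (1-eps)*f(x) + eps*f(y),  y' = (1-eps)*f(y) + eps*f(x).
          The chaotic pairs in (0, 1) can be mapped onto channel/power
          assignments to diversify a genetic algorithm's population."""
          f = lambda z: r * z * (1.0 - z)
          x, y = x0, y0
          seq = []
          for _ in range(steps):
              x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
              seq.append((round(x, 4), round(y, 4)))
          return seq

      print(coupled_logistic())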

  9. Experimental unconditionally secure bit commitment.

    PubMed

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adán; Pan, Jian-Wei

    2014-01-10

    Quantum physics allows for unconditionally secure communication between parties that trust each other. However, when the parties do not trust each other, such as in the bit commitment scenario, quantum physics is not enough to guarantee security unless extra assumptions are made. Unconditionally secure bit commitment only becomes feasible when quantum physics is combined with relativistic causality constraints. Here we experimentally implement a quantum bit commitment protocol with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. The security of the protocol relies on the properties of quantum information and relativity theory. In each run of the experiment, a bit is successfully committed with less than 5.68×10⁻² cheating probability. This demonstrates the experimental feasibility of quantum communication with relativistic constraints.

  10. Experimental Unconditionally Secure Bit Commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adán; Pan, Jian-Wei

    2014-01-01

    Quantum physics allows for unconditionally secure communication between parties that trust each other. However, when the parties do not trust each other, such as in the bit commitment scenario, quantum physics is not enough to guarantee security unless extra assumptions are made. Unconditionally secure bit commitment only becomes feasible when quantum physics is combined with relativistic causality constraints. Here we experimentally implement a quantum bit commitment protocol with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. The security of the protocol relies on the properties of quantum information and relativity theory. In each run of the experiment, a bit is successfully committed with less than 5.68×10⁻² cheating probability. This demonstrates the experimental feasibility of quantum communication with relativistic constraints.

  11. Experimental unconditionally secure bit commitment.

    PubMed

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adán; Pan, Jian-Wei

    2014-01-10

    Quantum physics allows for unconditionally secure communication between parties that trust each other. However, when the parties do not trust each other, such as in the bit commitment scenario, quantum physics is not enough to guarantee security unless extra assumptions are made. Unconditionally secure bit commitment only becomes feasible when quantum physics is combined with relativistic causality constraints. Here we experimentally implement a quantum bit commitment protocol with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. The security of the protocol relies on the properties of quantum information and relativity theory. In each run of the experiment, a bit is successfully committed with less than 5.68×10⁻² cheating probability. This demonstrates the experimental feasibility of quantum communication with relativistic constraints. PMID:24483878

  12. Research on optimization-based design

    NASA Astrophysics Data System (ADS)

    Balling, R. J.; Parkinson, A. R.; Free, J. C.

    1989-04-01

    Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.

  13. String bit models for superstring

    SciTech Connect

    Bergman, O.; Thorn, C.B.

    1995-12-31

    The authors extend the model of string as a polymer of string bits to the case of the superstring. They mainly concentrate on the type II-B superstring, with some discussion of the obstacles presented by non-II-B superstrings, together with possible strategies for surmounting them. As with previous work on the bosonic string, they work within the light-cone gauge. The bit model possesses a good deal less symmetry than the continuous string theory. For one thing, the bit model is formulated as a Galilei-invariant theory in (D − 2) + 1 dimensional space-time. This means that Poincare invariance is reduced to the Galilei subgroup in D − 2 space dimensions. Naturally, the supersymmetry present in the bit model is likewise dramatically reduced. Continuous string can arise in the bit models with the formation of infinitely long polymers of string bits. Under the right circumstances (at the critical dimension) these polymers can behave as string moving in D-dimensional space-time, enjoying the full N = 2 Poincare supersymmetric dynamics of the type II-B superstring.

  14. Bit-level systolic arrays

    SciTech Connect

    De Groot, A.J.

    1989-01-01

    In this dissertation the author considered the design of bit-level systolic arrays where the basic computational unit consists of a simple one-bit logic unit, so that the systolic process is carried out at the level of individual bits. In order to pursue the foregoing research, several areas have been studied. First, the concept of systolic processing has been investigated. Several important algorithms were investigated and put into systolic form using graph-theoretic methods. The bit-level, word-level and block-level systolic arrays which have been designed for these algorithms exhibit linear speedup with respect to the number of processors and exhibit efficiency close to 100%, even with low interprocessor communication bandwidth. Block-level systolic arrays deal with blocks of data with block-level operations and communications. Block-level systolic arrays improve cell efficiency and are more efficient than their word-level counterparts. A comparison of bit-level, word-level and block-level systolic arrays was performed. In order to verify the foregoing theory and analysis, a systolic processor called the SPRINT was developed to provide an environment where bit-level, word-level and block-level systolic algorithms could be confirmed by direct implementation rather than by computer simulation. The SPRINT is a supercomputer-class, 64-element multiprocessor with a reconfigurable interconnection network. The theory has been confirmed by the execution on the SPRINT of the bit-level, word-level, and block-level systolic algorithms presented in the dissertation.

  15. Drill bit method and apparatus

    SciTech Connect

    Davis, K.

    1986-08-19

    This patent describes a drill bit having a lower cutting face which includes a plurality of stud assemblies radially spaced from a longitudinal axial centerline of the bit, each stud assembly being mounted within a stud receiving socket which is formed in the bit cutting face. The method of removing the stud assemblies from the sockets of the bit face consists of: forming a socket passageway along the longitudinal axial centerline of the stud receiving socket and extending the passageway rearwardly of the socket; forming a blind passageway which extends from the bit cutting face into the bit body, and into intersecting relationship respective to the socket passageway; while arranging the socket passageway and the blind passageway laterally respective to one another; forming a wedge face on one side of a tool, forming a support post which has one side inclined to receive the wedge face of the tool thereagainst; forcing a ball to move from the cutting face of the bit, into the blind passageway, onto the support post, then into the socket passageway, and into abutting engagement with a rear end portion of the stud assembly; placing the wedge face against the side of the ball which is opposed to the stud assembly; forcing the tool to move into the blind passageway while part of the tool engages the blind passageway and the wedge face engages the ball and thereby forces the ball to move in a direction away from the blind passageway; applying sufficient force to the tool to cause the ball to engage the stud assembly with sufficient force to be moved outwardly in a direction away from the socket, thereby releasing the stud assembly from the socket.

  16. Drill bit and method of renewing drill bit cutting face

    SciTech Connect

    Davis, K.

    1987-04-07

    This patent describes a drill bit having a lower formation engaging face which includes sockets formed therein, a stud assembly mounted in each socket. The method is described of removing the stud assemblies from the bit face comprises: placing a seal means about each stud assembly so that a stud assembly can sealingly reciprocate within a socket with a piston-like action; forming a reduced diameter passageway which extends rearwardly from communication with each socket to the exterior of the bit; flowing fluid into the passageway, thereby exerting fluid pressure against the rear end of the stud assembly; applying sufficient pressure to the fluid within the passageway to produce a pressure differential across the stud assembly to force the stud assembly to move outwardly in a direction away from the socket, thereby releasing the stud assembly from the socket.

  17. Bit by bit: the Darwinian basis of life.

    PubMed

    Joyce, Gerald F

    2012-01-01

    All known examples of life belong to the same biology, but there is increasing enthusiasm among astronomers, astrobiologists, and synthetic biologists that other forms of life may soon be discovered or synthesized. This enthusiasm should be tempered by the fact that the probability for life to originate is not known. As a guiding principle in parsing potential examples of alternative life, one should ask: How many heritable "bits" of information are involved, and where did they come from? A genetic system that contains more bits than the number that were required to initiate its operation might reasonably be considered a new form of life.

  18. Bit by bit: the Darwinian basis of life.

    PubMed

    Joyce, Gerald F

    2012-01-01

    All known examples of life belong to the same biology, but there is increasing enthusiasm among astronomers, astrobiologists, and synthetic biologists that other forms of life may soon be discovered or synthesized. This enthusiasm should be tempered by the fact that the probability for life to originate is not known. As a guiding principle in parsing potential examples of alternative life, one should ask: How many heritable "bits" of information are involved, and where did they come from? A genetic system that contains more bits than the number that were required to initiate its operation might reasonably be considered a new form of life. PMID:22589698

  19. Introduction to the Mu-bit

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin; Christianto, V.

    2011-03-01

    Mu-bit is defined here as a 'multi-space bit'. It is different from the standard meaning of bit in conventional computation, because in Smarandache's multispace theory (also spelt multi-space) the bit is created simultaneously in many subspaces (which together form a multi-space). This new 'bit' term is different from the multi-valued bit already known in computer technology, for example as MVLong. This new concept is also different from the qu-bit of quantum computation terminology. We know that using quantum mechanical logic we could introduce a new way of computation with the 'qubit' (quantum bit), but the logic remains von Neumann. Now, from the viewpoint of m-valued multi-space logic, we introduce a new term: 'mu-bit' (from 'multi-space bit').

  20. A bit serial sequential circuit

    NASA Technical Reports Server (NTRS)

    Hu, S.; Whitaker, S.

    1990-01-01

    Normally a sequential circuit with n state variables consists of n unique hardware realizations, one for each state variable. All variables are processed in parallel. This paper introduces a new sequential circuit architecture that allows the state variables to be realized in a serial manner using only one next state logic circuit. The action of processing the state variables in a serial manner has never been addressed before. This paper presents a general design procedure for circuit construction and initialization. Utilizing pass transistors to form the combinational next state forming logic in synchronous sequential machines, a bit serial state machine can be realized with a single NMOS pass transistor network connected to shift registers. The bit serial state machine occupies less area than other realizations which perform parallel operations. Moreover, the logical circuit of the bit serial state machine can be modified by simply changing the circuit input matrix to develop an adaptive state machine.

  1. Hey! A Tarantula Bit Me!

    MedlinePlus

  2. Hey! A Mosquito Bit Me!

    MedlinePlus

  3. Deterministic relativistic quantum bit commitment

    NASA Astrophysics Data System (ADS)

    Adlam, Emily; Kent, Adrian

    2015-06-01

    We describe new unconditionally secure bit commitment schemes whose security is based on Minkowski causality and the monogamy of quantum entanglement. We first describe an ideal scheme that is purely deterministic, in the sense that neither party needs to generate any secret randomness at any stage. We also describe a variant that allows the committer to proceed deterministically, requires only local randomness generation from the receiver, and allows the commitment to be verified in the neighborhood of the unveiling point. We show that these schemes still offer near-perfect security in the presence of losses and errors, which can be made perfect if the committer uses an extra single random secret bit. We discuss scenarios where these advantages are significant.

  4. Stability of single skyrmionic bits

    NASA Astrophysics Data System (ADS)

    Hagemeister, J.; Romming, N.; von Bergmann, K.; Vedmedenko, E. Y.; Wiesendanger, R.

    2015-10-01

    The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data.

  5. A brief review on quantum bit commitment

    NASA Astrophysics Data System (ADS)

    Almeida, Álvaro J.; Loura, Ricardo; Paunković, Nikola; Silva, Nuno A.; Muga, Nelson J.; Mateus, Paulo; André, Paulo S.; Pinto, Armando N.

    2014-08-01

    In classical cryptography, the bit commitment scheme is one of the most important primitives. We review the state of the art of bit commitment protocols, emphasizing its main achievements and applications. Next, we present a practical quantum bit commitment scheme, whose security relies on current technological limitations, such as the lack of long-term stable quantum memories. We demonstrate the feasibility of our practical quantum bit commitment protocol and that it can be securely implemented with nowadays technology.

  6. 24-Hour Relativistic Bit Commitment

    NASA Astrophysics Data System (ADS)

    Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo

    2016-09-01

    Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.

  7. Local, Optimization-based Simplicial Mesh Smoothing

    1999-12-09

    OPT-MS is a C software package for the improvement and untangling of simplicial meshes (triangles in 2D, tetrahedra in 3D). Overall mesh quality is improved by iterating over the mesh vertices and adjusting their position to optimize some measure of mesh quality, such as element angle or aspect ratio. Several solution techniques (including Laplacian smoothing, "Smart" Laplacian smoothing, optimization-based smoothing and several combinations thereof) and objective functions (for example, element angle, sin(angle), and aspect ratio) are available to the user for both two and three-dimensional meshes. If the mesh contains invalid elements (those with negative area) a different optimization algorithm for mesh untangling is provided.
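
    Of the techniques listed, plain Laplacian smoothing is the simplest to sketch: each free vertex is moved to the centroid of its neighbors. The sketch below is a generic 2D illustration, not the OPT-MS API; the optimization-based and "Smart" variants, which only accept moves that improve the chosen quality measure, are omitted.

      def laplacian_smooth(vertices, neighbors, fixed, iters=10):
          """Move every free vertex to the centroid of its mesh neighbors.
          vertices: list of (x, y); neighbors: list of index lists;
          fixed: set of boundary vertex indices that must not move."""
          verts = [list(v) for v in vertices]
          for _ in range(iters):
              for i, nbrs in enumerate(neighbors):
                  if i in fixed or not nbrs:
                      continue
                  verts[i][0] = sum(verts[j][0] for j in nbrs) / len(nbrs)
                  verts[i][1] = sum(verts[j][1] for j in nbrs) / len(nbrs)
          return [tuple(v) for v in verts]

      # Example: one interior vertex (index 4) inside a fixed unit-square boundary.
      vs = [(0, 0), (1, 0), (1, 1), (0, 1), (0.9, 0.2)]
      nb = [[], [], [], [], [0, 1, 2, 3]]
      print(laplacian_smooth(vs, nb, fixed={0, 1, 2, 3}))   # interior -> (0.5, 0.5)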

  8. Panel focuses on diamond shear bit care

    SciTech Connect

    Park, A.

    1982-10-04

    This article examines drilling parameters and the marketability of Stratapax bits. It finds that core bits drill 2 to 3 times faster than conventional diamond bits, thereby reducing filtrate invasion, and predicts that high-speed drilling, downhole motors, deeper wells and slim-hole drilling will mean greater Stratapax use.

  9. Development of PDC Bits for Downhole Motors

    SciTech Connect

    Karasawa, H.; Ohno, T.

    1995-01-01

    To develop polycrystalline diamond compact (PDC) bits of the full-face type which can be applied to downhole motor drilling, drilling tests on granite and two types of andesite were conducted using bits with 98.43 and 142.88 mm diameters. The bits successfully drilled these types of rock at rotary speeds from 300 to 400 rpm.

  10. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
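
    A rough sketch of the kind of exercise the game is built around, assuming (hypothetically) an 8-bit ASCII encoding and a one-byte XOR key; the game's actual encoding rules and key format are not specified in the record.

      def encode(word, key=0b01010101):
          """Turn each character into its 8-bit ASCII pattern and XOR it with a
          one-byte key, roughly the bit-by-bit exchange the players perform."""
          return [format(ord(c) ^ key, '08b') for c in word]

      def decode(bits, key=0b01010101):
          return ''.join(chr(int(b, 2) ^ key) for b in bits)

      cipher = encode("BIT")
      print(cipher)           # ['00010111', '00011100', '00000001']
      print(decode(cipher))   # 'BIT'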

  11. Bit by Bit: The Darwinian Basis of Life

    PubMed Central

    Joyce, Gerald F.

    2012-01-01

    All known examples of life belong to the same biology, but there is increasing enthusiasm among astronomers, astrobiologists, and synthetic biologists that other forms of life may soon be discovered or synthesized. This enthusiasm should be tempered by the fact that the probability for life to originate is not known. As a guiding principle in parsing potential examples of alternative life, one should ask: How many heritable “bits” of information are involved, and where did they come from? A genetic system that contains more bits than the number that were required to initiate its operation might reasonably be considered a new form of life. PMID:22589698

  12. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    SciTech Connect

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

    During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROP's) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  13. Instantaneous bit-error-rate meter

    NASA Astrophysics Data System (ADS)

    Slack, Robert A.

    1995-06-01

    An instantaneous bit error rate meter provides an instantaneous, real time reading of bit error rate for digital communications data. Bit error pulses are input into the meter and are first filtered in a buffer stage to provide input impedance matching and desensitization to pulse variations in amplitude, rise time and pulse width. The bit error pulses are transformed into trigger signals for a timing pulse generator. The timing pulse generator generates timing pulses for each transformed bit error pulse, and is calibrated to generate timing pulses having a preselected pulse width corresponding to the baud rate of the communications data. An integrator generates a voltage from the timing pulses that is representative of the bit error rate as a function of the data transmission rate. The integrated voltage is then displayed on a meter to indicate the bit error rate.

  14. Bit-serial neuroprocessor architecture

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    2001-01-01

    A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.

  15. Stability of single skyrmionic bits

    NASA Astrophysics Data System (ADS)

    Vedmedenko, Olena; Hagemeister, Julian; Romming, Niklas; von Bergmann, Kirsten; Wiesendanger, Roland

    The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data. Financial support from the DFG in the framework of the SFB668 is acknowledged.
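
    A hedged illustration of how an attempt frequency and a field-dependent activation energy combine into a mean lifetime via an Arrhenius (Neel-Brown) law, which is the standard form in which these quantities enter; the linear field dependences and every number below are placeholders, not values fitted in the paper.

        import math

        K_B = 1.380649e-23   # Boltzmann constant, J/K

        def mean_lifetime(delta_e, attempt_freq, temperature):
            """tau = exp(dE / kB*T) / f0."""
            return math.exp(delta_e / (K_B * temperature)) / attempt_freq

        def find_critical_field(temperature=30.0):
            """Scan the field and report where skyrmion and FM lifetimes cross."""
            e_sk, slope_sk, f_sk = 100 * K_B, 30 * K_B, 1e9    # skyrmion: lower attempt frequency
            e_fm, slope_fm, f_fm = 60 * K_B, 10 * K_B, 1e10    # ferromagnetic state
            for i in range(500):
                b = i * 0.01                                   # field in tesla
                tau_sk = mean_lifetime(e_sk - slope_sk * b, f_sk, temperature)
                tau_fm = mean_lifetime(e_fm + slope_fm * b, f_fm, temperature)
                if tau_sk <= tau_fm:
                    return b, tau_sk, tau_fm
            return None

        print(find_critical_field())   # field (T) at which the two lifetimes meet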

  16. FastBit Reference Manual

    SciTech Connect

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
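
    A toy equality-encoded bitmap index (not FastBit's API, and without WAH compression) showing why the bitwise operations matter: one bitmap is kept per distinct value, a one-dimensional range query ORs bitmaps together, and a multi-dimensional query ANDs the per-column answers. These bitwise combinations are exactly the operations WAH is designed to keep fast on compressed data.

        from collections import defaultdict

        def build_index(column):
            """One bitmap (a Python int) per distinct value; bit i marks row i."""
            index = defaultdict(int)
            for row, value in enumerate(column):
                index[value] |= 1 << row
            return index

        def range_query(index, low, high):
            """Rows with low <= value <= high, as one combined bitmap."""
            result = 0
            for value, bitmap in index.items():
                if low <= value <= high:
                    result |= bitmap
            return result

        energy = [1, 7, 3, 9, 4, 7]          # two toy columns over six rows
        temp   = [5, 2, 8, 2, 9, 1]
        idx_e, idx_t = build_index(energy), build_index(temp)

        hits = range_query(idx_e, 3, 7) & range_query(idx_t, 1, 4)      # 2-D query
        print([row for row in range(len(energy)) if hits >> row & 1])   # -> [1, 5]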

  17. Stinger Enhanced Drill Bits For EGS

    SciTech Connect

    Durrand, Christopher J.; Skeem, Marcus R.; Crockett, Ron B.; Hall, David R.

    2013-04-29

    The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in hard rock and high-temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that can support a threefold increase in penetration rate over conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed-bladed bit, known as the JackBit®, populated with both shear cutters and Stingers, which is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed-bladed bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field. The JackBit has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports, all other information is confidential.

  18. FMO-based H.264 frame layer rate control for low bit rate video transmission

    NASA Astrophysics Data System (ADS)

    Cajote, Rhandley D.; Aramvith, Supavadee; Miyanaga, Yoshikazu

    2011-12-01

    The use of flexible macroblock ordering (FMO) in H.264/AVC improves error resiliency at the expense of reduced coding efficiency with added overhead bits for slice headers and signalling. The trade-off is most severe at low bit rates, where header bits occupy a significant portion of the total bit budget. To better manage the rate and improve coding efficiency, we propose enhancements to the H.264/AVC frame layer rate control, which take into consideration the effects of using FMO for video transmission. In this article, we propose a new header bits model, an enhanced frame complexity measure, a bit allocation and a quantization parameter adjustment scheme. Simulation results show that the proposed improvements achieve better visual quality compared with the JM 9.2 frame layer rate control with FMO enabled using a different number of slice groups. Using FMO as an error resilient tool with better rate management is suitable in applications that have limited bandwidth and in error prone environments such as video transmission for mobile terminals.
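
    A generic frame-layer allocation sketch, not the paper's exact models: a target bit budget per frame is split into estimated header bits and texture bits, and a quantization step is chosen from the familiar quadratic rate model R_texture = c1*MAD/Qstep + c2*MAD/Qstep^2. The constants, the header-bits estimate, and the MAD value are all placeholders.

        def frame_bit_budget(bitrate_bps, frame_rate, buffer_fullness_ratio):
            """Nominal per-frame budget, nudged by how full the encoder buffer is."""
            return (bitrate_bps / frame_rate) * (1.2 - 0.4 * buffer_fullness_ratio)

        def choose_qstep(texture_bits, mad, c1=1.0, c2=0.5):
            """Smallest Qstep (on a coarse grid) whose predicted bits fit the budget."""
            for qstep in [q * 0.5 for q in range(1, 104)]:       # 0.5 .. 51.5
                predicted = c1 * mad / qstep + c2 * mad / qstep ** 2
                if predicted <= texture_bits:
                    return qstep
            return 51.5

        budget = frame_bit_budget(64_000, 15, buffer_fullness_ratio=0.6)   # low bit rate
        header_estimate = 0.15 * budget          # e.g. slice headers when FMO is enabled
        qstep = choose_qstep(budget - header_estimate, mad=900.0)
        print(round(budget), round(header_estimate), qstep)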

  19. Computer Processor Allocator

    2004-03-01

    The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
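
    A hypothetical miniature of the idea (the table name, fields, and policy below are invented, not the CPA's actual schema): a persistent table records health and current allocation for every processor, and each request is served only from healthy, free processors, with the table updated so that later decisions see the change.

        import sqlite3

        db = sqlite3.connect(":memory:")         # stands in for the persistent database
        db.execute("CREATE TABLE procs (id INTEGER PRIMARY KEY, healthy INTEGER, job TEXT)")
        db.executemany("INSERT INTO procs VALUES (?, ?, NULL)",
                       [(i, 0 if i in (2, 5) else 1) for i in range(8)])

        def allocate(job, count):
            """Reserve `count` healthy, free processors for `job`, or none at all."""
            rows = db.execute("SELECT id FROM procs WHERE healthy=1 AND job IS NULL "
                              "LIMIT ?", (count,)).fetchall()
            if len(rows) < count:
                return []                        # not enough capacity
            ids = [r[0] for r in rows]
            db.executemany("UPDATE procs SET job=? WHERE id=?", [(job, i) for i in ids])
            db.commit()
            return ids

        def release(job):
            db.execute("UPDATE procs SET job=NULL WHERE job=?", (job,))
            db.commit()

        print(allocate("app-A", 3))    # e.g. [0, 1, 3]  (2 and 5 are marked unhealthy)
        print(allocate("app-B", 5))    # []  -- only 3 healthy processors remain free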

  20. REVERSIBLE N-BIT TO N-BIT INTEGER HAAR-LIKE TRANSFORMS

    SciTech Connect

    Duchaineau, M; Joy, K I; Senecal, J

    2004-02-14

    We introduce TLHaar, an n-bit to n-bit reversible transform similar to the Haar Integer Wavelet Transform (IWT). TLHaar uses lookup tables that approximate the Haar IWT, but reorder the coefficients so they fit into n bits. TLHaar is suited for lossless compression in fixed-width channels, such as digital video channels and graphics hardware frame buffers.
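
    TLHaar itself is table-driven; the sketch below shows only the underlying reversible integer Haar (S-transform) step that it approximates, including the exact round trip. Note that the detail coefficient d = a - b needs n+1 bits for n-bit inputs, which is precisely the coefficient growth the table-based reordering is designed to avoid.

        def s_transform(pairs):
            """Forward integer Haar: (a, b) -> (low, detail), exactly invertible."""
            return [((a + b) // 2, a - b) for a, b in pairs]

        def s_inverse(coeffs):
            """Invert: a = low + ceil(detail / 2), b = a - detail."""
            out = []
            for low, d in coeffs:
                a = low + (d + 1) // 2
                out.append((a, a - d))
            return out

        samples = [(200, 50), (13, 250), (0, 255), (128, 127)]   # 8-bit pixel pairs
        coeffs = s_transform(samples)
        assert s_inverse(coeffs) == samples                      # lossless round trip
        print(coeffs)    # details range over [-255, 255]: 9 bits for 8-bit input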

  1. Hey! A Brown Recluse Spider Bit Me!

    MedlinePlus

  2. Drill bit with suction jet means

    SciTech Connect

    Castel, Y.; Cholet, H.

    1980-12-16

    This drill bit comprises a plurality of rollers provided with cutting teeth or inserts. At least one upwardly directed eduction jet is created and the bit comprises at least one nozzle located between two adjacent rollers and creating at least two fluid jets respectively directed towards these two adjacent rollers.

  3. MWD tools open window at bit

    SciTech Connect

    Not Available

    1993-05-24

    A new measurement-while-drilling (MWD) system takes resistivity and directional measurements directly at the bit, allowing drillers and geologists to 'see' the true direction and inclination of the bit with respect to the formation drilled. With real-time resistivity measurements at the bit (RAB), the formation is logged before fluid invasion occurs and the driller can steer directional wells more accurately than with conventional MWD tools. The MWD tools comprise an instrumented steerable motor and an instrumented near-bit stabilizer for rotary drilling. The tools have sensors for resistivity, gamma ray, and inclination located in a sub just behind the bit. The integrated steerable system was successfully tested in the Barbara 79 D well offshore Italy and in the Cortemaggiore 134 D well in northern Italy in November, 1992. This paper describes the system and its advantages over conventional MWD tools.

  4. An Improved N-Bit to N-Bit Reversible Haar-Like Transform

    SciTech Connect

    Senecal, J G; Lindstrom, P; Duchaineau, M A; Joy, K I

    2004-07-26

    We introduce the Piecewise-Linear Haar (PLHaar) transform, a reversible n-bit to n-bit transform that is based on the Haar wavelet transform. PLHaar is continuous, while all current n-bit to n-bit methods are not, and is therefore uniquely usable with both lossy and lossless methods (e.g. image compression). PLHaar has both integer and continuous (i.e. non-discrete) forms. By keeping the coefficients to n bits PLHaar is particularly suited for use in hardware environments where channel width is limited, such as digital video channels and graphics hardware.

  5. Bit-string scattering theory

    SciTech Connect

    Noyes, H.P.

    1990-01-29

    We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc^2 in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free-particle wave functions taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc^2 our wave functions can be approximated by solutions of the free-particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound- and resonant-state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_πN^2)^2 = (2 m_N/m_π)^2 - 1. 21 refs., 1 fig.

  6. Polynomial optimization techniques for activity scheduling. Optimization based prototype scheduler

    NASA Technical Reports Server (NTRS)

    Reddy, Surender

    1991-01-01

    Polynomial optimization techniques for activity scheduling (optimization-based prototype scheduler) are presented in the form of viewgraphs. The following subject areas are covered: agenda; need and viability of polynomial time techniques for SNC (Space Network Control); an intrinsic characteristic of the SN scheduling problem; expected characteristics of the schedule; optimization-based scheduling approach; single resource algorithms; decomposition of multiple resource problems; prototype capabilities, characteristics, and test results; computational characteristics; some features of prototyped algorithms; and some related GSFC references.

  7. PDC bits find applications in Oklahoma drilling

    SciTech Connect

    Offenbacher, L.A.; McDermaid, J.D.; Patterson, C.R.

    1983-02-01

    Drilling in Oklahoma is difficult by any standards. Polycrystalline diamond cutter (PDC) bits, with proven success drilling soft, homogeneous formations common in the North Sea and U.S. Gulf Coast regions, have found some significant "spot" applications in Oklahoma. Applications qualified by bit design and application development over the past two years include slim hole drilling in the deep Anadarko Basin, deviation control in Southern Oklahoma, drilling on mud motors, drilling in oil-base mud, drilling cement, sidetracking, coring, and some rotary drilling in larger hole sizes. PDC bits are formation sensitive, and care must be taken in selecting where to run them in Oklahoma. Most of the successful runs have been in water-base mud drilling hard shales and soft, unconsolidated sands and lime, although bit life is often extended in oil-base muds.

  8. A practical quantum bit commitment protocol

    NASA Astrophysics Data System (ADS)

    Arash Sheikholeslam, S.; Aaron Gulliver, T.

    2012-01-01

    In this paper, we introduce a new quantum bit commitment protocol which is secure against entanglement attacks. A general cheating strategy is examined and shown to be practically ineffective against the proposed approach.

  9. 28-Bit serial word simulator/monitor

    NASA Technical Reports Server (NTRS)

    Durbin, J. W.

    1979-01-01

    Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.

  10. FastBit: Interactively Searching Massive Data

    SciTech Connect

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.

  11. Using magnetic permeability bits to store information

    NASA Astrophysics Data System (ADS)

    Timmerwilke, John; Petrie, J. R.; Wieland, K. A.; Mencia, Raymond; Liou, Sy-Hwang; Cress, C. D.; Newburgh, G. A.; Edelstein, A. S.

    2015-10-01

    Steps are described in the development of a new magnetic memory technology, based on states with different magnetic permeability, with the capability to reliably store large amounts of information in a high-density form for decades. The advantages of using the permeability to store information include an insensitivity to accidental exposure to magnetic fields or temperature changes, both of which are known to corrupt memory approaches that rely on remanent magnetization. The high permeability media investigated consists of either films of Metglas 2826 MB (Fe40Ni38Mo4B18) or bilayers of permalloy (Ni78Fe22)/Cu. Regions of films of the high permeability media were converted thermally to low permeability regions by laser or ohmic heating. The permeability of the bits was read by detecting changes of an external 32 Oe probe field using a magnetic tunnel junction 10 μm away from the media. Metglas bits were written with 100 μs laser pulses and arrays of 300 nm diameter bits were read. The high and low permeability bits written using bilayers of permalloy/Cu are not affected by 10 Mrad(Si) of gamma radiation from a 60Co source. An economical route for writing and reading bits as small as 20 nm using a variation of heat assisted magnetic recording is discussed.

  12. Managing the number of tag bits transmitted in a bit-tracking RFID collision resolution protocol.

    PubMed

    Landaluce, Hugo; Perallos, Asier; Angulo, Ignacio

    2014-01-08

    Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth, and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology, used in CT, improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different scenario conditions. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits that are transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios.
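
    A schematic simulation of the collision-tree idea only: tags whose IDs match the queried prefix answer with their remaining bits, bit-tracking (Manchester-style) reveals the first bit position where the answers disagree, and the reader splits the query at that bit. The paper's window mechanism, which caps how many bits the tags transmit per slot, is deliberately not modeled here.

        def ct_identify(tag_ids):
            identified, queries, stack = [], 0, [""]
            while stack:
                prefix = stack.pop()
                queries += 1
                responses = [t[len(prefix):] for t in tag_ids if t.startswith(prefix)]
                if not responses:
                    continue
                # first bit position where the responses disagree (the collided bit)
                collided = next((i for i in range(len(responses[0]))
                                 if len({r[i] for r in responses}) > 1), None)
                if collided is None:
                    identified.append(prefix + responses[0])     # single tag resolved
                else:
                    common = responses[0][:collided]
                    stack.append(prefix + common + "0")
                    stack.append(prefix + common + "1")
            return identified, queries

        tags = ["0101", "0110", "1100", "1101", "1011"]
        ids, n_queries = ct_identify(tags)
        print(sorted(ids), n_queries)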

  13. Friction of drill bits under Martian pressure

    NASA Astrophysics Data System (ADS)

    Zacny, K. A.; Cooper, G. A.

    2007-03-01

    Frictional behavior was investigated for two materials that are good candidates for Mars drill bits: Diamond Impregnated Segments and Polycrystalline Diamond Compacts (PDC). The bits were sliding against dry sandstone and basalt rocks under both Earth and Mars atmospheric pressures and also at temperatures ranging from subzero to over 400 °C. It was found that the friction coefficient dropped from approximately 0.16 to 0.1 as the pressure was lowered from the Earth's pressure to Mars' pressure, at room temperature. This is thought to be a result of the loss of weakly bound water on the sliding surfaces. Holding the pressure at 5 torr and increasing the temperature to approximately 200°C caused a sudden increase in the friction coefficient by approximately 50%. This is attributed to the loss of surface oxides. If no indication of the bit temperature is available, an increase in drilling torque could be misinterpreted as being caused by an increase in auger torque (due to accumulation of cuttings) rather than being the result of a loss of oxide layers due to elevated bit temperatures. An increase in rotational speed (to allow for clearing of cuttings) would then cause greater frictional heating and would increase the drilling torque further. Therefore it would be advisable to monitor the bit temperature or, if that is not possible, to include pauses in drilling to allow the heat to dissipate. Higher friction would also accelerate the wear of the drill bit and in turn reduce the depth of the hole.

  14. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffers because of either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which bit-rate is allocated to match the available network bandwidth is required. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach that is based on the cerebellar-model-articulation-controller (CMAC) through artificial neural networks to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76

  16. Classical teleportation of a quantum Bit

    PubMed

    Cerf; Gisin; Massar

    2000-03-13

    Classical teleportation is defined as a scenario where the sender is given the classical description of an arbitrary quantum state while the receiver simulates any measurement on it. This scenario is shown to be achievable by transmitting only a few classical bits if the sender and receiver initially share local hidden variables. Specifically, a communication of 2.19 bits is sufficient on average for the classical teleportation of a qubit, when restricted to von Neumann measurements. The generalization to positive-operator-valued measurements is also discussed.

  17. Quantum bit commitment under Gaussian constraints

    NASA Astrophysics Data System (ADS)

    Mandilara, Aikaterini; Cerf, Nicolas J.

    2012-06-01

    Quantum bit commitment has long been known to be impossible. Nevertheless, just as in the classical case, imposing certain constraints on the power of the parties may enable the construction of asymptotically secure protocols. Here, we introduce a quantum bit commitment protocol and prove that it is asymptotically secure if cheating is restricted to Gaussian operations. This protocol exploits continuous-variable quantum optical carriers, for which such a Gaussian constraint is experimentally relevant as the high optical nonlinearity needed to effect deterministic non-Gaussian cheating is inaccessible.

  18. Protected Polycrystalline Diamond Compact Bits For Hard Rock Drilling

    SciTech Connect

    Robert Lee Cardenas

    2000-10-31

    Two bits were designed. One bit was fabricated and tested at Terra-Tek's Drilling Research Laboratory. Fabrication of the second bit was not completed due to complications in fabrication and meeting scheduled test dates at the test facility. A conical bit was tested in Carthage Marble (compressive strength 14,500 psi) and Sierra White Granite (compressive strength 28,200 psi). During the testing, hydraulic horsepower, bit weight, and rotation rate were varied for the Conical Bit, a Varel Tricone Bit, and a Varel PDC bit. The Conical Bit did cut rock at a reasonable rate in both rocks. Beneficial effects from the near and through cutter water nozzles were not evident in the marble due to test conditions and were not conclusive in the granite due to test conditions. In atmospheric drilling, the Conical Bit's penetration rate was as good as the standard PDC bit and better than the Tricone Bit. Torque requirements for the Conical Bit were higher than those required for the standard bits. Spudding the conical bit into the rock required some care to avoid overloading the nose cutters. The nose design should be evaluated to improve the bit's spudding characteristics.

  19. The Unobtrusive Memory Allocator

    2003-03-31

    This library implements a memory allocator/manager which asks its host program or library for memory regions to manage rather than requesting them from the operating system. This allocator supports multiple distinct heaps within a single executable, each of which may grow either upward or downward in memory. The GNU mmalloc library has been modified in such a way that its allocation algorithms have been preserved, but the manner in which it obtains regions to manage has been changed to request memory from the host program or library. Additional modifications allow the allocator to manage each heap as either upward- or downward-growing. By allowing the hosting program or library to determine what memory is managed, this package allows a greater degree of control than other memory allocation/management libraries. Additional distinguishing features include the ability to manage multiple distinct heaps within a single executable, each of which may grow either upward or downward in memory. The most common use of this library is in conjunction with the Berkeley Unified Parallel C (UPC) Runtime Library. This package is a modified version of the LGPL-licensed "mmalloc" allocator from release 5.2 of the "gdb" debugger's source code.
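
    A toy illustration of the "host supplies the memory" contract described above (this is not mmalloc's algorithm, and every name is invented): the heap never calls the OS; it asks a host-provided callback for (base, size) regions and hands out addresses from them, growing either upward or downward. It is a pure bump allocator with no free list, so a filled region is simply abandoned.

        class HostedHeap:
            def __init__(self, request_region, grow_upward=True):
                self.request_region = request_region   # host callback -> (base, size)
                self.grow_upward = grow_upward
                self.base = self.limit = self.cursor = None

            def _grow(self, size):
                base, cap = self.request_region(max(size, 4096))
                self.base, self.limit = base, base + cap
                self.cursor = base if self.grow_upward else self.limit

            def alloc(self, size):
                need_new = (self.cursor is None
                            or (self.grow_upward and self.cursor + size > self.limit)
                            or (not self.grow_upward and self.cursor - size < self.base))
                if need_new:
                    self._grow(size)
                if self.grow_upward:
                    addr, self.cursor = self.cursor, self.cursor + size
                else:
                    self.cursor = addr = self.cursor - size
                return addr

        # The "host" here is just a counter handing out disjoint address ranges.
        _next = [0x10000]
        def host_gives_region(size):
            base = _next[0]
            _next[0] += size
            return base, size

        up = HostedHeap(host_gives_region, grow_upward=True)
        down = HostedHeap(host_gives_region, grow_upward=False)
        print([hex(up.alloc(64)) for _ in range(3)])     # ascending addresses
        print([hex(down.alloc(64)) for _ in range(3)])   # descending addresses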

  20. Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection

    PubMed Central

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-01-01

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505

  1. Antenna allocation in MIMO radar with widely separated antennas for multi-target detection.

    PubMed

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-10-27

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes.

  2. Jet bit with onboard deviation means

    SciTech Connect

    Cherrington, M.D.

    1990-02-13

    This patent describes a directional drill bit utilizing pressurized fluid as a means for eroding earth in a forward path of said bit. It comprises: an elongate hollow body having a first proximal end and a first distal end, and having at least a rigid first section and at least a rigid second section. The first section and said second section being connected one to the other by a flexible joint positioned intermediately of said first section and said second section, with the combination of said first section, said flexible joint and said second section providing a conduit having lead-free annular sidewalls. The said combination thereby defining said elongate hollow body; a connecting means formed by said first proximal end for joining said elongated hollow body with an appropriate fluid conveyance means used to transport said pressurized fluid; a nozzle means borne by said first distal end. The nozzle means comprising a nozzle plate having at least one jet nozzle attached to and carried by said nozzle plate; and an articulation means. The articulation means being responsive to changes in fluid pressure and permitting a forward portion of said bit bearing said nozzle structure to change angular position with respect to an aft portion of the bit.

  3. Composite grease for rock bit bearings

    SciTech Connect

    Newcomb, A.L.

    1982-11-09

    A rock bit for drilling subterranean formations is lubricated with a grease with the following composition: molybdenum disulfide particles in the range of from 6 to 14% by weight; copper particles in the range of from 3 to 9% by weight; a metal soap thickener in the range of from 4 to 10% by weight; and a balance of primarily hydrocarbon oil.

  4. Multiple bit differential detection of offset QPSK

    NASA Technical Reports Server (NTRS)

    Simon, M.

    2003-01-01

    Analogous to multiple symbol differential detection of quadrature phase-shift-keying, a multiple bit differential detection scheme is described for offset QPSK that also exhibits continuous improvement in performance with increasing observation interval. Being derived from maximum-likelihood (ML) considerations, the proposed scheme is purported to be the most power efficient scheme for such a modulation and detection method.

  5. REVERSIBLE N-BIT TO N-BIT INTEGER HAAR-LIKE TRANSFORMS

    SciTech Connect

    Senecal, J G; Duchaineau, M A; Joy, K I

    2004-07-26

    We introduce TLHaar, an n-bit to n-bit reversible transform similar to the S-transform. TLHaar uses lookup tables that approximate the S-transform, but reorder the coefficients so they fit into n bits. TLHaar is suited for lossless compression in fixed-width channels, such as digital video channels and graphics hardware frame buffers. Tests indicate that when the incoming image data has lines or hard edges TLHaar coefficients compress better than S-transform coefficients. For other types of image data TLHaar coefficients compress up to 2.5% worse than those of the S-transform, depending on the data and the compression method used.

  6. Frictional ignition with coal-mining bits. Information Circular/1990

    SciTech Connect

    Courtney, W.G.

    1990-01-01

    The publication reviews recent U.S. Bureau of Mines studies of frictional ignition of a methane-air environment by coal mining bits cutting into sandstone and the effectiveness of remedial techniques to reduce the likelihood of frictional ignition. Frictional ignition with a mining bit always involves a worn bit having a wear flat on the tip of the bit. The worn bit forms hot spots on the surface of the sandstone because of frictional abrasion. The hot spots then can ignite the methane-air environment. A small wear flat forms a small hot spot, which does not give ignition, while a large wear flat forms a large hot spot, which gives ignition. The likelihood of frictional ignition can be somewhat reduced by using a mushroom-shaped tungsten-carbide bit tip on the mining bit and by increasing the bit clearance angle; it can be significantly reduced by using a water spray nozzle in back of each bit.

  7. Bit-by-bit autophagic removal of parkin-labelled mitochondria.

    PubMed

    Yang, Jin-Yi; Yang, Wei Yuan

    2013-01-01

    Eukaryotic cells maintain mitochondrial integrity through mitophagy, an autophagic process by which dysfunctional mitochondria are selectively sequestered into double-layered membrane structures, termed phagophores, and delivered to lysosomes for degradation. Here we show that small fragments of parkin-labelled mitochondria at omegasome-marked sites are engulfed by autophagic membranes one at a time. Using a light-activation scheme to impair long mitochondrial tubules, we demonstrate that sites undergoing bit-by-bit mitophagy display preferential ubiquitination, and are situated where parkin-labelled mitochondrial tubules and endoplasmic reticulum intersect. Our observations suggest contact regions between the endoplasmic reticulum and impaired mitochondria are initiation sites for local LC3 recruitment and mitochondrial remodelling that support bit-by-bit, parkin-mediated mitophagy. These results help in understanding how cells manage to fit large and morphologically heterogeneous mitochondria into micron-sized autophagic membranes during mitophagy.

  8. Bit-by-bit optical code scrambling technique for secure optical communication.

    PubMed

    Wang, Xu; Gao, Zhensen; Wang, Xuhua; Kataoka, Nobuyuki; Wada, Naoya

    2011-02-14

    We propose and demonstrate a novel bit-by-bit code scrambling technique based on time domain spectral phase encoding/decoding (SPE/SPD) scheme using only a single phase modulator to simultaneously generate and decode the code hopping sequence and DPSK data for secure optical communication application. In the experiment, 2.5-Gb/s DPSK data has been generated, decoded and securely transmitted over 34 km by scrambling five 8-chip, 20-Gchip/s Gold codes with prime-hop patterns. The proposed scheme can rapidly reconfigure the optical code hopping sequence bit-by-bit with the DPSK data, and thus it is very robust to conventional data rate energy detection and DPSK demodulation attack, exhibiting the potential to provide unconditional transmission security and realize even one-time pad.

  9. Management of an adaptable-bit-rate video service in a MAN environment

    NASA Astrophysics Data System (ADS)

    Marini, Michele; Albanese, Andres

    1991-02-01

    This paper describes an adaptable-bit-rate video service concept experiment and its management in an experimental prototype of a public metropolitan area network (MAN). In the experiment the "service providers" supply their customers with a set of service management primitives to implement customer-defined management applications and provide users with a high level of flexibility in the service definition. The paper describes the architecture for an experimental service management system that includes customer-controlled features for dynamic bandwidth allocation, group addressing, and address screening.

  10. Bit-1 is an essential regulator of myogenic differentiation.

    PubMed

    Griffiths, Genevieve S; Doe, Jinger; Jijiwa, Mayumi; Van Ry, Pam; Cruz, Vivian; de la Vega, Michelle; Ramos, Joe W; Burkin, Dean J; Matter, Michelle L

    2015-05-01

    Muscle differentiation requires a complex signaling cascade that leads to the production of multinucleated myofibers. Genes regulating the intrinsic mitochondrial apoptotic pathway also function in controlling cell differentiation. How such signaling pathways are regulated during differentiation is not fully understood. Bit-1 (also known as PTRH2) mutations in humans cause infantile-onset multisystem disease with muscle weakness. We demonstrate here that Bit-1 controls skeletal myogenesis through a caspase-mediated signaling pathway. Bit-1-null mice exhibit a myopathy with hypotrophic myofibers. Bit-1-null myoblasts prematurely express muscle-specific proteins. Similarly, knockdown of Bit-1 expression in C2C12 myoblasts promotes early differentiation, whereas overexpression delays differentiation. In wild-type mice, Bit-1 levels increase during differentiation. Bit-1-null myoblasts exhibited increased levels of caspase 9 and caspase 3 without increased apoptosis. Bit-1 re-expression partially rescued differentiation. In Bit-1-null muscle, Bcl-2 levels are reduced, suggesting that Bcl-2-mediated inhibition of caspase 9 and caspase 3 is decreased. Bcl-2 re-expression rescued Bit-1-mediated early differentiation in Bit-1-null myoblasts and C2C12 cells with knockdown of Bit-1 expression. These results support an unanticipated yet essential role for Bit-1 in controlling myogenesis through regulation of Bcl-2.

  11. Simulation of Evapotranspiration using an Optimality-based Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, Lajiao

    2014-05-01

    Accurate estimation of evapotranspiration (ET) is essential for understanding the effect of climate change and human activities on ecosystems and water resources. Most traditional hydrological or ecohydrological models treat ET as a physical process controlled by energy, vapor pressure, and turbulence. This is at times questionable, as transpiration, the major component of ET, is a biological activity closely linked to photosynthesis through stomatal conductance. Optimality-based ecohydrological models consider the mutual interaction of ET and photosynthesis based on an optimality principle. However, as a rising generation of ecohydrological models, they have so far seen only a few applications in different ecosystems, and their ability and reliability for ecohydrological modeling need to be validated more widely. The objective of this study is to validate the optimality hypothesis for a water-limited ecosystem. To achieve this, the study applied an optimality-based model, the Vegetation Optimality Model (VOM), to simulate ET and its components in a semiarid watershed. The simulated ET and soil water were compared with long-term measurements at the Kendall and Lucky Hills sites in the watershed. The results showed that the temporal variations of simulated ET and soil water are in good agreement with the observed data, and that the temporal dynamics of soil evaporation and transpiration and their response to precipitation events are well captured by the model. This leads to the conclusion that an optimality-based ecohydrological model can be a viable approach to simulating ET.

  12. Acquisition and Retaining Granular Samples via a Rotating Coring Bit

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart

    2013-01-01

    This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated, and a granular sample is entered into the bit while it is spinning, making it adhere to the internal wall of the bit, where it compacts itself into the wall of the bit. The bit can be specially designed to increase the effectiveness of regolith capturing while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during the rotation of the bit. The bit can be designed with an internal flute that directs the regolith upward inside the bit. The use of both the teeth and flute can be implemented in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way, the teeth guide the regolith outward of the bit and when turning in the opposite direction, the teeth will guide the regolith inward into the bit internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining granular sample, and the acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into the soil that was contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit. To prove this point, the drill was turned horizontally, and the acquired soil was still inside the bit. The basic theory behind the process of retaining unconsolidated mass that can be acquired by the centrifugal forces of the bit is determined by noting that in order to stay inside the interior of the bit, the

  13. Drill bit with improved cutter sizing pattern

    SciTech Connect

    Keith, C.W.; Clayton, R.I.

    1993-08-24

    A fixed cutter drill bit is described having a body with a nose portion thereof containing a plurality of angularly spaced generally radial wings, a first of said wings including a first row of cutting elements mounted thereon upon progressing radially outward from a center of said nose portion toward a periphery of the body of the bit, said first row of cutting elements having alternately larger and smaller area cutting faces at spaced radial positions along said first wing relative to the center of said nose, a second of said wings having a second similar row of cutting elements of larger and smaller area cutting faces thereon in substantially the same but reversed radial positions with respect to the relative radial placement of the larger and smaller diameter cutting faces of said elements in said first wing.

  14. Earth boring bit with eccentric seal boss

    SciTech Connect

    Helmick, J.E.

    1981-07-21

    A rolling cone cutter earth boring bit is provided with a sealing system that results in the seal being squeezed uniformly around the seal circumference during drilling. The bearing pin seal surface is machined eccentrically to the bearing pin by an amount equal to the radial clearance of the bearing. The bearing pin seal surface is machined about an axis that is offset from the central axis of the bearing pin in the direction of the unloaded side of the bearing pin. When the bit is drilling and the bearing pin is loaded the seal will run on an axis concentric with the axis of the seal surfaces of the bearing pin and the rolling cutter and will see uniform squeeze around its circumference.

  15. Cosmic Ray Induced Bit-Flipping Experiment

    NASA Astrophysics Data System (ADS)

    Pu, Ge; Callaghan, Ed; Parsons, Matthew; Cribflex Team

    2015-04-01

    CRIBFLEX is a novel approach to mid-altitude observational particle physics intended to correlate the phenomena of semiconductor bit-flipping with cosmic ray activity. Here a weather balloon carries a Geiger counter and DRAM memory to various altitudes; the data collected will contribute to the development of memory device protection. We present current progress toward initial flight and data acquisition. This work is supported by the Society of Physics Students with funding from a Chapter Research Award.

  16. Lathe tool bit and holder for machining fiberglass materials

    NASA Technical Reports Server (NTRS)

    Winn, L. E. (Inventor)

    1972-01-01

    A lathe tool and holder combination for machining resin impregnated fiberglass cloth laminates is described. The tool holder and tool bit combination is designed to accommodate a conventional carbide-tipped, round shank router bit as the cutting medium, and provides an infinite number of cutting angles in order to produce a true and smooth surface in the fiberglass material workpiece with every pass of the tool bit. The technique utilizes damaged router bits which ordinarily would be discarded.

  17. Method to manufacture bit patterned magnetic recording media

    DOEpatents

    Raeymaekers, Bart; Sinha, Dipen N

    2014-05-13

    A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.

  18. Genetic algorithm approach for adaptive power and subcarrier allocation in multi-user OFDM systems

    NASA Astrophysics Data System (ADS)

    Reddy, Y. B.; Naraghi-Pour, Mort

    2007-04-01

    In this paper, a novel genetic algorithm application is proposed for adaptive power and subcarrier allocation in multi-user Orthogonal Frequency Division Multiplexing (OFDM) systems. To test the application, a simple genetic algorithm was implemented in MATLAB. With the goal of minimizing the overall transmit power while ensuring that each user's rate and bit error rate (BER) requirements are met, the proposed algorithm finds the needed allocation through genetic search. The simulations covered BERs from 0.1 to 0.00001, a data rate of 256 bits per OFDM block, and a chromosome length of 128. The results show that the genetic algorithm outperforms the results in [3] for subcarrier allocation. The GA model with 8 users and 128 subcarriers achieves a lower power requirement than that in [4] but converges more slowly.
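
    A compact genetic algorithm in the spirit of the abstract, not the authors' MATLAB implementation: a chromosome assigns each subcarrier to a user, each user spreads its target bits evenly over its subcarriers, and the power needed on a subcarrier with channel gain g to carry b bits is taken as (2^b - 1)/g (unit SNR gap). The user count, channel gains, and GA settings below are synthetic.

        import random

        random.seed(1)
        N_USERS, N_SUB, TARGET_BITS = 4, 32, 64          # bits per user per OFDM block
        GAINS = [[random.uniform(0.1, 2.0) for _ in range(N_SUB)] for _ in range(N_USERS)]

        def total_power(chrom):
            subs = [[k for k, u in enumerate(chrom) if u == user] for user in range(N_USERS)]
            if any(not s for s in subs):
                return float("inf")                      # every user needs subcarriers
            power = 0.0
            for user, s in enumerate(subs):
                bits = TARGET_BITS / len(s)              # equal bit loading per subcarrier
                power += sum((2 ** bits - 1) / GAINS[user][k] for k in s)
            return power

        def evolve(pop_size=40, generations=200, p_mut=0.05):
            pop = [[random.randrange(N_USERS) for _ in range(N_SUB)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=total_power)
                next_pop = pop[:4]                       # elitism
                while len(next_pop) < pop_size:
                    a, b = random.sample(pop[:20], 2)    # truncation selection
                    cut = random.randrange(1, N_SUB)
                    child = a[:cut] + b[cut:]            # one-point crossover
                    child = [random.randrange(N_USERS) if random.random() < p_mut else g
                             for g in child]             # mutation
                    next_pop.append(child)
                pop = next_pop
            best = min(pop, key=total_power)
            return best, total_power(best)

        allocation, power = evolve()
        print("total power:", round(power, 2))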

  19. NSC 800, 8-bit CMOS microprocessor

    NASA Technical Reports Server (NTRS)

    Suszko, S. F.

    1984-01-01

    The NSC 800 is an 8-bit CMOS microprocessor manufactured by National Semiconductor Corp., Santa Clara, California. The 8-bit microprocessor chip with 40-pad pin-terminals has eight address buffers (A8-A15), eight address/data I/O buffers (AD0-AD7), six interrupt controls and sixteen timing controls with a chip clock generator and an 8-bit dynamic RAM refresh circuit. The 22 internal registers have the capability of addressing 64K bytes of memory and 256 I/O devices. The chip is fabricated on N-type (100) silicon using self-aligned polysilicon gates and local oxidation process technology. The chip interconnect consists of four levels: Aluminum, Polysi 2, Polysi 1, and P+ and N+ diffusions. The four levels, except for contact interface, are isolated by interlevel oxide. The chip is packaged in a 40-pin dual-in-line (DIP), side brazed, hermetically sealed, ceramic package with a metal lid. The operating voltage for the device is 5 V. It is available in three operating temperature ranges: 0 to +70 °C, -40 to +85 °C, and -55 to +125 °C. Two devices were submitted for product evaluation by F. Stott, MTS, JPL Microprocessor Specialist. The devices were pencil-marked and photographed for identification.

  20. An Optical Bit-Counting Algorithm

    NASA Technical Reports Server (NTRS)

    Mack, Marilyn; Lapir, Gennadi M.; Berkovich, Simon

    2000-01-01

    This paper addresses the omnipresent problem of counting bits, an operation discussed since the earliest days of computer science. The need for a quick bit-counting method acquires a special significance with the proliferation of search engines on the Internet. It arises in several other computer applications. This is especially true in information retrieval, in which an array of binary vectors is used to represent a characteristic function (CF) of a set of qualified documents. The number of "1"s in the CF equals the cardinality of the set. The process of repeated evaluations of this cardinality is a pivotal point in choosing a rational strategy for deciding whether to constrain or broaden the search criteria to ensure selection of the desired items. Another need for bit-counting occurs when trying to determine the differences between given files (images or text) in terms of the Hamming distance. An Exclusive OR operation applied to a pair of files results in a binary vector array of mismatches that must be counted.
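
    The two uses named above, cardinality of a characteristic-function bit vector and the Hamming distance of two XORed files, both reduce to a population count. A 256-entry byte table is one classic software answer; the sketch below illustrates the idea and is not the paper's optical algorithm.

        BYTE_COUNTS = bytes(bin(b).count("1") for b in range(256))

        def popcount(data: bytes) -> int:
            """Number of 1 bits in a byte string, via the byte lookup table."""
            return sum(BYTE_COUNTS[b] for b in data)

        def hamming_distance(a: bytes, b: bytes) -> int:
            """Number of bit positions at which two equal-length files differ."""
            return popcount(bytes(x ^ y for x, y in zip(a, b)))

        cf = bytes([0b10110010, 0b00001111, 0b11111111])   # characteristic function
        print(popcount(cf))                                # cardinality of the set: 16
        print(hamming_distance(b"image-A!", b"image-B?"))  # differing bits: 6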

  1. Approaches to Resource Allocation

    ERIC Educational Resources Information Center

    Dressel, Paul; Simon, Lou Anna Kimsey

    1976-01-01

    Various budgeting patterns and strategies are currently in use, each with its own particular strengths and weaknesses. Neither cost-benefit analysis nor cost-effectiveness analysis offers any better solution to the allocation problem than do the unsupported contentions of departments or the historical unit costs. An operable model that performs…

  2. Multiple Leader Candidate and Competitive Position Allocation for Robust Formation against Member Robot Faults

    PubMed Central

    Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon

    2015-01-01

    This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm which can be applicable for various applications including environmental sensing. Unlike previous formation structures such as virtual-leader and actual-leader structures with position allocation including a rigid allocation and an optimization based allocation, the formation employing the proposed MLC structure and CPA algorithm is robust against the fault (or disappearance) of the member robots and reduces the entire cost. In the MLC structure, a leader of the entire system is chosen among leader candidate robots. The CPA algorithm is the decentralized position allocation algorithm that assigns the robots to the vertex of the formation via the competition of the adjacent robots. The numerical simulations and experimental results are included to show the feasibility and the performance of the multiple robot system employing the proposed MLC structure and the CPA algorithm. PMID:25954956

  3. Multiple Leader Candidate and Competitive Position Allocation for Robust Formation against Member Robot Faults.

    PubMed

    Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon

    2015-05-06

    This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm which can be applicable for various applications including environmental sensing. Unlike previous formation structures such as virtual-leader and actual-leader structures with position allocation including a rigid allocation and an optimization based allocation, the formation employing the proposed MLC structure and CPA algorithm is robust against the fault (or disappearance) of the member robots and reduces the entire cost. In the MLC structure, a leader of the entire system is chosen among leader candidate robots. The CPA algorithm is the decentralized position allocation algorithm that assigns the robots to the vertex of the formation via the competition of the adjacent robots. The numerical simulations and experimental results are included to show the feasibility and the performance of the multiple robot system employing the proposed MLC structure and the CPA algorithm.

  4. PDC bits stand up to high speed, soft formation drilling

    SciTech Connect

    Hover, E.R.; Middleton, J.N.

    1982-08-01

    Six experimental, polycrystalline diamond compact (PDC) bit designs were tested in the lab at both high and low speeds in three different types of rock. Testing procedures, bit performance and wear characteristics are discussed. These experimental results are correlated with specific design options such as rake angle and bit profile.

  5. PCM bit detection with correction for intersymbol interference

    NASA Technical Reports Server (NTRS)

    Thumim, A. I.

    1969-01-01

    For pulse code modulation (PCM) bits, the received signal is filtered by an integrate-and-dump filter that is sampled at the end of each PCM bit. A threshold decision circuit determines the level of the sample voltage. The effect of intersymbol interference from a known past bit can be corrected by raising or lowering the threshold voltage.
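
    A tiny sketch of the decision rule just described: each integrate-and-dump sample is compared against a threshold that is raised or lowered according to the previously decided bit. The ISI coefficient (the fraction of the previous symbol leaking into the current sample) is a made-up constant for illustration.

        ISI = 0.3          # assumed fraction of the previous symbol present in a sample

        def detect(samples, isi=ISI):
            """Threshold detection with correction for one known past bit."""
            bits, prev = [], 0
            for s in samples:
                threshold = isi if prev == 1 else -isi   # shift toward the leaked level
                prev = 1 if s > threshold else 0
                bits.append(prev)
            return bits

        # Ideal levels +/-1 with 0.3 of the previous symbol added, no noise:
        tx = [1, 1, 0, 1, 0, 0, 1]
        levels = [1.0 if b else -1.0 for b in tx]
        rx = [levels[0]] + [levels[i] + ISI * levels[i - 1] for i in range(1, len(tx))]
        print(detect(rx) == tx)   # True: corrected thresholds recover the bit stream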

  6. Laboratory and field testing of improved geothermal rock bits

    SciTech Connect

    Hendrickson, R.R.; Jones, A.H.; Winzenried, R.W.; Maish, A.B.

    1980-07-01

    The development and testing of 222 mm (8-3/4 inch) unsealed, insert type, medium hard formation, high-temperature bits are described. The new bits were fabricated by substituting improved materials in critical bit components. These materials were selected on the basis of their high-temperature properties, machinability, and heat treatment response. Program objectives required that both machining and heat treating could be accomplished with existing rock bit production equipment. Two types of experimental bits were subjected to laboratory air drilling tests at 250°C (482°F) in cast iron. These tests indicated field testing could be conducted without danger to the hole, and that bearing wear would be substantially reduced. Six additional experimental bits and eight conventional bits were then subjected to air drilling at 240°C (464°F) in Franciscan Graywacke at The Geysers, CA. The materials selected improved roller wear by 200%, friction-pin wear by 150%, and lug wear by 150%. Geysers drilling performances compared directly to conventional bits indicate that in-gage drilling life was increased by 70%. All bits at The Geysers are subjected to reaming out-of-gage hole prior to drilling. Under these conditions the experimental bits showed a 30% increase in usable hole over the conventional bits. These tests demonstrated a potential well cost reduction of 4 to 8%. Savings of 12% are considered possible with drilling procedures optimized for the experimental bits.

  7. Multi-Bit Nano-Electromechanical Nonvolatile Memory Cells (Zigzag T Cells) for the Suppression of Bit-to-Bit Interference.

    PubMed

    Choi, Woo Young; Han, Jae Hwan; Cha, Tae Min

    2016-05-01

    Multi-bit nano-electromechanical (NEM) nonvolatile memory cells such as T cells were proposed for higher memory density. However, they suffered from bit-to-bit interference (BI). In order to suppress BI without sacrificing cell size, this paper proposes zigzag T cell structures. The BI suppression of the proposed zigzag T cell is verified by finite-element modeling (FEM). Based on the FEM results, the design of zigzag T cells is optimized.

  8. A novel bit-quad-based Euler number computing algorithm.

    PubMed

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm needs to count only two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
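    For context, a minimal reference implementation of the conventional bit-quad approach (Gray's formula, which tallies the 2x2 patterns with one set pixel, three set pixels, and the two diagonal patterns) is sketched below; the paper's contribution is a faster variant that needs to count only two of these patterns.

```python
# Conventional bit-quad (2x2 pattern) Euler number via Gray's formula.
import numpy as np

def euler_number_bit_quads(img, connectivity=8):
    """img: 2D array of 0/1. Returns the Euler number (objects minus holes)."""
    img = np.pad(np.asarray(img, dtype=np.uint8), 1)          # zero border
    # Encode every 2x2 neighbourhood as a 4-bit code.
    q = (img[:-1, :-1] << 3) | (img[:-1, 1:] << 2) | (img[1:, :-1] << 1) | img[1:, 1:]
    counts = np.bincount(q.ravel(), minlength=16)
    q1 = counts[0b1000] + counts[0b0100] + counts[0b0010] + counts[0b0001]  # one pixel set
    q3 = counts[0b0111] + counts[0b1011] + counts[0b1101] + counts[0b1110]  # three pixels set
    qd = counts[0b1001] + counts[0b0110]                                    # diagonal pairs
    if connectivity == 4:
        return (q1 - q3 + 2 * qd) // 4
    return (q1 - q3 - 2 * qd) // 4

# A single filled square: one object, no holes -> Euler number 1.
print(euler_number_bit_quads(np.ones((5, 5), dtype=np.uint8)))
```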

  9. A novel bit-quad-based Euler number computing algorithm.

    PubMed

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm needs to count only two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method significantly outperforms conventional Euler number computing algorithms.

  10. Myrmics Memory Allocator

    SciTech Connect

    Lymperis, S.

    2011-09-23

    MMA is a stand-alone memory management system for MPI clusters. It implements a shared Partitioned Global Address Space, where multiple MPI processes request objects from the allocator and the latter provides them with system-wide unique memory addresses for each object. It gives applications an intuitive, unified way of managing the memory system, thus enabling easier writing of irregular application code.

  11. Cosmic Ray Induced Bit-Flipping Experiment

    NASA Astrophysics Data System (ADS)

    Callaghan, Edward; Parsons, Matthew

    2015-04-01

    CRIBFLEX is a novel approach to mid-altitude observational particle physics intended to correlate the phenomena of semiconductor bit-flipping with cosmic ray activity. Here a weather balloon carries a Geiger counter and DRAM memory to various altitudes; the data collected will contribute to the development of memory device protection. We present current progress toward initial flight and data acquisition. This work is supported by the Society of Physics Students with funding from a Chapter Research Award.

  12. Optimization-based multiple-point geostatistics: A sparse way

    NASA Astrophysics Data System (ADS)

    Kalantari, Sadegh; Abdollahifard, Mohammad Javad

    2016-10-01

    In multiple-point simulation the image should be synthesized consistent with the given training image and hard conditioning data. Existing sequential simulation methods usually lead to error accumulation which is hardly manageable in future steps. Optimization-based methods are capable of handling inconsistencies by iteratively refining the simulation grid. In this paper, the multiple-point stochastic simulation problem is formulated in an optimization-based framework using a sparse model. The sparse model allows each patch to be constructed as a superposition of a few atoms of a dictionary formed from training patterns, leading to a significant increase in the variability of the patches. To control the creativity of the model, a local histogram matching method is proposed. Furthermore, effective solutions are proposed for different issues arising in multiple-point simulation. In order to handle hard conditioning data, a weighted matching pursuit method is developed in this paper. Moreover, a simple and efficient thresholding method is developed which allows working with categorical variables. The experiments show that the proposed method produces acceptable realizations in terms of pattern reproduction, increases the variability of the realizations, and properly handles numerous conditioning data.
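    As background for the sparse patch model, a plain orthogonal matching pursuit sketch is given below; the weighted variant used for hard conditioning data and the dictionary construction from training patterns are not reproduced, and the array shapes are illustrative assumptions.

```python
# Plain orthogonal matching pursuit (OMP): approximate y by a few dictionary atoms.
import numpy as np

def omp(D, y, n_atoms):
    """Approximate y as a combination of at most n_atoms columns of dictionary D."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares fit on the selected atoms, then update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
y = D[:, 3] * 2.0 - D[:, 100] * 0.5            # a 2-sparse signal
print(np.nonzero(omp(D, y, 2))[0])             # typically recovers atoms 3 and 100
```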

  13. Synaptic Tagging During Memory Allocation

    PubMed Central

    Rogerson, Thomas; Cai, Denise; Frank, Adam; Sano, Yoshitake; Shobe, Justin; Aranda, Manuel L.; Silva, Alcino J.

    2014-01-01

    There is now compelling evidence that the allocation of memory to specific neurons (neuronal allocation) and synapses (synaptic allocation) in a neurocircuit is not random and that instead specific mechanisms, such as increases in neuronal excitability and synaptic tagging and capture, determine the exact sites where memories are stored. We propose an integrated view of these processes, such that neuronal allocation, synaptic tagging and capture, spine clustering and metaplasticity reflect related aspects of memory allocation mechanisms. Importantly, the properties of these mechanisms suggest a set of rules that profoundly affect how memories are stored and recalled. PMID:24496410

  14. 50 CFR 660.323 - Pacific whiting allocations, allocation attainment, and inseason allocation reapportionment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Pacific whiting allocations, allocation attainment, and inseason allocation reapportionment. 660.323 Section 660.323 Wildlife and Fisheries FISHERY...) FISHERIES OFF WEST COAST STATES West Coast Groundfish Fisheries § 660.323 Pacific whiting...

  15. Physical Roots of It from Bit

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2003-04-01

    Why is there Something rather than Nothing? From Pythagoras ("everything is number") to Wheeler ("it from bit"), the theme of ultimate origin stresses the primordiality of the Ideal Platonic World (IPW) of mathematics. Even popular "quantum tunnelling out of nothing" can specify "nothing" only as (essentially) the IPW. The IPW exists everywhere (but nowhere in particular) and logically precedes space, time, matter or any "physics" in any conceivable universe. This leads to the propositional conjecture (axiom?) that the (meta)physical "Platonic Pressure" of the infinitude of numbers acts as the engine for self-generation of the physical universe directly out of mathematics: cosmogenesis is driven by the very fact of IPW inexhaustibility. While physics in other quantum branches of the inflating universe (Megaverse) can be (arbitrarily) different from ours, number theory (and the rest of the IPW) is not (it is unique, absolute, immutable and infinitely resourceful). Let the (infinite) totality of microstates ("its") of the entire Megaverse form a countable set. Since countable sets are hierarchically inexhaustible (Cantor's "fractal branching"), each single "it" still has an infinite tail of non-overlapping IPW-based "personal labels". Thus, each "bit" ("it") is infinitely and uniquely resourceful: a possible venue for eliminating the ergodicity basis of the eternal-return cosmological argument. Physics (in any subuniverse) may be limited only by inherent impossibilities residing in the IPW, e.g. the insolvability of the Continuum Problem may be the IPW foundation of quantum indeterminacy.

  16. Object tracking based on bit-planes

    NASA Astrophysics Data System (ADS)

    Li, Na; Zhao, Xiangmo; Liu, Ying; Li, Daxiang; Wu, Shiqian; Zhao, Feng

    2016-01-01

    Visual object tracking is one of the most important components in computer vision. The main challenge for robust tracking is to handle illumination change, appearance modification, occlusion, motion blur, and pose variation. But in surveillance videos, factors such as low resolution, high levels of noise, and uneven illumination further increase the difficulty of tracking. To tackle this problem, an object tracking algorithm based on bit-planes is proposed. First, intensity and local binary pattern features represented by bit-planes are used to build two appearance models, respectively. Second, in the neighborhood of the estimated object location, a region that is most similar to the models is detected as the tracked object in the current frame. In the last step, the appearance models are updated with new tracking results in order to deal with environmental and object changes. Experimental results on several challenging video sequences demonstrate the superior performance of our tracker compared with six state-of-the-art tracking algorithms. Additionally, our tracker is more robust to low resolution, uneven illumination, and noisy video sequences.
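    The bit-plane representation itself is simple to reproduce; the following minimal NumPy sketch extracts the eight bit-planes of a grayscale frame (the appearance models and the tracking logic of the paper are not shown).

```python
# Bit-plane decomposition of an 8-bit grayscale image.
import numpy as np

def bit_planes(gray):
    """gray: 2D uint8 array. Returns an (8, H, W) array of binary bit-planes,
    index 0 = least significant bit, index 7 = most significant bit."""
    gray = np.asarray(gray, dtype=np.uint8)
    return np.stack([(gray >> k) & 1 for k in range(8)])

frame = np.array([[0, 255], [128, 64]], dtype=np.uint8)
planes = bit_planes(frame)
print(planes[7])   # most significant bit-plane: [[0 1] [1 0]]
```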

  17. Recent developments in polycrystalline diamond-drill-bit design

    SciTech Connect

    Huff, C.F.; Varnado, S.G.

    1980-05-01

    Development of design criteria for polycrystalline diamond compact (PDC) drill bits for use in severe environments (hard or fractured formations, hot and/or deep wells) is continuing. This effort consists of both analytical and experimental analyses. The experimental program includes single-point tests of cutters, laboratory tests of full-scale bits, and field tests of these designs. The results of laboratory tests at simulated downhole conditions utilizing new and worn bits are presented. Drilling at simulated downhole pressures was conducted in Mancos Shale and Carthage Marble. Comparisons are made between PDC bits and roller cone bits in drilling with borehole pressures up to 5000 psi (34.5 MPa) with oil- and water-based muds. The PDC bits drilled at rates up to 5 times as fast as roller bits in the shale. In the first field test, drilling rates approximately twice those achieved with conventional bits were achieved with a PDC bit. A second test demonstrated the value of these bits in correcting deviation and reaming.

  18. Progress in the Advanced Synthetic-Diamond Drill Bit Program

    SciTech Connect

    Glowka, D.A.; Dennis, T.; Le, Phi; Cohen, J.; Chow, J.

    1995-11-01

    Cooperative research is currently underway among five drill bit companies and Sandia National Laboratories to improve synthetic-diamond drill bits for hard-rock applications. This work, sponsored by the US Department of Energy and individual bit companies, is aimed at improving performance and bit life in harder rock than has previously been possible to drill effectively with synthetic-diamond drill bits. The goal is to extend to harder rocks the economic advantages seen in using synthetic-diamond drill bits in soft and medium rock formations. Four projects are being conducted under this research program. Each project is investigating a different area of synthetic diamond bit technology that builds on the current technology base and market interests of the individual companies involved. These projects include: optimization of the PDC claw cutter; optimization of the Track-Set PDC bit; advanced TSP bit development; and optimization of impregnated-diamond drill bits. This paper describes the progress made in each of these projects to date.

  19. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.
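    Read schematically, the abstract describes a constrained least-squares problem of roughly the following form; the notation below is an illustrative paraphrase, not the paper's own formulation.

```latex
% Schematic restatement of the coupling problem (illustrative notation):
% u^a, u^c are the atomistic and continuum displacement fields, \theta^a, \theta^c
% the virtual Dirichlet controls on the coupling interfaces, \Omega_o the overlap.
\begin{aligned}
\min_{\theta^a,\ \theta^c} \quad & \tfrac{1}{2}\,\bigl\lVert u^a(\theta^a) - u^c(\theta^c) \bigr\rVert_{\Omega_o}^{2} \\
\text{subject to} \quad & F^a\!\bigl(u^a\bigr) = 0 \ \text{(atomistic force balance)}, \quad u^a = \theta^a \ \text{on the atomistic interface}, \\
& F^c\!\bigl(u^c\bigr) = 0 \ \text{(continuum force balance)}, \quad u^c = \theta^c \ \text{on the continuum interface}.
\end{aligned}
```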

  20. Optimization-based interactive segmentation interface for multiregion problems.

    PubMed

    Baxter, John S H; Rajchl, Martin; Peters, Terry M; Chen, Elvis C S

    2016-04-01

    Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality. PMID:27335892

  1. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGESBeta

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  2. [Organ allocation. Ethical issues].

    PubMed

    Cattorini, P

    2010-01-01

    The criteria for allocating organs are one of the most debated ethical issues in transplantation programs. The article examines some rules and principles followed by the "Nord Italia Transplant program", summarized in its Principles' Charter and explained in a recent interdisciplinary book. General theories of justice and their application to individual clinical cases are commented on and evaluated, in order to foster a public, democratic, transparent debate among professionals and citizens, scientific associations and customers' organizations. Some specific moral dilemmas are brought into focus regarding the concepts of proportionate treatment, unselfish donation by living persons, and promotion of the efficiency of local institutions. PMID:20677677

  3. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distribution of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  4. Temperature-compensated 8-bit column driver for AMLCD

    NASA Astrophysics Data System (ADS)

    Dingwall, Andrew G. F.; Lin, Mark L.

    1995-06-01

    An all-digital, 5 V input, 50 MHz bandwidth, 10-bit resolution, 128-column AMLCD column driver IC has been designed and tested. The 10-bit design can enhance display definition over 6-bit and 8-bit column drivers. Precision is realized with on-chip, switched-capacitor DACs plus transparently auto-offset-calibrated opamp outputs. Increased resolution permits multiple 10-bit digital gamma remappings in EPROMs over temperature. Driver IC features include an externally programmable number of output columns, bi-directional digital data shifting, user-defined row/column/pixel/frame inversion, power management, timing control for daisy-chained column drivers, and digital bit inversion. The architecture uses fewer reference power supplies.

  5. Efficient Resource Allocation for Multiclass Services in Multiuser OFDM Systems

    NASA Astrophysics Data System (ADS)

    Lee, Jae Soong; Lee, Jae Young; Lee, Soobin; Lee, Hwang Soo

    Although each application has its own quality of service (QoS) requirements, resource allocation for multiclass services has not been studied adequately in multiuser orthogonal frequency division multiplexing (OFDM) systems. In this paper, a total transmit power minimization problem for downlink transmission is examined while satisfying multiclass services consisting of different data rates and target bit-error rates (BER). Lagrangian relaxation is used to find an optimal subcarrier allocation criterion in the context of subcarrier time-sharing by all users. We suggest an iterative algorithm using this criterion to find the upper and lower bounds of optimal power consumption. We also propose a prioritized subcarrier allocation (PSA) algorithm that provides low computation cost and performance very close to that of the iterative algorithm. The PSA algorithm employs a subcarrier selection order (SSO) to decide which user takes its best subcarrier before the other users. The SSO is determined by the data rate, channel gain, and target BER of each user. The proposed algorithms are simulated under various QoS parameters and fading channel models. Furthermore, resource allocation is performed not only subcarrier by subcarrier but also frequency block by frequency block (each block comprising several subcarriers). These extensive simulation environments provide a more complete assessment of the proposed algorithms. Simulation results show that the proposed algorithms significantly outperform existing algorithms in terms of total transmit power consumption.
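    A much-simplified greedy sketch in the spirit of a prioritized subcarrier allocation is shown below; the priority score used here (required rate divided by average channel gain) and the fixed bits-per-subcarrier loading are assumptions for illustration, not the paper's exact SSO rule or power allocation.

```python
# Simplified priority-ordered greedy subcarrier assignment (illustrative only).
import numpy as np

def prioritized_subcarrier_allocation(gains, rate_req, bits_per_subcarrier=2):
    """gains: (n_users, n_subcarriers) channel gains. rate_req: bits per user.
    Returns a list mapping each subcarrier to a user index (or -1 if unused)."""
    n_users, n_sub = gains.shape
    need = np.ceil(np.array(rate_req) / bits_per_subcarrier).astype(int)
    priority = np.array(rate_req) / gains.mean(axis=1)       # assumed priority score
    owner = [-1] * n_sub
    free = set(range(n_sub))
    for u in np.argsort(-priority):                          # highest priority first
        while need[u] > 0 and free:
            best = max(free, key=lambda s: gains[u, s])      # user's best free subcarrier
            owner[best] = u
            free.discard(best)
            need[u] -= 1
    return owner

rng = np.random.default_rng(1)
print(prioritized_subcarrier_allocation(rng.rayleigh(size=(3, 8)), [4, 8, 2]))
```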

  6. Cooling system for cooling the bits of a cutting machine

    SciTech Connect

    Wrulich, H.; Gekle, S.; Schetina, O.; Zitz, A.

    1984-06-26

    The invention refers to a system for cooling the bits of a cutting machine. The system comprises a nozzle, arranged in the area of the bit, from which cooling water is ejected under pressure; the water supply to the nozzle can be closed by a shutoff valve. The bit is supported on the bit holder for limited axial shifting movement under the action of the cutting pressure, against the force of a spring and against the hydraulic pressure of the cooling water, and the shutoff valve is coupled with the bit by a coupling member such that the valve is opened when the bit shifts in the direction of the cutting pressure. In this system the bit (6) has, in a manner known per se, the shape of a cap and encloses a bit shaft (3) adapted to be inserted into the bit holder (1); the cap-shaped bit (6) is supported on the shaft (3) for shifting movement in the axial direction; and the shutoff valve (11) and the coupling member (10) are arranged within the bit shaft (3). The coupling member is formed by a push rod (10) acting on the closure member (11) of the valve. The push rod is guided within a central bore (9) of the bit shaft; the closure member (11) closes the valve in the direction opposite to the cutting pressure and is moved into the open position by the push rod (10) in the direction of the acting cutting pressure.

  7. Development and testing of a Mudjet-augmented PDC bit.

    SciTech Connect

    Black, Alan; Chahine, Georges; Raymond, David Wayne; Matthews, Oliver; Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael

    2006-01-01

    This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.

  8. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.

  9. Markov speckle for efficient random bit generation.

    PubMed

    Horstmeyer, Roarke; Chen, Richard Y; Judkewitz, Benjamin; Yang, Changhuei

    2012-11-19

    Optical speckle is commonly observed in measurements using coherent radiation. Although the assumption has lacked experimental validation, previous work has often assumed that speckle's random spatial pattern follows a Markov process. Here, we present a derivation and experimental confirmation of conditions under which this assumption holds true. We demonstrate that a detected speckle field can be designed to obey the first-order Markov property by using a Cauchy attenuation mask to modulate scattered light. Creating Markov speckle enables the development of more accurate and efficient image post-processing algorithms, with applications including improved de-noising, segmentation and super-resolution. To show its versatility, we use the Cauchy mask to maximize the entropy of a detected speckle field with fixed average speckle size, allowing cryptographic applications to extract a maximum number of useful random bits from speckle images.

  10. Nanostructures applied to bit-cell devices

    NASA Astrophysics Data System (ADS)

    Kołodziej, Andrzej; Łukasiak, Lidia; Kołodziej, Michał

    2013-07-01

    In this work, a split-gate charge-trap FLASH memory with a storage layer containing 3D nano-crystals is proposed and compared with existing sub-90 nm solutions. We estimate electrical properties, cell operations and reliability issues. Analytical predictions show that for nano-crystals with a diameter < 3 nm, metals could be the preferred material. The presented 3D layers were fabricated in a CMOS-compatible process. We also show what kinds of nano-crystal geometries and distributions could be achieved. The study shows that the proposed memory cells have very good program/erase/read characteristics approaching those of SONOS cells, but better retention time than standard discrete charge storage cells. Also, a dense nano-crystal structure should allow 2 bits of information to be stored.

  11. Single Abrikosov vortices as quantized information bits.

    PubMed

    Golod, T; Iovan, A; Krasnov, V M

    2015-10-12

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.

  12. Bit-commitment-based quantum coin flipping

    SciTech Connect

    Nayak, Ashwin; Shor, Peter

    2003-01-01

    In this paper we focus on a special framework for quantum coin-flipping protocols, bit-commitment-based protocols, within which almost all known protocols fit. We show a lower bound of 1/16 for the bias in any such protocol. We also analyze a sequence of multiround protocols that tries to overcome the drawbacks of the previously proposed protocols in order to lower the bias. We show an intricate cheating strategy for this sequence, which leads to a bias of 1/4. This indicates that a bias of 1/4 might be optimal in such protocols, and also demonstrates that a more clever proof technique may be required to show this optimality.

  13. Second quantization in bit-string physics

    NASA Technical Reports Server (NTRS)

    Noyes, H. Pierre

    1993-01-01

    Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one-particle Dirac equation is derived, as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second-quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic-oscillator structure of a second-quantized theory. How these free-particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.

  14. Single Abrikosov vortices as quantized information bits

    NASA Astrophysics Data System (ADS)

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-10-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.

  15. Very low bit rate video coding standards

    NASA Astrophysics Data System (ADS)

    Zhang, Ya-Qin

    1995-04-01

    Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, the ITU-T/SG15 has formed an expert group on low bit rate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: the near-term and the long-term. The near-term standard H.32P/N, based on existing compression technologies, mainly addresses the issues related to visual telephony at below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. The ISO/SG29/WG11, after its highly visible and successful MPEG 1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard MPEG 4. With the recent change of direction, MPEG 4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these ongoing standards activities undertaken by ITU-T/LBC and ISO/MPEG 4 as of December 1994.

  16. New PDC bit design increased penetration rate in slim wells

    SciTech Connect

    Gerbaud, L.; Sellami, H.; Lamine, E.; Sagot, A.

    1997-07-01

    This paper describes a slim-hole bit design developed at the Paris School of Mines and Security DBS. The design is a compromise between several criteria, such as drilling efficiency, uniform wear distribution around the bit face, and a low level of bit vibration, according to the hole diameter and the formation characteristics. Two new bits were manufactured and run successfully on a full-scale drilling test bench and in field tests in Gabon. The results show improvement of drilling performance in slim-hole applications.

  17. Microstructural Evolution of DP980 Steel during Friction Bit Joining

    NASA Astrophysics Data System (ADS)

    Huang, T.; Sato, Y. S.; Kokawa, H.; Miles, M. P.; Kohkonen, K.; Siemssen, B.; Steel, R. J.; Packer, S.

    2009-12-01

    The authors study a new solid-state spot joining process, friction bit joining (FBJ), which relies on the use of a consumable joining bit. It has been reported that FBJ is feasible for the joining of steel/steel and aluminum/steel, but the metallurgical characteristics of the joint for enhancement of the properties and reliability remain unclear. Therefore, this study produced friction bit joints in DP980 steel and then examined the microstructures in the joint precisely. In this article, the microstructure distribution associated with hardness in the friction-bit-joined DP980 steel and the microstructural evolution during FBJ are reported.

  18. Quantum bit commitment with cheat sensitive binding and approximate sealing

    NASA Astrophysics Data System (ADS)

    Li, Yan-Bing; Xu, Sheng-Wei; Huang, Wei; Wan, Zong-Jie

    2015-04-01

    This paper proposes a cheat-sensitive quantum bit commitment scheme based on single photons, in which Alice commits a bit to Bob. Here, Bob's probability of successfully cheating to obtain the committed bit before the opening phase becomes close to 1/2 (just like making a random guess) as the number of single photons used is increased. And if Alice alters her committed bit after the commitment phase, her cheating will be detected with a probability that becomes close to 1 as the number of single photons used is increased. The scheme is easy to realize with present-day technology.

  19. PDC (polycrystalline diamond compact) bit research at Sandia National Laboratories

    SciTech Connect

    Finger, J.T.; Glowka, D.A.

    1989-06-01

    From the beginning of the geothermal development program, Sandia has performed and supported research into polycrystalline diamond compact (PDC) bits. These bits are attractive because they are intrinsically efficient in their cutting action (shearing, rather than crushing) and they have no moving parts (eliminating the problems of high-temperature lubricants, bearings, and seals). This report is a summary description of the analytical and experimental work done by Sandia and our contractors. It describes analysis and laboratory tests of individual cutters and complete bits, as well as full-scale field tests of prototype and commercial bits. The report includes a bibliography of documents giving more detailed information on these topics. 26 refs.

  20. Generalized multidimensional dynamic allocation method.

    PubMed

    Lebowitsch, Jonathan; Ge, Yan; Young, Benjamin; Hu, Feifang

    2012-12-10

    Dynamic allocation has received considerable attention since it was first proposed in the 1970s as an alternative means of allocating treatments in clinical trials, which helps to secure the balance of prognostic factors across treatment groups. The purpose of this paper is to present a generalized multidimensional dynamic allocation method that simultaneously balances treatment assignments at three key levels: within the overall study, within each level of each prognostic factor, and within each stratum, that is, each combination of levels of different factors. Further, it offers capabilities for unbalanced and adaptive trial designs. The treatment balancing performance of the proposed method is investigated through simulations which compare multidimensional dynamic allocation with traditional stratified block randomization and the Pocock-Simon method. On the basis of these results, we conclude that this generalized multidimensional dynamic allocation method is an improvement over conventional dynamic allocation methods and is flexible enough to be applied in most trial settings, including Phase I, II and III trials.
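    For orientation, a bare-bones Pocock-Simon-style minimization step for two arms is sketched below; it is only meant to illustrate the class of covariate-adaptive methods being generalized and omits the stratum-level balancing, weighting, and random element of the proposed method.

```python
# Minimal deterministic minimization step (Pocock-Simon flavour, illustrative only).
from collections import defaultdict

def minimization_assign(history, new_patient, arms=("A", "B")):
    """history: list of (factor_levels_dict, arm). new_patient: factor_levels_dict.
    Returns the arm that minimizes total marginal imbalance."""
    # counts[(factor, level)][arm] = previous patients with that level on that arm
    counts = defaultdict(lambda: defaultdict(int))
    for levels, arm in history:
        for f, lv in levels.items():
            counts[(f, lv)][arm] += 1

    def imbalance(candidate):
        total = 0
        for f, lv in new_patient.items():
            c = dict(counts[(f, lv)])
            c[candidate] = c.get(candidate, 0) + 1            # hypothetical assignment
            vals = [c.get(a, 0) for a in arms]
            total += max(vals) - min(vals)
        return total

    return min(arms, key=imbalance)

history = [({"sex": "F", "site": 1}, "A"), ({"sex": "M", "site": 1}, "A")]
print(minimization_assign(history, {"sex": "M", "site": 1}))   # expected: "B"
```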

  1. BitPredator: A Discovery Algorithm for BitTorrent Initial Seeders and Peers

    SciTech Connect

    Borges, Raymond; Patton, Robert M; Kettani, Houssain; Masalmah, Yahya

    2011-01-01

    There is a large amount of illegal content being replicated through peer-to-peer (P2P) networks where BitTorrent is dominant; therefore, a framework to profile and police it is needed. The goal of this work is to explore the behavior of initial seeds and highly active peers to develop techniques to correctly identify them. We intend to establish a new methodology and software framework for profiling BitTorrent peers. This involves three steps: crawling torrent indexers for keywords in recently added torrents using Really Simple Syndication protocol (RSS), querying torrent trackers for peer list data and verifying Internet Protocol (IP) addresses from peer lists. We verify IPs using active monitoring methods. Peer behavior is evaluated and modeled using bitfield message responses. We also design a tool to profile worldwide file distribution by mapping IP-to-geolocation and linking to WHOIS server information in Google Earth.

  2. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.

  3. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors increases considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem for the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the nonlinear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
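    The "micro-element" idea for the area of a non-convex region can be illustrated with a small grid-based estimate; the sketch below is a generic numerical area calculation (ray-casting point-in-polygon plus cell summation), not the paper's algorithm or its attainable-moment-set construction.

```python
# Grid ("micro-element") area estimate for a simple, possibly non-convex polygon.
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices of a simple polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_area_micro_elements(poly, n_cells=200):
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    dx = (max(xs) - min(xs)) / n_cells
    dy = (max(ys) - min(ys)) / n_cells
    area = 0.0
    for i in range(n_cells):
        for j in range(n_cells):
            cx = min(xs) + (i + 0.5) * dx
            cy = min(ys) + (j + 0.5) * dy
            if point_in_polygon(cx, cy, poly):
                area += dx * dy
    return area

# Non-convex "arrowhead" polygon whose exact area is 3.0.
arrow = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]
print(polygon_area_micro_elements(arrow))   # close to the exact area of 3.0
```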

  4. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate.

  5. Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits

    PubMed Central

    Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey

    2016-01-01

    Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least k_B T ln(2) of heat be dissipated from the memory into the environment, where k_B is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between “information thermodynamics” and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology. PMID:26998519
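    For a sense of scale, the Landauer bound k_B T ln(2) evaluates to a few zeptojoules at room temperature:

```python
# Numerical value of the Landauer limit k_B * T * ln(2) at room temperature.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K (exact by SI definition)
T = 300.0                 # room temperature, K
E = k_B * T * math.log(2)
print(f"{E:.3e} J  =  {E / 1.602176634e-19 * 1e3:.1f} meV")   # ~2.87e-21 J, ~17.9 meV
```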

  6. Experimental test of Landauer's principle in single-bit operations on nanomagnetic memory bits.

    PubMed

    Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey

    2016-03-01

    Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least k_B T ln(2) of heat be dissipated from the memory into the environment, where k_B is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between "information thermodynamics" and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology. PMID:26998519

  7. Experimental test of Landauer's principle in single-bit operations on nanomagnetic memory bits.

    PubMed

    Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey

    2016-03-01

    Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least k_B T ln(2) of heat be dissipated from the memory into the environment, where k_B is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between "information thermodynamics" and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology.

  8. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
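    The core ingredient such a coder consumes is a per-bit probability estimate conditioned on previously encoded bits; the sketch below shows one simple way to maintain such an estimate with context counts (the context length and smoothing rule are illustrative assumptions, and the entropy coder itself is not reproduced).

```python
# Context-modelled adaptive per-bit probability estimate (illustrative).
from collections import defaultdict

class AdaptiveBitModel:
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(lambda: [1, 1])   # Laplace-smoothed [zeros, ones]

    def predict(self, history):
        """Probability that the next bit is 1, given the bit history so far."""
        ctx = tuple(history[-self.order:])
        zeros, ones = self.counts[ctx]
        return ones / (zeros + ones)

    def update(self, history, bit):
        ctx = tuple(history[-self.order:])
        self.counts[ctx][bit] += 1

model = AdaptiveBitModel()
stream = [1, 0, 1, 0, 1, 0, 1, 0]                   # strongly patterned source
history = []
for b in stream:
    p1 = model.predict(history)                     # estimate fed to the coder
    model.update(history, b)
    history.append(b)
print(round(model.predict(history), 2))             # P(next bit = 1 | last bits 1,0) = 0.8
```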

  9. 8-, 16-, and 32-Bit Processors: Characteristics and Appropriate Applications.

    ERIC Educational Resources Information Center

    Williams, James G.

    1984-01-01

    Defines and describes the components and functions that constitute a microcomputer--bits, bytes, address register, cycle time, data path, and bus. Characteristics of 8-, 16-, and 32-bit machines are explained in detail, and microprocessor evolution, architecture, and implementation are discussed. Application characteristics or types for each bit…

  10. TriBITS (Tribal Build, Integrate, and Test System)

    SciTech Connect

    2013-05-16

    TriBITS is a configuration, build, test, and reporting system that uses the Kitware open-source CMake/CTest/CDash system. TriBITS contains a number of custom CMake/CTest scripts and python scripts that extend the functionality of the out-of-the-box CMake/CTest/CDash system.

  11. Adaptive downsampling to improve image compression at low bit rates.

    PubMed

    Lin, Weisi; Dong, Li

    2006-09-01

    At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in such a paradigm, based on the adaptive decision of appropriate downsampling directions/ratios and quantization steps, in order to achieve higher coding quality at low bit rates while taking local visual significance into consideration. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels, so the spatial interpolation required otherwise is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes, and therefore outperforms the standard JPEG method in a much wider bit-rate scope. The experiments have demonstrated better PSNR improvement over the existing techniques below the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-dependent, but also automates the switching of coders in variable bit-rate applications, since the algorithm turns to the standard JPEG method whenever it is necessary at higher bit rates.
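    The underlying downsample-prior-to-compression paradigm can be demonstrated with off-the-shelf JPEG; the sketch below compares a full-resolution low-quality encode against a half-resolution higher-quality encode plus upsampling. The file path is a placeholder, and this is plain spatial resampling, not the paper's adaptive, DCT-domain restoration.

```python
# Downsample-before-JPEG demonstration using Pillow (illustrative only).
import io
from PIL import Image

def jpeg_roundtrip(img, quality, scale=1.0):
    """JPEG-compress `img` (optionally downsampled by `scale`), decode, and return
    (reconstructed full-size image, compressed size in bytes)."""
    w, h = img.size
    small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    decoded = Image.open(io.BytesIO(buf.getvalue()))
    return decoded.resize((w, h), Image.LANCZOS), buf.tell()

img = Image.open("test_image.png").convert("L")            # placeholder path: any test image
full, n_full = jpeg_roundtrip(img, quality=5)              # full resolution, very low quality
half, n_half = jpeg_roundtrip(img, quality=30, scale=0.5)  # half resolution, higher quality
print(n_full, n_half)   # compare bytes spent; a PSNR comparison is left to the reader
```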

  12. The Principal as Resource Allocator.

    ERIC Educational Resources Information Center

    Peterson, Kent D.

    The effect of political influences on the allocation of personnel, money, facilities, and equipment by elementary school principals is discussed in this paper. The use of Zald's political economy framework as a tool for understanding the principal's role in allocating resources is described by the author. He suggests that the principal occupies a…

  13. Evolution of a Hybrid Roller Cone/PDC core bit

    SciTech Connect

    Pettitt, R.; Laney, R.; George, D.; Clemens, G.

    1980-01-01

    The development of the hot dry rock (HDR) geothermal resource, as presently being accomplished by the Los Alamos Scientific Laboratory (LASL), requires that sufficient quantities of good quality core be obtained at a reasonable cost. The use of roller cone core bits, with tungsten carbide inserts, was initiated by the Deep Sea Drilling Program. These bits were modified for continental drilling in deep, hot, granitic rock for the LASL HDR Geothermal Site at Fenton Hill, New Mexico in 1974. After the advent of monocrystalline diamond Stratapax pads, a prototype hybrid roller cone/Stratapax core bit was fabricated by Smith Tool, and tested at Fenton Hill in 1978. During the drilling for a deeper HDR reservoir system in 1979 and 1980, six of the latest generation of these bits, now called Hybrid Roller Cone/Polycrystalline Diamond Cutter (PDC) core bits, were successfully used in granitic rock at depths below 11,000 ft.

  14. Uniqueness: skews bit occurrence frequencies in randomly generated fingerprint libraries.

    PubMed

    Chen, Nelson G

    2016-08-01

    Requiring that randomly generated chemical fingerprint libraries have unique fingerprints such that no two fingerprints are identical causes a systematic skew in bit occurrence frequencies, the proportion at which specified bits are set. Observed frequencies (O) at which each bit is set within the resulting libraries systematically differ from frequencies at which bits are set at fingerprint generation (E). Observed frequencies systematically skew toward 0.5, with the effect being more pronounced as library size approaches the compound space, which is the total number of unique possible fingerprints given the number of bit positions each fingerprint contains. The effect is quantified for varying library sizes as a fraction of the overall compound space, and for changes in the specified frequency E. The cause and implications for this systematic skew are subsequently discussed. When generating random libraries of chemical fingerprints, the imposition of a uniqueness requirement should either be avoided or taken into account.
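    The skew is easy to reproduce in a small simulation; in the sketch below, short fingerprints are generated by rejection until a unique library is collected, and the observed set-bit frequency is compared against the generation frequency E (all parameter values are chosen purely for illustration).

```python
# Demonstration of the uniqueness-induced skew of bit occurrence frequencies.
import random

def observed_bit_frequency(n_bits=8, e=0.9, library_size=200, seed=0):
    rng = random.Random(seed)
    library = set()
    while len(library) < library_size:               # enforce uniqueness by rejection
        fp = tuple(1 if rng.random() < e else 0 for _ in range(n_bits))
        library.add(fp)
    # Average, over bit positions, of the fraction of fingerprints with that bit set.
    return sum(sum(fp) for fp in library) / (library_size * n_bits)

print(observed_bit_frequency())   # noticeably below E = 0.9, pulled toward 0.5
```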

  15. Drill bit stud and method of manufacture

    SciTech Connect

    Hake, L.W.; Huff, C.F.; Miller, J.W.

    1984-10-23

    A polycrystalline diamond compact is a polycrystalline diamond wafer attached to a tungsten carbide substrate forming a disc. In this form, it is attached to a stud which is attached within a drill bit. The compact is attached to the stud with the aid of a positioning ring. When the stud is made of impact resistant material, a full pedestal may be formed on the stud to facilitate the use of the positioning ring. When the stud is made of brittle material, the positioning ring is attached to the flat face of the stud without a pedestal. The ring is positioned on a stud and the disc inserted in the ring so that the disc is positioned against the bonding surface. The disc remains in position against the bonding surface during the handling before and during the bonding process. As a second embodiment, the polycrystalline diamond compact is smaller than the disc itself and the remainder of the disc is formed of metal having the same thickness as the polycrystalline diamond compact or its tungsten carbide substrate. The shape of the smaller polycrystalline diamond compact may be semicircular, circular, polygon shaped, (i.e., triangular, square, etc.) or other geometric figures.

  16. Continuous chain bit with downhole cycling capability

    DOEpatents

    Ritter, Don F.; St. Clair, Jack A.; Togami, Henry K.

    1983-01-01

    A continuous chain bit for hard rock drilling is capable of downhole cycling. A drill head assembly moves axially relative to a support body while the chain on the head assembly is held in position so that the bodily movement of the chain cycles the chain to present new composite links for drilling. A pair of spring fingers on opposite sides of the chain hold the chain against movement. The chain is held in tension by a spring-biased tensioning bar. A head at the working end of the chain supports the working links. The chain is centered by a reversing pawl and piston actuated by the pressure of the drilling mud. Detent pins lock the head assembly with respect to the support body and are also operated by the drilling mud pressure. A restricted nozzle with a divergent outlet sprays drilling mud into the cavity to remove debris. Indication of the centered position of the chain is provided by noting a low pressure reading indicating proper alignment of drilling mud slots on the links with the corresponding feed branches.

  17. Single Abrikosov vortices as quantized information bits.

    PubMed

    Golod, T; Iovan, A; Krasnov, V M

    2015-01-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex. PMID:26456592

  18. Single Abrikosov vortices as quantized information bits

    PubMed Central

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-01-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex. PMID:26456592

  19. 24 CFR 92.50 - Formula allocation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Formula allocation. 92.50 Section... Development HOME INVESTMENT PARTNERSHIPS PROGRAM Allocation Formula § 92.50 Formula allocation. (a) Jurisdictions eligible for a formula allocation. HUD will provide allocations of funds in amounts determined...

  20. 23 CFR 1240.15 - Allocations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Allocations. 1240.15 Section 1240.15 Highways NATIONAL... GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF SEAT BELTS-ALLOCATIONS BASED ON SEAT BELT USE RATES Determination of Allocations § 1240.15 Allocations. (a) Funds allocated under this part shall be available...

  1. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
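
    To make the min-max allocation step concrete, the following sketch poses it as a small linear program and solves it with SciPy's linprog. The effectiveness matrix B, moment command d, and actuator limits u_max are illustrative placeholders, the command is assumed attainable, and the l1 tracking-error term treated in the paper is omitted.

    ```python
    # Minimal sketch: min-max (normalized l-infinity) control allocation as a linear program.
    # Assumes the commanded moments d are attainable within the actuator limits u_max.
    import numpy as np
    from scipy.optimize import linprog

    def resource_balancing_allocation(B, d, u_max):
        """Minimize max_i |u_i| / u_max_i subject to B @ u = d and |u_i| <= u_max_i."""
        m, n = B.shape
        # Decision vector x = [u_1..u_n, t], where t bounds the normalized deflections.
        c = np.zeros(n + 1)
        c[-1] = 1.0                      # minimize t
        # |u_i| / u_max_i <= t  ->  u_i/u_max_i - t <= 0  and  -u_i/u_max_i - t <= 0
        A_ub = np.zeros((2 * n, n + 1))
        for i in range(n):
            A_ub[2 * i, i] = 1.0 / u_max[i]
            A_ub[2 * i, -1] = -1.0
            A_ub[2 * i + 1, i] = -1.0 / u_max[i]
            A_ub[2 * i + 1, -1] = -1.0
        b_ub = np.zeros(2 * n)
        # Equality constraint: B u = d (t has zero coefficient).
        A_eq = np.hstack([B, np.zeros((m, 1))])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
                      bounds=[(-u_max[i], u_max[i]) for i in range(n)] + [(0, None)],
                      method="highs")
        return res.x[:n], res.x[-1]      # actuator commands and achieved min-max level

    # Example: 2 moments, 4 redundant actuators (all numbers illustrative).
    B = np.array([[1.0, 0.5, -0.5, 0.2],
                  [0.0, 1.0,  1.0, 0.3]])
    d = np.array([0.4, 0.6])
    u, t = resource_balancing_allocation(B, d, u_max=np.array([1.0, 1.0, 1.0, 0.5]))
    print(u, t)   # each |u_i|/u_max_i <= t, with t as small as possible
    ```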

  2. Modeling and analysis of stick-slip and bit bounce in oil well drillstrings equipped with drag bits

    NASA Astrophysics Data System (ADS)

    Kamel, Jasem M.; Yigit, Ahmet S.

    2014-12-01

    Rotary drilling systems equipped with drag bits or fixed cutter bits (also called PDC), used for drilling deep boreholes for the production and the exploration of oil and natural gas, often suffer from severe vibrations. These vibrations are detrimental to the bit and the drillstring, causing different failures of equipment (e.g., twist-off, abrasive wear of tubulars, bit damage), and inefficiencies in the drilling operation (reduction of the rate of penetration (ROP)). Despite extensive research conducted in the last several decades, there is still a need to develop a consistent model that adequately captures all phenomena related to drillstring vibrations such as nonlinear cutting and friction forces at the bit/rock formation interface, drive system characteristics and coupling between various motions. In this work, a physically consistent nonlinear model for the axial and torsional motions of a rotating drillstring equipped with a drag bit is proposed. A more realistic cutting and contact model is used to represent bit/rock formation interaction at the bit. The dynamics of both drive systems for rotary and translational motions of the drillstring, including the hoisting system, are also considered. In this model, the rotational and translational motions of the bit are obtained as a result of the overall dynamic behavior rather than prescribed functions or constants. The dynamic behavior predicted by the proposed model qualitatively agrees well with field observations and published theoretical results. The effects of various operational parameters on the dynamic behavior are investigated with the objective of achieving smooth and efficient drilling. The results show that with proper choice of operational parameters, it may be possible to minimize the effects of stick-slip and bit-bounce and increase the ROP. Therefore, it is expected that the results will help reduce the time spent in the drilling process and costs incurred due to severe vibrations and consequent

  3. High-power TSP bits. [Thermally Stable Polycrystalline diamond

    SciTech Connect

    Cohen, J.H.; Maurer, W.C.; Westcott, P.A.

    1994-03-01

    This paper reviews a three-year R&D project to develop advanced thermally stable polycrystalline diamond (TSP) bits that can operate at power levels 5 to 10 times greater than those typically delivered by rotary rigs. These bits are designed to operate on advanced drilling motors that drill 3 to 6 times faster than rotary rigs. TSP bit design parameters that were varied during these tests include cutter size, shape, density, and orientation. Drilling tests conducted in limestone, sandstone, marble, and granite blocks showed that these optimized bits drilled many of these rocks at 500 to 1,000 ft/hr (150 to 300 m/h), compared to 50 to 100 ft/hr (15 to 30 m/h) for roller bits. These tests demonstrated that TSP bits are capable of operating at the high speeds and high torques delivered by advanced drilling motors now being developed. These advanced bits and motors are designed for use in slim-hole and horizontal drilling applications.

  4. Collaborative Resource Allocation

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester

    2007-01-01

    Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.

  5. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis. PMID:24885680
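
    For the Onemax special case mentioned above, the fitness distribution under bit-flip mutation can be computed directly, since the new fitness is the old count of ones minus the ones flipped off plus the zeros flipped on. The sketch below illustrates only this special case and does not reproduce the paper's general Krawtchouk-polynomial machinery; the values of n, k and p in the example are arbitrary.

    ```python
    # Minimal sketch (Onemax only): exact fitness distribution after uniform bit-flip mutation.
    # For a string with k ones out of n bits, flipping each bit independently with prob. p
    # gives new fitness  k - X + Y,  X ~ Bin(k, p) (ones lost), Y ~ Bin(n - k, p) (ones gained).
    from math import comb
    from collections import defaultdict

    def binom_pmf(m, p):
        return [comb(m, j) * p**j * (1 - p)**(m - j) for j in range(m + 1)]

    def onemax_mutation_distribution(n, k, p):
        """Return {fitness: probability} for Onemax after bit-flip mutation with rate p."""
        lost, gained = binom_pmf(k, p), binom_pmf(n - k, p)
        dist = defaultdict(float)
        for x, px in enumerate(lost):
            for y, py in enumerate(gained):
                dist[k - x + y] += px * py
        return dict(sorted(dist.items()))

    # Example: n = 10 bits, current fitness k = 7, per-bit flip probability p = 0.1.
    d = onemax_mutation_distribution(10, 7, 0.1)
    print(d)                                 # each probability is a polynomial in p
    print(sum(d.values()))                   # ~1.0 (sanity check)
    print(sum(f * q for f, q in d.items()))  # expected fitness = k(1-p) + (n-k)p = 6.6
    ```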

  6. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.

  7. Experimental bit commitment based on quantum communication and special relativity.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

    Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is however possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrary large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented.

  8. Compressed bit stream classification using VQ and GMM

    NASA Astrophysics Data System (ADS)

    Chen, Wenhua; Kuo, C.-C. Jay

    1997-10-01

    Algorithms for classifying and segmenting bit streams with different source content (such as speech, text and image, etc.) and different coding methods (such as ADPCM, μ-law, tiff, gif and JPEG, etc.) in a communication channel are investigated. In previous work, we focused on the separation of fixed- and variable-length coded bit streams, and the classification of two variable-length coded bit streams by using Fourier analysis and an entropy feature. In this work, we consider the classification of multiple (more than two sources) compressed bit streams by using vector quantization (VQ) and Gaussian mixture modeling (GMM). The performance of the VQ and GMM techniques depends on various parameters such as the size of the codebook, the number of mixtures and the test segment length. It is demonstrated with experiments that both VQ and GMM outperform the single entropy feature. It is also shown that GMM generally outperforms VQ.
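
    A heavily simplified sketch of the GMM branch is given below: byte-histogram features are extracted from fixed-length segments, one Gaussian mixture is fitted per source type, and a test segment is assigned to the model with the highest average log-likelihood. The feature choice, segment length, mixture size and synthetic stand-in data are illustrative assumptions, not the setup used in the record.

    ```python
    # Minimal sketch of GMM-based compressed bit stream classification (illustrative
    # features and parameters; not the exact setup of the record above).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def segment_features(data: bytes, seg_len=1024):
        """Normalized byte histograms of consecutive segments (a crude stand-in feature)."""
        segs = [data[i:i + seg_len] for i in range(0, len(data) - seg_len + 1, seg_len)]
        feats = []
        for s in segs:
            hist = np.bincount(np.frombuffer(s, dtype=np.uint8), minlength=256)
            feats.append(hist / hist.sum())
        return np.array(feats)

    def train_models(labelled_streams, n_components=4):
        """Fit one GMM per source/coding type from labelled training bit streams."""
        return {label: GaussianMixture(n_components=n_components, covariance_type="diag",
                                       random_state=0).fit(segment_features(data))
                for label, data in labelled_streams.items()}

    def classify(models, segment: bytes):
        feat = segment_features(segment)
        # Average per-segment log-likelihood under each class model.
        scores = {label: m.score_samples(feat).mean() for label, m in models.items()}
        return max(scores, key=scores.get)

    # Usage (with synthetic stand-in data):
    rng = np.random.default_rng(0)
    streams = {"jpeg-like": rng.integers(0, 256, 50_000, dtype=np.uint8).tobytes(),
               "text-like": rng.integers(32, 127, 50_000, dtype=np.uint8).tobytes()}
    models = train_models(streams)
    print(classify(models, streams["text-like"][:4096]))   # expected: "text-like"
    ```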

  9. Experimental bit commitment based on quantum communication and special relativity.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

    Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is however possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrary large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented. PMID:24237497

  10. Bit selection increases coiled tubing and slimhole success

    SciTech Connect

    Feiner, R.F.

    1995-07-01

    Slimhole applications have grown within the past few years to include deepening existing wells to untapped reservoirs, drilling smaller well programs to reduce tangible costs and recompleting wells to adjacent reservoirs through directional or horizontal sidetracks. When selecting the proper bit for an interval, the ultimate goal is the same in the slimhole application as in the conventional application -- to save the operator money by reducing drilling cost per foot (CPF). Slimhole bit selection is a three-step process: (1) identify the characteristics of the formations to be drilled; (2) analyze the operational limitations of the slimhole application; and (3) select the bit type that will most economically drill the interval. Knowledge of lithology is crucial to the selection process. Accurate formation knowledge can be acquired from offset well records, mud logs, cores, electric logs, compressive rock strength analysis and any other information relevant to the drilling operation. This paper reviews the steps in selecting slimhole bits and completion equipment.
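
    The economics behind step (3) are conventionally expressed with the drilled-cost-per-foot formula; the record does not spell it out, so the sketch below uses the commonly quoted form with purely illustrative numbers.

    ```python
    # Standard drilled-cost-per-foot formula commonly used for bit economics
    # (illustrative figures; the record does not give specific numbers).
    def cost_per_foot(bit_cost, rig_rate_per_hr, drilling_hours, trip_hours, footage):
        """CPF = (bit cost + rig rate * (rotating time + trip time)) / footage drilled."""
        return (bit_cost + rig_rate_per_hr * (drilling_hours + trip_hours)) / footage

    # Compare a cheaper roller cone bit against a more expensive PDC bit on the same interval.
    roller = cost_per_foot(bit_cost=8_000,  rig_rate_per_hr=900, drilling_hours=60,
                           trip_hours=8, footage=1_200)
    pdc    = cost_per_foot(bit_cost=25_000, rig_rate_per_hr=900, drilling_hours=30,
                           trip_hours=8, footage=1_200)
    print(f"roller cone: ${roller:.2f}/ft, PDC: ${pdc:.2f}/ft")
    # The pricier bit wins if its faster ROP or longer run cuts enough rig and trip time.
    ```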

  11. Secure self-calibrating quantum random-bit generator

    SciTech Connect

    Fiorentino, M.; Santori, C.; Spillane, S. M.; Beausoleil, R. G.; Munro, W. J.

    2007-03-15

    Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
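
    The final distillation step, turning a min-entropy-bounded raw sequence into nearly uniform bits, is typically done with a seeded randomness extractor. The sketch below shows a generic Toeplitz-hashing extractor sized by the leftover hash lemma; it is one common choice and not necessarily the distillation procedure used in the record, and the min-entropy figure in the example is made up.

    ```python
    # Generic Toeplitz-hashing randomness extractor (one common way to distill a
    # min-entropy-bounded raw sequence; not necessarily the exact method of the record).
    import numpy as np

    def toeplitz_extract(raw_bits, h_min_per_bit, eps=1e-10, seed=0):
        """Compress raw bits to ~n*h_min - 2*log2(1/eps) nearly uniform bits."""
        n = len(raw_bits)
        m = int(n * h_min_per_bit - 2 * np.log2(1 / eps))   # leftover-hash-lemma length
        if m <= 0:
            raise ValueError("not enough min-entropy to extract any bits")
        rng = np.random.default_rng(seed)                   # public random seed
        diag = rng.integers(0, 2, n + m - 1)                # defines an m x n Toeplitz matrix
        x = np.asarray(raw_bits, dtype=np.uint8)
        # Row i of the Toeplitz matrix is diag[i : i + n] reversed; multiply over GF(2).
        out = np.empty(m, dtype=np.uint8)
        for i in range(m):
            out[i] = np.bitwise_xor.reduce(diag[i:i + n][::-1] & x)
        return out

    # Example: 4096 raw bits with a (claimed, illustrative) min-entropy of 0.8 bits per raw bit.
    raw = np.random.default_rng(1).integers(0, 2, 4096)
    key = toeplitz_extract(raw, h_min_per_bit=0.8)
    print(len(key), key[:16])
    ```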

  12. 26. photographer unknown 29 December 1937 FLOATING MOORING BIT INSTALLED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. photographer unknown 29 December 1937 FLOATING MOORING BIT INSTALLED IN LOCK SIDEWALL. - Bonneville Project, Navigation Lock No. 1, Oregon shore of Columbia River near first Powerhouse, Bonneville, Multnomah County, OR

  13. Eight-Bit-Slice GaAs General Processor Circuit

    NASA Technical Reports Server (NTRS)

    Weissman, John; Gauthier, Robert V.

    1989-01-01

    Novel GaAs 8-bit slice enables quick and efficient implementation of variety of fast GaAs digital systems ranging from central processing units of computers to special-purpose processors for communications and signal-processing applications. With GaAs 8-bit slice, designers quickly configure and test hearts of many digital systems that demand fast complex arithmetic, fast and sufficient register storage, efficient multiplexing and routing of data words, and ease of control.

  14. Multiple-Bit Errors Caused By Single Ions

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Edmonds, Larry D.; Smith, Laurence S.

    1991-01-01

    Report describes experimental and computer-simulation study of multiple-bit errors caused by impingement of single energetic ions on 256-Kb dynamic random-access memory (DRAM) integrated circuit. Studies illustrate effects of different mechanisms for transport of charge from ion tracks to various elements of integrated circuits. Shows multiple-bit errors occur in two different types of clusters about ion tracks causing them.

  15. Strong no-go theorem for Gaussian quantum bit commitment

    SciTech Connect

    Magnin, Loïck; Magniez, Frédéric; Leverrier, Anthony

    2010-01-15

    Unconditionally secure bit commitment is forbidden by quantum mechanics. We extend this no-go theorem to continuous-variable protocols where both players are restricted to use Gaussian states and operations, which is a reasonable assumption in current-state optical implementations. Our Gaussian no-go theorem also provides a natural counter-example to a conjecture that quantum mechanics can be rederived from the assumption that key distribution is allowed while bit commitment is forbidden in Nature.

  16. Advanced bit establishes superior performance in Ceuta field

    SciTech Connect

    Mensa-Wilmot, G.

    1999-11-01

    A new-generation polycrystalline diamond compact (PDC) bit is redefining operational efficiency and reducing drilling costs in the Ceuta field, in the Lago de Maracaibo area of Venezuela. Its unique cutting structure and advancements in PDC cutter technology have established superior performance in this challenging application. The paper describes the new-generation PDC bit, advanced technology PDC cutters, and performance. A table gives cost per foot evaluation.

  17. 8-Bit Gray Scale Images of Fingerprint Image Groups

    National Institute of Standards and Technology Data Gateway

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (PC database for purchase)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  18. Bits with diamond-coated inserts reduce gauge problems

    SciTech Connect

    Eckstrom, D.

    1991-06-17

    In highly abrasive formations, failure of the gauge row cutters on tungsten carbide insert bits may occur rapidly, resulting in short bit runs, poor performance, and undergauge hole. In certain applications, polycrystalline diamond (PCD) enhanced insert bits have longer bit runs and maintain an in-gauge hole which reduces reaming time and wear on downhole equipment. These bits with PCD-coated inserts have reduced drilling costs in several areas of Canada. PCD has been applied to rock drilling tools for several years because of its high wear resistance. Polycrystalline diamond compact (PDC) bits use polycrystalline diamonds formed in flat wafers applied to the flat surfaces on carbide inserts. The flat PDC cutters drill by shearing the formation. Smith International Canada Ltd. developed a patented process to apply PCD to curved surfaces, which now allows PCD-enhanced inserts to be used for percussion and rotary cone applications. These diamond-enhanced inserts combine the wear resistance properties of diamond with the durability of tungsten carbide.

  19. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology, and a practical example, for applying an optimization process to select earthquake scenarios that best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between the hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a 10,000-year catalogue of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraints. The sensitivity of the selected scenarios to the user-specified site/return period error-weight is also assessed. The methodology could substantially reduce the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less

  20. Two-level renegotiated constant bit rate algorithm (2RCBR) for scalable MPEG2 video over QoS networks

    NASA Astrophysics Data System (ADS)

    Pegueroles, Josep R.; Alins, Juan J.; de la Cruz, Luis J.; Mata, Jorge

    2001-07-01

    MPEG family codecs generate variable-bit-rate (VBR) compressed video with significant multiple-time-scale bit rate variability. Smoothing techniques remove the periodic fluctuations generated by the coding modes. However, global efficiency concerning network resource allocation remains low due to scene-time-scale variability. RCBR techniques provide suitable means of achieving higher efficiency. Among the RCBR techniques described in the literature, the 2RCBR mechanism is especially suitable for video-on-demand. The method takes advantage of the knowledge of the stored video to calculate the renegotiation intervals and of the client buffer memory to perform work-ahead buffering techniques. 2RCBR achieves 100% bandwidth global efficiency with only two renegotiation levels. The algorithm studies the second derivative of the cumulative video sequence to find sharp-sloped inflection points that indicate changes in scene complexity. This makes 2RCBR well suited to delivering scalable MPEG2 sequences over the network, because it can assure a constant bit rate for the base MPEG2 layer and use the higher-rate intervals to deliver the MPEG2 enhancement layer. However, slight changes in the algorithm parameters must be introduced to attain optimal behavior. This is verified by means of simulations on MPEG2 video patterns.
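
    The inflection-point idea can be illustrated with a coarse discrete second derivative of the cumulative bit curve, as in the sketch below; the window length, threshold and synthetic two-scene trace are illustrative placeholders rather than the parameters studied in the record, and the subsequent assignment of the two rate levels is omitted.

    ```python
    # Minimal sketch of the renegotiation-point idea in 2RCBR: look for sharp changes
    # in the slope of the cumulative bit curve of a stored video. Window and threshold
    # are illustrative placeholders.
    import numpy as np

    def renegotiation_points(frame_bits, W=100, thresh=20_000):
        """Frame indices where the slope of the cumulative bit curve changes sharply.

        The coarse second derivative (cum[i+W] - 2*cum[i] + cum[i-W]) / W estimates the
        change in average bits per frame across frame i over a window of W frames.
        """
        cum = np.concatenate([[0.0], np.cumsum(frame_bits, dtype=float)])
        d2 = (cum[2 * W:] - 2 * cum[W:-W] + cum[:-2 * W]) / W
        return np.where(np.abs(d2) > thresh)[0] + W   # candidate scene-complexity changes

    # Synthetic trace: a low-complexity scene followed by a high-complexity one.
    rng = np.random.default_rng(0)
    trace = np.concatenate([rng.normal(40_000, 5_000, 500),     # bits per frame, scene 1
                            rng.normal(120_000, 15_000, 500)])  # bits per frame, scene 2
    points = renegotiation_points(trace)
    print(points[:5])   # indices clustered around the scene change near frame 500
    ```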

  1. 15 CFR 335.4 - Allocation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Allocation. 335.4 Section 335.4... § 335.4 Allocation. (a) For HTS 9902.51.11 and HTS 9902.51.15 each Tariff Rate Quota will be allocated separately. Allocation will be based on an applicant's Worsted Wool Suit production, on a weighted...

  2. Preserving the allocation ratio at every allocation with biased coin randomization and minimization in studies with unequal allocation.

    PubMed

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2012-04-13

    The demand for unequal allocation in clinical trials is growing. Most commonly, the unequal allocation is achieved through permuted block randomization. However, other allocation procedures might be required to better approximate the allocation ratio in small samples, reduce the selection bias in open-label studies, or balance on baseline covariates. When these allocation procedures are generalized to unequal allocation, special care is to be taken to preserve the allocation ratio at every allocation step. This paper offers a way to expand the biased coin randomization to unequal allocation that preserves the allocation ratio at every allocation. The suggested expansion works with biased coin randomization that balances only on treatment group totals and with covariate-adaptive procedures that use a random biased coin element at every allocation. Balancing properties of the allocation ratio preserving biased coin randomization and minimization are described through simulations. It is demonstrated that these procedures are asymptotically protected against the shift in the rerandomization distribution identified for some examples of minimization with 1:2 allocation. The asymptotic shift in the rerandomization distribution of the difference in treatment means for an arbitrary unequal allocation procedure is explicitly derived in the paper.

  3. Collective credit allocation in science

    PubMed Central

    Shen, Hua-Wei; Barabási, Albert-László

    2014-01-01

    Collaboration among researchers is an essential component of the modern scientific enterprise, playing a particularly important role in multidisciplinary research. However, we continue to wrestle with allocating credit to the coauthors of publications with multiple authors, because the relative contribution of each author is difficult to determine. At the same time, the scientific community runs an informal field-dependent credit allocation process that assigns credit in a collective fashion to each work. Here we develop a credit allocation algorithm that captures the coauthors’ contribution to a publication as perceived by the scientific community, reproducing the informal collective credit allocation of science. We validate the method by identifying the authors of Nobel-winning papers that are credited for the discovery, independent of their positions in the author list. The method can also compare the relative impact of researchers working in the same field, even if they did not publish together. The ability to accurately measure the relative credit of researchers could affect many aspects of credit allocation in science, potentially impacting hiring, funding, and promotion decisions. PMID:25114238

  4. Collective credit allocation in science.

    PubMed

    Shen, Hua-Wei; Barabási, Albert-László

    2014-08-26

    Collaboration among researchers is an essential component of the modern scientific enterprise, playing a particularly important role in multidisciplinary research. However, we continue to wrestle with allocating credit to the coauthors of publications with multiple authors, because the relative contribution of each author is difficult to determine. At the same time, the scientific community runs an informal field-dependent credit allocation process that assigns credit in a collective fashion to each work. Here we develop a credit allocation algorithm that captures the coauthors' contribution to a publication as perceived by the scientific community, reproducing the informal collective credit allocation of science. We validate the method by identifying the authors of Nobel-winning papers that are credited for the discovery, independent of their positions in the author list. The method can also compare the relative impact of researchers working in the same field, even if they did not publish together. The ability to accurately measure the relative credit of researchers could affect many aspects of credit allocation in science, potentially impacting hiring, funding, and promotion decisions. PMID:25114238

  5. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

    Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, using multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also at least 12 bit analog to digital conversion is recommended (with 16 bit storage) resulting in total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but in the case of online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed earlier single channel facial EMG/EOG/EEG-based automatic sleep stage classification. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, reduced number of quantization steps and reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (having a low pass filter at 90 Hz), and the dynamic range to 244 microV, with an 8 bit resolution resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enables the use of smaller devices for sleep stage classification in home environments.
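
    The bit-rate arithmetic in the record, and the general shape of a two-feature decision tree, can be illustrated as follows; the band-power measure, branching order and thresholds below are placeholders and do not reproduce the study's actual rules.

    ```python
    # Bit-rate arithmetic from the record, plus a skeletal two-feature decision tree.
    # The band measure, branching order and thresholds are placeholders, not the study's.
    import numpy as np

    fs, bits = 50, 8                      # 50 Hz sampling, 8-bit quantization
    print(fs * bits / 1000, "kbit/s")     # -> 0.4 kbit/s (vs. e.g. 200 Hz x 16 bit x 4 ch = 12.8)

    def band_measure(x, fs, lo, hi):
        """Mean spectral magnitude of signal x in the [lo, hi] Hz band (illustrative)."""
        spec = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        sel = (freqs >= lo) & (freqs <= hi)
        return spec[sel].mean()

    def classify_epoch(x, fs=50, beta_thr=5.0, slow_thr=20.0, slow_thr_high=60.0):
        """Very rough single-channel epoch classifier (placeholder thresholds and logic)."""
        beta = band_measure(x, fs, 18, min(45, fs / 2 - 1))   # 18-45 Hz activity (EMG-like)
        slow = band_measure(x, fs, 0.5, 6)                    # 0.5-6 Hz amplitude (EEG/EOG)
        if beta > beta_thr:
            return "Wake"
        if slow > slow_thr_high:
            return "SWS"
        if slow > slow_thr:
            return "S1/S2"
        return "SREM"

    epoch = np.random.default_rng(0).normal(size=30 * fs)     # one 30 s epoch (stand-in data)
    print(classify_epoch(epoch))
    ```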

  6. Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Iskenderian, Theodore; Sherrit, Stewart; Bao, Xiaoqi; Linderman, Randel

    2010-01-01

    Long tool bits are undergoing development that can be stowed compactly until used as rock- or ground-penetrating probes actuated by ultrasonic/sonic mechanisms. These bits are designed to be folded or rolled into compact form for transport to exploration sites, where they are to be connected to their ultrasonic/ sonic actuation mechanisms and unfolded or unrolled to their full lengths for penetrating ground or rock to relatively large depths. These bits can be designed to acquire rock or soil samples and/or to be equipped with sensors for measuring properties of rock or soil in situ. These bits can also be designed to be withdrawn from the ground, restowed, and transported for reuse at different exploration sites. Apparatuses based on the concept of a probe actuated by an ultrasonic/sonic mechanism have been described in numerous prior NASA Tech Briefs articles, the most recent and relevant being "Ultrasonic/ Sonic Impacting Penetrators" (NPO-41666) NASA Tech Briefs, Vol. 32, No. 4 (April 2008), page 58. All of those apparatuses are variations on the basic theme of the earliest ones, denoted ultrasonic/sonic drill corers (USDCs). To recapitulate: An apparatus of this type includes a lightweight, low-power, piezoelectrically driven actuator in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that the size of the axial force needed to make the tool bit advance into soil, rock, or another material of interest is much smaller than in ordinary twist drilling, ordinary hammering, or ordinary steady pushing. Examples of properties that could be measured by use of an instrumented tool bit include electrical conductivity, permittivity, magnetic

  7. Software For Allocation Of Tolerances

    NASA Technical Reports Server (NTRS)

    Fernandez, Ken; Raman, Shivakumar; Pulat, Simin

    1992-01-01

    Collection of computer programs being developed to assist engineers in allocating tolerances to dimensions of components and assemblies. System reflects tolerancing expertise of design and manufacturing engineers; helps engineers maintain comprehensive tolerancing policy and overview that might otherwise get lost when attending to details of design and manufacturing processes. Tolerances must be allocated for three main reasons: tolerances allow for variations in the dimensions of components as manufactured; in an assembly of two or more components, dimensions must lie between specified limits; and a replacement part must fit in place.

  8. Optimality Versus Resilience In Patterns Of Carbon Allocation Within Plants Under Climate Change

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Kumar, P.; Sivapalan, M.

    2010-12-01

    Predicting the allocation of assimilated carbon among different parts within a plant under current and future climates is a challenging task that is of significant interest. Several empirical and mechanistic models have been developed over the years to solve for the carbon allocation within a plant, and these have demonstrated limited success. This challenge is further exacerbated when we need to consider the issue of plant acclimation due to climate change. Optimality-based carbon allocation models provide a general framework and have been proposed as a strong alternative to empirical and mechanistic models. While several optimality functions have been proposed, the idea of optimizing end-of-life-cycle reproductive biomass has more recently been shown to have significant success (Iwasa 2000). This optimality function, unlike others, is more fundamental, as it is directly based on the concept of the evolutionary fitness of each individual. We apply an optimality-based carbon allocation model to the soybean and other ecosystems and analyze the predictions. Our analysis demonstrates that plants have the capability to achieve a given end state using different allocation strategies during a growing season. More importantly, the soybean ecosystem exhibits significant suboptimal behavior, where the end-of-life-cycle reproductive biomass realized in field measurements is lower than the model-predicted optimum. From this one can infer that, in reality, plants allocate a relatively larger fraction of their carbon to leaf and root biomass and a relatively smaller fraction to reproductive biomass when compared to the model-predicted optimal allocation pathway. This trend also holds when simulating acclimation behavior under the elevated CO2 conditions of future climate scenarios. We hypothesize that plants in nature exhibit a significant degree of resilience that prevents them from following an optimal pathway resulting in a

  9. Modeling and analysis of drag-bit cutting

    SciTech Connect

    Swenson, D.V.

    1983-07-01

    This report documents a finite-element analysis of drag-bit cutting using polycrystalline-diamond compact cutters. To verify the analysis capability, prototypic indention tests were performed on Berea sandstone specimens. Analysis of these tests, using measured material properties, predicted fairly well the experimentally observed fracture patterns and indention loads. The analysis of drag-bit cutting met with mixed success, being able to capture the major features of the cutting process, but not all the details. In particular, the analysis is sensitive to the assumed contact between the cutter and rock. Calculations of drag-bit cutting predict that typical vertical loads on the cutters are capable of forming fractures. Thus, indention-type loading may be one of the main fracture mechanisms during drag-bit cutting, not only the intuitive notion of contact between the front of the cutter and rock. The model also predicts a change in the cutting process from tensile fractures to shear failure when the rock is confined by in-situ stresses. Both of these results have implications for the design and testing of drag-bit cutters.

  10. Task allocation among multiple intelligent robots

    NASA Technical Reports Server (NTRS)

    Gasser, L.; Bekey, G.

    1987-01-01

    Researchers describe the design of a decentralized mechanism for allocating assembly tasks in a multiple robot assembly workstation. Currently, the approach focuses on distributed allocation to explore its feasibility and its potential for adaptability to changing circumstances, rather than for optimizing throughput. Individual greedy robots make their own local allocation decisions using both dynamic allocation policies which propagate through a network of allocation goals, and local static and dynamic constraints describing which robots are eligible for which assembly tasks. Global coherence is achieved by proper weighting of allocation pressures propagating through the assembly plan. Deadlock avoidance and synchronization are achieved using periodic reassessments of local allocation decisions, ageing of allocation goals, and short-term allocation locks on goals.

  11. Unconditionally secure bit commitment by transmitting measurement outcomes.

    PubMed

    Kent, Adrian

    2012-09-28

    We propose a new unconditionally secure bit commitment scheme based on Minkowski causality and the properties of quantum information. The receiving party sends a number of randomly chosen Bennett-Brassard 1984 (BB84) qubits to the committer at a given point in space-time. The committer carries out measurements in one of the two BB84 bases, depending on the committed bit value, and transmits the outcomes securely at (or near) light speed in opposite directions to remote agents. These agents unveil the bit by returning the outcomes to adjacent agents of the receiver. The protocol's security relies only on simple properties of quantum information and the impossibility of superluminal signalling. PMID:23030073

  12. An improved EZBC algorithm based on block bit length

    NASA Astrophysics Data System (ADS)

    Wang, Renlong; Ruan, Shuangchen; Liu, Chengxiang; Wang, Wenda; Zhang, Li

    2011-12-01

    The Embedded ZeroBlock Coding and context modeling (EZBC) algorithm has high compression performance. However, it consumes large amounts of memory because an Amplitude Quadtree of wavelet coefficients and two other linked lists are built during the encoding process. This is one of the main obstacles to using EZBC in real-time or hardware applications. An improved EZBC algorithm based on the bit length of the coefficients is proposed in this article. It uses a Bit Length Quadtree to complete the coding process and output the context for the arithmetic coder. It achieves the same compression performance as EZBC and saves more than 75% of the memory required in the encoding process. As the Bit Length Quadtree can quickly locate the wavelet coefficients and judge their significance, the improved algorithm dramatically accelerates encoding. These improvements are also beneficial for hardware implementations. PACS: 42.30.Va, 42.30.Wb
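
    A minimal sketch of the underlying data structure is given below: a quadtree over a block of wavelet coefficients in which every node stores the bit length of the largest coefficient magnitude in its block, so significance against a given bit plane can be tested without revisiting the coefficients. The EZBC context modeling and arithmetic coding stages are omitted, and the tiny coefficient block is invented for the example.

    ```python
    # Minimal sketch of a Bit Length Quadtree: each node stores the number of bits
    # needed for the largest |coefficient| in its block, so significance against a
    # given bit plane can be tested without revisiting the coefficients.
    import numpy as np

    def bit_length_quadtree(block):
        """Return a list of levels; level 0 is the root, the last level is per-coefficient."""
        mags = np.abs(block).astype(np.int64)
        level = np.vectorize(lambda v: int(v).bit_length())(mags)   # bit length per coefficient
        levels = [level]
        while level.shape[0] > 1:                                   # assumes a square 2^k block
            h, w = level.shape
            level = level.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))  # 2x2 max-reduce
            levels.append(level)
        return levels[::-1]                                         # root first

    def significant_blocks(levels, bitplane):
        """Yield (level, row, col) of quadtree nodes significant at the given bit plane."""
        for depth, lvl in enumerate(levels):
            rows, cols = np.where(lvl > bitplane)
            for r, c in zip(rows, cols):
                yield depth, int(r), int(c)

    coeffs = np.array([[ 0,  3, 12,  0],
                       [ 1,  0,  0,  5],
                       [ 0,  0,  0,  0],
                       [ 2,  0,  0,  0]])
    levels = bit_length_quadtree(coeffs)
    print(levels[0])                                   # root: bit length of max magnitude (12 -> 4)
    print(list(significant_blocks(levels, bitplane=3)))  # nodes holding a coefficient >= 8
    ```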

  13. New bits, motors improve economics of slim hole horizontal wells

    SciTech Connect

    McDonald, S.; Felderhoff, F.; Fisher, K.

    1996-03-11

    The latest generation of small-diameter bits, combined with a new extended power section positive displacement motor (PDM), has improved the economics of slim hole drilling programs. As costs are driven down, redevelopment reserves are generated in the older, more established fields. New reserves result from increases in the ultimate recovery and accelerated production rates from the implementation of horizontal wells in reentry programs. This logic stimulated an entire development program for a Gulf of Mexico platform, which was performed without significant compromises in well bore geometry. The savings from this new-generation drilling system come from reducing the total number of trips required during the drilling phase. This paper reviews the design improvements of roller cone bits, PDC bits, and positive displacement motors for offshore directional drilling operations.

  14. Fully photonics-based physical random bit generator.

    PubMed

    Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai

    2016-07-15

    We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in an all-optical domain and then generate random bit streams through further comparison with a threshold level. This method can efficiently overcome the electronic jitter bottleneck confronted by existing RBGs in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback. This limited generation rate is caused by the bandwidth of the used optical chaos. PMID:27420532

  15. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated by an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bit due to systematic errors.
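
    For orientation, the bit counts quoted in the record translate into angular steps as follows (a simple conversion, independent of the calibration details and of the piezo-interpolated 0.01 arcsec resolution):

    ```python
    # Converting encoder bit counts into angular steps (arcsec per count).
    ARCSEC_PER_REV = 360 * 3600                      # 1,296,000 arcsec in a full turn
    for bits in (22, 18):
        print(bits, "bit:", round(ARCSEC_PER_REV / 2**bits, 2), "arcsec per count")
    # 22 bit: ~0.31 arcsec per count (nominal step of the absolute encoder)
    # 18 bit: ~4.94 arcsec           (effective accuracy level quoted after calibration)
    ```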

  16. Security bound of cheat sensitive quantum bit commitment

    PubMed Central

    He, Guang Ping

    2015-01-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities. PMID:25796977

  17. Drill Bits: Education and Outreach for Scientific Drilling Projects

    NASA Astrophysics Data System (ADS)

    Prose, D. V.; Lamacchia, D. M.

    2007-12-01

    Drill Bits is a series of short, three- to five-minute videos that explore the research and capture the challenging nature of large scientific drilling projects occurring around the world. The drilling projects, conducted under the auspices of the International Continental Scientific Drilling Program (ICDP), address fundamental earth science topics, including those of significant societal relevance such as earthquakes, volcanoes, and global climate change. The videos are filmed on location and aimed at nonscientific audiences. The purpose of the Drill Bits series is to provide scientific drilling organizations, scientists, and educators with a versatile tool to help educate the public, students, the media, and public officials about scientific drilling. The videos are designed to be viewed in multiple formats: on DVD; videotape; and science-related web sites, where they can be streamed or downloaded as video podcasts. Several Drill Bits videos will be screened, and their uses for outreach and education will be discussed.

  18. Inexpensive programmable clock for a 12-bit computer

    NASA Technical Reports Server (NTRS)

    Vrancik, J. E.

    1972-01-01

    An inexpensive programmable clock was built for a digital PDP-12 computer. The instruction list includes skip on flag; clear the flag, clear the clock, and stop the clock; and preset the counter with the contents of the accumulator and start the clock. The clock counts at a rate determined by an external oscillator and causes an interrupt and sets a flag when a 12-bit overflow occurs. An overflow can occur after 1 to 4096 counts. The clock can be built for a total parts cost of less than $100 including power supply and I/O connector. Slight modifications permit its use on larger machines (16-bit, 24-bit, etc.), and logic-level shifting can make it compatible with any computer.
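
    The preset arithmetic implied by the record is simple: a 12-bit counter overflows at 4096, so loading it with 4096 - n gives an interrupt after n counts of the external oscillator. The helper names below are illustrative.

    ```python
    # Preset arithmetic for a 12-bit up-counting programmable clock (assumed behavior:
    # the counter overflows at 2**12 = 4096, so a preset of 4096 - n yields an interrupt
    # after n ticks of the external oscillator).
    MOD = 1 << 12                       # 12-bit counter

    def preset_for(n_counts):
        """Accumulator value to load so that the interrupt fires after n_counts ticks."""
        assert 1 <= n_counts <= MOD
        return (MOD - n_counts) % MOD

    def interrupt_delay(preset, f_osc_hz):
        """Seconds until overflow for a given preset and external oscillator frequency."""
        counts = MOD - preset if preset else MOD
        return counts / f_osc_hz

    print(preset_for(1000))                    # 3096
    print(interrupt_delay(3096, 100_000))      # 0.01 s with a 100 kHz oscillator
    ```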

  19. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-01-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities. PMID:25796977

  20. BitCube: A Bottom-Up Cubing Engineering

    NASA Astrophysics Data System (ADS)

    Ferro, Alfredo; Giugno, Rosalba; Puglisi, Piera Laura; Pulvirenti, Alfredo

    Enhancing online analytical processing through efficient cube computation plays a key role in Data Warehouse management. Hashing, grouping and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method which uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension and for each resulting fragment it performs grouping following bottom-up criteria. BitCube also allows partial materialization based on iceberg conditions to treat large datasets for which a full cube pre-computation is too expensive. The space requirement of the bitmaps is optimized by applying an adaptation of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and is comparable on iceberg cubing.
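
    A toy illustration of the bitmap-as-inverted-index idea is shown below: one bitmap per (dimension, value) pair, with a group-by cell evaluated by AND-ing bitmaps and counting set bits. The horizontal partitioning, bottom-up recursion, iceberg pruning and WAH compression described in the record are omitted, and the tiny fact table is invented for the example.

    ```python
    # Toy illustration of bitmap inverted indexes for cube grouping (the core idea in
    # BitCube); bottom-up recursion, iceberg pruning and WAH compression are omitted.
    from collections import defaultdict

    rows = [                       # (product, region, quarter) fact rows (invented)
        ("pen", "EU", "Q1"), ("pen", "US", "Q1"), ("ink", "EU", "Q2"),
        ("pen", "EU", "Q2"), ("ink", "US", "Q1"), ("ink", "EU", "Q1"),
    ]
    dims = ["product", "region", "quarter"]

    # One bitmap (stored as a Python int) per (dimension, value): bit i set <=> row i matches.
    bitmaps = defaultdict(int)
    for i, row in enumerate(rows):
        for d, v in zip(dims, row):
            bitmaps[(d, v)] |= 1 << i

    def group_count(**conditions):
        """COUNT(*) for a conjunctive group-by cell, e.g. product='pen', region='EU'."""
        acc = (1 << len(rows)) - 1                  # start from "all rows"
        for d, v in conditions.items():
            acc &= bitmaps[(d, v)]
        return bin(acc).count("1")

    print(group_count(product="pen"))               # 3
    print(group_count(product="pen", region="EU"))  # 2
    print(group_count(region="EU", quarter="Q1"))   # 2
    ```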

  1. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-03-23

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  2. Fully photonics-based physical random bit generator.

    PubMed

    Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai

    2016-07-15

    We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in an all-optical domain and then generate random bit streams through further comparison with a threshold level. This method can efficiently overcome the electronic jitter bottleneck confronted by existing RBGs in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback. This limited generation rate is caused by the bandwidth of the used optical chaos.

  3. Can relativistic bit commitment lead to secure quantum oblivious transfer?

    NASA Astrophysics Data System (ADS)

    He, Guang Ping

    2015-05-01

    While unconditionally secure bit commitment (BC) is considered impossible within the quantum framework, it can be obtained under relativistic or experimental constraints. Here we study whether such BC can lead to secure quantum oblivious transfer (QOT). The answer is not completely negative. In one hand, we provide a detailed cheating strategy, showing that the "honest-but-curious adversaries" in some of the existing no-go proofs on QOT still apply even if secure BC is used, enabling the receiver to increase the average reliability of the decoded value of the transferred bit. On the other hand, it is also found that some other no-go proofs claiming that a dishonest receiver can always decode all transferred bits simultaneously with reliability 100% become invalid in this scenario, because their models of cryptographic protocols are too ideal to cover such a BC-based QOT.

  4. Security bound of cheat sensitive quantum bit commitment

    NASA Astrophysics Data System (ADS)

    He, Guang Ping

    2015-03-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  5. Regulating nutrient allocation in plants

    SciTech Connect

    Udvardi, Michael; Yang, Jiading; Worley, Eric

    2014-12-09

    The invention provides coding and promoter sequences for a VS-1 and AP-2 gene, which affects the developmental process of senescence in plants. Vectors, transgenic plants, seeds, and host cells comprising heterologous VS-1 and AP-2 genes are also provided. Additionally provided are methods of altering nutrient allocation and composition in a plant using the VS-1 and AP-2 genes.

  6. Administrators' Decisions about Resource Allocation

    ERIC Educational Resources Information Center

    Knight, William E.; Folkins, John W.; Hakel, Milton D.; Kennell, Richard P.

    2011-01-01

    Do academic administrators make decisions about resource allocation differently depending on the discipline receiving the funding? Does an administrator's academic identity influence these decisions? This study explored those questions with a sample of 1,690 academic administrators at doctoral-research universities. Participants used fictional…

  7. Report on Tribal Priority Allocations.

    ERIC Educational Resources Information Center

    Bureau of Indian Affairs (Dept. of Interior), Washington, DC.

    As part of Bureau of Indian Affairs (BIA) funding, Tribal Priority Allocations (TPA) are the principal source of funds for tribal governments and agency offices at the reservation level. According to their unique needs and circumstances, tribes may prioritize funding among eight general categories: government, human services, education, public…

  8. The Discipline of Asset Allocation.

    ERIC Educational Resources Information Center

    Petzel, Todd E.

    2000-01-01

    Discussion of asset allocation for college/university endowment funds focuses on three levels of risk: (1) the absolute risk of the portfolio (usually leading to asset diversification); (2) the benchmark risk (usually comparison with peer institutions); and (3) personal career risk (which may incline managers toward maximizing short-term returns,…

  9. Reading boundless error-free bits using a single photon

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Shapiro, Jeffrey H.

    2013-06-01

    We address the problem of how efficiently information can be encoded into and read out reliably from a passive reflective surface that encodes classical data by modulating the amplitude and phase of incident light. We show that nature imposes no fundamental upper limit to the number of bits that can be read per expended probe photon and demonstrate the quantum-information-theoretic trade-offs between the photon efficiency (bits per photon) and the encoding efficiency (bits per pixel) of optical reading. We show that with a coherent-state (ideal laser) source, an on-off (amplitude-modulation) pixel encoding, and shot-noise-limited direct detection (an overly optimistic model for commercial CD and DVD drives), the highest photon efficiency achievable in principle is about 0.5 bits read per transmitted photon. We then show that a coherent-state probe can read unlimited bits per photon when the receiver is allowed to make joint (inseparable) measurements on the reflected light from a large block of phase-modulated memory pixels. Finally, we show an example of a spatially entangled nonclassical light probe and a receiver design—constructible using a single-photon source, beam splitters, and single-photon detectors—that can in principle read any number of error-free bits of information. The probe is a single photon prepared in a uniform coherent superposition of multiple orthogonal spatial modes, i.e., a W state. The code and joint-detection receiver complexity required by a coherent-state transmitter to achieve comparable photon efficiency performance is shown to be much higher in comparison to that required by the W-state transceiver, although this advantage rapidly disappears with increasing loss in the system.

  10. Results of no-flow rotary drill bit comparison testing

    SciTech Connect

    WITWER, K.S.

    1998-11-30

    This document describes the results of testing of a newer rotary sampling bit and sampler insert called the No-Flow System. This No-Flow System was tested side by side against the currently used rotary bit and sampler insert, called the Standard System. The two systems were tested using several "hard to sample" granular non-hazardous simulants to determine which could provide greater sample recovery. The No-Flow System measurably outperformed the Standard System in each of the tested simulants.

  11. Cloning the entanglement of a pair of quantum bits

    SciTech Connect

    Lamoureux, Louis-Philippe; Navez, Patrick; Cerf, Nicolas J.; Fiurasek, Jaromir

    2004-04-01

    It is shown that any quantum operation that perfectly clones the entanglement of all maximally entangled qubit pairs cannot preserve separability. This 'entanglement no-cloning' principle naturally suggests that some approximate cloning of entanglement is nevertheless allowed by quantum mechanics. We investigate a separability-preserving optimal cloning machine that duplicates all maximally entangled states of two qubits, resulting in 0.285 bits of entanglement per clone, while a local cloning machine only yields 0.060 bits of entanglement per clone.

  12. Development of a jet-assisted polycrystalline diamond drill bit

    SciTech Connect

    Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.

    1997-12-31

    A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that rate-of-penetration increases on the order of a factor of two over unaugmented rotary and/or percussive drilling rates are possible with jet assistance.

  13. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
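
    A minimal numerical sketch of the bit-independence idea described above (an illustration, not the article's coder): quantizer indices are mapped to fixed-length codewords, each bit position is modeled by its own empirical probability, and the rate achievable by an ideal arithmetic coder under that independence assumption is the sum of the per-position binary entropies.

```python
import numpy as np

def bitwise_rate_estimate(indices, bits_per_codeword):
    """Estimate the rate (bits/sample) of coding fixed-length codewords with
    an arithmetic coder that treats the codeword bits as independent."""
    indices = np.asarray(indices)
    # Unpack each quantizer index into a fixed-length binary codeword.
    codewords = (indices[:, None] >> np.arange(bits_per_codeword)) & 1
    # Empirical probability of a '1' at each bit position.
    p1 = codewords.mean(axis=0)
    # Binary entropy of each position (0*log 0 handled via a small eps).
    eps = 1e-12
    h = -(p1 * np.log2(p1 + eps) + (1 - p1) * np.log2(1 - p1 + eps))
    # Under the independence model the total rate is the sum of the
    # per-position entropies, which an ideal arithmetic coder approaches.
    return h.sum()

# Example: 4-bit uniform quantization of a Laplacian source.
rng = np.random.default_rng(0)
samples = rng.laplace(size=100_000)
indices = np.clip(np.round(samples / 0.5), -8, 7).astype(int) + 8  # 0..15
print(f"estimated rate: {bitwise_rate_estimate(indices, 4):.3f} bits/sample")
```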

  14. A low cost alternative to high performance PCM bit synchronizers

    NASA Technical Reports Server (NTRS)

    Deshong, Bruce

    1993-01-01

    The Code Converter/Clock Regenerator (CCCR) provides a low-cost alternative to high-performance Pulse Code Modulation (PCM) bit synchronizers in environments with a large Signal-to-Noise Ratio (SNR). In many applications, the CCCR can be used in place of PCM bit synchronizers at about one fifth the cost. The CCCR operates at rates from 10 bps to 2.5 Mbps and performs PCM code conversion and clock regeneration. The CCCR has been integrated into a stand-alone system configurable from one to six channels and has also been designed for use in VMEbus compatible systems.

  15. Hanford coring bit temperature monitor development testing results report

    SciTech Connect

    Rey, D.

    1995-05-01

    Instrumentation which directly monitors the temperature of a coring bit used to retrieve core samples of high level nuclear waste stored in tanks at Hanford was developed at Sandia National Laboratories. Monitoring the temperature of the coring bit is desired to enhance the safety of the coring operations. A unique application of mature technologies was used to accomplish the measurement. This report documents the results of development testing performed at Sandia to assure the instrumentation will withstand the severe environments present in the waste tanks.

  16. Demonstration of low-power bit-interleaving TDM PON.

    PubMed

    Van Praet, Christophe; Chow, Hungkei; Suvakovic, Dusan; Van Veen, Doutje; Dupas, Arnaud; Boislaigue, Roger; Farah, Robert; Lau, Man Fai; Galaro, Joseph; Qua, Gin; Anthapadmanabhan, N Prasanth; Torfs, Guy; Yin, Xin; Vetter, Peter

    2012-12-10

    A functional demonstration of a bit-interleaving TDM downstream protocol for passive optical networks (Bi-PON) is reported. The proposed protocol provides a significant reduction in dynamic power consumption in the customer premises equipment over the conventional TDM protocol. It allows the relevant bits of all aggregated incoming data to be selected immediately after clock and data recovery (CDR) and, hence, allows subsequent hardware to run at a much lower user rate. Comparison of experimental results of FPGA-based implementations of Bi-PON and XG-PON shows that more than 30x energy savings in protocol processing are achievable. PMID:23262914

  17. Nonanalytic function generation routines for 16-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.; Shaufl, M.

    1980-01-01

    Interpolation techniques for three types (univariate, bivariate, and map) of nonanalytic functions are described. These interpolation techniques are then implemented in scaled fraction arithmetic on a representative 16 bit microprocessor. A FORTRAN program is described that facilitates the scaling, documentation, and organization of data for use by these routines. Listings of all these programs are included in an appendix.
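
    As a hypothetical illustration of the univariate case (a sketch, not the NASA routines themselves), table interpolation in scaled-fraction arithmetic can be written as a table lookup plus a linear interpolation carried out on 16-bit fixed-point (Q15) values:

```python
# Hypothetical sketch of univariate linear interpolation in 16-bit
# scaled-fraction arithmetic (Q15: values in [-1, 1) stored as int16).
Q = 15  # number of fractional bits

def to_q15(x: float) -> int:
    return max(-32768, min(32767, int(round(x * (1 << Q)))))

def interp1_q15(x_q15: int, xs: list[int], ys: list[int]) -> int:
    """Piecewise-linear interpolation of a tabulated function.
    xs must be sorted ascending; all values are Q15 integers."""
    if x_q15 <= xs[0]:
        return ys[0]
    if x_q15 >= xs[-1]:
        return ys[-1]
    # Find the bracketing interval (small table, so a linear scan is fine).
    i = 1
    while xs[i] < x_q15:
        i += 1
    dx = xs[i] - xs[i - 1]
    dy = ys[i] - ys[i - 1]
    # (x - x0) * dy / dx, keeping the intermediate product in 32 bits.
    frac = ((x_q15 - xs[i - 1]) * dy) // dx
    return ys[i - 1] + frac

# Example: tabulate y = x^2 on [0, 1) and interpolate at x = 0.3.
xs = [to_q15(v / 8) for v in range(8)]
ys = [to_q15((v / 8) ** 2) for v in range(8)]
print(interp1_q15(to_q15(0.3), xs, ys) / (1 << Q))  # ~0.094 (true value 0.09)
```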

  18. Steganography forensics method for detecting least significant bit replacement attack

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao

    2015-01-01

    We present an image forensics method to detect the least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using a hierarchical structure that combines pixel correlation and bit-plane correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit plane and each one of the others. The generated forensics features capture a susceptibility (changeability) that is drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used a least squares support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust to content-preserving manipulations such as JPEG compression, added noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
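
    The bit-plane decomposition and difference-matrix step described above can be sketched as follows (illustrative only; the statistical feature model and classifier are not reproduced):

```python
import numpy as np

def bit_planes(img_u8: np.ndarray) -> np.ndarray:
    """Decompose an 8-bit grayscale image into its 8 bit planes.
    Returns an array of shape (8, H, W); plane 0 is the LSB plane."""
    return np.stack([(img_u8 >> k) & 1 for k in range(8)]).astype(np.int8)

def lsb_difference_matrices(img_u8: np.ndarray) -> np.ndarray:
    """Difference matrices between the LSB plane and each other plane,
    the raw material for correlation-based forensics features."""
    planes = bit_planes(img_u8)
    return planes[0] - planes[1:]          # shape (7, H, W), values in {-1, 0, 1}

# Example on a random stand-in for a cover image.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
diffs = lsb_difference_matrices(cover)
print(diffs.shape, diffs.min(), diffs.max())   # (7, 64, 64) -1 1
```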

  19. Characterization of a 16-Bit Digitizer for Lidar Data Acquisition

    NASA Technical Reports Server (NTRS)

    Williamson, Cynthia K.; DeYoung, Russell J.

    2000-01-01

    A 6-MHz 16-bit waveform digitizer was evaluated for use in atmospheric differential absorption lidar (DIAL) measurements of ozone. The digitizer noise characteristics were evaluated, and actual ozone DIAL atmospheric returns were digitized. This digitizer could replace computer-automated measurement and control (CAMAC)-based commercial digitizers and improve voltage accuracy.

  20. 16. STRUCTURAL DETAILS: CHANNEL, BIT & CLEAT, ANCHOR BOLTS & ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. STRUCTURAL DETAILS: CHANNEL, BIT & CLEAT, ANCHOR BOLTS & PLATES FOR PIERS 4, 5, AND 6, DWG. NO. 97, 1-1/2" = 1', MADE BY A.F., JUNE 13, 1908 - Baltimore Inner Harbor, Pier 5, South of Pratt Street between Market Place & Concord Street, Baltimore, Independent City, MD

  1. 17. PLANS & SECTIONS: 36" CAST IRON BITS: USED AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. PLANS & SECTIONS: 36" CAST IRON BITS: USED AT LOWER END OF PIER 5, DWG. 208, 1/2 SIZE, DRAWN BY W.B.C., MARCH 4, 1910 - Baltimore Inner Harbor, Pier 5, South of Pratt Street between Market Place & Concord Street, Baltimore, Independent City, MD

  2. Fast random bit generation with bandwidth-enhanced chaos in semiconductor lasers.

    PubMed

    Hirano, Kunihito; Yamazaki, Taiki; Morikatsu, Shinichiro; Okumura, Haruka; Aida, Hiroki; Uchida, Atsushi; Yoshimori, Shigeru; Yoshimura, Kazuyuki; Harayama, Takahisa; Davis, Peter

    2010-03-15

    We experimentally demonstrate random bit generation using multi-bit samples of bandwidth-enhanced chaos in semiconductor lasers. Chaotic fluctuation of laser output is generated in a semiconductor laser with optical feedback and the chaotic output is injected into a second semiconductor laser to obtain a chaotic intensity signal with bandwidth enhanced up to 16 GHz. The chaotic signal is converted to an 8-bit digital signal by sampling with a digital oscilloscope at 12.5 Giga samples per second (GS/s). Random bits are generated by bitwise exclusive-OR operation on corresponding bits in samples of the chaotic signal and its time-delayed signal. Statistical tests verify the randomness of bit sequences obtained using 1 to 6 bits per sample, corresponding to fast random bit generation rates from 12.5 to 75 Gigabit per second (Gb/s) (= 6 bits × 12.5 GS/s).
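
    The bit-generation step can be sketched as follows (assumed details for the illustration: 8-bit samples, a fixed delay expressed in samples, and retention of the k least significant bits):

```python
import numpy as np

def random_bits_from_chaos(samples_u8: np.ndarray, delay: int, k: int) -> np.ndarray:
    """Generate random bits by XORing each 8-bit chaotic sample with a
    time-delayed sample and keeping the k least significant bits."""
    x = samples_u8[delay:] ^ samples_u8[:-delay]       # bitwise XOR with the delayed signal
    bits = ((x[:, None] >> np.arange(k)) & 1).ravel()  # keep k LSBs per sample
    return bits.astype(np.uint8)

# Example with surrogate data standing in for the digitized chaotic signal.
rng = np.random.default_rng(2)
samples = rng.integers(0, 256, size=10_000, dtype=np.uint8)
bits = random_bits_from_chaos(samples, delay=50, k=6)
print(len(bits), bits.mean())   # ~6 bits per sample; mean close to 0.5
```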

  3. A Systems Approach for Allocating Educational Space.

    ERIC Educational Resources Information Center

    Florida Univ., Gainesville. Center for Community Needs Assessment.

    A computer simulation model for allocating facilities and physical space is presented as a means of optimally allocating available educational resources. The model allows the decisionmaker to change specific program allocations, system parameters, and other controllable variables in order to determine the effects, both cost and utility, of these…

  4. 40 CFR 74.26 - Allocation formula.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) SULFUR DIOXIDE OPT-INS Allowance Calculations for Combustion Sources § 74.26 Allocation formula. (a) The Administrator will calculate the annual allowance allocation for a combustion source based on the data... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Allocation formula. 74.26 Section...

  5. 49 CFR 262.5 - Allocation requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Allocation requirements. 262.5 Section 262.5... IMPROVEMENT PROJECTS § 262.5 Allocation requirements. At least fifty percent of all grant funds awarded under... than $20,000,000 each. Designated, high-priority projects will be excluded from this allocation...

  6. 39 CFR 3060.12 - Asset allocation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Asset allocation. 3060.12 Section 3060.12 Postal... COMPETITIVE PRODUCTS ENTERPRISE § 3060.12 Asset allocation. Within 6 months of January 23, 2009, and for each... competitive products enterprise using a method of allocation based on appropriate revenue or cost...

  7. 15 CFR 923.110 - Allocation formula.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Allocation formula. 923.110 Section... MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Allocation of Section 306 Program Administration Grants § 923.110 Allocation formula. (a) As required by subsection 306(a), the Secretary may make...

  8. 25 CFR 39.902 - Allocation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Allocation. 39.902 Section 39.902 Indians BUREAU OF... Maintenance and Minor Repair Fund § 39.902 Allocation. (a) Interim Maintenance and Minor Repair funds shall be... determining school allocations shall be taken from the facilities inventory maintained by the Division...

  9. 24 CFR 945.203 - Allocation plan.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Allocation plan. 945.203 Section... FAMILIES Application and Approval Procedures § 945.203 Allocation plan. (a) Applicable terminology. (1) As used in this section, the terms “initial allocation plan” refers to the PHA's first submission of...

  10. 24 CFR 594.15 - Allocation amounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Allocation amounts. 594.15 Section... DEVELOPMENT COMMUNITY FACILITIES JOHN HEINZ NEIGHBORHOOD DEVELOPMENT PROGRAM Funding Allocation and Criteria § 594.15 Allocation amounts. (a) Amounts and match requirement. HUD will make grants, in the form...

  11. Estimating Hardness from the USDC Tool-Bit Temperature Rise

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Sherrit, Stewart

    2008-01-01

    A method of real-time quantification of the hardness of a rock or similar material involves measurement of the temperature, as a function of time, of the tool bit of an ultrasonic/sonic drill corer (USDC) that is being used to drill into the material. The method is based on the idea that, other things being about equal, the rate of rise of temperature and the maximum temperature reached during drilling increase with the hardness of the drilled material. In this method, the temperature is measured by means of a thermocouple embedded in the USDC tool bit near the drilling tip. The hardness of the drilled material can then be determined through correlation of the temperature-rise-versus-time data with time-dependent temperature rises determined in finite-element simulations of, and/or experiments on, drilling at various known rates of advance or known power levels through materials of known hardness. The figure presents an example of empirical temperature-versus-time data for a particular 3.6-mm USDC bit, driven at an average power somewhat below 40 W, drilling through materials of various hardness levels. The temperature readings from within a USDC tool bit can also be used for purposes other than estimating the hardness of the drilled material. For example, they can be especially useful as feedback to control the driving power to prevent thermal damage to the drilled material, the drill bit, or both. In the case of drilling through ice, the temperature readings could be used as a guide to maintaining sufficient drive power to prevent jamming of the drill by preventing refreezing of melted ice in contact with the drill.
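
    As a hedged illustration of the correlation step (the calibration numbers below are invented, not taken from the article), one can fit the initial rate of temperature rise from the thermocouple record and interpolate a hardness estimate from a calibration table built from experiments or finite-element simulations at the same drive power:

```python
import numpy as np

def estimate_hardness(t_s, temp_c, calib_rates, calib_hardness, fit_window_s=10.0):
    """Estimate hardness from a USDC-style tool-bit temperature record.
    t_s, temp_c    : time (s) and bit temperature (deg C) during drilling
    calib_rates    : calibration heating rates (deg C/s), ascending
    calib_hardness : hardness values corresponding to calib_rates
    """
    t_s, temp_c = np.asarray(t_s), np.asarray(temp_c)
    mask = t_s <= t_s[0] + fit_window_s
    # Initial rate of temperature rise from a least-squares line fit.
    rate = np.polyfit(t_s[mask], temp_c[mask], 1)[0]
    # Interpolate the calibration curve (assumes the rate grows with hardness).
    return np.interp(rate, calib_rates, calib_hardness)

# Example with synthetic data and a made-up calibration table.
t = np.linspace(0, 30, 300)
temp = 25 + 1.8 * t                      # ~1.8 deg C/s rise
calib_rates = [0.5, 1.0, 2.0, 4.0]       # deg C/s (hypothetical)
calib_hardness = [2.0, 3.5, 5.0, 6.5]    # hardness scale (hypothetical)
print(round(estimate_hardness(t, temp, calib_rates, calib_hardness), 2))  # ~4.7
```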

  12. Numerical study of the simplest string bit model

    NASA Astrophysics Data System (ADS)

    Chen, Gaoli; Sun, Songge

    2016-05-01

    String bit models provide a possible method to formulate a string as a discrete chain of pointlike string bits. When the bit number M is large, a chain behaves as a continuous string. We study the simplest case that has only one bosonic bit and one fermionic bit. The creation and annihilation operators are adjoint representations of the U(N) color group. We show that the supersymmetry reduces the parameter number of a Hamiltonian from 7 to 3 and, at N = ∞, ensures a continuous energy spectrum, which implies the emergence of one spatial dimension. The Hamiltonian H0 is constructed so that in the large-N limit it produces a world sheet spectrum with one Grassmann world sheet field. We concentrate on the numerical study of the model at finite N. For the Hamiltonian H0, we find that the would-be ground energy states disappear at N = (M-1)/2 for odd M ≤ 11. Such a simple pattern is spoiled if H has an additional term ξΔH, which does not affect the result at N = ∞. The disappearance point moves to higher (lower) N when ξ increases (decreases). In particular, the ±(H0 - ΔH) cases suggest the possibility that the ground state could survive at large M with M ≫ N. Our study reveals that the model has stringy behavior: when N is fixed and large enough, the ground energy decreases linearly with respect to M, and the excitation energy is roughly of order M^(-1). We also verify that a stable system with Hamiltonian ±H0 + ξΔH requires ξ ≥ ∓1.

  13. Communication patterns and allocation strategies.

    SciTech Connect

    Leung, Vitus Joseph; Mache, Jens Wolfgang; Bunde, David P.

    2004-01-01

    Motivated by observations about job runtimes on the CPlant system, we use a trace-driven microsimulator to begin characterizing the performance of different classes of allocation algorithms on jobs with different communication patterns in space-shared parallel systems with mesh topology. We show that relative performance varies considerably with communication pattern. The Paging strategy using the Hilbert space-filling curve and the Best Fit heuristic performed best across several communication patterns.

  14. Minority Transportation Expenditure Allocation Model

    SciTech Connect

    Vyas, Anant D.; Santini, Danilo J.; Marik, Sheri K.

    1993-04-12

    MITRAM (Minority TRansportation expenditure Allocation Model) can project various transportation related attributes of minority (Black and Hispanic) and majority (white) populations. The model projects vehicle ownership, vehicle miles of travel, workers, new car and on-road fleet fuel economy, amount and share of household income spent on gasoline, and household expenditures on public transportation and taxis. MITRAM predicts reactions to sustained fuel price changes for up to 10 years after the change.

  15. The effect of bandlimiting of a PCM/NRZ signal on the bit-error probability.

    NASA Technical Reports Server (NTRS)

    Tu, K.; Shehadeh, N. M.

    1971-01-01

    The explicit expressions for the intersymbol interference as a function of bandwidth-bit duration product and bit positions for PCM/NRZ systems operating in the presence of Gaussian noise and in a bandlimited channel are determined. Two types of linear bit detectors are considered: integrate-and-dump, and bandlimit-and-sample. Restriction of bandwidth results in a performance degradation. The degradation of signal-to-noise ratio is presented as a function of bandwidth-bit duration product and bit patterns. The average probability of bit errors is computed for various bandwidths. The calculations of the upper bound and lower bound on the error probability are also presented.

  16. The effects of reduced bit depth on optical coherence tomography phase data.

    PubMed

    Ling, William A; Ellerbee, Audrey K

    2012-07-01

    Past studies of the effects of bit depth on OCT magnitude data concluded that 8 bits of digitizer resolution provided nearly the same image quality as a 14-bit digitizer. However, such studies did not assess the effects of bit depth on the accuracy of phase data. In this work, we show that the effects of bit depth on phase data and magnitude data can differ significantly. This finding has an important impact on the design of phase-resolved OCT systems, such as those measuring motion and the birefringence of samples, particularly as one begins to consider the tradeoff between bit depth and digitizer speed.

  17. Allocating Variability and Reserve Requirements (Presentation)

    SciTech Connect

    Kirby, B.; King, J.; Milligan, M.

    2011-10-01

    This presentation describes how variability and reserve requirements could be allocated, including how to allocate aggregation benefits. The conclusions of this presentation are: (1) aggregation provides benefits because individual requirements are not 100% correlated; (2) a method is needed to allocate the reduced requirement among participants; (3) differences between allocation results are subtle - (a) it is not immediately obvious which method is 'better', (b) many are numerically 'correct' in that they sum to the physical requirement, and (c) many are not 'fair', since results depend on sub-aggregation and/or the order in which individuals are included; and (4) the vector allocation method is simple and fair.

  18. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in case (i) nonlinear distortion and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality. PMID:27475197

  20. 40 CFR 96.53 - Recordation of NOX allowance allocations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE... allocated to an allocation set-aside. (c) Serial numbers for allocated NO X allowances. When allocating...

  1. 40 CFR 96.53 - Recordation of NOX allowance allocations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO 2 TRADING PROGRAMS FOR STATE... allocated to an allocation set-aside. (c) Serial numbers for allocated NO X allowances. When allocating...

  2. 45 CFR 402.31 - Determination of allocations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ASSISTANCE GRANTS State Allocations § 402.31 Determination of allocations. (a) Allocation formula. Allocations will be computed according to a formula using the following factors and weights: (1) 50...

  3. 45 CFR 402.31 - Determination of allocations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... State Allocations § 402.31 Determination of allocations. (a) Allocation formula. Allocations will be computed according to a formula using the following factors and weights: (1) 50 percent based on the...

  4. Bit silencing in fingerprints enables the derivation of compound class-directed similarity metrics.

    PubMed

    Wang, Yuan; Bajorath, Jürgen

    2008-09-01

    Fingerprints are molecular bit string representations and are among the most popular descriptors for similarity searching. In key-type fingerprints, each bit position monitors the presence or absence of a prespecified chemical or structural feature. In contrast to hashed fingerprints, this keyed design makes it possible to evaluate individual bit positions and the associated structural features during similarity searching. Bit silencing is introduced as a systematic approach to assess the contribution of each bit in a fingerprint to similarity search performance. From the resulting bit contribution profile, a bit position-dependent weight vector is derived that determines the relative weight of each bit on the basis of its individual contribution. By merging this weight vector with the Tanimoto coefficient, compound class-directed similarity metrics are obtained that further improve fingerprint search performance compared to conventional Tanimoto similarity calculations.
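
    The weighting idea can be sketched as a weighted Tanimoto coefficient (a generic illustration; the derivation of the contribution profile and weight vector follows the paper and is not reproduced here):

```python
import numpy as np

def weighted_tanimoto(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> float:
    """Tanimoto coefficient over keyed fingerprints a, b (0/1 arrays),
    with per-bit weights w derived from bit-silencing contributions."""
    inter = np.sum(w * (a & b))
    union = np.sum(w * (a | b))
    return float(inter / union) if union > 0 else 0.0

# Example: bits whose silencing hurts search performance get higher weight.
rng = np.random.default_rng(3)
fp_query = rng.integers(0, 2, 166)   # e.g. a MACCS-sized keyed fingerprint
fp_db    = rng.integers(0, 2, 166)
contrib  = rng.random(166)           # stand-in for a bit contribution profile
weights  = contrib / contrib.sum()
print(weighted_tanimoto(fp_query, fp_db, weights))
```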

  5. Efficient biased random bit generation for parallel processing

    SciTech Connect

    Slone, D.M.

    1994-09-28

    A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models Burgers equation ρ_t + ρρ_x = νρ_xx in 1 dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a 1-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.

  6. Bit timing with pulse distortion and intersymbol interference

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1977-01-01

    Pulse distortion and intersymbol interference due to insufficient filtering in PCM and PSK channels cause performance degradation in terms of both bit error probabilities and timing errors. This paper reports the results of a study analyzing these effects on bit timing subsystems. Consideration is given to both the filter-rectifier and transition tracking types of timing subsystem. Although both of these systems perform similarly with high SNR and ideal pulse models, pulse distortion and intersymbol interference affect each differently. The primary effect in both systems is an irreducible mean-squared timing error due to the intersymbol interference, which limits the ultimate performance. Design procedures to minimize the anomalies of both systems are presented and indicate modifications of the standard timing subsystems. It is found that specific design directions depend on whether the intersymbol interference or the receiver noise tends to dominate.

  7. Very low bit rate voice for packetized mobile applications

    SciTech Connect

    Knittle, C.D.; Malone, K.T.

    1991-01-01

    Transmitting digital voice via packetized mobile communications systems that employ relatively short packet lengths and narrow bandwidths often necessitates very low bit rate coding of the voice data. Sandia National Laboratories is currently developing an efficient voice coding system operating at 800 bits per second (bps). The coding scheme is a modified version of the 2400 bps NSA LPC-10e standard. The most significant modification to the LPC-10e scheme is the vector quantization of the line spectrum frequencies associated with the synthesis filters. An outline of a hardware implementation for the 800 bps coder is presented. The speech quality of the coder is generally good, although speaker recognition is not possible. Further research is being conducted to reduce the memory requirements and complexity of the vector quantizer, and to increase the quality of the reconstructed speech. 4 refs., 2 figs., 3 tabs.

  8. Very low bit rate voice for packetized mobile applications

    SciTech Connect

    Knittle, C.D.; Malone, K.T.

    1991-01-01

    This paper reports that transmitting digital voice via packetized mobile communications systems that employ relatively short packet lengths and narrow bandwidths often necessitates very low bit rate coding of the voice data. Sandia National Laboratories is currently developing an efficient voice coding system operating at 800 bits per second (bps). The coding scheme is a modified version of the 2400 bps NSA LPC-10e standard. The most significant modification to the LPC-10e scheme is the vector quantization of the line spectrum frequencies associated with the synthesis filters. An outline of a hardware implementation for the 800 bps coder is presented. The speech quality of the coder is generally good, although speaker recognition is not possible. Further research is being conducted to reduce the memory requirements and complexity of the vector quantizer, and to increase the quality of the reconstructed speech. This work may be of use dealing with nuclear materials.

  9. Fully distrustful quantum bit commitment and coin flipping.

    PubMed

    Silman, J; Chailloux, A; Aharon, N; Kerenidis, I; Pironio, S; Massar, S

    2011-06-01

    In the distrustful quantum cryptography model the parties have conflicting interests and do not trust one another. Nevertheless, they trust the quantum devices in their labs. The aim of the device-independent approach to cryptography is to do away with the latter assumption, and, consequently, significantly increase security. It is an open question whether the scope of this approach also extends to protocols in the distrustful cryptography model, thereby rendering them "fully" distrustful. In this Letter, we show that for bit commitment-one of the most basic primitives within the model-the answer is positive. We present a device-independent (imperfect) bit-commitment protocol, where Alice's and Bob's cheating probabilities are ≃0.854 and 3/4, which we then use to construct a device-independent coin flipping protocol with bias ≲0.336.

  11. A 128K-bit CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K. H.; Wallace, R. W.; Robinson, C. R.

    1976-01-01

    A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage and particularly to the problem of interfacing widely varying data rates. 8K-bit CCD shift register memories were used to construct a feasibility-model 128K-bit buffer memory system. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. Descriptions are provided of both the buffer memory system and a custom tester that was used to exercise the memory. The testing procedures and results are discussed. Suggestions are provided for further development regarding the use of advanced CCD memory devices in both simplified and expanded memory system applications.

  12. Pack carburizing process for earth boring drill bits

    SciTech Connect

    Simons, R.W.; Scott, D.E.; Poland, J.R.

    1987-02-17

    A method is described of manufacturing an earth boring drill bit of the type having a bearing pin extending from a head section of the drill bit for rotatably mounting a cutter, comprising the steps of: providing a container having opposing end openings with sidewalls therebetween which define a container interior; placing the container over a portion of the head section so that the pin extends within the interior of the container; installing a spring spacer within the interior of the container about at least a portion of the circumference of the bearing pin at least one axial location; packing the container with a particulate treating medium; covering the container; and placing the pin and container into a furnace for a time and at a temperature to activate the treating medium.

  13. Finger Vein Recognition Based on a Personalized Best Bit Map

    PubMed Central

    Yang, Gongping; Xi, Xiaoming; Yin, Yilong

    2012-01-01

    Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern based method and uses only the best bits for matching. We first present the concept of PBBM and the generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PBBM achieves not only better performance, but also high robustness and reliability. In addition, PBBM can be used as a general framework for binary pattern based recognition. PMID:22438735
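
    The matching stage can be sketched as Hamming-style matching restricted to a personalized best-bit mask (a simplified illustration of the framework, not the published algorithm in full):

```python
import numpy as np

def pbbm_match_score(probe_code: np.ndarray,
                     enrolled_code: np.ndarray,
                     best_bit_mask: np.ndarray) -> float:
    """Fraction of agreeing bits, counted only over the enrolled user's
    personalized best-bit map (1 = reliable bit, 0 = ignored bit)."""
    used = best_bit_mask.astype(bool)
    agree = probe_code[used] == enrolled_code[used]
    return float(agree.mean()) if used.any() else 0.0

# Example with toy binary-pattern codes.
rng = np.random.default_rng(4)
enrolled = rng.integers(0, 2, 512)
probe = enrolled.copy()
probe[rng.choice(512, 60, replace=False)] ^= 1     # flip 60 noisy bits
mask = rng.integers(0, 2, 512)                     # stand-in for the PBBM
print(round(pbbm_match_score(probe, enrolled, mask), 3))
```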

  14. Fully Distrustful Quantum Bit Commitment and Coin Flipping

    NASA Astrophysics Data System (ADS)

    Silman, J.; Chailloux, A.; Aharon, N.; Kerenidis, I.; Pironio, S.; Massar, S.

    2011-06-01

    In the distrustful quantum cryptography model the parties have conflicting interests and do not trust one another. Nevertheless, they trust the quantum devices in their labs. The aim of the device-independent approach to cryptography is to do away with the latter assumption, and, consequently, significantly increase security. It is an open question whether the scope of this approach also extends to protocols in the distrustful cryptography model, thereby rendering them “fully” distrustful. In this Letter, we show that for bit commitment—one of the most basic primitives within the model—the answer is positive. We present a device-independent (imperfect) bit-commitment protocol, where Alice’s and Bob’s cheating probabilities are ≃0.854 and (3)/(4), which we then use to construct a device-independent coin flipping protocol with bias ≲0.336.

  15. 10 CFR 217.53 - Types of allocation orders.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Types of allocation orders. 217.53 Section 217.53 Energy DEPARTMENT OF ENERGY OIL ENERGY PRIORITIES AND ALLOCATIONS SYSTEM Allocation Actions § 217.53 Types of allocation orders. There are three types of allocation orders available for communicating allocation...

  16. Resource allocation using constraint propagation

    NASA Technical Reports Server (NTRS)

    Rogers, John S.

    1990-01-01

    The concept of constraint propagation was discussed. Performance increases are possible with careful application of these constraint mechanisms. The degree of performance increase is related to the interdependence of the different activities' resource usage. Although this method of applying constraints to activities and resources is often beneficial, it is clearly no panacea for the computational difficulties experienced in dynamic resource allocation and scheduling problems. A combined effort toward execution optimization in all areas of the system during development, together with the selection of an appropriate development environment, remains the best method of producing an efficient system.

  17. Photon-number-resolving detector with 10 bits of resolution

    SciTech Connect

    Jiang, Leaf A.; Dauler, Eric A.; Chang, Joshua T

    2007-06-15

    A photon-number-resolving detector with single-photon resolution is described and demonstrated. It has 10 bits of resolution, does not require cryogenic cooling, and is sensitive to near-IR wavelengths. This performance is achieved by flood illuminating a 32×32 element InxGa1-xAsP Geiger-mode avalanche photodiode array that has an integrated counter and digital readout circuit behind each pixel.

  18. Capped bit patterned media for high density magnetic recording

    NASA Astrophysics Data System (ADS)

    Li, Shaojing; Livshitz, Boris; Bertram, H. Neal; Inomata, Akihiro; Fullerton, Eric E.; Lomakin, Vitaliy

    2009-04-01

    A capped composite patterned medium design is described which comprises an array of hard elements exchange coupled to a continuous cap layer. The role of the cap layer is to lower the write field of the individual hard element and introduce ferromagnetic exchange interactions between hard elements to compensate the magnetostatic interactions. Modeling results show significant reduction in the reversal field distributions caused by the magnetization states in the array which is important to prevent bit errors and increase achievable recording densities.

  19. Color encoding for gamut extension and bit-depth extension

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao

    2005-02-01

    Monitor oriented RGB color spaces (e.g. sRGB) are widely applied for digital image representation because of the simplicity of displaying images on monitor displays. However, the physical gamut limits their ability to encode colors accurately for color images that are not limited to the display RGB gamut. To extend the encoding gamut, non-physical RGB primaries may be used to define the color space, or the RGB tone ranges may be extended beyond the physical range. An out-of-gamut color has at least one of the R, G, and B channels smaller than 0 or higher than 100%. Instead of using wide-gamut RGB primaries for gamut expansion, we may extend the tone ranges to expand the encoding gamut. Negative tone values and tone values over 100% are allowed. Methods to efficiently and accurately encode out-of-gamut colors are discussed in this paper. Interpretation bits are added to interpret the range of color values or to encode color values with a higher bit depth. The interpretation bits of the R, G, and B primaries can be packed and stored in an alpha channel in some image formats (e.g. TIFF) or stored in a data tag (e.g. in JPEG format). If a color image does not have colors that are out of a regular RGB gamut, a regular program (e.g. Photoshop) is able to manipulate the data correctly.
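
    One way to picture the interpretation-bit idea (a hypothetical packing, not necessarily the paper's exact layout) is to keep an 8-bit code per channel and add a 2-bit tag that records whether the stored value lies in the normal range, below 0%, or above 100%:

```python
# Hypothetical 2-bit "interpretation" tag per channel:
#   0 -> value is in [0%, 100%]   (stored as-is)
#   1 -> value is below 0%        (stored as the magnitude of the excursion)
#   2 -> value is above 100%      (stored as the excess over 100%)
IN_RANGE, BELOW, ABOVE = 0, 1, 2

def encode_channel(value_pct: float) -> tuple[int, int]:
    """Encode one extended-range channel value (in percent) as
    (8-bit code, 2-bit interpretation tag)."""
    if value_pct < 0:
        return min(255, round(-value_pct / 100 * 255)), BELOW
    if value_pct > 100:
        return min(255, round((value_pct - 100) / 100 * 255)), ABOVE
    return round(value_pct / 100 * 255), IN_RANGE

def decode_channel(code: int, tag: int) -> float:
    if tag == BELOW:
        return -code / 255 * 100
    if tag == ABOVE:
        return 100 + code / 255 * 100
    return code / 255 * 100

# Example: an out-of-gamut channel at 123% survives the round trip.
code, tag = encode_channel(123.0)
print(code, tag, round(decode_channel(code, tag), 1))   # 59 2 123.1
```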

  20. Application of morphological bit planes in retinal blood vessel extraction.

    PubMed

    Fraz, M M; Basit, A; Barman, S A

    2013-04-01

    The appearance of the retinal blood vessels is an important diagnostic indicator of various clinical disorders of the eye and the body. Retinal blood vessels have been shown to provide evidence of ophthalmic disease in terms of changes in diameter, branching angles, or tortuosity. This paper reports the development of an automated method for segmentation of blood vessels in retinal images. A unique combination of methods for retinal blood vessel skeleton detection and multidirectional morphological bit plane slicing is presented to extract the blood vessels from color retinal images. The skeleton of the main vessels is extracted by applying directional differential operators and then evaluating the combination of derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the retinal vasculature in ocular fundus images. A multidirectional top-hat operator with rotating structuring elements is used to emphasize the vessels in a particular direction, and information is extracted using bit plane slicing. An iterative region growing method is applied to integrate the main skeleton and the images resulting from bit plane slicing of vessel direction-dependent morphological filters. The approach is tested on two publicly available databases, DRIVE and STARE. The average accuracy achieved by the proposed method is 0.9423 for both databases, with strong sensitivity and specificity values as well; the algorithm outperforms the second human observer in terms of the precision of the segmented vessel tree.
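
    A minimal sketch of the directional top-hat and bit-plane-slicing steps (illustrative only, using SciPy's grayscale top-hat; skeleton detection and iterative region growing are omitted):

```python
import numpy as np
from scipy.ndimage import white_tophat

def line_footprint(length: int, angle_deg: float) -> np.ndarray:
    """Boolean line-shaped structuring element at a given angle."""
    fp = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-c, c, 2 * length):
        y, x = int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))
        fp[y, x] = True
    return fp

def directional_tophat_bitplanes(green_channel: np.ndarray, length=15, n_angles=12):
    """Apply rotating-line white top-hats and slice the response into bit planes."""
    responses = [white_tophat(green_channel, footprint=line_footprint(length, a))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    combined = np.max(responses, axis=0).astype(np.uint8)   # vessel-enhanced image
    planes = [((combined >> k) & 1) for k in range(8)]      # bit plane slicing
    return combined, planes

# Example on a synthetic image standing in for a fundus green channel.
img = (np.random.default_rng(5).random((128, 128)) * 255).astype(np.uint8)
enhanced, planes = directional_tophat_bitplanes(img)
print(enhanced.shape, len(planes))
```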

  1. High performance 14-bit pipelined redundant signed digit ADC

    NASA Astrophysics Data System (ADS)

    Narula, Swina; Pandey, Sujata

    2016-03-01

    A novel architecture for a pipelined redundant-signed-digit analog-to-digital converter (RSD-ADC) is presented, featuring a high signal-to-noise ratio (SNR), spurious-free dynamic range (SFDR), and signal-to-noise-plus-distortion ratio (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high-speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC with 1.5 bits per stage. The prototype ADC architecture accounts for capacitor mismatch, comparator offset, and finite op-amp gain error in the MDAC (residue amplification) stages. With the proposed ADC architecture, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB, and the SFDR is 102.8 dB at a sample rate of 100 MHz. The novel digital correction logic is transparent to the overall system, which is demonstrated using the 14-bit pipelined ADC. After a latency of 14 clocks, a digital output is available at every clock pulse. VHDL and MATLAB programs are used to describe the circuit behavior of the ADC. The proposed architecture is also capable of reducing the digital hardware, and hence the silicon area and the complexity of the design.

  2. Development of a Near-Bit MWD system

    SciTech Connect

    McDonald, W.J.; Pittard, G.T.

    1995-06-01

    The project objective is to develop a measurements-while-drilling (MWD) module that provides real-time reports of drilling conditions at the bit. The module is to support multiple types of sensors and to sample and encode their outputs in digital form under microprocessor control. The assembled message is to be electromagnetically transmitted along the drill string back to its associated receiver located in a collar typically 50-100 feet above the bit. The receiver demodulates the transmitted message and passes its data to the third-party wireline or MWD telemetry system for relay to the surface. The collar also houses the conventional MWD or wireline probe assembly. The completed Phase 1 program began with the preparation of detailed performance specifications and ended with the design, fabrication, and testing of a functioning prototype. The prototype was sized for operation with 6-3/4-inch multi-lobe mud motors due to the widespread use of this size motor in horizontal and directional drilling applications. The Phase 1 prototype provided inclination, temperature, and pressure information. The Phase 2 program objective is to expand the current sensor suite to include at least one type of formation evaluation measurement, such as formation resistivity or natural gamma ray. The Near-Bit system will be subjected to a rigorous series of shock and vibration tests followed by field testing to ensure it possesses the reliability and performance required for commercial success.

  3. On the Lorentz invariance of bit-string geometry

    SciTech Connect

    Noyes, H.P.

    1995-09-01

    We construct the class of integer-sided triangles and tetrahedra that respectively correspond to two or three discriminately independent bit-strings. In order to specify integer coordinates in this space, we take one vertex of a regular tetrahedron whose common edge length is an even integer as the origin of a line of integer length to the "point" and three integer distances to this "point" from the three remaining vertices of the reference tetrahedron. This - usually chiral - integer coordinate description of bit-string geometry is possible because three discriminately independent bit-strings generate four more; the Hamming measures of these seven strings always allow this geometrical interpretation. On another occasion we intend to prove the rotational invariance of this coordinate description. By identifying the corners of these figures with the positions of recording counters whose clocks are synchronized using the Einstein convention, we define velocities in this space. This suggests that it may be possible to define boosts and discrete Lorentz transformations in a space of integer coordinates. We relate this description to our previous work on measurement accuracy and the discrete ordered calculus of Etter and Kauffman (DOC).
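
    To make the construction concrete (a small numerical illustration, assuming that "discrimination" here is the bitwise XOR used in the bit-string literature), three strings generate four more by XOR, and the seven Hamming measures supply the integer lengths:

```python
def hamming_weight(bits: int) -> int:
    """Number of 1 bits (the Hamming measure of a bit-string)."""
    return bin(bits).count("1")

def seven_string_measures(a: int, b: int, c: int, n_bits: int) -> dict:
    """From three bit-strings (integers over n_bits), form the four additional
    strings generated by discrimination (assumed here to be bitwise XOR) and
    return the Hamming measures of all seven."""
    mask = (1 << n_bits) - 1
    strings = {"a": a, "b": b, "c": c,
               "a^b": a ^ b, "a^c": a ^ c, "b^c": b ^ c, "a^b^c": a ^ b ^ c}
    return {name: hamming_weight(s & mask) for name, s in strings.items()}

# Example with three 8-bit strings.
print(seven_string_measures(0b10110010, 0b01101100, 0b11100001, 8))
```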

  4. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…

  5. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years of potential utility for CT imaging applications. In this work, we investigate tailoring the optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was explored also with respect to different data conditions and activity up-takes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.

  6. Artificial intelligent techniques for optimizing water allocation in a reservoir watershed

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung

    2014-05-01

    This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm and an adaptive-network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; we then search for the optimal reservoir operating histogram using a genetic algorithm (GA), based on given demands and hydrological conditions, which serves as the optimal basis of input-output training patterns for modelling; and we finally build a suitable water allocation scheme by constructing an ANFIS model that learns the mapping between the designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area, to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme helps water managers reliably determine a suitable discount rate on water supply for both irrigation and public sectors, and thus can reduce the drought risk and the compensation induced by restrictions on agricultural water use.

  7. Dynamic and balanced capacity allocation scheme with uniform bandwidth for OFDM-PON systems

    NASA Astrophysics Data System (ADS)

    Lei, Cheng; Chen, Hongwei; Chen, Minghua; Yu, Ying; Guo, Qiang; Yang, Sigang; Xie, Shizhong

    2015-03-01

    As the bitrate of orthogonal frequency division multiplexing passive optical network (OFDM-PON) systems continuously increases, how to effectively allocate the system bandwidth among the huge number of optical network units (ONUs) is one of the key problems to solve before OFDM-PON can be practically deployed. Unlike traditional bandwidth allocation schemes, in this paper the transmission performance of a single ONU is for the first time taken into consideration and optimized. To reduce the manufacturing complexity and fully utilize the processing ability of the receivers, the system bandwidth is equally distributed among the ONUs. Bit loading is used to allocate the total transmission capacity, and power loading is used to guarantee that the ONUs have balanced transmission performance even if they operate at different bitrates. In this way, a dynamic and balanced capacity allocation scheme with uniform bandwidth for OFDM-PON systems can be realized. Finally, an experimental system is established to verify the feasibility of the proposed scheme, and the influence of the scheme on the whole system is also analyzed.
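
    A simplified sketch of the uniform-bandwidth bit- and power-loading idea (assuming known per-subcarrier SNRs and a generic SNR-gap loading rule; this is not the paper's exact algorithm):

```python
import numpy as np

def uniform_band_bit_power_loading(snr_db, gamma_db=6.0, max_bits=6):
    """Assign bits per subcarrier from measured SNRs (bit loading) and scale
    per-subcarrier power so each loaded subcarrier meets its constellation's
    SNR requirement with the same margin (power loading)."""
    snr = 10 ** (np.asarray(snr_db) / 10)
    gamma = 10 ** (gamma_db / 10)          # SNR gap of the coded modulation
    # Bit loading: largest constellation supportable at the measured SNR.
    bits = np.clip(np.floor(np.log2(1 + snr / gamma)), 0, max_bits).astype(int)
    # Required SNR for each chosen constellation, then the power scaling that
    # balances the margin across loaded subcarriers (assumes at least one
    # subcarrier carries bits).
    req_snr = gamma * (2.0 ** bits - 1)
    power = np.where(bits > 0, req_snr / snr, 0.0)
    power = power / power.sum() * np.count_nonzero(bits)   # normalize total power
    return bits, power

# Example: one ONU's uniform band of 16 subcarriers with varying SNR.
rng = np.random.default_rng(6)
snr_db = rng.uniform(8, 28, 16)
bits, power = uniform_band_bit_power_loading(snr_db)
print(bits.sum(), "bits/symbol for this ONU;", np.round(power, 2))
```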

  8. Constrained Allocation Flux Balance Analysis.

    PubMed

    Mori, Matteo; Hwa, Terence; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-06-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing regulation and metabolism to be bridged in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions of the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws.

  10. Latent IBP Compound Dirichlet Allocation.

    PubMed

    Archambeau, Cedric; Lakshminarayanan, Balaji; Bouchard, Guillaume

    2015-02-01

    We introduce the four-parameter IBP compound Dirichlet process (ICDP), a stochastic process that generates sparse non-negative vectors with potentially an unbounded number of entries. If we repeatedly sample from the ICDP we can generate sparse matrices with an infinite number of columns and power-law characteristics. We apply the four-parameter ICDP to sparse nonparametric topic modelling to account for the very large number of topics present in large text corpora and the power-law distribution of the vocabulary of natural languages. The model, which we call latent IBP compound Dirichlet allocation (LIDA), allows for power-law distributions both in the number of topics summarising the documents and in the number of words defining each topic. It can be interpreted as a sparse variant of the hierarchical Pitman-Yor process when applied to topic modelling. We derive an efficient and simple collapsed Gibbs sampler closely related to the collapsed Gibbs sampler of latent Dirichlet allocation (LDA), making the model applicable in a wide range of domains. Our nonparametric Bayesian topic model compares favourably to the widely used hierarchical Dirichlet process and its heavy tailed version, the hierarchical Pitman-Yor process, on benchmark corpora. Experiments demonstrate that accounting for the power-law distribution of real data is beneficial and that sparsity provides more interpretable results. PMID:26353244

  11. Constrained Allocation Flux Balance Analysis

    PubMed Central

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing regulation and metabolism to be bridged in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions of the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325

  12. An intelligent allocation algorithm for parallel processing

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.

    1988-01-01

    The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structure on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph. Hence, the focus of allocation has been on varying token-communication durations rather than varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon granularity and characteristics of the interprocessor communication network.
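
    A compact sketch of the critical-path idea behind such allocators (a generic list scheduler; the token-communication costs and network structure that the algorithm also considers are ignored here):

```python
from collections import defaultdict

def critical_path_schedule(durations, edges, num_procs):
    """Allocate DAG nodes to processors by descending critical-path length
    (bottom level), placing each node on the processor that can start it earliest."""
    succs, preds = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)

    # Bottom level = longest path from a node to any exit node.
    blevel = {}
    def bl(n):
        if n not in blevel:
            blevel[n] = durations[n] + max((bl(s) for s in succs[n]), default=0)
        return blevel[n]
    order = sorted(durations, key=bl, reverse=True)   # topological for positive durations

    proc_free = [0.0] * num_procs        # time at which each processor becomes free
    finish = {}                          # node -> finish time
    placement = {}
    for n in order:
        ready = max((finish[p] for p in preds[n]), default=0.0)
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        finish[n] = start + durations[n]
        proc_free[proc] = finish[n]
        placement[n] = proc
    return placement, max(finish.values())

# Example: a small fork-join graph scheduled on 2 processors.
durations = {"a": 2, "b": 3, "c": 2, "d": 1}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(critical_path_schedule(durations, edges, num_procs=2))
```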

  13. Computational models and resource allocation for supercomputers

    NASA Technical Reports Server (NTRS)

    Mauney, Jon; Agrawal, Dharma P.; Harcourt, Edwin A.; Choe, Young K.; Kim, Sukil

    1989-01-01

    There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments. Some case studies are presented, showing concrete computational models and the allocation strategies used.

  14. Graded bit patterned magnetic arrays fabricated via angled low-energy He ion irradiation.

    PubMed

    Chang, L V; Nasruallah, A; Ruchhoeft, P; Khizroev, S; Litvinov, D

    2012-07-11

    A bit patterned magnetic array based on Co/Pd magnetic multilayers with a binary perpendicular magnetic anisotropy distribution was fabricated. The binary anisotropy distribution was attained through angled helium ion irradiation of a bit edge using hydrogen silsesquioxane (HSQ) resist as an ion stopping layer to protect the rest of the bit. The viability of this technique was explored numerically and evaluated through magnetic measurements of the prepared bit patterned magnetic array. The resulting graded bit patterned magnetic array showed a 35% reduction in coercivity and a 9% narrowing of the standard deviation of the switching field.

  15. Divergence in plant and microbial allocation strategies explains continental patterns in microbial allocation and biogeochemical fluxes.

    PubMed

    Averill, Colin

    2014-10-01

    Allocation trade-offs shape ecological and biogeochemical phenomena at local to global scale. Plant allocation strategies drive major changes in ecosystem carbon cycling. Microbial allocation to enzymes that decompose carbon vs. organic nutrients may similarly affect ecosystem carbon cycling. Current solutions to this allocation problem prioritise stoichiometric tradeoffs implemented in plant ecology. These solutions may not maximise microbial growth and fitness under all conditions, because organic nutrients are also a significant carbon resource for microbes. I created multiple allocation frameworks and simulated microbial growth using a microbial explicit biogeochemical model. I demonstrate that prioritising stoichiometric trade-offs does not optimise microbial allocation, while exploiting organic nutrients as carbon resources does. Analysis of continental-scale enzyme data supports the allocation patterns predicted by this framework, and modelling suggests large deviations in soil C loss based on which strategy is implemented. Therefore, understanding microbial allocation strategies will likely improve our understanding of carbon cycling and climate.

  16. Constant time worker thread allocation via configuration caching

    DOEpatents

    Eichenberger, Alexandre E; O'Brien, John K. P.

    2014-11-04

    Mechanisms are provided for allocating threads for execution of a parallel region of code. A request for allocation of worker threads to execute the parallel region of code is received from a master thread. Cached thread allocation information identifying prior thread allocations that have been performed for the master thread is accessed. Worker threads are allocated to the master thread based on the cached thread allocation information. The parallel region of code is executed using the allocated worker threads.
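
    The patent abstract suggests a simple structure: remember, per master thread, the team of workers handed out last time and hand the same team back when it is still available. A hypothetical sketch follows; the class, method names, and locking scheme are assumptions, not the patented implementation.

```python
import threading

class CachingAllocator:
    """Toy sketch: reuse a master thread's previous worker allocation when possible."""
    def __init__(self, n_workers):
        self.free = set(range(n_workers))   # ids of idle worker threads
        self.cache = {}                     # (master_id, team_size) -> tuple of worker ids
        self.lock = threading.Lock()

    def allocate(self, master_id, team_size):
        with self.lock:
            cached = self.cache.get((master_id, team_size))
            if cached and all(w in self.free for w in cached):
                # fast path: reuse the cached configuration when all its workers are idle
                self.free.difference_update(cached)
                return list(cached)
            # slow path: build a fresh team and remember it for next time
            team = [self.free.pop() for _ in range(min(team_size, len(self.free)))]
            self.cache[(master_id, team_size)] = tuple(team)
            return team

    def release(self, team):
        with self.lock:
            self.free.update(team)

# toy usage
alloc = CachingAllocator(n_workers=8)
team = alloc.allocate(master_id=threading.get_ident(), team_size=4)
alloc.release(team)
team_again = alloc.allocate(master_id=threading.get_ident(), team_size=4)  # served from cache
```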

  17. Deep drilling basic research program shear bit design. Final report, March 1990-October 1995

    SciTech Connect

    Cohen, J.H.; Deskins, W.G.; Maurer, W.C.; Cooper, G.; Lee, J.

    1996-02-01

    Maurer Engineering Inc., under contract with the Gas Research Institute, evaluated drilling of deep gas wells to determine future research projects. The candidates providing the best possibility of success and maximum savings are PDC/TSP shear bits, slim-hole drilling, roller-cone bits, and downhole motors; of these, improvements in shear bits have the best opportunity for near-term results. Significant RD&D was conducted to optimize TSP bit design and efficiency for improved performance in deep drilling. Results of the Deep Drilling Basic Research Program showed that: Improvements in shear bits are the best way to reduce costs and increase efficiency of deep drilling. Changes in cutter size, density, and orientation significantly impact the penetration rate and life of TSP shear bits. Power delivered to the rock controls penetration rate. Higher power rates mean faster drilling. Correctly designed TSP shear bits can deliver more power to the rock.

  18. One-cone bits improve efficiency of drilling small diameter holes

    SciTech Connect

    Langford, J.W. Jr.

    1999-02-01

    A special center bit developed by RBI-Gearhart for the Ocean Drilling Program (ODP) for scientific coring of the ocean floor has grown and evolved into a very efficient slimhole bit for the oil and gas drilling industry. The ODP desired to have a single-cone bit that was wireline retrievable. This enabled the driller to either core or drill ahead, depending on whether a core barrel or a bit was installed in the roller cone core bit. Near-term new product development goals were modified based on knowledge gained during every step of the process until the product's cost-effective operating range was established in the market. The evolution of the one-cone bit has taken place due to the involvement, insights and cooperation among several producers, drilling contractors and the bit manufacturer.

  19. Multi-bit quantum random number generation by measuring positions of arrival photons

    SciTech Connect

    Yan, Qiurong; Zhao, Baosheng; Liao, Qinghong; Zhou, Nanrun

    2014-10-15

    We report upon the realization of a novel multi-bit optical quantum random number generator by continuously measuring the arrival positions of photons emitted from an LED using an MCP-based WSA photon counting imaging detector. A spatial encoding method is proposed to extract multi-bit random numbers from the position coordinates of each detected photon. The randomness of the bit sequence relies on the intrinsic randomness of the quantum physical processes of photonic emission and subsequent photoelectric conversion. A prototype has been built and the random bit generation rate could reach 8 Mbit/s, with a random bit generation efficiency of 16 bits per detected photon. An FPGA implementation of Huffman coding is proposed to reduce the bias of the raw extracted random bits. The random numbers passed all tests for physical random number generators.
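
    A sketch of the spatial-encoding idea on simulated data: each photon's (x, y) arrival coordinate on a 256 x 256 grid is mapped to 16 raw bits, matching the 16 bits per detected photon quoted in the abstract. The grid size and bit layout are assumptions, and the debiasing (Huffman-coding) step is omitted.

```python
import numpy as np

def bits_from_positions(xs, ys, bits_per_axis=8):
    """Map each photon's (x, y) arrival position to 2*bits_per_axis raw random bits
    by concatenating the binary representations of the two coordinates."""
    out = []
    mask = (1 << bits_per_axis) - 1
    for x, y in zip(xs, ys):
        word = ((int(x) & mask) << bits_per_axis) | (int(y) & mask)
        out.extend((word >> k) & 1 for k in reversed(range(2 * bits_per_axis)))
    return np.array(out, dtype=np.uint8)

# toy usage with simulated arrival positions on a 256 x 256 detector
rng = np.random.default_rng(0)
xs = rng.integers(0, 256, size=1000)
ys = rng.integers(0, 256, size=1000)
bits = bits_from_positions(xs, ys)   # 16 bits per detected photon
print(len(bits), "bits; fraction of ones:", bits.mean())
```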

  1. Two approaches for ultrafast random bit generation based on the chaotic dynamics of a semiconductor laser.

    PubMed

    Li, Nianqiang; Kim, Byungchil; Chizhevsky, V N; Locquet, A; Bloch, M; Citrin, D S; Pan, Wei

    2014-03-24

    This paper reports the experimental investigation of two different approaches to random bit generation based on the chaotic dynamics of a semiconductor laser with optical feedback. By computing high-order finite differences of the chaotic laser intensity time series, we obtain time series with symmetric statistical distributions that are more conducive to ultrafast random bit generation. The first approach is guided by information-theoretic considerations and could potentially reach random bit generation rates as high as 160 Gb/s by extracting 4 bits per sample. The second approach is based on pragmatic considerations and could lead to rates of 2.2 Tb/s by extracting 55 bits per sample. The randomness of the bit sequences obtained from the two approaches is tested against three standard randomness tests (ENT, Diehard, and NIST tests), as well as by calculating the statistical bias and the serial correlation coefficients on longer sequences of random bits than those used in the standard tests.
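
    The post-processing step described, high-order finite differencing followed by keeping a few least significant bits of each sample, can be sketched as follows on a simulated trace. The difference order, ADC width, and number of retained bits are illustrative; 4 bits per sample corresponds to the first approach in the abstract.

```python
import numpy as np

def random_bits_from_waveform(samples, order=4, n_lsb=4, adc_bits=8):
    """Compute an n-th order finite difference of a sampled intensity waveform,
    requantize it, and keep the n_lsb least significant bits of each sample."""
    d = np.diff(np.asarray(samples, dtype=np.int64), n=order)
    codes = np.mod(d, 1 << adc_bits)            # toy requantization to unsigned codes
    bits = ((codes[:, None] >> np.arange(n_lsb)) & 1).astype(np.uint8)
    return bits.ravel()

# simulated stand-in for a digitized chaotic intensity trace (logistic map)
x, trace = 0.37, []
for _ in range(100000):
    x = 4.0 * x * (1.0 - x)
    trace.append(x)
samples = np.round(255 * np.array(trace)).astype(np.int64)
bits = random_bits_from_waveform(samples)
print(len(bits), "bits; fraction of ones:", bits.mean())
```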

  2. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG International Standards. The algorithm for the 2-d DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.
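
    For reference, the transform the scheme approximates is the separable 8 x 8 type-II DCT used by JPEG/MPEG. A plain floating-point version is sketched below; the paper's shift-and-add integer approximation, at roughly 8 additions per pixel, is not reproduced here.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal type-II DCT basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    """Separable 2-D DCT of a square block: one 1-D pass over columns, one over rows."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2(block)          # DC coefficient equals the block mean times 8
print(coeffs[0, 0], block.mean() * 8)
```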

  3. Floating-point function generation routines for 16-bit microcomputers

    NASA Technical Reports Server (NTRS)

    Mackin, M. A.; Soeder, J. F.

    1984-01-01

    Several computer subroutines have been developed that interpolate three types of nonanalytic functions: univariate, bivariate, and map. The routines use data in floating-point form. However, because they are written for use on a 16-bit Intel 8086 system with an 8087 mathematical coprocessor, they execute as fast as routines using data in scaled integer form. Although all of the routines are written in assembly language, they have been implemented in a modular fashion so as to facilitate their use with high-level languages.
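
    A plain-Python sketch of the univariate and bivariate interpolation cases is given below. Clamping at the table edges and the argument layout are assumptions; the original routines were assembly-coded for the 8086/8087.

```python
import bisect

def interp1(xs, ys, x):
    """Piecewise-linear interpolation of a tabulated univariate function (xs ascending)."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def interp2(xs, ys, table, x, y):
    """Bilinear interpolation of a tabulated bivariate function table[i][j] = f(xs[i], ys[j])."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    f00, f01 = table[i][j], table[i][j + 1]
    f10, f11 = table[i + 1][j], table[i + 1][j + 1]
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# toy usage
xs = [0.0, 1.0, 2.0]
ys = [0.0, 10.0, 40.0]
print(interp1(xs, ys, 1.5))   # -> 25.0
```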

  4. 32-Bit computer for large memory applications on FASTBUS

    SciTech Connect

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    A FASTBUS based 32-bit computer is being built at Los Alamos National Laboratory for use in systems requiring large fast memory in the FASTBUS environment. A separate local execution bus allows data reduction to proceed concurrently with other FASTBUS operations. The computer, which can operate in either master or slave mode, includes the National Semiconductor NS32032 chip set with demand paged memory management, floating point slave processor, interrupt control unit, timers, and time-of-day clock. The 16.0 megabytes of random access memory are interleaved to allow windowed direct memory access on and off the FASTBUS at 80 megabytes per second.

  5. A 16-Bit Microcomputer Based Biomedical Signal Processor

    PubMed Central

    Sarkady, Antal A.; Wallingford, Errol E.

    1979-01-01

    A versatile low-cost, two-channel signal processor was developed using a 16-bit microcomputer. The instrument can process biomedical signals in the time and frequency domains using a fast, fixed-point FFT algorithm. Many averaged signal processing functions and their estimates are computed efficiently on-line and in near real time using look-up tables and directives. The signal processing techniques were applied to phonocardiograms to develop a non-invasive technique to assess the severity of valvar aortic stenosis in children. A murmur power spectral analysis is presented which yields a statistically reliable spectrum. Envelograms are defined and found to be useful for timing cardiac events.
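
    The averaged spectral estimates described can be sketched with a short segment-averaged FFT power spectrum. The segment length, window choice, and normalization below are assumptions, not the instrument's fixed-point implementation.

```python
import numpy as np

def averaged_power_spectrum(x, fs, nfft=256):
    """Average the periodograms of consecutive windowed segments (a basic Welch-style estimate)."""
    window = np.hanning(nfft)
    n_seg = len(x) // nfft
    segs = x[:n_seg * nfft].reshape(n_seg, nfft) * window
    spec = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    psd = spec.mean(axis=0) / (fs * np.sum(window ** 2))
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, psd

# toy usage: a 120 Hz tone in noise
fs = 2000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.random.default_rng(3).normal(size=t.size)
freqs, psd = averaged_power_spectrum(x, fs)
print(freqs[np.argmax(psd)])    # peak near 120 Hz
```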

  6. All-optical pseudorandom bit sequences generator based on TOADs

    NASA Astrophysics Data System (ADS)

    Sun, Zhenchao; Wang, Zhi; Wu, Chongqing; Wang, Fu; Li, Qiang

    2016-03-01

    A scheme for an all-optical pseudorandom bit sequence (PRBS) generator is demonstrated with an optical logic gate 'XNOR' and an all-optical wavelength converter based on cascaded Tera-Hertz Optical Asymmetric Demultiplexers (TOADs). Its feasibility is verified by generation of a return-to-zero on-off keying (RZ-OOK) 2^63-1 PRBS at a speed of 1 Gb/s with 10% duty ratio. The high randomness of the ultra-long-cycle PRBS is validated by successfully passing the standard benchmark test.
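
    In electrical-domain terms, the same sequence family comes from a linear feedback shift register with XNOR feedback. A software sketch is below; taps (7, 6) give the short PRBS7 for illustration, and taps (63, 62) are the commonly quoted generator for the 2^63-1 sequence mentioned in the abstract.

```python
def prbs_xnor(taps, n_bits, seed=1):
    """Pseudo-random bit sequence from a Fibonacci LFSR with XNOR feedback.
    taps are 1-based bit positions of the generator polynomial, e.g. (7, 6) for PRBS7."""
    degree = max(taps)
    state = seed & ((1 << degree) - 1)          # any state except all-ones works
    out = []
    for _ in range(n_bits):
        out.append(state & 1)
        fb = 1
        for t in taps:
            fb ^= (state >> (t - 1)) & 1        # complement of the XOR parity = XNOR
        state = (state >> 1) | (fb << (degree - 1))
    return out

# PRBS7 (period 2**7 - 1); the 2**63 - 1 sequence would use taps (63, 62)
bits = prbs_xnor((7, 6), 254)
print(sum(bits), "ones in", len(bits), "bits")
```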

  7. Development of an autonomous 32-bit intelligent device controller

    NASA Astrophysics Data System (ADS)

    Bishop, D.; Waters, G.; Dale, D.; Ewert, T.; Harrison, D.; Lam, J.; Keitel, R.

    1994-12-01

    This paper describes the development and present status of an intelligent device controller for embedded systems. The controller is a low-cost single-board module in Euro-card format based on a Motorola MC68332 microcontroller. An onboard Ethernet interface allows software downloading and remote device control. Also included are 12 ADC channels, 4 DAC channels, 56 bits of digital I/O and 3 serial ports. Custom modules may be added using a backplane bus. The controller is designed to function as a VME slave device. A multitasking environment is provided by the VxWorks kernel.

  8. Simplified quantum bit commitment using single photon nonlocality

    NASA Astrophysics Data System (ADS)

    He, Guang Ping

    2014-10-01

    We simplified our previously proposed quantum bit commitment (QBC) protocol based on the Mach-Zehnder interferometer, by replacing symmetric beam splitters with asymmetric ones. It eliminates the need for random sending time of the photons; thus, the feasibility and efficiency are both improved. The protocol is immune to the cheating strategy in the Mayers-Lo-Chau no-go theorem of unconditionally secure QBC, because the density matrices of the committed states do not satisfy a crucial condition on which the no-go theorem holds.

  9. Use of bicenter PDC bit reduces drilling cost

    SciTech Connect

    Casto, R.G.; Senese, M.

    1995-11-13

    The use of bicenter polycrystalline diamond compact (PDC) bit technology, dual-power-head down-hole motors, and oil-based drilling fluids helped save significant costs on a recent well drilled in the Gulf of Mexico. Not only has underreaming been eliminated, but the overall rate of penetration has been significantly increased. Directional control problems experienced during one phase of the well may limit use of the technique in difficult directional wells. This article discusses both the successes and the failures of this technique during the drilling of two phases of the same Gulf of Mexico well.

  10. Based on reception in general with bit-by-bit decision-making algorithm for signal processing in fiber optic telecommunication systems

    NASA Astrophysics Data System (ADS)

    Burdin, Vladimir A.; Kartashevsky, Vyacheslav G.; Grigorov, Igor V.

    2016-03-01

    This paper presents the «reception in general with bit-by-bit decision-making» algorithm, an alternative to the Viterbi algorithm, and proposes its use in fiber-optic transmission systems. Its features are compared with those of the Viterbi algorithm for digital signal processing in optical communication channels.

  11. How to Do Random Allocation (Randomization)

    PubMed Central

    Shin, Wonshik

    2014-01-01

    Purpose To explain the concept and procedure of random allocation as used in a randomized controlled study. Methods We explain the general concept of random allocation and demonstrate how to perform the procedure easily and how to report it in a paper. PMID:24605197
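
    As a concrete illustration of the procedure the paper explains, a permuted-block randomization sketch is shown below; the group labels, block size, and fixed seed are arbitrary choices for the example.

```python
import random

def block_randomize(n_subjects, groups=("A", "B"), block_size=4, seed=2024):
    """Permuted-block randomization: every consecutive block contains each group equally often."""
    assert block_size % len(groups) == 0
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)                  # randomize order within the block
        allocation.extend(block)
    return allocation[:n_subjects]

print(block_randomize(10))
```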

  12. 10 CFR 490.503 - Credit allocation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Credit allocation. 490.503 Section 490.503 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ALTERNATIVE FUEL TRANSPORTATION PROGRAM Alternative Fueled Vehicle... described in section 490.507 of this part, DOE shall allocate one credit for each alternative fueled...

  13. 10 CFR 490.503 - Credit allocation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Credit allocation. 490.503 Section 490.503 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ALTERNATIVE FUEL TRANSPORTATION PROGRAM Alternative Fueled Vehicle... described in section 490.507 of this part, DOE shall allocate one credit for each alternative fueled...

  14. 10 CFR 490.503 - Credit allocation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Credit allocation. 490.503 Section 490.503 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ALTERNATIVE FUEL TRANSPORTATION PROGRAM Alternative Fueled Vehicle... described in section 490.507 of this part, DOE shall allocate one credit for each alternative fueled...

  15. 10 CFR 490.503 - Credit allocation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Credit allocation. 490.503 Section 490.503 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ALTERNATIVE FUEL TRANSPORTATION PROGRAM Alternative Fueled Vehicle... described in section 490.507 of this part, DOE shall allocate one credit for each alternative fueled...

  16. 10 CFR 490.503 - Credit allocation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Credit allocation. 490.503 Section 490.503 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ALTERNATIVE FUEL TRANSPORTATION PROGRAM Alternative Fueled Vehicle... described in section 490.507 of this part, DOE shall allocate one credit for each alternative fueled...

  17. Thematic Mapper data for forest resource allocation

    NASA Technical Reports Server (NTRS)

    Zeff, Ilene S.; Merry, Carolyn J.

    1993-01-01

    A technique for classifying a Landsat Thematic Mapper image was demonstrated on the Wayne National Forest of southeastern Ohio. The classified image was integrated into a geographic information system database, and prescriptive forest land use allocation models were developed using the techniques of cartographic modeling. Timber harvest sites and accompanying haul roads were allocated.

  18. 42 CFR 24.2 - Allocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Allocation. 24.2 Section 24.2 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES PERSONNEL SENIOR BIOMEDICAL RESEARCH SERVICE § 24.2 Allocation. (a) The Secretary, within the number authorized in the PHS Act, shall determine...

  19. 42 CFR 24.2 - Allocation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Allocation. 24.2 Section 24.2 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES PERSONNEL SENIOR BIOMEDICAL RESEARCH SERVICE § 24.2 Allocation. (a) The Secretary, within the number authorized in the PHS Act, shall determine...

  20. 42 CFR 24.2 - Allocation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Allocation. 24.2 Section 24.2 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES PERSONNEL SENIOR BIOMEDICAL RESEARCH SERVICE § 24.2 Allocation. (a) The Secretary, within the number authorized in the PHS Act, shall determine...

  1. 42 CFR 24.2 - Allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Allocation. 24.2 Section 24.2 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES PERSONNEL SENIOR BIOMEDICAL RESEARCH SERVICE § 24.2 Allocation. (a) The Secretary, within the number authorized in the PHS Act, shall determine...

  2. 42 CFR 24.2 - Allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Allocation. 24.2 Section 24.2 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES PERSONNEL SENIOR BIOMEDICAL RESEARCH SERVICE § 24.2 Allocation. (a) The Secretary, within the number authorized in the PHS Act, shall determine...

  3. 45 CFR 304.15 - Cost allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... FEDERAL FINANCIAL PARTICIPATION § 304.15 Cost allocation. A State agency in support of its claims under title IV-D of the Social Security Act must have an approved cost allocation plan on file with the Department in accordance with the requirements contained in Subpart E of 45 CFR part 95. Subpart E also...

  4. 45 CFR 304.15 - Cost allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FEDERAL FINANCIAL PARTICIPATION § 304.15 Cost allocation. A State agency in support of its claims under title IV-D of the Social Security Act must have an approved cost allocation plan on file with the Department in accordance with the requirements contained in Subpart E of 45 CFR part 95. Subpart E also...

  5. Resource Allocation in Classrooms. Final Report.

    ERIC Educational Resources Information Center

    Thomas, J. Alan

    This report deals with the allocation of resources within classrooms and homes. It is based on the assumption that learning occurs through a set of processes that require the utilization of human and material resources. It is assumed that the study of resource allocation at the micro level will help provide an understanding of the effect on…

  6. 45 CFR 98.55 - Cost allocation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Cost allocation. 98.55 Section 98.55 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Use of Child Care and Development Funds § 98.55 Cost allocation. (a) The Lead Agency and...

  7. 45 CFR 98.55 - Cost allocation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Cost allocation. 98.55 Section 98.55 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Use of Child Care and Development Funds § 98.55 Cost allocation. (a) The Lead Agency and...

  8. 45 CFR 98.55 - Cost allocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Cost allocation. 98.55 Section 98.55 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Use of Child Care and Development Funds § 98.55 Cost allocation. (a) The Lead Agency and...

  9. 45 CFR 98.55 - Cost allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Cost allocation. 98.55 Section 98.55 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Use of Child Care and Development Funds § 98.55 Cost allocation. (a) The Lead Agency and...

  10. 45 CFR 98.55 - Cost allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Cost allocation. 98.55 Section 98.55 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Use of Child Care and Development Funds § 98.55 Cost allocation. (a) The Lead Agency and...

  11. 50 CFR 600.517 - Allocations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Allocations. 600.517 Section 600.517 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MAGNUSON-STEVENS ACT PROVISIONS Foreign Fishing § 600.517 Allocations. The...

  12. 24 CFR 594.15 - Allocation amounts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Allocation amounts. 594.15 Section... § 594.15 Allocation amounts. (a) Amounts and match requirement. HUD will make grants, in the form of... for less than the maximum amount established by statute, and to limit the number of times a...

  13. 45 CFR 304.15 - Cost allocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT... FEDERAL FINANCIAL PARTICIPATION § 304.15 Cost allocation. A State agency in support of its claims under title IV-D of the Social Security Act must have an approved cost allocation plan on file with...

  14. Acquisitions Allocations: Fairness, Equity and Bundled Pricing.

    ERIC Educational Resources Information Center

    Packer, Donna

    2001-01-01

    Examined the effect of an interdisciplinary Web-based citation database with full text, the ProQuest Research Library, on the Western State University library's acquisitions allocation plan. Used list price of full-text journals to calculate increases in acquisitions funding. A list of articles discussing formula allocation is appended.…

  15. 50 CFR 660.55 - Allocations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... PCGFMP. Allocation of black rockfish is described in paragraph (l) of this section. Allocation of Pacific... RF North of 40°10′ N. lat. 81 18 Minor Slope RF South of 40°10′ N. lat. 63 37 Dover Sole 95 5 English... through the biennial harvest specifications and management measures process. (k) (l) Black...

  16. 50 CFR 660.55 - Allocations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PCGFMP. Allocation of black rockfish is described in paragraph (l) of this section. Allocation of Pacific... RF North of 40°10′ N. lat. 81 18 Minor Slope RF South of 40°10′ N. lat. 63 37 Dover Sole 95 5 English... through the biennial harvest specifications and management measures process. (k) (l) Black...

  17. 50 CFR 660.55 - Allocations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PCGFMP. Allocation of black rockfish is described in paragraph (l) of this section. Allocation of Pacific... Dover Sole 95 5 English Sole 95 5 Petrale Sole 95 5 Arrowtooth Flounder 95 5 Starry Flounder 50 50 Other... measures process. (l) Black rockfish harvest guideline. The commercial tribal harvest guideline for...

  18. 44 CFR 304.4 - Allocations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Allocations. 304.4 Section 304.4 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY PREPAREDNESS CONSOLIDATED GRANTS TO INSULAR AREAS § 304.4 Allocations. For each Federal...

  19. 23 CFR 660.107 - Allocations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Allocations. 660.107 Section 660.107 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS SPECIAL PROGRAMS (DIRECT FEDERAL) Forest Highways § 660.107 Allocations. On October 1 of each fiscal year, the FHWA...

  20. 15 CFR 336.4 - Allocation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Allocation. 336.4 Section 336.4 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) INTERNATIONAL... § 336.4 Allocation. (a) The Tariff Rate Quota licenses will be issued to eligible manufacturers on...

  1. 48 CFR 5452.249 - Allocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Allocation. 5452.249 Section 5452.249 Federal Acquisition Regulations System DEFENSE LOGISTICS AGENCY, DEPARTMENT OF DEFENSE SOLICITATION PROVISIONS AND CONTRACT CLAUSES Texts of Provisions and Clauses 5452.249 Allocation. The...

  2. 48 CFR 5452.249 - Allocation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Allocation. 5452.249 Section 5452.249 Federal Acquisition Regulations System DEFENSE LOGISTICS AGENCY, DEPARTMENT OF DEFENSE SOLICITATION PROVISIONS AND CONTRACT CLAUSES Texts of Provisions and Clauses 5452.249 Allocation. The...

  3. 48 CFR 5452.249 - Allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Allocation. 5452.249 Section 5452.249 Federal Acquisition Regulations System DEFENSE LOGISTICS AGENCY, DEPARTMENT OF DEFENSE SOLICITATION PROVISIONS AND CONTRACT CLAUSES Texts of Provisions and Clauses 5452.249 Allocation. The...

  4. 45 CFR 205.150 - Cost allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ASSISTANCE PROGRAMS § 205.150 Cost allocation. A State plan under title I, IV-A, X, XIV, or XVI (AABD) of the Social Security Act must provide that the State agency will have an approved cost allocation plan on file with the Department in accordance with the requirements contained in subpart E of 45 CFR part...

  5. Resource Allocation in Public Research Universities

    ERIC Educational Resources Information Center

    Santos, Jose L.

    2007-01-01

    The purpose of this study was to conduct an econometric analysis of internal resource allocation. Two theories are used for this study of resource allocation in public research universities, and these are: (1) Theory of the Firm; and (2) Resource Dependence Theory. This study used the American Association of Universities Data Exchange (AAUDE)…

  6. 42 CFR 457.228 - Cost allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... accordance with the requirements contained in subpart E of 45 CFR part 95. Subpart E also sets forth the... 42 Public Health 4 2011-10-01 2011-10-01 false Cost allocation. 457.228 Section 457.228 Public...; Reduction of Federal Medical Payments § 457.228 Cost allocation. A State plan must provide that the...

  7. 42 CFR 433.34 - Cost allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Department in accordance with the requirements contained in subpart E of 45 CFR part 95. Subpart E also sets... 42 Public Health 4 2011-10-01 2011-10-01 false Cost allocation. 433.34 Section 433.34 Public... Provisions § 433.34 Cost allocation. A State plan under Title XIX of the Social Security Act must...

  8. Rethinking Reinforcement: Allocation, Induction, and Contingency

    ERIC Educational Resources Information Center

    Baum, William M.

    2012-01-01

    The concept of reinforcement is at least incomplete and almost certainly incorrect. An alternative way of organizing our understanding of behavior may be built around three concepts: "allocation," "induction," and "correlation." Allocation is the measure of behavior and captures the centrality of choice: All behavior entails choice and consists of…

  9. Rapid programmable/code-length-variable, time-domain bit-by-bit code shifting for high-speed secure optical communication.

    PubMed

    Gao, Zhensen; Dai, Bo; Wang, Xu; Kataoka, Nobuyuki; Wada, Naoya

    2011-05-01

    We propose and experimentally demonstrate a time-domain bit-by-bit code-shifting scheme that can rapidly program ultralong, code-length-variable optical codes by using only a dispersive element and a high-speed phase modulator for improving information security. The proposed scheme operates in the bit-overlap regime and could eliminate the vulnerability of extracting the code by analyzing the fine structure of the time-domain spectral phase encoded signal. It is also intrinsically immune to eavesdropping via conventional power detection and differential-phase-shift-keying (DPSK) demodulation attacks. With this scheme, 10 Gbits/s of return-to-zero DPSK data secured by bit-by-bit code shifting using up to 1024-chip optical code patterns has been transmitted over 49 km error free. The proposed scheme exhibits the potential for high-data-rate secure optical communication and for realizing even a one-time pad.

  10. Optimization-based image reconstruction with artifact reduction in C-arm CBCT

    NASA Astrophysics Data System (ADS)

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-10-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing the dominant, low-frequency data artifacts caused by, e.g., data truncation; and the Chambolle–Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstruction, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility.

  11. Artifact reduction in short-scan CBCT by use of optimization-based reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y.; Pan, Xiaochuan

    2016-05-01

    Increasing interest in optimization-based reconstruction exists in research on, and applications of, cone-beam computed tomography (CBCT), because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp–Davis–Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as the image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. Results of the investigation reveal that appropriately designed optimization-based reconstruction, including the image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration.

  13. Asymmetric programming: a highly reliable metadata allocation strategy for MLC NAND flash memory-based sensor systems.

    PubMed

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and the increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it is critical to enhance the reliability of metadata, which occupies only a small portion of the storage space but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages, which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
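
    The core allocation rule can be sketched in a few lines: send metadata writes to the more reliable MSB pages and ordinary data to LSB pages. The even/odd page pairing and pool structure below are illustrative assumptions, not a real MLC page map.

```python
class AsymmetricAllocator:
    """Toy sketch of asymmetric page allocation: metadata goes to MSB pages,
    ordinary data to LSB pages. Page numbering here is an illustrative assumption."""
    def __init__(self, pages_per_block=128):
        # assume even page indices are LSB pages and odd ones are their paired MSB pages
        self.lsb_pages = [p for p in range(pages_per_block) if p % 2 == 0]
        self.msb_pages = [p for p in range(pages_per_block) if p % 2 == 1]

    def allocate(self, is_metadata):
        pool = self.msb_pages if is_metadata else self.lsb_pages
        if not pool:                       # fall back if the preferred pool is exhausted
            pool = self.lsb_pages if is_metadata else self.msb_pages
        return pool.pop(0) if pool else None

# toy usage
alloc = AsymmetricAllocator()
meta_page = alloc.allocate(is_metadata=True)    # -> an MSB page index
data_page = alloc.allocate(is_metadata=False)   # -> an LSB page index
print(meta_page, data_page)
```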

  15. Making sense of peak load cost allocations

    SciTech Connect

    Power, T.M.

    1995-03-15

    When it comes to cost allocation, common wisdom assigns costs in proportion to class contributions to peak loads. The justification is simple: since the equipment had to be sized to meet peak-day loads, those costs should be allocated on the same basis. Many different peak allocators have been developed on this assumption: single coincident peak contribution, sum of coincident peaks, noncoincident peak, average and excess demand, peak and average demand, base and extra capacity, and so on. Such pure peak-load allocators may not be politically acceptable, but conceptually, at least, they appear to offer the only defensible approach. Nevertheless, where capacity can be added with significant economies of scale, making cost allocations in proportion to peak loads violates well-known relationships between economics and engineering. What is missing is any tracing of the way in which the peak-load design criteria actually influence the costs incurred.

  16. 49 CFR 33.53 - Types of allocation orders.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Allocation Actions § 33.53 Types of allocation orders. There are three types of allocation orders available for communicating allocation actions. These are: (a) Set-aside: An official action that requires a... 49 Transportation 1 2013-10-01 2013-10-01 false Types of allocation orders. 33.53 Section...

  17. 7 CFR 761.205 - Computing the formula allocation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Computing the formula allocation. 761.205 Section 761... Funds to State Offices § 761.205 Computing the formula allocation. (a) The formula allocation for FO, CL... program that the National Office allocates to a State Office. formula allocation = (amount available...

  18. Guaranteed energy-efficient bit reset in finite time.

    PubMed

    Browne, Cormac; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2014-09-01

    Landauer's principle states that it costs at least k_B T ln 2 of work to reset one bit in the presence of a heat bath at temperature T. The bound of k_B T ln 2 is achieved in the unphysical infinite-time limit. Here we ask what is possible if one is restricted to finite-time protocols. We prove analytically that it is possible to reset a bit with a work cost close to k_B T ln 2 in a finite time. We construct an explicit protocol that achieves this, which involves thermalizing and changing the system's Hamiltonian so as to avoid quantum coherences. Using concepts and techniques pertaining to single-shot statistical mechanics, we furthermore prove that the heat dissipated is exponentially close to the minimal amount possible not just on average, but guaranteed with high confidence in every run. Moreover, we exploit the protocol to design a quantum heat engine that works near the Carnot efficiency in finite time.

  19. Device-independent bit commitment based on the CHSH inequality

    NASA Astrophysics Data System (ADS)

    Aharon, N.; Massar, S.; Pironio, S.; Silman, J.

    2016-02-01

    Bit commitment and coin flipping occupy a unique place in the device-independent landscape, as the only device-independent protocols thus far suggested for these tasks are reliant on tripartite GHZ correlations. Indeed, we know of no other bipartite tasks, which admit a device-independent formulation, but which are not known to be implementable using only bipartite nonlocality. Another interesting feature of these protocols is that the pseudo-telepathic nature of GHZ correlations—in contrast to the generally statistical character of nonlocal correlations, such as those arising in the violation of the CHSH inequality—is essential to their formulation and analysis. In this work, we present a device-independent bit commitment protocol based on CHSH testing, which achieves the same security as the optimal GHZ-based protocol, albeit at the price of fixing the time at which Alice reveals her commitment. The protocol is analyzed in the most general settings, where the devices are used repeatedly and may have long-term quantum memory. We also recast the protocol in a post-quantum setting where both honest and dishonest parties are restricted only by the impossibility of signaling, and find that overall the supra-quantum structure allows for greater security.

  20. A 32-bit Ultrafast Parallel Correlator using Resonant Tunneling Devices

    NASA Technical Reports Server (NTRS)

    Kulkarni, Shriram; Mazumder, Pinaki; Haddad, George I.

    1995-01-01

    An ultrafast 32-bit pipeline correlator has been implemented using resonant tunneling diodes (RTDs) and heterojunction bipolar transistors (HBTs). The negative differential resistance (NDR) characteristic of RTDs is the basis of logic gates with a self-latching property that eliminates the pipeline area and delay overheads which limit throughput in conventional technologies. The circuit topology also allows threshold logic functions such as minority/majority to be implemented in a compact manner, resulting in a reduction of the overall complexity and delay of arbitrary logic circuits. The parallel correlator is an essential component in code-division multiple-access (CDMA) transceivers used for the continuous calculation of correlation between an incoming data stream and a PN sequence. Simulation results show that a nano-pipelined correlator can provide an effective throughput of one 32-bit correlation every 100 picoseconds, using minimal hardware, with a power dissipation of 1.5 watts. RTD+HBT logic gates have been fabricated, and the RTD+HBT correlator is compared with state-of-the-art complementary metal oxide semiconductor (CMOS) implementations.
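
    Functionally, each correlator stage compares a 32-bit window of the incoming stream against the PN sequence with XNOR gates and counts agreements. A software sketch of that behaviour (not the RTD/HBT pipeline itself) is given below.

```python
def correlate32(window, pn):
    """Correlate a 32-bit data window against a 32-bit PN word:
    XNOR the two words and count agreeing bit positions."""
    agreements = ~(window ^ pn) & 0xFFFFFFFF
    matches = bin(agreements).count("1")
    return 2 * matches - 32          # +1 per agreement, -1 per disagreement

def sliding_correlation(bits, pn):
    """Slide a 32-bit window over an incoming bit stream, one correlation per shift."""
    window, out = 0, []
    for i, b in enumerate(bits):
        window = ((window << 1) | (b & 1)) & 0xFFFFFFFF
        if i >= 31:
            out.append(correlate32(window, pn))
    return out

# toy usage: the first 32 stream bits equal the PN word, so the first output is 32
pn = 0xACE1ACE1                      # arbitrary 32-bit PN pattern for the example
stream = [(pn >> (31 - i)) & 1 for i in range(32)] + [0, 1, 0, 1]
print(sliding_correlation(stream, pn)[0])
```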

  1. Extending Landauer's Bound from Bit Erasure to Arbitrary Computation

    NASA Astrophysics Data System (ADS)

    Wolpert, David

    Recent analyses have calculated the minimal thermodynamic work required to perform any computation π whose output is independent of its input, e.g., bit erasure. First I extend these analyses to calculate the work required even if the output of π depends on its input. Next I show that if a physical computer C implementing a computation π will be re-used, then the work required depends only on the dynamics of the logical variables under π, independent of the physical details of C. This establishes a formal identity between the thermodynamics of (re-usable) computers and theoretical computer science. To illustrate this identity, I prove that the minimal work required to compute a bit string σ on a (physical) Turing machine M is k_B T ln(2) [Kolmogorov complexity(σ) + log(Bernoulli measure of the set of strings that compute σ) + log(halting probability of M)]. I also prove that uncertainty about the distribution over inputs to the computer increases the minimal work required to run the computer. I end by using these results to relate the free energy flux incident on an organism / robot / biosphere to the maximal amount of computation that the organism / robot / biosphere can do per unit time.

  2. Statistical mechanics analysis of thresholding 1-bit compressed sensing

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2016-08-01

    The one-bit compressed sensing framework aims to reconstruct a sparse signal by only using the sign information of its linear measurements. To compensate for the loss of scale information, past studies in the area have proposed recovering the signal by imposing an additional constraint on the l2-norm of the signal. Recently, an alternative strategy that captures scale information by introducing a threshold parameter to the quantization process was advanced. In this paper, we analyze the typical behavior of thresholding 1-bit compressed sensing utilizing the replica method of statistical mechanics, so as to gain insight into properly setting the threshold value. Our result shows that fixing the threshold at a constant value yields better performance than varying it randomly, when the constant is optimally tuned, statistically. Unfortunately, the optimal threshold value depends on the statistical properties of the target signal, which may not be known in advance. In order to handle this inconvenience, we develop a heuristic that adaptively tunes the threshold parameter based on the frequency of positive (or negative) values in the binary outputs. Numerical experiments show that the heuristic exhibits satisfactory performance while incurring low computational cost.
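
    A sketch of the measurement-side heuristic described at the end of the abstract, adapting the threshold from the running sign statistics. The step size, target rate, and Gaussian measurement vectors are assumptions; the replica-based recovery analysis is not reproduced.

```python
import numpy as np

def adaptive_threshold_measurements(x, n_meas, target_pos_rate=0.5, step=0.05, seed=0):
    """Take 1-bit measurements y_i = sign(a_i . x - tau_i), nudging the threshold after
    each measurement so the running fraction of +1 outcomes drifts toward target_pos_rate."""
    rng = np.random.default_rng(seed)
    tau = 0.0
    A, ys, taus = [], [], []
    for _ in range(n_meas):
        a = rng.normal(size=x.size)
        y = 1 if a @ x - tau > 0 else -1
        A.append(a); ys.append(y); taus.append(tau)
        # raise tau after a +1 outcome, lower it after a -1 outcome (stochastic approximation)
        tau += step * ((y + 1) / 2 - target_pos_rate)
    return np.array(A), np.array(ys), np.array(taus)

# toy usage: a sparse signal measured with adaptively thresholded sign measurements
rng = np.random.default_rng(1)
x = np.zeros(100)
x[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
A, ys, taus = adaptive_threshold_measurements(x, n_meas=500)
print("fraction of +1 outcomes:", (ys > 0).mean())
```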

  3. Learning may need only a few bits of synaptic precision.

    PubMed

    Baldassi, Carlo; Gerace, Federica; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo

    2016-05-01

    Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large deviations analysis which unveiled the existence of peculiar dense regions in the space of synaptic states which account for the possibility of learning efficiently in networks with binary synapses. We extend the analysis to synapses with multiple states and generally more plausible biological features. The results clearly indicate that the overall qualitative picture is unchanged with respect to the binary case, and very robust to variation of the details of the model. We also provide quantitative results which suggest that the advantages of increasing the synaptic precision (i.e., the number of internal synaptic states) rapidly vanish after the first few bits, and therefore that, for practical applications, only a few bits may be needed for near-optimal performance, consistent with recent biological findings. Finally, we demonstrate how the theoretical analysis can be exploited to design efficient algorithmic search strategies. PMID:27300916

  4. Statistical mechanics approach to 1-bit compressed sensing

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2013-02-01

    Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. A scheme further reducing the data size of the compressed expression by using only the sign of each entry of y to recover x was recently proposed. This is often termed 1-bit compressed sensing. Here, we analyze the typical performance of an l1-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good consistency with experimental results of an approximate recovery algorithm developed earlier. This suggests that the l1-based recovery problem typically has many local optima of a similar recovery accuracy, which can be achieved by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large the new algorithm offers better performance than the abovementioned scheme and does so with a lower computational cost.

  5. Qualification of BitClean technology in photomask production

    NASA Astrophysics Data System (ADS)

    Robinson, Tod; White, Roy; Bozak, Ron; Archuletta, Mike; Brinkley, David; Yi, Daniel

    2010-09-01

    Makers and users of advanced technology photomasks have seen increased difficulties with the removal of persistent, or stubborn, nano-particle contamination. Shrinking pattern geometries, and new mask clean technologies to minimize haze, have both increased the number of problems and loss of mask yield due to these non-removable nano-particles. A novel technique (BitCleanTM) has been developed using the MerlinTM platform, a scanning probe microscope system originally designed for nanomachining photomask defect repair. Progress in the technical development of this approach into a manufacturable solution is reviewed and its effectiveness is shown in selectively removing adherent particles without touching surrounding sensitive structures. Results will also be reviewed that were generated in the qualification and acceptance of this new technology in a photomask production environment. These results will be discussed in their relation to the minimum particle size allowed on a given design, particle removal efficiency per pass of the NanoBitTM (PREPP), and the resultant average removal throughput of particles unaffected by any other available mask clean process.

  6. Reexamination of quantum bit commitment: The possible and the impossible

    SciTech Connect

    D'Ariano, Giacomo Mauro; Kretschmann, Dennis; Schlingemann, Dirk; Werner, Reinhard F.

    2007-09-15

    Bit commitment protocols whose security is based on the laws of quantum mechanics alone are generally held to be impossible. We give a strengthened and explicit proof of this result. We extend its scope to a much larger variety of protocols, which may have an arbitrary number of rounds, in which both classical and quantum information is exchanged, and which may include aborts and resets. Moreover, we do not consider the receiver to be bound to a fixed 'honest' strategy, so that 'anonymous state protocols', which were recently suggested as a possible way to beat the known no-go results, are also covered. We show that any concealing protocol allows the sender to find a cheating strategy, which is universal in the sense that it works against any strategy of the receiver. Moreover, if the concealing property holds only approximately, the cheat goes undetected with a high probability, which we explicitly estimate. The proof uses an explicit formalization of general two-party protocols, which is applicable to more general situations, and an estimate about the continuity of the Stinespring dilation of a general quantum channel. The result also provides a natural characterization of protocols that fall outside the standard setting of unlimited available technology and thus may allow secure bit commitment. We present such a protocol whose security, perhaps surprisingly, relies on decoherence in the receiver's laboratory.

  8. Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.

    PubMed

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2011-12-01

    The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
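
    A simplified sketch of the two distances and a weighted fusion of them. The masking convention, the exact fragile bit distance normalization, and the fusion weight alpha are assumptions; the paper's fusion rule may differ.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits flagged consistent (non-fragile) in both codes."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / max(np.count_nonzero(valid), 1)

def fragile_bit_distance(mask_a, mask_b):
    """Fraction of positions where exactly one of the two codes flags the bit as fragile."""
    fragile_a, fragile_b = ~mask_a, ~mask_b
    return np.count_nonzero(fragile_a ^ fragile_b) / mask_a.size

def fused_score(code_a, code_b, mask_a, mask_b, alpha=0.5):
    """Weighted fusion of Hamming distance and fragile bit distance (alpha is an assumed weight)."""
    return (alpha * hamming_distance(code_a, code_b, mask_a, mask_b)
            + (1 - alpha) * fragile_bit_distance(mask_a, mask_b))

# toy usage with random boolean codes and masks (True = consistent bit)
rng = np.random.default_rng(4)
code_a = rng.integers(0, 2, 2048).astype(bool)
code_b = code_a.copy()
mask_a = rng.random(2048) > 0.2
mask_b = rng.random(2048) > 0.2
print(fused_score(code_a, code_b, mask_a, mask_b))
```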

  9. Bit-1 mediates integrin-dependent cell survival through activation of the NFkappaB pathway.

    PubMed

    Griffiths, Genevieve S; Grundl, Melanie; Leychenko, Anna; Reiter, Silke; Young-Robbins, Shirley S; Sulzmaier, Florian J; Caliva, Maisel J; Ramos, Joe W; Matter, Michelle L

    2011-04-22

    Loss of properly regulated cell death and cell survival pathways can contribute to the development of cancer and cancer metastasis. Cell survival signals are modulated by many different receptors, including integrins. Bit-1 is an effector of anoikis (cell death due to loss of attachment) in suspended cells. The anoikis function of Bit-1 can be counteracted by integrin-mediated cell attachment. Here, we explored integrin regulation of Bit-1 in adherent cells. We show that knockdown of endogenous Bit-1 in adherent cells decreased cell survival and re-expression of Bit-1 abrogated this effect. Furthermore, reduction of Bit-1 promoted both staurosporine and serum-deprivation induced apoptosis. Indeed knockdown of Bit-1 in these cells led to increased apoptosis as determined by caspase-3 activation and positive TUNEL staining. Bit-1 expression protected cells from apoptosis by increasing phospho-IκB levels and subsequently bcl-2 gene transcription. Protection from apoptosis under serum-free conditions correlated with bcl-2 transcription and Bcl-2 protein expression. Finally, Bit-1-mediated regulation of bcl-2 was dependent on focal adhesion kinase, PI3K, and AKT. Thus, we have elucidated an integrin-controlled pathway in which Bit-1 is, in part, responsible for the survival effects of cell-ECM interactions.

  10. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bitplane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
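
    The prediction operates on bit-planes decoded MSB-first. The sketch below shows that bit-plane decomposition together with a naive majority-based guess for the next plane; the guess is a stand-in for illustration only and is not the paper's maximum-likelihood estimator.

      import numpy as np

      def bit_planes(frame, n_bits=8):
          """Decompose an unsigned-integer frame into bit-planes, MSB first."""
          return [((frame >> b) & 1).astype(np.uint8) for b in range(n_bits - 1, -1, -1)]

      def naive_next_plane_guess(decoded_planes):
          """Stand-in predictor: guess each pixel's next bit as the majority value
          seen so far at that position in the already-decoded (higher) planes."""
          stack = np.stack(decoded_planes).astype(float)   # shape: (k, H, W)
          return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

      frame = (np.arange(64, dtype=np.uint16).reshape(8, 8) * 3 % 256).astype(np.uint8)
      planes = bit_planes(frame)                       # planes[0] is the MSB plane
      guess = naive_next_plane_guess(planes[:3])       # naive guess for the 4th plane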

  11. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among the various actuators. In illustrative examples, the solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities.
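
    The min-max formulation reduces to a small linear program by introducing a bound variable t with -t <= u_i <= t for every actuator. A hedged sketch using SciPy's LP solver is shown below; the effectiveness matrix B and desired command d are placeholders, and exact tracking is assumed feasible.

      import numpy as np
      from scipy.optimize import linprog

      def minmax_allocation(B, d):
          """Minimize max_i |u_i| subject to B u = d. Decision vector x = [u, t]."""
          m, n = B.shape
          c = np.zeros(n + 1)
          c[-1] = 1.0                                    # objective: minimize t
          # u_i - t <= 0 and -u_i - t <= 0 together encode |u_i| <= t.
          A_ub = np.vstack([np.hstack([np.eye(n), -np.ones((n, 1))]),
                            np.hstack([-np.eye(n), -np.ones((n, 1))])])
          b_ub = np.zeros(2 * n)
          A_eq = np.hstack([B, np.zeros((m, 1))])        # tracking constraint B u = d
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
                        bounds=[(None, None)] * n + [(0, None)], method="highs")
          return res.x[:n], res.x[-1]

      B = np.array([[1.0, 0.5, -0.5, 1.0],               # placeholder effectiveness matrix
                    [0.0, 1.0,  1.0, 0.5]])
      d = np.array([0.4, 0.6])                           # placeholder desired command
      u, worst_deflection = minmax_allocation(B, d)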

  12. Shift-invariant target in allocation problems.

    PubMed

    Mandal, Saumen; Biswas, Atanu

    2014-07-10

    We provide a template for finding target allocation proportions in optimal allocation designs where the target is invariant to both shifts in location and changes in scale of the response distributions. One possible application of such target allocation proportions is to carry out response-adaptive allocation. While most existing designs are invariant to any change in scale of the underlying distributions, in most cases they are not location invariant. First, we indicate this serious flaw in the existing literature and illustrate how the lack of location invariance makes the performance of the designs very poor, in terms of allocation, for any drastic change in location, such as the change from degrees Centigrade to degrees Fahrenheit. We illustrate that unless a target allocation is location invariant, it might lead to a completely irrelevant and useless target for allocation. Then we discuss how such location invariance can be achieved for general continuous responses. We illustrate the proposed method using real clinical trial data. We also indicate possible extensions of the procedure to more than two treatments and to the presence of covariates.
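
    The Centigrade-versus-Fahrenheit point is easy to reproduce numerically. The sketch below contrasts a scale-invariant but location-sensitive target with one driven by the standardized difference, which is unchanged under any affine change of units; both targets are illustrative examples, not the authors' proposed design.

      import numpy as np
      from scipy.stats import norm

      def scale_only_target(mu_a, mu_b):
          """Scale-invariant but NOT location-invariant allocation target."""
          return mu_a / (mu_a + mu_b)

      def location_scale_target(mu_a, mu_b, sigma):
          """Illustrative target based on the standardized difference; invariant
          under any affine change of units (e.g. degrees C -> degrees F)."""
          return norm.cdf((mu_a - mu_b) / sigma)

      c_a, c_b, sd_c = 20.0, 25.0, 5.0                              # responses in deg C
      f_a, f_b, sd_f = 1.8 * c_a + 32, 1.8 * c_b + 32, 1.8 * sd_c   # same data in deg F

      print(scale_only_target(c_a, c_b), scale_only_target(f_a, f_b))            # differ
      print(location_scale_target(c_a, c_b, sd_c),
            location_scale_target(f_a, f_b, sd_f))                               # identical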

  13. 47 CFR 76.924 - Allocation to service cost categories.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... allocator has been specified by the Commission are to be allocated among the service cost categories and the... analysis is not possible, common costs for which no allocator has been specified by the Commission...

  14. 47 CFR 76.924 - Allocation to service cost categories.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... allocator has been specified by the Commission are to be allocated among the service cost categories and the... analysis is not possible, common costs for which no allocator has been specified by the Commission...

  15. 47 CFR 76.924 - Allocation to service cost categories.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... allocator has been specified by the Commission are to be allocated among the service cost categories and the... analysis is not possible, common costs for which no allocator has been specified by the Commission...

  16. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of the optoelectronic random bit generator (RBG) based on laser chaos is experimentally analyzed from two aspects: the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, in addition to passing the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together these results give mathematically grounded evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
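
    The flavour of the law-of-the-iterated-logarithm check can be illustrated in a few lines of Python: map the bits to a +/-1 random walk and compare its partial sums against the sqrt(2 n log log n) envelope. This is a toy diagnostic under simplifying assumptions, not the authors' full statistical analysis.

      import numpy as np

      def lil_envelope_check(bits):
          """Scale the +/-1 random walk by sqrt(2 n log log n); for good bits the
          scaled excursions should stay (mostly) inside [-1, 1] for large n."""
          steps = 2.0 * np.asarray(bits, dtype=float) - 1.0
          walk = np.cumsum(steps)
          n = np.arange(1, walk.size + 1, dtype=float)
          envelope = np.sqrt(2.0 * n * np.log(np.maximum(np.log(n), 1e-12)))
          scaled = walk / envelope
          return scaled[100:]        # ignore the small-n region where the bound is loose

      rng = np.random.default_rng(1)
      scaled = lil_envelope_check(rng.integers(0, 2, 1_000_000))
      print(scaled.min(), scaled.max())   # should lie close to the [-1, 1] envelope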

  17. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation that is all digital after the initial stage of analog AGC and A/D conversion.

  18. Enhancement of LED indoor communications using OPPM-PWM modulation and grouped bit-flipping decoding.

    PubMed

    Yang, Aiying; Li, Xiangming; Jiang, Tao

    2012-04-23

    In this paper, the combination of overlapping pulse position modulation and pulse width modulation at the transmitter with a grouped bit-flipping algorithm for low-density parity-check decoding at the receiver is proposed for a visible Light Emitting Diode (LED) indoor communication system. The results demonstrate that, with the same photodetector, the bit rate can be increased and the performance of the communication system can be improved by the proposed scheme. Compared with the standard bit-flipping algorithm, the grouped bit-flipping algorithm achieves more than 2.0 dB of coding gain at a bit error rate of 10^-5. By optimizing the encoding of the overlapping pulse position modulation and pulse width modulation symbol, the performance can be further improved. It is reasonably expected that the bit rate can be upgraded to 400 Mbit/s with a single available LED, so that a transmission rate beyond 1 Gbit/s is foreseen with RGB LEDs.
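
    For context, a standard hard-decision bit-flipping decoder for LDPC codes is sketched below; the grouped variant proposed in the paper is not reproduced here, and the small parity-check matrix is a toy assumption.

      import numpy as np

      def bit_flipping_decode(H, r, max_iters=50):
          """Standard hard-decision bit-flipping decoding.
          H: (m, n) binary parity-check matrix, r: length-n hard-decision vector."""
          x = r.copy()
          for _ in range(max_iters):
              syndrome = (H @ x) % 2
              if not syndrome.any():
                  return x, True                     # all parity checks satisfied
              # Count, per bit, how many unsatisfied checks it participates in.
              unsat_counts = syndrome @ H
              worst = unsat_counts.max()
              if worst == 0:
                  break
              x = np.where(unsat_counts == worst, x ^ 1, x)   # flip the worst bits
          return x, False

      # Toy (7,4) Hamming-style check matrix and a single-bit error.
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      codeword = np.zeros(7, dtype=int)
      received = codeword.copy(); received[2] ^= 1
      decoded, ok = bit_flipping_decode(H, received)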

  19. Avalanche and bit independence characteristics of double random phase encoding in the Fourier and Fresnel domains.

    PubMed

    Moon, Inkyu; Yi, Faliu; Lee, Yeon H; Javidi, Bahram

    2014-05-01

    In this work, we evaluate the avalanche effect and bit independence properties of the double random phase encoding (DRPE) algorithm in the Fourier and Fresnel domains. Experimental results show that DRPE has excellent bit independence characteristics in both the Fourier and Fresnel domains. However, DRPE achieves better avalanche effect results in the Fresnel domain than in the Fourier domain. DRPE gives especially poor avalanche effect results in the Fourier domain when only one bit is changed in the plaintext or in the encryption key. Despite this, DRPE shows satisfactory avalanche effect results in the Fresnel domain when any other number of bits changes in the plaintext or in the encryption key. To the best of our knowledge, this is the first report on the avalanche effect and bit independence behaviors of optical encryption approaches for bit units.
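
    The avalanche criterion itself is straightforward to compute: flip one plaintext (or key) bit, re-encrypt, and measure the fraction of output bits that change, ideally close to 0.5. The generic helper below assumes byte-oriented ciphertexts, and the encrypt function in the usage comment is a placeholder.

      import numpy as np

      def avalanche_ratio(cipher_a: bytes, cipher_b: bytes) -> float:
          """Fraction of differing bits between two equal-length ciphertexts."""
          a = np.unpackbits(np.frombuffer(cipher_a, dtype=np.uint8))
          b = np.unpackbits(np.frombuffer(cipher_b, dtype=np.uint8))
          return float(np.count_nonzero(a ^ b)) / a.size

      def flip_bit(data: bytes, bit_index: int) -> bytes:
          """Return a copy of `data` with one bit toggled (MSB-first within bytes)."""
          buf = bytearray(data)
          buf[bit_index // 8] ^= 1 << (7 - bit_index % 8)
          return bytes(buf)

      # Usage sketch with a placeholder encryption function `encrypt`:
      #   c0 = encrypt(plaintext, key)
      #   c1 = encrypt(flip_bit(plaintext, 0), key)
      #   print(avalanche_ratio(c0, c1))   # ~0.5 indicates a strong avalanche effect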

  1. Parameter allocation of parallel array bistable stochastic resonance and its application in communication systems

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Wang, You-Guo; Zhai, Qi-Qing; Liu, Jin

    2016-10-01

    In this paper, we propose a parameter allocation scheme in a parallel array bistable stochastic resonance-based communication system (P-BSR-CS) to improve the performance of weak binary pulse amplitude modulated (BPAM) signal transmissions. The optimal parameter allocation policy of the P-BSR-CS is provided to minimize the bit error rate (BER) and maximize the channel capacity (CC) under the adiabatic approximation condition. On this basis, we further derive the best parameter selection theorem in realistic communication scenarios via variable transformation. Specifically, the P-BSR structure design not only brings the robustness of parameter selection optimization, where the optimal parameter pair is not fixed but variable in quite a wide range, but also produces outstanding system performance. Theoretical analysis and simulation results indicate that in the P-BSR-CS the proposed parameter allocation scheme yields considerable performance improvement, particularly in very low signal-to-noise ratio (SNR) environments. Project supported by the National Natural Science Foundation of China (Grant No. 61179027), the Qinglan Project of Jiangsu Province of China (Grant No. QL06212006), and the University Postgraduate Research and Innovation Project of Jiangsu Province (Grant Nos. KYLX15_0829, KYLX15_0831).

  2. Linear modelling of attentional resource allocation

    NASA Technical Reports Server (NTRS)

    Pierce, B.

    1978-01-01

    Eight subjects time-shared performance of two compensatory tracking tasks under conditions in which both were of constant difficulty, and in which the control order of one task (designated primary) was varied over time within a trial. On-line performance feedback was presented on half of the trials. The data are interpreted in terms of a linear model of the operator's attention allocation system, and suggest that this allocation is strongly suboptimal. Furthermore, the limitations in reallocating attentional resources between tasks in response to difficulty fluctuations were not reduced by augmented performance feedback. Some characteristics of the allocation system are described, and reasons for its limitations are suggested.

  3. Compact fibre Bragg grating-based thermometer for on-line temperature monitoring of drill bits

    NASA Astrophysics Data System (ADS)

    Hey Tow, Kenny; Llera, Miguel; Le Floch, Sébastien; Salvadé, Yves; Thévenaz, Luc

    2016-05-01

    In this communication, a novel compact fibre Bragg grating-based thermometer for on-line temperature monitoring of drill bits is reported. The proposed technique can potentially be used to optimize any drilling process requiring small drill bits by measuring temperature directly at the drill bit, rather than relying on indirect parameters (rotation speed, applied force) to avoid overheating, as is currently done.

  4. Development and Testing of a Jet Assisted Polycrystalline Diamond Drilling Bit. Phase II Development Efforts

    SciTech Connect

    David S. Pixton

    1999-09-20

    Phase II efforts to develop a jet-assisted rotary-percussion drill bit are discussed. Key developments under this contract include: (1) a design for a more robust polycrystalline diamond drag cutter; (2) a new drilling mechanism which improves penetration and life of cutters; and (3) a means of creating a high-pressure mud jet inside of a percussion drill bit. Field tests of the new drill bit and the new robust cutter are forthcoming.

  5. Compressed binary bit trees: a new data structure for accelerating database searching.

    PubMed

    Smellie, Andrew

    2009-02-01

    Molecules are often represented as bit-string fingerprints in databases. These bit strings are used for similarity searching with the Tanimoto coefficient and for rapid indexing. A new data structure, the compressed binary bit tree, is introduced that speeds up searching and indexing by up to a factor of 30. Results are shown for databases of up to 1 M compounds with a variety of search parameters.
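
    For reference, the Tanimoto coefficient that the tree accelerates is a simple popcount ratio; the brute-force linear scan sketched below is the baseline that the compressed binary bit tree is reported to speed up. The fingerprint sizes and threshold are placeholders.

      import numpy as np

      def tanimoto(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
          """Tanimoto similarity of two boolean fingerprint vectors: |A & B| / |A | B|."""
          inter = np.count_nonzero(fp_a & fp_b)
          union = np.count_nonzero(fp_a | fp_b)
          return inter / union if union else 1.0

      def brute_force_search(query: np.ndarray, database: np.ndarray, threshold=0.7):
          """Linear scan over an (n_molecules, n_bits) boolean fingerprint matrix;
          this is the baseline that tree-based indexing accelerates."""
          return [i for i, fp in enumerate(database) if tanimoto(query, fp) >= threshold]

      rng = np.random.default_rng(0)
      db = rng.random((1000, 1024)) < 0.1          # sparse toy fingerprints
      hits = brute_force_search(db[0], db)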

  6. A bit-serial first-level calorimeter trigger for LHC detectors

    SciTech Connect

    Bohm, C.; Zhao, X.; Appelquist, G.; Engstroem, M.; Hellman, S.; Holmgren, S.O.; Johansson, E.; Yamdagni, N.

    1994-12-31

    A first-level calorimeter trigger design, implemented as a farm of local bit-serial systolic arrays, is presented. The massively bit-serial operation can achieve higher processing throughput and more compact designs than conventional bit-parallel data representations. The construction is based on high-speed optical fiber data transmission, Application Specific Integrated Circuits (ASICs), and multi-chip module (MCM) packaging technologies.

  7. "Push back" technique: A simple method to remove broken drill bit from the proximal femur.

    PubMed

    Chouhan, Devendra K; Sharma, Siddhartha

    2015-11-18

    Broken drill bits can be difficult to remove from the proximal femur and may necessitate additional surgical exploration or special instrumentation. We present a simple technique to remove a broken drill bit that does not require any special instrumentation and can be accomplished through the existing incision. This technique is useful for those cases where the length of the broken drill bit is greater than the diameter of the bone.

  8. Novel Parity-Preserving Designs of Reversible 4-Bit Comparator

    NASA Astrophysics Data System (ADS)

    Qi, Xue-mei; Chen, Fu-long; Wang, Hong-tao; Sun, Yun-xiang; Guo, Liang-min

    2014-04-01

    Reversible logic has attracted much attention in recent years, especially where computation with minimal energy consumption is required. This paper presents two novel approaches for designing a reversible 4-bit comparator based on parity-preserving gates, which can detect any fault that affects no more than a single logic signal. To construct the comparator, a three-variable EX-OR gate (TVG), a comparator gate (CPG), a four-variable EX-OR gate block (FVGB), and a comparator gate block (CPGB) are designed; all are parity-preserving and reversible. Their quantum equivalent implementations are also proposed. The design of the two comparator circuits is completed using existing reversible gates together with the above new reversible circuits. All of these comparators have been modeled and verified in the Verilog hardware description language (Verilog HDL). Quartus II simulation results indicate that the logic structures of the circuits are correct. Comparative results are presented in terms of quantum cost, delay, and garbage outputs.

  9. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements, made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center, of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  10. Power of one bit of quantum information in quantum metrology

    NASA Astrophysics Data System (ADS)

    Cable, Hugo; Gu, Mile; Modi, Kavan

    2016-04-01

    We present a model of quantum metrology inspired by the computational model known as deterministic quantum computation with one quantum bit (DQC1). Using only one pure qubit together with l fully mixed qubits we obtain measurement precision (defined as root-mean-square error for the parameter being estimated) at the standard quantum limit, which is typically obtained using the same number of uncorrelated qubits in fully pure states. In principle, the standard quantum limit can be exceeded using an additional qubit which adds only a small amount of purity. We show that the discord in the final state vanishes only in the limit of attaining infinite precision for the parameter being estimated.

  11. Quantum random bit generation using stimulated Raman scattering.

    PubMed

    Bustard, Philip J; Moffatt, Doug; Lausten, Rune; Wu, Guorong; Walmsley, Ian A; Sussman, Benjamin J

    2011-12-01

    Random number sequences are a critical resource in a wide variety of information systems, including applications in cryptography, simulation, and data sampling. We introduce a quantum random number generator based on the phase measurement of Stokes light generated by amplification of zero-point vacuum fluctuations using stimulated Raman scattering. This is an example of quantum noise amplification using the most noise-free process possible: near unitary quantum evolution. The use of phase offers robustness to classical pump noise and the ability to generate multiple bits per measurement. The Stokes light is generated with high intensity and as a result, fast detectors with high signal-to-noise ratios can be used for measurement, eliminating the need for single-photon sensitive devices. The demonstrated implementation uses optical phonons in bulk diamond.

  12. Quantum image Gray-code and bit-plane scrambling

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Sun, Ya-Juan; Fan, Ping

    2015-05-01

    With the rapid development of multimedia technology, image scrambling for information hiding and digital watermarking is crucial. In the quantum image processing field, however, studies of image scrambling are still scarce. The existing quantum image scrambling schemes are essentially position-space strategies; quantum image scrambling focused on the color space has not yet been addressed. Therefore, in this paper, the quantum image Gray-code and bit-plane (GB) scrambling scheme, an entire color-space scrambling strategy, is proposed. Building on the NEQR quantum image representation, several different quantum scrambling methods using GB knowledge are designed. Not only can they change the histogram distribution of the image dramatically (some of the designed schemes can almost flatten the image histogram), enhancing the anti-attack ability of the digital image, but their cost and complexity are also very low. Simulation results show good performance and indicate the particular advantages of GB scrambling in the quantum image processing field.
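
    The classical operations underlying the scheme, binary-to-Gray conversion and bit-plane permutation, can be illustrated on an ordinary 8-bit image array. The sketch below is that classical analogue only; it does not reproduce the NEQR-based quantum circuits.

      import numpy as np

      def to_gray(img: np.ndarray) -> np.ndarray:
          """Binary-reflected Gray code of each 8-bit pixel: g = b XOR (b >> 1)."""
          return img ^ (img >> 1)

      def from_gray(gray: np.ndarray) -> np.ndarray:
          """Invert the Gray code by cascading XORs from the MSB downwards."""
          out = gray.copy()
          shift = 1
          while shift < 8:
              out ^= out >> shift
              shift *= 2
          return out

      def scramble_bit_planes(img: np.ndarray, order) -> np.ndarray:
          """Permute the 8 bit-planes of an 8-bit image according to `order`."""
          planes = [((img >> b) & 1).astype(np.uint16) for b in range(8)]
          out = np.zeros(img.shape, dtype=np.uint16)
          for new_pos, old_pos in enumerate(order):
              out += planes[old_pos] << new_pos
          return out.astype(np.uint8)

      img = np.arange(256, dtype=np.uint8).reshape(16, 16)
      scrambled = scramble_bit_planes(to_gray(img), order=[7, 6, 5, 4, 3, 2, 1, 0])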

  13. Bit-string physics: A novel theory of everything

    SciTech Connect

    Noyes, H.P.

    1994-08-01

    We encode the quantum numbers of the standard model of quarks and leptons using constructed bit-strings of length 256. These label a growing universe of bit-strings of growing length that eventually construct a finite and discrete space-time with reasonable cosmological properties. Coupling constants and mass ratios, computed from closure under XOR and a statistical hypothesis, using only h-bar, c, and m_p to fix our units of mass, length, and time in terms of standard (meter-kilogram-second) metrology, agree with accepted experimental results to the first four to seven significant figures. Finite and discrete conservation laws and commutation relations ensure the essential characteristics of relativistic quantum mechanics, including particle-antiparticle pair creation. The correspondence limit in (free-space) Maxwell electromagnetism and Einstein gravitation is consistent with the Feynman-Dyson-Tanimura "proof".

  14. Atomistic simulation of static magnetic properties of bit patterned media

    NASA Astrophysics Data System (ADS)

    Arbeláez-Echeverri, O. D.; Agudelo-Giraldo, J. D.; Restrepo-Parra, E.

    2016-09-01

    In this work we present a new design of Co-based bit-patterned media with out-of-plane uniaxial anisotropy induced by interface effects. Our model features the inclusion of magnetic impurities in the non-magnetic matrix. After the material model was refined during three iterations using Monte Carlo simulations, further simulations were performed using an atomistic integrator of the Landau-Lifshitz-Gilbert equation with Langevin dynamics to study the behavior of the system, paying special attention to the super-paramagnetic limit. Our model system exhibits three magnetic phase transitions, due to the magnetically doped matrix material and to the weak magnetic interaction between the nano-structures in the system. The different magnetic phases of the system, as well as the features of its phase diagram, are explained.

  15. Development of a 32-bit UNIX-based ELAS workstation

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce A.; Pearson, Ronnie W.; Cheng, Thomas D.

    1987-01-01

    A mini/microcomputer UNIX-based image analysis workstation has been designed and is being implemented to use the Earth Resources Laboratory Applications Software (ELAS). The hardware system includes a MASSCOMP 5600 computer, which is a 32-bit UNIX-based system (compatible with the AT&T System V and Berkeley 4.2 BSD operating systems), a floating point accelerator, a 474-megabyte fixed disk, a tri-density magnetic tape drive, and a 1152 by 910 by 12-plane color graphics/image interface. The software conversion includes reconfiguring the ELAS driver Master Task, then recompiling and testing the converted application modules. This hardware and software configuration is a self-sufficient image analysis workstation which can be used as a stand-alone system or networked with other compatible workstations.

  16. Optical refractive synchronization: bit error rate analysis and measurement

    NASA Astrophysics Data System (ADS)

    Palmer, James R.

    1999-11-01

    This paper describes the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, the paper outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber greater than 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to show how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for the transmission of multiple OC-12 and OC-48 SONET signals sent over a fiber-optic cable longer than 100 km. The recovery and transformation modules used for the modification and transportation of these SONET signals are described.

  17. Bit-Serial Adder Based on Quantum Dots

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Mathew

    2003-01-01

    A proposed integrated circuit based on quantum-dot cellular automata (QCA) would function as a bit-serial adder. This circuit would serve as a prototype building block for demonstrating the feasibility of quantum-dot computing and for the further development of increasingly complex and increasingly capable quantum-dot computing circuits. QCA-based bit-serial adders would be especially useful in that they would enable the development of highly parallel and systolic processors for implementing fast Fourier, cosine, Hartley, and wavelet transforms. The proposed circuit would complement the QCA-based circuits described in "Implementing Permutation Matrices by Use of Quantum Dots" (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42, and "Compact Interconnection Networks Based on Quantum Dots" (NPO-20855), which appears elsewhere in this issue. Those articles described the limitations of very-large-scale-integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. To enable a meaningful description of the proposed bit-serial adder, it is necessary to further recapitulate the description of a quantum-dot cellular automaton from the first-mentioned prior article: A quantum-dot cellular automaton contains four quantum dots positioned at the corners of a square cell. The cell contains two extra mobile electrons that can tunnel (in the

  18. Supercomputing on massively parallel bit-serial architectures

    NASA Technical Reports Server (NTRS)

    Iobst, Ken

    1985-01-01

    Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

  20. Information gain on reheating: The one bit milestone

    NASA Astrophysics Data System (ADS)

    Martin, Jérôme; Ringeval, Christophe; Vennin, Vincent

    2016-05-01

    We show that the Planck 2015 and BICEP2/KECK measurements of the cosmic microwave background (CMB) anisotropies together provide an information gain of 0.82 ± 0.13 bits on the reheating history over all slow-roll single-field models of inflation. This corresponds to a 40% improvement compared to the Planck 2013 constraints on reheating. Our method relies on an exhaustive CMB data analysis performed over nearly 200 models of inflation to derive the Kullback-Leibler entropy between the prior and the fully marginalized posterior of the reheating parameter. This number is an average over models, weighted by each model's Bayesian evidence for explaining the data, thereby ensuring its fairness and robustness.
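
    The headline figure is a Kullback-Leibler divergence between prior and marginalized posterior expressed in bits; for discretized distributions it is a one-line sum, sketched below with placeholder arrays in place of the actual reheating-parameter posteriors.

      import numpy as np

      def kl_divergence_bits(posterior: np.ndarray, prior: np.ndarray) -> float:
          """D_KL(posterior || prior) in bits for two discretized distributions
          defined on the same grid of the reheating parameter."""
          p = posterior / posterior.sum()
          q = prior / prior.sum()
          mask = p > 0
          return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

      # Placeholder example: a flat prior versus a mildly peaked posterior.
      grid = np.linspace(0.0, 1.0, 200)
      prior = np.ones_like(grid)
      posterior = np.exp(-0.5 * ((grid - 0.4) / 0.15) ** 2)
      print(kl_divergence_bits(posterior, prior), "bits of information gain")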

  1. High-performance TSD bits improve penetration rate. [Thermally Stable Diamond

    SciTech Connect

    Cohen, J.H.; Maurer, W.C. ); Westcott, P.A. )

    1993-04-12

    Optimizing the number, size, and orientation of cutters on thermally stable diamond (TSD) bits increases penetration rate and extends bit life. The use of optimized TSD (also commonly referred to as thermally stable product, or TSP) bits on high-power drilling motors can greatly reduce drilling time for harsh-environment wells, such as deep gas wells. The power delivered to the rock governs drilling rate, and at high speed the optimized TSD bits are capable of effectively delivering power to drill the rock. This article reviews a 3-year project to develop advanced thermally stable diamond bits that can operate at a power level 5-10 times greater than that typically delivered by conventional rotary drilling. These bits are designed to operate on advanced drilling motors that drill three to six times faster than rotary drilling. The advanced bits and motors are especially designed for use in slim-hole and horizontal drilling applications. The TSD bit design parameters varied during the tests were cutter size, shape, density (number of cutters), and orientation. Drilling tests in limestone, sandstone, marble, and granite blocks showed that the optimized bits drilled many of these rocks at 500-1,000 ft/hr, compared to 50-100 ft/hr for conventional rotary drilling. A sensitivity model showed that doubling the rate of penetration significantly reduced the time to drill a well and reduced costs by 13%.

  2. Validation of the Behavioural Inattention Test (BIT) in patients with acquired brain injury in Turkey.

    PubMed

    Kutlay, Sehim; Küçükdeveci, Ayşe A; Elhan, Atilla H; Tennant, Alan

    2009-06-01

    The aim of this descriptive study was to evaluate the construct validity and reliability of the Behavioural Inattention Test (BIT) in patients with acquired brain injury in Turkey. One hundred and eighteen acquired brain injury patients undergoing rehabilitation were assessed with the BIT. Internal construct validity was tested by Rasch analysis; reliability by internal consistency and the Person Separation Index; and external construct validity by associations with physical and cognitive disability. Analysis of the data revealed that some subtests deviated from Rasch model expectations and that the conventional subscale of the BIT had unsatisfactory reliability for individual use. Consequently, a common 10-item scale (BIT-10) was derived from both the behavioural and conventional subscales of the BIT. A reliability of 0.87 met the expectation for individual use. The BIT-10 correlated at 0.52 with cognitive disability upon admission. In conclusion, the original BIT adapted for use in Turkey was shown to lack reliability and internal construct validity. A revised 10-item version, the BIT-10, gave a valid unidimensional summed score, with high sensitivity and specificity relative to the original cut points. Reliability of the BIT-10 was high and external construct validity was as expected.

  3. Field drilling tests on improved geothermal unsealed roller-cone bits. Final report

    SciTech Connect

    Hendrickson, R.R.; Jones, A.H.; Winzenried, R.W.; Maish, A.B.

    1980-05-01

    The development and field testing of a 222 mm (8-3/4 inch) unsealed, insert type, medium hard formation, high-temperature bit are described. Increased performance was gained by substituting improved materials in critical bit components. These materials were selected on bases of their high temperature properties, machinability and heat treatment response. Program objectives required that both machining and heat treating could be accomplished with existing rock bit production equipment. Six of the experimental bits were subjected to air drilling at 240/sup 0/C (460/sup 0/F) in Franciscan graywacke at the Geysers (California). Performances compared directly to conventional bits indicate that in-gage drilling time was increased by 70%. All bits at the Geysers are subjected to reaming out-of-gage hole prior to drilling. Under these conditions the experimental bits showed a 30% increase in usable hole drilled, compared with the conventional bits. The materials selected improved roller wear by 200%, friction per wear by 150%, and lug wear by 150%. These tests indicate a potential well cost savings of 4 to 8%. Savings of 12% are considered possible with drilling procedures optimized for the experimental bits.

  4. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    SciTech Connect

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; Luskin, Mitchell

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  5. Optimal Resource Allocation in Library Systems

    ERIC Educational Resources Information Center

    Rouse, William B.

    1975-01-01

    Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)

  6. METHODS OF ANALYSIS FOR WASTE LOAD ALLOCATION

    EPA Science Inventory

    This research has addressed several unresolved questions concerning the allocation of allowable waste loads among multiple wastewater dischargers within a water quality limited stream segment. First, the traditional assumptions about critical design conditions for waste load allo...

  7. Optimality versus stability in water resource allocation.

    PubMed

    Read, Laura; Madani, Kaveh; Inanloo, Bahareh

    2014-01-15

    Water allocation is a growing concern in a developing world where limited resources like fresh water are in greater demand by more parties. Negotiations over allocations often involve multiple groups with disparate social, economic, and political status and needs, who are seeking a management solution for a wide range of demands. Optimization techniques for identifying the Pareto-optimal (social planner solution) to multi-criteria multi-participant problems are commonly implemented, although often reaching agreement for this solution is difficult. In negotiations with multiple-decision makers, parties who base decisions on individual rationality may find the social planner solution to be unfair, thus creating a need to evaluate the willingness to cooperate and practicality of a cooperative allocation solution, i.e., the solution's stability. This paper suggests seeking solutions for multi-participant resource allocation problems through an economics-based power index allocation method. This method can inform on allocation schemes that quantify a party's willingness to participate in a negotiation rather than opt for no agreement. Through comparison of the suggested method with a range of distance-based multi-criteria decision making rules, namely, least squares, MAXIMIN, MINIMAX, and compromise programming, this paper shows that optimality and stability can produce different allocation solutions. The mismatch between the socially-optimal alternative and the most stable alternative can potentially result in parties leaving the negotiation as they may be too dissatisfied with their resource share. This finding has important policy implications as it justifies why stakeholders may not accept the socially optimal solution in practice, and underlies the necessity of considering stability where it may be more appropriate to give up an unstable Pareto-optimal solution for an inferior stable one. Authors suggest assessing the stability of an allocation solution as an

  8. Neural-Network Processor Would Allocate Resources

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.; Moopenn, Alexander W.

    1990-01-01

    Global optimization problems solved quickly. Neural-network processor optimizes allocation of M resources among N expenditures according to the cost of pairing each resource with each expenditure, subject to a limit on the number of resources feeding into each expenditure and/or a limit on the number of expenditures to which each resource is allocated. One cell performs several analog and digital functions. Potential applications include assignment of jobs, scheduling, dispatching, and planning of military maneuvers.

  9. Frequency allocations accommodate new commercial applications

    NASA Astrophysics Data System (ADS)

    Stiglitz, Martin R.; Blanchard, Christine

    1992-07-01

    An overview is presented of the 1992 World Administrative Radio Frequency Conference, whose principal responsibility is to review and update the International Radio Regulations, including the International Table of Frequency Allocations and the procedures for utilizing the allocations. Consideration is given to the Earth exploration-satellite service, the space research, space operation, and general-satellite services, and wind profiler radar. Attention is given to shortwave (HF) broadcasting, mobile and mobile-satellite services, and future public land mobile telecommunications systems.

  10. Allocation of authority in European health policy.

    PubMed

    Adolph, Christopher; Greer, Scott L; Massard da Fonseca, Elize

    2012-11-01

    Although many study the effects of different allocations of health policy authority, few ask why countries assign responsibility over different policies as they do. We test two broad theories: fiscal federalism, which predicts that rational governments will concentrate information-intensive operations at lower levels and redistributive and regulatory functions at higher levels; and "politicized federalism", which suggests that a combination of systematic and historically idiosyncratic political variables interferes with the efficient allocation of authority. Drawing on the WHO Health in Transition country profiles, we present new data on the allocation of responsibility for key health care policy tasks (implementation, provision, finance, regulation, and framework legislation) and policy areas (primary, secondary and tertiary care, public health and pharmaceuticals) in the 27 EU member states and Switzerland. We use a Bayesian multinomial mixed logit model to analyze how different countries arrive at different allocations of authority over each task and area of health policy, and find that the allocation of powers broadly follows fiscal federalism. Responsibility for pharmaceuticals, framework legislation, and most finance lodges at the highest levels of government, acute and primary care in the regions, and provision at the local and regional levels. Where allocation does not follow fiscal federalism, it appears to reflect ethnic divisions, the population of states and regions, the presence of mountainous terrain, and the timing of region creation.

  11. Allocation Games: Addressing the Ill-Posed Nature of Allocation in Life-Cycle Inventories.

    PubMed

    Hanes, Rebecca J; Cruze, Nathan B; Goel, Prem K; Bakshi, Bhavik R

    2015-07-01

    Allocation is required when a life cycle contains multi-functional processes. One approach to allocation is to partition the embodied resources in proportion to a criterion, such as product mass or cost. Many practitioners apply multiple partitioning criteria to avoid choosing one arbitrarily. However, life cycle results from different allocation methods frequently contradict each other, making it difficult or impossible for the practitioner to draw any meaningful conclusions from the study. Using the matrix notation for life-cycle inventory data, we show that an inventory that requires allocation leads to an ill-posed problem: an inventory based on allocation is one of an infinite number of inventories that are highly dependent upon allocation methods. This insight is applied to comparative life-cycle assessment (LCA), in which products with the same function but different life cycles are compared. Recently, there have been several studies that applied multiple allocation methods and found that different products were preferred under different methods. We develop the Comprehensive Allocation Investigation Strategy (CAIS) to examine any given inventory under all possible allocation decisions, enabling us to detect comparisons that are not robust to allocation, even when the comparison appears robust under conventional partitioning methods. While CAIS does not solve the ill-posed problem, it provides a systematic way to parametrize and examine the effects of partitioning allocation. The practical usefulness of this approach is demonstrated with two case studies. The first compares ethanol produced from corn stover hydrolysis, corn stover gasification, and corn grain fermentation. This comparison was not robust to allocation. The second case study compares 1,3-propanediol (PDO) produced from fossil fuels and from biomass, which was found to be a robust comparison.

  13. Hierarchical dynamic allocation procedures based on modified Zelen's approach in multiregional studies with unequal allocation.

    PubMed

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2014-01-01

    Morrissey, McEntegart, and Lang (2010) showed that in multicenter studies with equal allocation to several treatment arms, the modified Zelen's approach provides excellent within-center and across-study balance in treatment assignments. In this article, hierarchical balancing procedures for equal allocation to more than two arms (with some elements different from earlier versions) and their unequal allocation expansions that incorporate modified Zelen's approach at the center level are described. The balancing properties of the described procedures for a case study of a multiregional clinical trial with 1:2 allocation where balance within regions as well as in other covariates is required are examined through simulations.

  14. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of a health concern to the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
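
    To convey the flavour of this family of algorithms, the toy sketch below alternates a Kaczmarz-style data-consistency sweep (a POCS step) with a few steepest-descent steps on a smoothed total-variation term. It is a generic illustration under simplifying assumptions (dense system matrix, periodic boundaries), not the authors' ASD-WPOCS implementation.

      import numpy as np

      def kaczmarz_sweep(x, A, b, relax=1.0):
          """One ART/Kaczmarz pass enforcing A x ~ b, then a positivity projection."""
          for i in range(A.shape[0]):
              ai = A[i]
              denom = ai @ ai
              if denom > 0.0:
                  x = x + relax * (b[i] - ai @ x) / denom * ai
          return np.clip(x, 0.0, None)

      def tv_gradient(img, eps=1e-8):
          """Gradient of a smoothed isotropic total-variation term (periodic boundaries)."""
          dx = np.roll(img, -1, axis=0) - img
          dy = np.roll(img, -1, axis=1) - img
          mag = np.sqrt(dx * dx + dy * dy + eps)
          gx, gy = dx / mag, dy / mag
          div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
          return -div

      def toy_asd_pocs(A, b, shape, n_outer=20, n_tv=10, tv_rel_step=0.02):
          """Alternate data-consistency (POCS) sweeps with TV steepest-descent steps."""
          x = np.zeros(A.shape[1])
          for _ in range(n_outer):
              x = kaczmarz_sweep(x, A, b)
              img = x.reshape(shape)
              for _ in range(n_tv):
                  g = tv_gradient(img)
                  step = tv_rel_step * np.linalg.norm(img) / (np.linalg.norm(g) + 1e-12)
                  img = img - step * g
              x = img.ravel()
          return x.reshape(shape)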

  15. Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views

    NASA Astrophysics Data System (ADS)

    Xie, Huiqiao; Niu, Tianye; Yang, Yi; Ren, Yi; Tang, Xiangyang

    2016-03-01

    The optimization-based image reconstruction (OBIR) has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) by acquiring sparse projection views. However, OBIR usually generates images with a quite different noise texture from the clinically widely used reconstruction method (i.e., filtered back-projection, FBP). This may make radiologists/physicians less confident while making clinical decisions. Recognizing that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam-forming devices (e.g., bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In the texture-enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized projection views of noise. Then, the TxE-OBIR image is generated by adding the texture image to the OBIR reconstruction. As confirmed qualitatively by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold-standard FBP images.
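
    The additive texture step can be imitated with standard tools: filtered back-project a noise-only sinogram and add the result to the smooth OBIR image. The sketch below uses scikit-image's iradon as the FBP stand-in; the noise level, detector geometry, and the obir_image variable are placeholder assumptions.

      import numpy as np
      from skimage.transform import iradon

      def texture_enhance(obir_image: np.ndarray, theta: np.ndarray,
                          noise_sigma: float, rng=None) -> np.ndarray:
          """Add FBP-reconstructed noise texture to a (smooth) optimization-based image.
          A noise-only sinogram is synthesized, FBP-reconstructed, and added back."""
          if rng is None:
              rng = np.random.default_rng()
          n_det = obir_image.shape[0]          # detector bins ~ image width (assumption)
          noise_sinogram = rng.normal(0.0, noise_sigma, size=(n_det, theta.size))
          texture = iradon(noise_sinogram, theta=theta, filter_name="ramp",
                           output_size=obir_image.shape[0], circle=True)
          return obir_image + texture

      # Usage sketch (obir_image: a previously computed OBIR reconstruction):
      #   theta = np.linspace(0.0, 180.0, 360, endpoint=False)
      #   enhanced = texture_enhance(obir_image, theta, noise_sigma=0.01)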

  16. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT.

    PubMed

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-21

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of a health concern to the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  17. Optimization-based multicriteria decision analysis for identification of desired petroleum-contaminated groundwater remediation strategies.

    PubMed

    Lu, Hongwei; Feng, Mao; He, Li; Ren, Lixia

    2015-06-01

    The conventional multicriteria decision analysis (MCDA) methods used for pollution control generally depend on the data currently available. This could limit their real-world applications, especially where the input data (e.g., the most cost-effective remediation cost and eventual contaminant concentration) might vary by scenario. This study proposes an optimization-based MCDA (OMCDA) framework to address such a challenge. It is capable of (1) capturing various preferences of decision-makers, (2) screening and analyzing the performance of various optimized remediation strategies under changeable scenarios, and (3) compromising incongruous decision analysis results. A real-world case study is employed for demonstration, where four scenarios are considered with each one corresponding to a set of weights representative of the preference of the decision-makers. Four criteria are selected, i.e., optimal total pumping rate, remediation cost, contaminant concentration, and fitting error. Their values are determined through running optimization and optimization-based simulation procedures. Four sets of the most desired groundwater remediation strategies are identified, implying specific pumping rates under varied scenarios. Results indicate that the best action lies in groups 32 and 16 for the 5-year, groups 49 and 36 for the 10-year, groups 26 and 13 for the 15-year, and groups 47 and 13 for the 20-year remediation.

  18. H.264/SVC parameter optimization based on quantization parameter, MGS fragmentation, and user bandwidth distribution

    NASA Astrophysics Data System (ADS)

    Chen, Xu; Zhang, Ji-Hong; Liu, Wei; Liang, Yong-Sheng; Feng, Ji-Qiang

    2013-12-01

    In the situation of limited bandwidth, how to improve the performance of scalable video coding plays an important role in video coding. Previously proposed scalable video coding optimization schemes concentrate on reducing coding computation or trying to achieve consistent video quality; however, the connections between the coding scheme, the transmission environment, and the users' access patterns were not jointly considered. This article proposes an H.264/SVC (scalable video codec) parameter optimization scheme, which attempts to make full use of the limited bandwidth to achieve a better peak signal-to-noise ratio, based on the joint measure of the user bandwidth range and its probability density distribution. The algorithm constructs a relationship map between the bandwidth ranges of multiple users and the quantified quality-increment measure, QP_e, in order to make effective use of the video coding bit-stream. A medium grain scalability (MGS) fragmentation optimization algorithm is also presented with respect to the user bandwidth probability density distribution, encoding bit rate, and scalability. Experiments on a public dataset show that this method provides significant average quality improvement for streaming video applications.
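
    The core idea of weighting rate points by the user-bandwidth distribution can be illustrated with a toy model: given candidate MGS layer ladders and sampled user bandwidths, pick the ladder that maximizes expected delivered quality. The rate/PSNR tables and bandwidth samples below are placeholders, and the quality model is a simplification of the paper's QP_e-based measure.

      import numpy as np

      def expected_quality(layer_rates, layer_psnr, bandwidth_samples):
          """Mean PSNR over users, each receiving the highest layer its bandwidth allows.
          Assumes the ladder is listed in increasing cumulative-rate order."""
          rates = np.asarray(layer_rates)          # cumulative bit rate per layer (kbit/s)
          psnr = np.asarray(layer_psnr)            # PSNR achieved at each cumulative rate (dB)
          quality = []
          for bw in bandwidth_samples:
              feasible = np.flatnonzero(rates <= bw)
              quality.append(psnr[feasible[-1]] if feasible.size else 0.0)
          return float(np.mean(quality))

      def pick_best_configuration(configurations, bandwidth_samples):
          """Choose the (rates, psnr) ladder with the highest expected quality."""
          scores = [expected_quality(r, q, bandwidth_samples) for r, q in configurations]
          return int(np.argmax(scores)), scores

      rng = np.random.default_rng(0)
      users = rng.lognormal(mean=6.5, sigma=0.5, size=5000)     # placeholder bandwidths (kbit/s)
      configs = [([300, 600, 1200], [32.0, 35.5, 38.0]),        # placeholder MGS ladders
                 ([200, 800, 1600], [30.5, 36.5, 39.0])]
      best, scores = pick_best_configuration(configs, users)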

  19. Development of a method for predicting the performance and wear of PDC (polycrystalline diamond compact) drill bits

    SciTech Connect

    Glowka, D.A.

    1987-09-01

    A method is developed for predicting cutter forces, temperatures, and wear on PDC bits as well as integrated bit performance parameters such as weight-on-bit, drilling torque, and bit imbalance. A computer code called PDCWEAR has been developed to make this method available as a tool for general bit design and analysis. The method uses single-cutter data to provide a measure of rock drillability and employs theoretical considerations to account for interaction among closely spaced cutters on the bit. Experimental data are presented to establish the effects of cutter size and wearflat area on the forces that develop during rock cutting. Waterjet assistance is shown to significantly reduce cutting forces, thereby potentially extending bit life and reducing weight-on-bit and torque requirements in hard rock. The effects of several other design and operating parameters on bit life and drilling performance are also investigated.

  20. 77 FR 51825 - Certain Drill Bits and Products Containing Same; Determination To Review an Initial Determination...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    ... Company and Longyear TM, Inc. both of South Jordan, Utah. 76 FR 32997 (June 4, 2012). The complaint... COMMISSION Certain Drill Bits and Products Containing Same; Determination To Review an Initial Determination... importation of certain drill bits and products containing the same by reason of infringement of certain...

  1. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to Binary Phase Shift Keying (BPSK). Bit error rates (BER) versus channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of log2(8) x (8/9) = 2.67 information bits/s/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra compared with the Viterbi algorithm, and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes, indicate that LRBC using block codes is a desirable method for high data rate implementations.

  2. Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser

    NASA Astrophysics Data System (ADS)

    Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael

    2010-06-01

    Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte Carlo simulations, stochastic modeling and quantum cryptography. The quality of an RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates, as they are limited only by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations, resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser with delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
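
    A small sketch of the post-processing described above: take a high-order finite difference ("high derivative") of the digitized intensity and retain a few least significant bits per sample. The "chaotic intensity" below is just pseudo-random stand-in data, so the output only illustrates the bit-extraction step and should not be treated as cryptographic-quality randomness.

        import numpy as np

        rng = np.random.default_rng(1)
        # Stand-in for the digitized chaotic laser intensity (8-bit samples).
        samples = rng.integers(0, 256, size=100_000, dtype=np.int64)

        ORDER = 4        # order of the finite difference ("high derivative")
        KEEP_BITS = 5    # least significant bits retained per differentiated sample

        diff = np.diff(samples, n=ORDER)             # n-th order finite difference
        lsbs = diff & ((1 << KEEP_BITS) - 1)         # keep only the low KEEP_BITS bits

        # Unpack the kept bits into one flat bit stream.
        bits = ((lsbs[:, None] >> np.arange(KEEP_BITS - 1, -1, -1)) & 1).astype(np.uint8).ravel()
        print(f"{bits.size} bits generated, fraction of ones = {bits.mean():.4f}")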

  3. Performance of reduced bit-depth acquisition for optical frequency domain imaging.

    PubMed

    Goldberg, Brian D; Vakoc, Benjamin J; Oh, Wang-Yuhl; Suter, Melissa J; Waxman, Sergio; Freilich, Mark I; Bouma, Brett E; Tearney, Guillermo J

    2009-09-14

    High-speed optical frequency domain imaging (OFDI) has enabled practical wide-field microscopic imaging in the biological laboratory and clinical medicine. The imaging speed of OFDI, and therefore the field of view, of current systems is limited by the rate at which data can be digitized and archived rather than by the system sensitivity or laser performance. One solution to this bottleneck is to natively digitize OFDI signals at reduced bit depths, e.g., at 8-bit depth rather than the conventional 12-14 bit depth, thereby reducing the overall bandwidth. However, the implications of reduced bit-depth acquisition on image quality have not been studied. In this paper, we use simulations and empirical studies to evaluate the effects of reduced bit-depth acquisition on OFDI image quality. We show that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. Images of a human coronary artery acquired in vivo at 8-bit depth are presented and compared with images at higher bit-depth acquisition.
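
    A toy numerical illustration of the bit-depth question, under stated assumptions: the same synthetic fringe-like signal is quantized at 8, 12 and 14 bits and the quantization-limited SNR is compared. This ignores the real OFDI acquisition chain (detector noise, reference-arm power, dynamic range usage), so it only isolates the quantization-noise component.

        import numpy as np

        t = np.linspace(0.0, 1.0, 4096, endpoint=False)
        # Synthetic fringe signal occupying most of the assumed ADC full scale [-1, 1).
        signal = 0.9 * np.sin(2 * np.pi * 80 * t) * np.hanning(t.size)

        def quantize(x, bits):
            levels = 2 ** bits
            step = 2.0 / levels                   # full scale assumed to be [-1, 1)
            return np.clip(np.round(x / step), -levels // 2, levels // 2 - 1) * step

        for bits in (8, 12, 14):
            q = quantize(signal, bits)
            noise = q - signal
            snr_db = 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))
            print(f"{bits:2d}-bit quantization: SNR = {snr_db:5.1f} dB")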

  4. Precious bits: frame synchronization in Jet Propulsion Laboratory's Advanced Multi-Mission Operations System (AMMOS)

    NASA Technical Reports Server (NTRS)

    Wilson, E.

    2001-01-01

    The Jet Propulsion Laboratory's (JPL) Advanced Multi-Mission Operations System (AMMOS) processes data received from deep-space spacecraft, where error rates are high, bit rates are low, and every bit is precious. Frame synchronization and data extraction, as performed by AMMOS, enhance data acquisition and reliability for maximum data return and validity.

  5. Seismic Investigations of the Zagros-Bitlis Thrust Zone

    NASA Astrophysics Data System (ADS)

    Gritto, R.; Sibol, M.; Caron, P.; Quigley, K.; Ghalib, H.; Chen, Y.

    2009-05-01

    We present results of crustal studies obtained with seismic data from the Northern Iraq Seismic Network (NISN). NISN has operated 10 broadband stations in north-eastern Iraq since late 2005. At present, over 800 GB of seismic waveform data have been analyzed. The aim of the present study is to derive models of the local and regional crustal structure of north and north-eastern Iraq, including the northern extension of the Zagros collision zone. This goal is, in part, achieved by estimating local and regional seismic velocity models using receiver function and surface wave dispersion analyses and by using these velocity models to obtain accurate hypocenter locations and event focal mechanisms. Our analysis of hypocenter locations produces a clear picture of the seismicity associated with the tectonics of the region. The largest seismicity rate is confined to the active northern section of the Zagros thrust zone, while it decreases towards the southern end, before the rate increases again in the Bandar Abbas region. Additionally, the rift zones in the Red Sea and the Gulf of Aden are clearly demarcated by high seismicity rates. Our analysis of waveform data indicates clear propagation paths from the west or south-west across the Arabian shield as well as from the north and east into NISN. Phases including Pn, Pg, Sn, Lg, as well as LR are clearly observed on these seismograms. In contrast, blockage or attenuation of Pg and Sg-wave energy is observed for propagation paths across the Zagros-Bitlis zone from the south, while Pn and Sn phases are not affected. These findings support earlier tectonic models that suggested the existence of multiple parallel listric faults splitting off the main Zagros fault zone in an east-west direction. These faults appear to attenuate the crustal phases while the refracted phases, propagating across the mantle lid, remain unaffected. We will present surface wave analysis in support of these findings, indicating multi

  6. 24 CFR 791.404 - Field Office allocation planning.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Field Office allocation planning... Allocation of Budget Authority for Housing Assistance § 791.404 Field Office allocation planning. (a) General objective. The allocation planning process should provide for the equitable distribution of available...

  7. 10 CFR 217.54 - Elements of an allocation order.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Allocations System regulation (10 CFR part 217), which is part of the Federal Priorities and Allocations System”; and (e) A current copy of the Energy Priorities and Allocations System regulation (10 CFR part... 10 Energy 3 2014-01-01 2014-01-01 false Elements of an allocation order. 217.54 Section...

  8. 49 CFR 33.54 - Elements of an allocation order.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Transportation Priorities and Allocations System regulation (49 CFR Part 33)”; and (e) A current copy of the Transportation Priorities and Allocations System regulation (49 CFR Part 33) as of the date of the allocation... 49 Transportation 1 2012-10-01 2012-10-01 false Elements of an allocation order. 33.54 Section...

  9. 49 CFR 33.54 - Elements of an allocation order.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Transportation Priorities and Allocations System regulation (49 CFR Part 33)”; and (e) A current copy of the Transportation Priorities and Allocations System regulation (49 CFR Part 33) as of the date of the allocation... 49 Transportation 1 2013-10-01 2013-10-01 false Elements of an allocation order. 33.54 Section...

  10. 49 CFR 33.54 - Elements of an allocation order.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Transportation Priorities and Allocations System regulation (49 CFR Part 33)”; and (e) A current copy of the Transportation Priorities and Allocations System regulation (49 CFR Part 33) as of the date of the allocation... 49 Transportation 1 2014-10-01 2014-10-01 false Elements of an allocation order. 33.54 Section...

  11. 10 CFR 217.54 - Elements of an allocation order.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Allocations System regulation (10 CFR part 217), which is part of the Federal Priorities and Allocations System”; and (e) A current copy of the Energy Priorities and Allocations System regulation (10 CFR part... 10 Energy 3 2012-01-01 2012-01-01 false Elements of an allocation order. 217.54 Section...

  12. 10 CFR 217.54 - Elements of an allocation order.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Allocations System regulation (10 CFR part 217), which is part of the Federal Priorities and Allocations System”; and (e) A current copy of the Energy Priorities and Allocations System regulation (10 CFR part... 10 Energy 3 2013-01-01 2013-01-01 false Elements of an allocation order. 217.54 Section...

  13. The Use of Different Rules to Allocate Reward and Punishment.

    ERIC Educational Resources Information Center

    Mueller, Charles W.

    Much research has been conducted about how and when individuals allocate rewards, yet little research exists concerning the allocation of punishment. The process of allocating negative outcomes may be different from the decision making process for positive outcomes. To examine the decision making process for allocating rewards and punishment,…

  14. 49 CFR 198.13 - Grant allocation formula.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Grant allocation formula. 198.13 Section 198.13... PIPELINE SAFETY PROGRAMS Grant Allocation § 198.13 Grant allocation formula. (a) Beginning in calendar year... state agency comments on any proposed changes to the allocation formula. (f) Grants are limited to...

  15. 49 CFR 198.13 - Grant allocation formula.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Grant allocation formula. 198.13 Section 198.13... PIPELINE SAFETY PROGRAMS Grant Allocation § 198.13 Grant allocation formula. (a) Beginning in calendar year... state agency comments on any proposed changes to the allocation formula. (f) Grants are limited to...

  16. 49 CFR 198.13 - Grant allocation formula.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Grant allocation formula. 198.13 Section 198.13... PIPELINE SAFETY PROGRAMS Grant Allocation § 198.13 Grant allocation formula. (a) Beginning in calendar year... state agency comments on any proposed changes to the allocation formula. (f) Grants are limited to...

  17. 47 CFR 64.903 - Cost allocation manuals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Cost allocation manuals. 64.903 Section 64.903... RULES RELATING TO COMMON CARRIERS Allocation of Costs § 64.903 Cost allocation manuals. (a) Each... mid-sized incumbent local exchange carriers is required to file a cost allocation manual...

  18. 40 CFR 96.53 - Recordation of NOX allowance allocations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... allocated to an allocation set-aside. (c) Serial numbers for allocated NO X allowances. When allocating NOX... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Recordation of NOX allowance... PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR...

  19. 26 CFR 1.514(e)-1 - Allocation rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Allocation rules. 1.514(e)-1 Section 1.514(e)-1... Allocation rules. Where only a portion of property is debt-financed property, proper allocation of the basis...)(iii) of § 1.514(b)-1 for illustrations of proper allocation....

  20. 7 CFR 761.202 - Timing of allocations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Timing of allocations. 761.202 Section 761.202... AGRICULTURE SPECIAL PROGRAMS GENERAL PROGRAM ADMINISTRATION Allocation of Farm Loan Programs Funds to State Offices § 761.202 Timing of allocations. The Agency's National Office allocates funds for FO and OL...

  1. 24 CFR 791.404 - Field Office allocation planning.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Field Office allocation planning... Allocation of Budget Authority for Housing Assistance § 791.404 Field Office allocation planning. (a) General... authority, consistent with the relative housing needs of each allocation area within the field...

  2. 26 CFR 1.141-6 - Allocation and accounting rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 2 2012-04-01 2012-04-01 false Allocation and accounting rules. 1.141-6 Section... Allocation and accounting rules. (a) Allocation of proceeds to expenditures. For purposes of §§ 1.141-1.... Thus, allocations generally may be made using any reasonable, consistently applied accounting...

  3. 26 CFR 1.141-6 - Allocation and accounting rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 2 2013-04-01 2013-04-01 false Allocation and accounting rules. 1.141-6 Section... Allocation and accounting rules. (a) Allocation of proceeds to expenditures. For purposes of §§ 1.141-1.... Thus, allocations generally may be made using any reasonable, consistently applied accounting...

  4. 26 CFR 1.141-6 - Allocation and accounting rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Allocation and accounting rules. 1.141-6 Section... Allocation and accounting rules. (a) Allocation of proceeds to expenditures. For purposes of §§ 1.141-1.... Thus, allocations generally may be made using any reasonable, consistently applied accounting...

  5. 10 CFR 455.30 - Allocation of funds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Allocation of funds. 455.30 Section 455.30 Energy... § 455.30 Allocation of funds. (a) DOE will allocate available funds among the States for two purposes... that are eligible pursuant to § 455.91, up to 100 percent of the funds allocated to the State by...

  6. 26 CFR 1.141-6 - Allocation and accounting rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... Thus, allocations generally may be made using any reasonable, consistently applied accounting method... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Allocation and accounting rules. 1.141-6 Section... Allocation and accounting rules. (a) Allocation of proceeds to expenditures. For purposes of §§...

  7. A hyperspectral images compression algorithm based on 3D bit plane transform

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue

    2010-10-01

    According to analyses of hyperspectral images, a new compression algorithm based on a 3-D bit plane transform is proposed. The spectral correlation is higher than the spatial correlation. The algorithm is proposed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce correlation when neighboring pixels have similar values. The algorithm calculates the horizontal, vertical, and spectral bit plane transforms sequentially. As with the spectral bit plane transform, the algorithm can be easily realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized with a parallel computing model, which improves the calculation efficiency and greatly reduces the processing time. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyperspectral image compression system while efficiently reducing the cost of computation and memory usage.
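
    A short sketch of the elementary operation behind a bit plane transform: decomposing a (here random, hypothetical) hyperspectral cube into bit planes with shifts and masks, and checking that the planes recombine to the original data. No actual transform or encoding of the planes is performed.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical hyperspectral cube: (bands, rows, cols), 8-bit samples.
        cube = rng.integers(0, 256, size=(32, 64, 64), dtype=np.uint8)

        def bit_planes(data, n_bits=8):
            """Return an array of shape (n_bits, ...) with plane 0 = least significant bit."""
            return np.stack([(data >> b) & 1 for b in range(n_bits)]).astype(np.uint8)

        planes = bit_planes(cube)
        print("bit-plane array shape:", planes.shape)      # (8, 32, 64, 64)

        # Sanity check: recombining the planes reproduces the original cube.
        recombined = sum(planes[b].astype(np.uint16) << b for b in range(8)).astype(np.uint8)
        assert np.array_equal(recombined, cube)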

  8. Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC

    NASA Astrophysics Data System (ADS)

    Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo

    We propose a new coding scheme for lossless bit-rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of the spatial parameters. Experiments show that it achieves a significant lossless bit-rate reduction of 9.93% to 12.14% for the spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared with the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.

  9. Multi-bit upset aware hybrid error-correction for cache in embedded processors

    NASA Astrophysics Data System (ADS)

    Jiaqi, Dong; Keni, Qiu; Weigong, Zhang; Jing, Wang; Zhenzhen, Wang; Lihua, Ding

    2015-11-01

    For a processor working in the radiation environment of space, circuits tend to suffer from single-event effects and the resulting system failures due to cosmic rays and high-energy particle radiation. Therefore, the reliability of the processor has become an increasingly serious issue. BCH-based error correction codes can correct multi-bit errors, but they introduce a large latency overhead. This paper proposes a hybrid error correction approach that combines BCH and EDAC to correct both multi-bit and single-bit errors in caches at low cost. The proposed technique can correct errors of up to four bits and corrects single-bit errors in one cycle. Evaluation results show that the proposed hybrid error-correction scheme can improve the performance of cache accesses by up to 20% compared with a pure BCH scheme.

  10. All-optical 2-bit header recognition and packet switching using polarization bistable VCSELs.

    PubMed

    Hayashi, Daisuke; Nakao, Kazuya; Katayama, Takeo; Kawaguchi, Hitoshi

    2015-04-01

    We propose and evaluate an all-optical 2-bit header recognition and packet switching method using two 1.55-µm polarization bistable vertical-cavity surface-emitting lasers (VCSELs) and three optical switches. Polarization bistable VCSELs acted as flip-flop devices by using AND-gate operations of the header and set pulses, together with the reset pulses. Optical packets including 40-Gb/s non-return-to-zero pseudo-random bit-sequence payloads were successfully sent to one of four ports according to the state of two bits in the headers with a 4-bit 500-Mb/s return-to-zero format. The input pulse powers were 17.2 to 31.8 dB lower than the VCSEL output power. We also examined an extension of this method to multi-bit header recognition and packet switching.

  11. Changes realized from extended bit-depth and metal artifact reduction in CT

    SciTech Connect

    Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J.

    2013-06-15

    Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (400 000 0000 histories, 6X, 10 × 10 cm² beam traversing the Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with the Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and the derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU of 8066.5 ± 56.6 and 13 588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well matched between 12- and 16-bit images except downstream of the Cerrobend rod, where the 16-bit dose was approximately 6

  12. Multi-bit binary decoder based on Belousov-Zhabotinsky reaction.

    PubMed

    Sun, Ming-Zhu; Zhao, Xin

    2013-03-21

    It is known that the Belousov-Zhabotinsky (BZ) reaction can be applied to chemical computation, e.g., image processing, computational geometry, logical computation, and so on. In the field of logical computation, some basic logic gates and basic combinational logic circuits, such as adders, counters, and memory cells, have already been implemented in simulations or in chemical experiments. In this paper, we focus on another important combinational logic circuit, the binary decoder. Integrating AND and NOT gates, we first design and implement a one-bit binary decoder through numerical simulation. We then show that the one-bit decoder can be extended to two-bit, three-bit, or even higher-bit binary decoders by a cascade method. The simulation results demonstrate the effectiveness of these devices. The chemical realization of decoders can guide the construction of more sophisticated functions based on the BZ reaction; meanwhile, the cascade method can facilitate the design of other combinational logic circuits.
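
    The cascade idea can be written down as ordinary Boolean logic (this sketch says nothing about the BZ-reaction implementation itself): a one-bit decoder is simply the pair {NOT x, x}, and an (n+1)-bit decoder is obtained by ANDing every output of an n-bit decoder with NOT b and with b for the newly added input bit b.

        def decoder_1bit(x):
            """One-bit (1-to-2) decoder: exactly one of the two output lines is 1."""
            return [int(not x), int(x)]

        def decoder(bits):
            """Cascade one-bit decoders into an n-bit (n-to-2^n) decoder.

            bits[0] is the most significant bit; the asserted output index equals
            the binary value of the input.
            """
            outputs = [1]                        # zero-bit decoder: a single asserted line
            for b in bits:
                stage = decoder_1bit(b)          # [NOT b, b]
                outputs = [o & s for o in outputs for s in stage]
            return outputs

        # 3-bit example: input 101 (= 5) asserts exactly output line 5.
        print(decoder([1, 0, 1]))                # [0, 0, 0, 0, 0, 1, 0, 0]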

  13. A Novel Uniform Discrete Multitone Transceiver with Power-Allocation for Digital Subscriber Line

    NASA Astrophysics Data System (ADS)

    Baig, Sobia; Junaid Mughal, Muhammad

    A novel Uniform Discrete Multitone (DMT) transceiver is proposed, utilizing a wavelet packet based filter bank transmultiplexer in conjunction with a DMT transceiver. The proposed transceiver decomposes the channel spectrum into subbands of equal bandwidth. The objective is to minimize the bit error rate (BER), which is increased by channel-noise amplification. This noise amplification is due to the Zero-Forcing equalization (ZFE) technique. Quantification of the channel-noise amplification is presented, based on the post-equalization signal-to-noise ratio (SNR) and the probability of error in all subbands of the Uniform DMT system. A modified power loading algorithm is applied to allocate variable power according to subband gains. A BER performance comparison of the Uniform DMT system with variable and with uniform power loading, and with a conventional DMT system, in a Digital Subscriber Line (DSL) channel is presented.
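
    The abstract does not spell out the modified power loading algorithm, so the following is a generic water-filling sketch of power allocation across subbands under a total power budget: subbands with a better gain-to-noise ratio receive more power, and very poor subbands may receive none. The subband gains and the power budget are invented.

        import numpy as np

        def water_fill(gains, total_power):
            """Classic water-filling: p_k = max(0, mu - 1/g_k) with sum(p_k) = total_power."""
            g = np.asarray(gains, dtype=float)
            order = np.argsort(1.0 / g)                  # subbands from best to worst
            inv = (1.0 / g)[order]
            power = np.zeros_like(g)
            for k in range(len(g), 0, -1):               # try keeping the k best subbands
                mu = (total_power + inv[:k].sum()) / k   # water level for that active set
                p = mu - inv[:k]
                if p[-1] >= 0.0:                         # worst kept subband still non-negative
                    power[order[:k]] = p
                    break
            return power

        gains = np.array([3.2, 1.1, 0.4, 2.5, 0.1, 1.8])   # hypothetical subband gain-to-noise ratios
        p = water_fill(gains, total_power=1.0)
        print("per-subband power:", np.round(p, 4), "  total:", round(float(p.sum()), 6))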

  14. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation labeling defined in the DVB-S2 standard.
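
    A generic binary-switching-style sketch: start from a random labeling and greedily accept pairwise label swaps that lower a cost function. The surrogate cost used here (bit differences weighted by a Gaussian-like pairwise error factor) and the 8-PSK constellation are stand-ins chosen to keep the example small; the paper's combined Euclidean-distance/mutual-information cost and the 32-APSK constellation are not reproduced.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        M = 8                                           # 8-PSK stand-in for 32-APSK
        points = np.exp(2j * np.pi * np.arange(M) / M)

        def hamming(a, b):
            return bin(int(a) ^ int(b)).count("1")

        def cost(labels, snr=8.0):
            """BICM-style surrogate: bit differences weighted by a pairwise error factor."""
            c = 0.0
            for i, j in itertools.combinations(range(M), 2):
                d2 = abs(points[i] - points[j]) ** 2
                c += hamming(labels[i], labels[j]) * np.exp(-snr * d2 / 4.0)
            return c

        labels = list(rng.permutation(M))               # random initial labeling
        best_cost = cost(labels)
        improved = True
        while improved:                                 # accept any improving pairwise swap
            improved = False
            for i, j in itertools.combinations(range(M), 2):
                labels[i], labels[j] = labels[j], labels[i]
                c = cost(labels)
                if c < best_cost - 1e-12:
                    best_cost, improved = c, True
                else:
                    labels[i], labels[j] = labels[j], labels[i]   # undo the swap
        print("optimized labels:", [int(x) for x in labels], " cost: %.4f" % best_cost)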

  15. Transforming networking within the ESIP Federation using ResearchBit

    NASA Astrophysics Data System (ADS)

    Robinson, E.

    2015-12-01

    Geoscientists increasingly need interdisciplinary teams to solve their research problems. Currently, geoscientists use Research Networking (RN) systems to connect with each other and find people of similar and dissimilar interests. As we shift to digitally mediated scholarship, we need innovative methods for scholarly communication. Formal methods of scholarly communication are undergoing vast transformation owing to the open-access movement and reproducible research. However, informal scholarly communication that takes place at professional society meetings and conferences, like AGU, has received limited research attention and relies primarily on serendipitous interaction. The ResearchBit project aims to fundamentally improve informal methods of scholarly communication by leveraging the serendipitous interactions of researchers and making them more aware of co-located potential collaborators with mutual interests. This presentation will describe our preliminary hardware testing done at the Federation for Earth Science Information Partners (ESIP) Summer meeting this past July and the initial recommendation system design. The presentation will also cover the cultural shifts and hurdles to introducing new technology, the privacy concerns of tracking technology, and how we are addressing those new issues.

  16. Transnational exchange of scientific data: The ``Bits of Power'' report

    NASA Astrophysics Data System (ADS)

    Berry, R. Stephen

    1998-07-01

    In 1994, the U.S. National Committee for the Committee on Data for Science and Technology (CODATA), organized under the Commission on Physical Sciences, Mathematics and Applications of the National Research Council established the Committee on Issues in the Transborder Flow of Scientific Data. The purpose of this Committee was to examine the current state of global access to scientific data, to identify strengths, problems and challenges confronting scientists now, or likely to arise in the next few years, and to make recommendations on building the strengths and ameliorating or avoiding the problems. The Committee's report appeared as the book Bits of Power: Issues in Global Access to Scientific Data (National Academy Press, Washington, D.C., 1997). This presentation is a brief summary of that report, particularly as it pertains to atomic and molecular data. The context is necessarily the evolution toward increasing electronic acquisition, archiving and distribution of scientific data. Thus the central issues were divided into the technological infrastructure, the issues for the sciences and scientists in the various disciplines, the economic aspects and the legal issues. For purposes of this study, the sciences fell naturally into four groups: the laboratory physical sciences, the biological sciences, the earth sciences and the astronomical and planetary sciences. Some of the substantive scientific aspects are specific to particular groups of sciences, but the matters of infrastructure, economic questions and legal issues apply, for the most part, to all the sciences.

  17. Advanced High-Speed 16-Bit Digitizer System

    SciTech Connect

    2012-05-01

    The fastest commercially available 16-bit ADC can only perform around 200 mega-samples per second (200 MS/s). Connecting ADC chips together in eight different time domains increases the quantity of samples taken by a factor of eight. This method of interleaving requires that the input signal being sampled is split into eight identical signals and arrives at each ADC chip at the same point in time. The splitting of the input signal is performed in the analog front end containing a wideband filter that impedance matches the input signal to the ADC chips. Each ADC uses a clock to tell it when to perform a conversion. Using eight unique clocks spaced in 45-degree increments is the method used to time shift when each ADC chip performs its conversion. Given that this control clock is a fixed frequency, the clock phase shifting is accomplished by tightly controlling the distance that the clock must travel, resulting in a time delay. The interleaved ADC chips will now generate digital data in eight different time domains. These data are processed inside a field-programmable gate array (FPGA) to move the data back into a single time domain and store it into memory. The FPGA also contains a Nios II processor that provides system control and data retrieval via Ethernet.
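
    A small numerical sketch of eight-way time interleaving: eight streams, each notionally sampled on one of the 45-degree-shifted clocks, are merged back into a single time domain, which is the reordering the FPGA performs (with memory addressing rather than NumPy) before storing the data. The sample rate and test tone below are assumed values.

        import numpy as np

        N_ADC = 8
        PER_ADC = 1024
        FS_TOTAL = 1.6e9                              # aggregate rate, e.g. 8 x 200 MS/s

        # Simulated input and the slice each phase-shifted ADC would capture.
        n = np.arange(N_ADC * PER_ADC)
        analog = np.sin(2 * np.pi * 37e6 * n / FS_TOTAL)
        streams = [analog[k::N_ADC] for k in range(N_ADC)]

        # Recombine the eight time domains into one contiguous record.
        merged = np.empty(N_ADC * PER_ADC)
        for k, s in enumerate(streams):
            merged[k::N_ADC] = s

        assert np.array_equal(merged, analog)
        print(f"recombined {merged.size} samples from {N_ADC} interleaved ADC streams")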

  18. A short impossibility proof of quantum bit commitment

    NASA Astrophysics Data System (ADS)

    Chiribella, Giulio; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Schlingemann, Dirk; Werner, Reinhard

    2013-06-01

    Bit commitment protocols, whose security is based on the laws of quantum mechanics alone, are generally held to be impossible on the basis of a concealment-bindingness tradeoff (Lo and Chau, 1997 [1], Mayers, 1997 [2]). A strengthened and explicit impossibility proof has been given in D'Ariano et al. (2007) [3] in the Heisenberg picture and in a C*-algebraic framework, considering all conceivable protocols in which both classical and quantum information are exchanged. In the present Letter we provide a new impossibility proof in the Schrödinger picture, greatly simplifying the classification of protocols and strategies using the mathematical formulation in terms of quantum combs (Chiribella et al., 2008 [4]), with each single-party strategy represented by a conditioned comb. We prove that assuming a stronger notion of concealment (holding for each classical communication history, not merely on average) allows Alice's cheat to pass even Bob's worst-case test. The present approach allows us to restate the concealment-bindingness tradeoff in terms of the continuity of dilations of probabilistic quantum combs with the metric given by the comb discriminability distance.

  19. A Complete Graphical Calculus for Spekkens' Toy Bit Theory

    NASA Astrophysics Data System (ADS)

    Backens, Miriam; Duman, Ali Nabi

    2016-01-01

    While quantum theory cannot be described by a local hidden variable model, it is nevertheless possible to construct such models that exhibit features commonly associated with quantum mechanics. These models are also used to explore the question of ψ -ontic versus ψ -epistemic theories for quantum mechanics. Spekkens' toy theory is one such model. It arises from classical probabilistic mechanics via a limit on the knowledge an observer may have about the state of a system. The toy theory for the simplest possible underlying system closely resembles stabilizer quantum mechanics, a fragment of quantum theory which is efficiently classically simulable but also non-local. Further analysis of the similarities and differences between those two theories can thus yield new insights into what distinguishes quantum theory from classical theories, and ψ -ontic from ψ -epistemic theories. In this paper, we develop a graphical language for Spekkens' toy theory. Graphical languages offer intuitive and rigorous formalisms for the analysis of quantum mechanics and similar theories. To compare quantum mechanics and a toy model, it is useful to have similar formalisms for both. We show that our language fully describes Spekkens' toy theory and in particular, that it is complete: meaning any equality that can be derived using other formalisms can also be derived entirely graphically. Our language is inspired by a similar graphical language for quantum mechanics called the ZX-calculus. Thus Spekkens' toy bit theory and stabilizer quantum mechanics can be analysed and compared using analogous graphical formalisms.

  20. PDC Bit Testing at Sandia Reveals Influence of Chatter in Hard-Rock Drilling

    SciTech Connect

    RAYMOND,DAVID W.

    1999-10-14

    Polycrystalline diamond compact (PDC) bits have yet to be routinely applied to drilling the hard-rock formations characteristic of geothermal reservoirs. Most geothermal production wells are currently drilled with tungsten-carbide-insert roller-cone bits. PDC bits have significantly improved penetration rates and bit life beyond roller-cone bits in the oil and gas industry, where soft to medium-hard rock types are encountered. If PDC bits could be used to double current penetration rates in hard rock, geothermal well-drilling costs could be reduced by 15 percent or more. PDC bits exhibit reasonable life in hard-rock wear testing using the relatively rigid setups typical of laboratory testing. Unfortunately, field experience indicates otherwise. The prevailing mode of failure encountered by PDC bits returning from hard-rock formations in the field is catastrophic, presumably due to impact loading. These failures usually occur in advance of any appreciable wear that might dictate cutter replacement. Self-induced bit vibration, or "chatter", is one of the mechanisms that may be responsible for impact damage to PDC cutters in hard-rock drilling. Chatter is more severe in hard-rock formations since they induce significant dynamic loading on the cutter elements. Chatter is a phenomenon whereby the drillstring becomes dynamically unstable and excessive sustained vibrations occur. Unlike forced vibration, the force (i.e., weight on bit) that drives self-induced vibration is coupled with the response it produces. Many of the chatter principles derived in the machine tool industry are applicable to drilling. It is a simple matter to make changes to a machine tool to study the chatter phenomenon; this is not the case with drilling. Chatter occurs in field drilling due to the flexibility of the drillstring. Hence, laboratory setups must be made compliant to observe chatter.

  1. Carbon allocation to ectomycorrhizal fungi correlates with belowground allocation in culture studies.

    PubMed

    Hobbie, Erik A

    2006-03-01

    Ectomycorrhizal fungi form symbioses with most temperate and boreal tree species, but difficulties in measuring carbon allocation to these symbionts have prevented the assessment of their importance in forest ecosystems. Here, I surveyed allocation patterns in 14 culture studies and five field studies of ectomycorrhizal plants. In culture studies, allocation to ectomycorrhizal fungi (NPP_f) was linearly related to total belowground net primary production (NPP_b) by the equation NPP_f = 41.5% × NPP_b - 11.3% (r² = 0.55, P < 0.001) and ranged from 1% to 21% of total net primary production. As a percentage of NPP, allocation to ectomycorrhizal fungi was highest at the lowest plant growth rates and lowest nutrient availabilities. Because total belowground allocation can be estimated using carbon balance techniques, these relationships should allow ecologists to incorporate mycorrhizal fungi into existing ecosystem models. In field studies, allocation to ectomycorrhizal fungi ranged from 0% to 22% of total allocation, but wide differences in measurement techniques made intercomparisons difficult. Techniques such as fungal in-growth cores, root branching-order studies, and isotopic analyses could refine our estimates of turnover rates of fine roots, mycorrhizae, and extraradical hyphae. Together with ecosystem modeling, such techniques could soon provide good estimates of the relative importance of root vs. fungal allocation in belowground carbon budgets.

  2. An Analysis and Allocation System for Library Collections Budgets: The Comprehensive Allocation Process (CAP)

    ERIC Educational Resources Information Center

    Lyons, Lucy Eleonore; Blosser, John

    2012-01-01

    The "Comprehensive Allocation Process" (CAP) is a reproducible decision-making structure for the allocation of new collections funds, for the reallocation of funds within stagnant budgets, and for budget cuts in the face of reduced funding levels. This system was designed to overcome common shortcomings of current methods. Its philosophical…

  3. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers: static, dynamic, and LQG dissipative controllers. The nominal phase-0 structure of the controls-structures interaction evolutionary model (CEM) is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, the phase-1 CEM, was assembled and tested against the phase-0 CEM. It is analytically and experimentally demonstrated that the integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control designs is demonstrated.

  4. Parameter estimation of copula functions using an optimization-based method

    NASA Astrophysics Data System (ADS)

    Abdi, Amin; Hassanzadeh, Yousef; Talatahari, Siamak; Fakheri-Fard, Ahmad; Mirabbasi, Rasoul

    2016-02-01

    Application of copulas can be useful for accurate multivariate frequency analysis of hydrological phenomena. There are many copula functions, and several methods have been proposed for estimating their parameters. Since copula functions are mathematically complicated, estimating the copula parameters is a demanding task. In the present study, an optimization-based method (OBM) is proposed to obtain the parameters of copulas. The usefulness of the proposed method is illustrated on drought events. For this purpose, three commonly used copulas of the Archimedean family, namely the Clayton, Frank, and Gumbel copulas, are used to construct the joint probability distribution of drought characteristics at 60 gauging sites located in East-Azarbaijan province, Iran. The performance of the OBM was compared with that of two conventional methods, namely the method of moments and inference function for margins. The results illustrate the superiority of the OBM in estimating the copula parameters compared with the other methods considered.
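
    A minimal optimization-based fit, under stated assumptions: the Clayton and Gumbel parameters are estimated by minimizing the squared gap between the empirical Kendall's tau of synthetic (duration, severity) drought pairs and each copula's theoretical tau. This tau-inversion-by-optimization is only a stand-in for the paper's OBM, and the Frank copula is omitted because its tau has no simple closed form.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import kendalltau

        rng = np.random.default_rng(0)
        # Synthetic, positively dependent drought duration and severity samples.
        duration = rng.gamma(shape=2.0, scale=3.0, size=300)
        severity = 0.8 * duration + rng.gamma(shape=2.0, scale=1.0, size=300)
        tau_emp = kendalltau(duration, severity)[0]

        # Theoretical Kendall's tau for two Archimedean copulas.
        tau_clayton = lambda theta: theta / (theta + 2.0)   # theta > 0
        tau_gumbel = lambda theta: 1.0 - 1.0 / theta        # theta >= 1

        def fit(tau_model, bounds):
            res = minimize_scalar(lambda th: (tau_model(th) - tau_emp) ** 2,
                                  bounds=bounds, method="bounded")
            return res.x

        theta_c = fit(tau_clayton, (1e-6, 50.0))
        theta_g = fit(tau_gumbel, (1.0, 50.0))
        print(f"empirical tau = {tau_emp:.3f}, "
              f"Clayton theta = {theta_c:.3f}, Gumbel theta = {theta_g:.3f}")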

  5. An optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Joshi, Suresh M.; Armstrong, Ernest S.

    1993-01-01

    An approach for an optimization-based integrated controls-structures design is presented for a class of flexible spacecraft that require fine attitude pointing and vibration suppression. The integrated design problem is posed in the form of simultaneous optimization of both structural and control design variables. The approach is demonstrated by application to the integrated design of a generic space platform and to a model of a ground-based flexible structure. The numerical results obtained indicate that the integrated design approach can yield spacecraft designs that have substantially superior performance over a conventional design wherein the structural and control designs are performed sequentially. For example, a 40-percent reduction in the pointing error is observed along with a slight reduction in mass, or an almost twofold increase in the controlled performance is indicated with more than a 5-percent reduction in the overall mass of the spacecraft (a reduction of hundreds of kilograms).

  6. Optimization-based decision support to assist in logistics planning for hospital evacuations.

    PubMed

    Glick, Roger; Bish, Douglas R; Agca, Esra

    2013-01-01

    The evacuation of a hospital is a very complex process, and evacuation planning is an important part of a hospital's emergency management plan. There are numerous factors that affect the evacuation plan, including the nature of the threat, the availability of resources and staff, the characteristics of the evacuee population, and the risk to patients and staff. The safety and health of patients are of fundamental importance, but safely moving patients to alternative care facilities while under threat is a very challenging task. This article describes the logistical issues and complexities involved in the planning and execution of hospital evacuations. Furthermore, it provides examples of how optimization-based decision support tools can help evacuation planners to better plan for complex evacuations by providing real-world solutions to various evacuation scenarios.

  7. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGESBeta

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  8. Global patterns of phytoplankton nutrient and light colimitation inferred from an optimality-based model

    NASA Astrophysics Data System (ADS)

    Arteaga, Lionel; Pahlow, Markus; Oschlies, Andreas

    2014-07-01

    The widely used concept of constant "Redfield" phytoplankton stoichiometry is often applied for estimating which nutrient limits phytoplankton growth in the surface ocean. Culture experiments, in contrast, show strong relations between growth conditions and cellular stoichiometry with often substantial deviations from Redfield stoichiometry. Here we investigate to what extent both views agree by analyzing remote sensing and in situ data with an optimality-based model of nondiazotrophic phytoplankton growth in order to infer seasonally varying patterns of colimitation by light, nitrogen (N), and phosphorus (P) in the global ocean. Our combined model-data analysis suggests strong N and N-P colimitation in the tropical ocean, seasonal light, and N-P colimitation in the Northern Hemisphere, and strong light limitation only during winter in the Southern Ocean. The eastern equatorial Pacific appears as the only ocean area that is essentially not limited by N, P, or light. Even though our optimality-based approach specifically accounts for flexible stoichiometry, inferred patterns of N and P limitation are to some extent consistent with those obtained from an analysis of surface inorganic nutrients with respect to the Redfield N:P ratio. Iron is not part of our analysis, implying that we cannot accurately predict N cell quotas in high-nutrient, low-chlorophyll regions. Elsewhere, we do not expect a major effect of iron on the relative distribution of N, P, and light colimitation areas. The relative importance of N, P, and light in limiting phytoplankton growth diagnosed here by combining observations and an optimal growth model provides a useful constraint for models used to predict future marine biological production under changing environmental conditions.

  9. Comparison of optimization-based approaches to imaging spectroscopic inversion in coastal waters

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony M.; Mishonov, Andrey

    2005-06-01

    The United States Navy has recently shifted focus from open-ocean warfare to joint operations in optically complex nearshore regions. Accurately estimating bathymetry and water column inherent optical properties (IOPs) from passive remotely sensed imagery can be an important facilitator of naval operations. Lee et al. developed a semianalytical model that describes the relationship between shallow-water bottom depth, IOPs, and subsurface and above-surface reflectance. They also developed a nonlinear optimization-based technique that estimates bottom depth and IOPs using only measured spectral remote sensing reflectance as input. While quite effective, its accuracy can be limited when inverting noisy field data. In this research, the nonlinear optimization-based Lee et al. inversion algorithm was used as a baseline method, and it provided the framework for a proposed hybrid evolutionary/classical optimization approach to hyperspectral data processing. All aspects of the proposed implementation were held constant with those of Lee et al., except that a hybrid evolutionary/classical optimizer (HECO) was substituted for the nonlinear method. HECO required more computer processing time. In addition, HECO is nondeterministic, and its termination strategy is heuristic. However, the HECO method makes no assumptions regarding the mathematical form of the problem functions. Also, whereas smooth nonlinear optimization is only guaranteed to find a locally optimal solution, HECO has a higher probability of finding a more globally optimal result. While the HECO-acquired results are not provably optimal, we have empirically found that, for certain variables, HECO does provide estimates comparable to nonlinear optimization (e.g., bottom albedo at 550 nm).
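
    A toy comparison in the same spirit, under stated assumptions: two parameters of an invented reflectance-like model are recovered from noisy synthetic "measurements" both with a local gradient-based optimizer (started away from the truth) and with a global evolutionary optimizer. The model is far simpler than the Lee et al. semianalytical model and is not meant to reproduce their results.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        rng = np.random.default_rng(3)
        wavelengths = np.linspace(400.0, 700.0, 31)        # nm

        def forward(params, wl):
            """Invented two-parameter 'reflectance' model (absorption-like a, depth-like H)."""
            a, H = params
            water = 0.02 + a * np.exp(-((wl - 440.0) / 60.0) ** 2)
            bottom = 0.3 * np.exp(-2.0 * water * H)
            return water + bottom

        true_params = np.array([0.15, 4.0])
        rrs = forward(true_params, wavelengths) + rng.normal(0.0, 0.002, wavelengths.size)

        def misfit(params):
            return np.sum((forward(params, wavelengths) - rrs) ** 2)

        bounds = [(0.01, 1.0), (0.5, 20.0)]
        local = minimize(misfit, x0=[0.5, 15.0], bounds=bounds, method="L-BFGS-B")
        evo = differential_evolution(misfit, bounds, seed=1, tol=1e-10)

        print("true parameters:   ", true_params)
        print("local (L-BFGS-B):  ", np.round(local.x, 3), " misfit %.2e" % local.fun)
        print("global (DE):       ", np.round(evo.x, 3), " misfit %.2e" % evo.fun)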

  10. Biomass Resource Allocation among Competing End Uses

    SciTech Connect

    Newes, E.; Bush, B.; Inman, D.; Lin, Y.; Mai, T.; Martinez, A.; Mulcahy, D.; Short, W.; Simpkins, T.; Uriarte, C.; Peck, C.

    2012-05-01

    The Biomass Scenario Model (BSM) is a system dynamics model developed by the U.S. Department of Energy as a tool to better understand the interaction of complex policies and their potential effects on the biofuels industry in the United States. However, it does not currently have the capability to account for allocation of biomass resources among the various end uses, which limits its utilization in analysis of policies that target biomass uses outside the biofuels industry. This report provides a more holistic understanding of the dynamics surrounding the allocation of biomass among uses that include traditional use, wood pellet exports, bio-based products and bioproducts, biopower, and biofuels by (1) highlighting the methods used in existing models' treatments of competition for biomass resources; (2) identifying coverage and gaps in industry data regarding the competing end uses; and (3) exploring options for developing models of biomass allocation that could be integrated with the BSM to actively exchange and incorporate relevant information.

  11. Task mapping for non-contiguous allocations.

    SciTech Connect

    Leung, Vitus Joseph; Bunde, David P.; Ebbers, Johnathan; Price, Nicholas W.; Swank, Matthew.; Feer, Stefan P.; Rhodes, Zachary D.

    2013-02-01

    This paper examines task mapping algorithms for non-contiguously allocated parallel jobs. Several studies have shown that task placement affects job running time for both contiguously and non-contiguously allocated jobs. Traditionally, work on task mapping either uses a very general model where the job has an arbitrary communication pattern or assumes that jobs are allocated contiguously, making them completely isolated from each other. A middle ground between these two cases is the mapping problem for non-contiguous jobs having a specific communication pattern. We propose several task mapping algorithms for jobs with a stencil communication pattern and evaluate them using experiments and simulations. Our strategies improve the running time of a MiniApp by as much as 30% over a baseline strategy. Furthermore, this improvement increases markedly with the job size, demonstrating the importance of task mapping as systems grow toward exascale.

  12. Allocation of Attention by Fishing Vessel Watchkeepers

    NASA Astrophysics Data System (ADS)

    Findlay, Malcolm

    2006-05-01

    This study examines the way in which attention is allocated by watchkeepers on fishing vessels and identifies differences in the approach displayed by individuals with different levels of training and experience. A method of analysing the way in which attention is allocated on a sample of UK fishing vessels is described. It was found that both skippers and mates allocated disproportionate amounts of attention to fishfinding equipment at certain stages of the fishing trip, while crewmen were heavily reliant upon the track plotter both while fishing and on passage. Those with more training and experience appeared to treat the array of navigation and control components as an integrated system, while untrained crewmen dealt with each aspect in isolation.

  13. Competing Principles for Allocating Health Care Resources.

    PubMed

    Carter, Drew; Gordon, Jason; Watt, Amber M

    2016-10-01

    We clarify options for conceptualizing equity, or what we refer to as justice, in resource allocation. We do this by systematically differentiating, expounding, and then illustrating eight different substantive principles of justice. In doing this, we compare different meanings that can be attributed to "need" and "the capacity to benefit" (CTB). Our comparison is sharpened by two analytical tools. First, quantification helps to clarify the divergent consequences of allocations commended by competing principles. Second, a diagrammatic approach developed by economists Culyer and Wagstaff offers a visual and conceptual aid. Of the eight principles we illustrate, only two treat as relevant both a person's initial health state and a person's CTB per resource unit expended: (1) allocate resources so as to most closely equalize final health states and (2) allocate resources so as to equally restore health states to population norms. These allocative principles ought to be preferred to the alternatives if one deems relevant both a person's initial health state and a person's CTB per resource unit expended. Finally, we examine some possibilities for conceptualizing benefits as relative to how badly off someone is, extending Parfit's thought on Prioritarianism (a prioritizing of the worst off). Questions arise as to how much intervention effects accruing to the worse off count for more and how this changes with improving health. We explicate some recent efforts to answer these questions, including in Dutch and British government circles. These efforts can be viewed as efforts to operationalize need as an allocative principle. Each effort seeks to maximize in the aggregate quanta of effect that are differentially valued in favor of the worst off. In this respect, each effort constitutes one type of Prioritarianism, which Parfit failed to differentiate from other types.

  14. The Allocation of Federal Expenditures Among States

    NASA Technical Reports Server (NTRS)

    Lee, Maw Lin

    1967-01-01

    This study explores factors associated with the allocation of federal expenditures by states and examines the implications of these expenditures on the state-by-state distribution of incomes. The allocation of federal expenditures is functionally oriented toward the objectives for which various government programs are set up. The geographical distribution of federal expenditures, therefore, was historically considered to be a problem incidental to government activity. Because of this, relatively little attention was given to the question of why some states receive more federal allocation than others. In addition, the implications of this pattern of allocation among the several states have not been intensively investigated.

  15. Concurrent engineering accelerates development of new slim hole bits to provide superior performance and reliability

    SciTech Connect

    Neal, P.A.

    1996-09-01

    The successful use of concurrent engineering to expedite new product development requires thoughtful selection of the project team and clear performance objectives. Concurrent engineering, when properly implemented, can yield a superior product with less development time, lower cost, and greater product consistency. For critical applications, such as rock bits for slim hole drilling, product reliability is paramount. Slim hole bits require tighter process control to minimize variations that could impact bearing and seal clearances, bearing surface finishes, insert retention, and drilling efficiency of the bit. Greater durability and longer bit life are enhanced when the design and manufacturing processes are developed concurrently. The success of concurrently engineering a 4 3/4 inch slim hole insert bit for re-entry horizontal drilling had a direct impact on the cost of drilling, setting a world record in Canada for most footage drilled and reducing the cost per foot on a well in Hobbs, New Mexico by 40%. The total time required for the project to move from conception to successful bit run was 25% faster than traditional project development efforts. The paper examines the development process of the 4 3/4 inch slim hole bit with emphasis on selecting a successful project team and establishing clear performance objectives, together with an analysis of field results from the initial field test demonstrating the attainment of the established objectives.

  16. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
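
    The post-processing described above (periodic sampling, self-differencing, and retaining a few least-significant bits) can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' code: it assumes 8-bit digitized intensity samples and uses simulated noise in place of a real chaotic waveform; the one-sample delay for self-differencing and the choice of 5 least-significant bits follow the description in the abstract.

        import numpy as np

        def random_bits_from_samples(samples, n_lsb=5):
            """Self-difference consecutive 8-bit samples and keep the n_lsb
            least-significant bits of each difference as random output bits."""
            diffs = (samples[1:].astype(np.int16) - samples[:-1]) & 0xFF  # wrap to 8 bits
            bits = []
            for d in diffs:
                for k in range(n_lsb):
                    bits.append((d >> k) & 1)
            return np.array(bits, dtype=np.uint8)

        # Stand-in for digitized chaotic laser intensity (real data would come from an ADC).
        rng = np.random.default_rng(0)
        samples = rng.integers(0, 256, size=10_000, dtype=np.uint8)
        bits = random_bits_from_samples(samples)
        print(len(bits), "bits; ones fraction =", bits.mean())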

  17. An intermediate significant bit (ISB) watermarking technique using neural networks.

    PubMed

    Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna

    2016-01-01

    Prior research studies have shown that the peak signal to noise ratio (PSNR) is the most frequently used watermarked image quality metric for determining the strengths and weaknesses of watermarking algorithms. Conversely, normalised cross correlation (NCC) is the most common metric used after attacks are applied to a watermarked image to verify the strength of the algorithm. Many researchers have used these approaches to evaluate their algorithms. Although these strategies have been used for a long time, the value of PSNR and NCC in reflecting the strength and weakness of watermarking algorithms is limited. This paper addresses this issue by determining the threshold values at which these two metrics reflect the strength and weakness of watermarking algorithms. We used our novel watermarking technique to embed four watermarks, one by one, in the intermediate significant bits (ISB) of six image files by replacing image pixels with new pixels while keeping the new pixels very close to the original ones. This approach gains improved robustness, as reflected in the PSNR and NCC values gathered. A neural network model was built that takes the image quality metric (PSNR and NCC) values obtained from the ISB watermarking of six grey-scale images as the desired output and is trained on each watermarked image's PSNR and NCC. The neural network predicts a watermarked image's PSNR together with its NCC after attacks, given a portion of the output of the same or a different type of image quality metric. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate. PMID:27386317
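
    For reference, the two image-quality metrics central to this study can be computed as follows. This is a generic sketch of the standard definitions (PSNR for an 8-bit image, and one common form of normalised cross correlation between an original and an extracted watermark), not the authors' implementation; the array names and toy data are illustrative.

        import numpy as np

        def psnr(original, watermarked, max_val=255.0):
            """Peak signal-to-noise ratio (dB) between two 8-bit images."""
            mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")
            return 10.0 * np.log10(max_val ** 2 / mse)

        def ncc(original_wm, extracted_wm):
            """Normalised cross correlation between two watermark arrays."""
            a = original_wm.astype(np.float64).ravel()
            b = extracted_wm.astype(np.float64).ravel()
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Toy usage with random data standing in for real images and watermarks.
        rng = np.random.default_rng(1)
        img = rng.integers(0, 256, (64, 64))
        noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)
        print(round(psnr(img, noisy), 2), "dB")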

  19. Digital PSK to BiO-L demodulator for 2^N x (bit rate) carrier

    NASA Technical Reports Server (NTRS)

    Shull, T. A.

    1979-01-01

    A phase shift key (PSK) to BiO-L demodulator which uses standard digital integrated circuits is discussed. The demodulator produces NRZ-L, bit clock, and BiO-L outputs from digital PSK input signals for which the carrier is a 2 to the Nth multiple of the bit rate. Various bit and carrier rates which are accommodated by changing various component values within the demodulator are described. The use of the unit for sinusoidal inputs as well as digital inputs is discussed.

  20. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include the use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
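
    As a rough illustration of the cover-sequence approach, the sketch below XORs a data stream with a pseudo-random sequence from a linear-feedback shift register, which raises the bit-transition density of long runs of identical bits; the receiver removes the cover by XORing with the same sequence. The LFSR length, taps, and seed here are arbitrary choices for illustration, not those of the shuttle high rate multiplexer design.

        def lfsr_sequence(n_bits, state=0b1010011, taps=(7, 6)):
            """Generate n_bits of a pseudo-random cover sequence from a 7-bit LFSR."""
            out = []
            for _ in range(n_bits):
                out.append(state & 1)
                fb = 0
                for t in taps:                      # XOR of tapped stages (1-indexed)
                    fb ^= (state >> (t - 1)) & 1
                state = (state >> 1) | (fb << 6)    # shift in feedback at the top bit
            return out

        def scramble(data_bits, cover):
            return [d ^ c for d, c in zip(data_bits, cover)]

        data = [0] * 16                    # a run with no transitions
        cover = lfsr_sequence(len(data))
        sent = scramble(data, cover)       # transmitted stream now has transitions
        recovered = scramble(sent, cover)  # XOR again with same cover restores data
        assert recovered == data
        print(sent)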

  1. Effect of blade spacing of polycrystalline diamond compact (PDC) bits on stability of drillstrings

    SciTech Connect

    Elsayed, M.A.; Dupuy, C.A.

    1997-07-01

    The geometry of PDC bits, particularly their blade spacing, plays a major role in stability of the drillstrings to which they are attached. In this paper, the authors use a bit model in which the cutters are arrayed in radial blades and examine the effect of blade spacing on stability. They show that for a given bit size, blade spacing may be changed to produce stable cutting in a desired speed range. This data, combined with downhole assembly design, may be used to optimize operating conditions for drillstrings.

  2. Low-bit-rate encoder for picture signals using a centre-clipping quantiser

    NASA Astrophysics Data System (ADS)

    Krishnasami, P.; Faruqui, M. N.

    1986-02-01

    For improved performance of a predictive coding system used for encoding picture signals at low bit rates, it is necessary to design not only an efficient predictor, but also an efficient quantizer that produces minimum perceptible distortion. The paper describes a scheme in which a special quantizer called a 'center-clipping quantizer' (CCQ) is used in a ratio PCM coder. The subjective quality of the picture at 1.58 bit/pel is similar to that of an ADPCM coder at 3 bit/pel, giving an advantage of more than 6 dB in performance.
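
    A center-clipping quantizer maps small prediction-error values to zero and uniformly quantizes only values outside a dead zone, which lowers the entropy of the coded signal at the cost of some distortion. The sketch below is a generic illustration of that idea under assumed parameters (dead-zone width and step size); it is not the coder described in the paper.

        import numpy as np

        def center_clip_quantize(x, dead_zone=4.0, step=8.0):
            """Zero out values inside the dead zone; uniformly quantize the rest."""
            y = np.zeros_like(x, dtype=float)
            outside = np.abs(x) > dead_zone
            # Shift magnitudes past the dead zone onto a uniform grid, keep the sign.
            mag = np.abs(x[outside]) - dead_zone
            y[outside] = np.sign(x[outside]) * (dead_zone + step * (np.floor(mag / step) + 0.5))
            return y

        err = np.array([-20.0, -3.0, 0.5, 2.0, 6.0, 15.0, 40.0])
        print(center_clip_quantize(err))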

  3. Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation

    PubMed Central

    Suh, Young Soo; Ro, Young Sik; Kang, Hee Jun

    2010-01-01

    This paper is concerned with networked estimation, where sensor data are transmitted over a network of limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation for computing the estimation performance is provided. An algorithm is proposed to find a combination of sampling periods and quantization bit lengths that gives good estimation performance while satisfying the transmission rate constraint. The proposed algorithm is verified through a numerical example. PMID:22163557
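
    The trade-off described above can be illustrated with a toy exhaustive search: for each candidate pair of sampling period T and quantization bit length b, the required transmission rate is roughly b/T bits per second, and among the feasible pairs one keeps the combination with the best estimation-performance score. The performance model below is a hypothetical stand-in, not the equation derived in the paper.

        import itertools

        def perf_score(T, b):
            """Hypothetical estimation-error proxy: error grows with the sampling
            period and with quantization noise, which shrinks as 2**(-b)."""
            return T + 4.0 * 2.0 ** (-b)

        def best_combination(periods, bit_lengths, rate_limit):
            feasible = [(T, b) for T, b in itertools.product(periods, bit_lengths)
                        if b / T <= rate_limit]              # channel rate constraint
            return min(feasible, key=lambda tb: perf_score(*tb), default=None)

        periods = [0.01, 0.02, 0.05, 0.1]       # seconds
        bit_lengths = [4, 6, 8, 10, 12]
        print(best_combination(periods, bit_lengths, rate_limit=400.0))  # bits/second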

  4. Bit-strings and other modifications of Viviane model for language competition

    NASA Astrophysics Data System (ADS)

    de Oliveira, P. M. C.; Stauffer, D.; Lima, F. W. S.; Sousa, A. O.; Schulze, C.; Moss de Oliveira, S.

    2007-03-01

    The language competition model of Viviane de Oliveira et al. is modified by associating with each language a string of 32 bits. Whenever a language changes in this Viviane model, one randomly selected bit is also flipped. If only languages with different bit-strings are then counted as distinct, the resulting size distribution of languages agrees with the empirically observed, slightly asymmetric log-normal distribution. Several other modifications were also tried, but they either had more free parameters or agreed less well with reality.
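
    The bit-string modification can be sketched as follows: each language carries a 32-bit label, and whenever a language-change event occurs, one randomly chosen bit of the affected label is flipped; languages are then counted as distinct only if their labels differ. The Viviane-model dynamics themselves are abstracted away here; this sketch only illustrates the labelling step, with toy parameters.

        import random
        from collections import Counter

        def mutate_label(label, n_bits=32):
            """Flip one randomly selected bit of a language's 32-bit label."""
            return label ^ (1 << random.randrange(n_bits))

        # Toy illustration: start from one ancestral label and apply random
        # "language change" events; sizes are counted per distinct bit-string.
        random.seed(0)
        labels = [0] * 1000
        for _ in range(5000):
            i = random.randrange(len(labels))   # a speaker community changes language
            labels[i] = mutate_label(labels[i])

        sizes = Counter(labels)                 # distinct bit-strings = distinct languages
        print(len(sizes), "distinct languages; largest has", max(sizes.values()), "speakers")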

  5. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    NASA Technical Reports Server (NTRS)

    Swift, G.

    2002-01-01

    JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  6. Room temperature single-photon detectors for high bit rate quantum key distribution

    SciTech Connect

    Comandar, L. C.; Patel, K. A.; Fröhlich, B. Lucamarini, M.; Sharpe, A. W.; Dynes, J. F.; Yuan, Z. L.; Shields, A. J.; Penty, R. V.

    2014-01-13

    We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.

  7. Fast nondeterministic random-bit generation using on-chip chaos lasers

    SciTech Connect

    Harayama, Takahisa; Sunada, Satoshi; Yoshimura, Kazuyuki; Davis, Peter; Tsuzuki, Ken; Uchida, Atsushi

    2011-03-15

    It is shown that broadband chaos suitable for fast nondeterministic random-bit generation in small devices can be achieved in a semiconductor laser with a short external cavity. The design of the device is based on a theoretical model for nondeterministic random-bit generation by amplification of microscopic noise. Moreover, it is demonstrated that bit sequences passing common tests of statistical randomness at rates up to 2.08 Gbits/s can be generated using on-chip lasers with a monolithically integrated external cavity, amplifiers, and a photodetector.

  8. Cheat sensitive quantum bit commitment via pre- and post-selected quantum states

    NASA Astrophysics Data System (ADS)

    Li, Yan-Bing; Wen, Qiao-Yan; Li, Zi-Chen; Qin, Su-Juan; Yang, Ya-Tao

    2014-01-01

    Cheat sensitive quantum bit commitment is an important and realizable form of quantum bit commitment (QBC). By taking advantage of quantum mechanics, it can achieve higher security than classical bit commitment. In this paper, we propose a QBC scheme based on pre- and post-selected quantum states. The analysis indicates that cheating by either of the two participants will be detected with non-zero probability. The protocol can be implemented with today's technology, as a long-term quantum memory is not needed.

  9. 77 FR 32997 - Certain Drill Bits and Products Containing the Same; Institution of Investigation Pursuant to 19...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-04

    ... COMMISSION Certain Drill Bits and Products Containing the Same; Institution of Investigation Pursuant to 19 U... within the United States after importation of certain drill bits and products containing the same by... after importation of certain drill bits and products containing the same that infringe one or more...

  10. 77 FR 25749 - Certain Drill Bits and Products Containing Same; Notice of Receipt of Complaint; Solicitation of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-01

    ... COMMISSION Certain Drill Bits and Products Containing Same; Notice of Receipt of Complaint; Solicitation of... entitled Certain Drill Bits and Products Containing the Same, DN 2893; the Commission is soliciting... the United States after importation of certain drill bits and products containing the same....

  11. Behind the Resource Domino. Part II: Allocation

    ERIC Educational Resources Information Center

    Thiemann, F. C.; Bumbarger, C. S.

    1972-01-01

    Discusses the problem of allocation and acquisition of resources from an administrative point of view. Suggests that an administrator's accountability as a leader is fixed in how efficiently and effectively resources are deployed in the organizational goal attainment efforts. (Author/DN)

  12. Ground data systems resource allocation process

    NASA Technical Reports Server (NTRS)

    Berner, Carol A.; Durham, Ralph; Reilly, Norman B.

    1989-01-01

    The Ground Data Systems Resource Allocation Process at the Jet Propulsion Laboratory provides medium- and long-range planning for the use of Deep Space Network and Mission Control and Computing Center resources in support of NASA's deep space missions and Earth-based science. Resources consist of radio antenna complexes and associated data processing and control computer networks. A semi-automated system was developed that allows operations personnel to interactively generate, edit, and revise allocation plans spanning periods of up to ten years (as opposed to only two or three weeks under the manual system) based on the relative merit of mission events. It also enhances scientific data return. A software system known as the Resource Allocation and Planning Helper (RALPH) merges the conventional methods of operations research, rule-based knowledge engineering, and advanced data base structures. RALPH employs a generic, highly modular architecture capable of solving a wide variety of scheduling and resource sequencing problems. The rule-based RALPH system has saved significant labor in resource allocation. Its successful use affirms the importance of establishing and applying event priorities based on scientific merit, and the benefit of continuity in planning provided by knowledge-based engineering. The RALPH system exhibits a strong potential for minimizing development cycles of resource and payload planning systems throughout NASA and the private sector.

  13. 45 CFR 1355.57 - Cost allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Welfare Regulations Relating to Public Welfare (Continued) OFFICE OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES THE ADMINISTRATION ON CHILDREN, YOUTH AND FAMILIES, FOSTER CARE MAINTENANCE PAYMENTS, ADOPTION ASSISTANCE, AND CHILD AND FAMILY SERVICES GENERAL § 1355.57 Cost allocation....

  14. 42 CFR 457.228 - Cost allocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Cost allocation. 457.228 Section 457.228 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STATE CHILDREN'S HEALTH INSURANCE PROGRAMS (SCHIPs) ALLOTMENTS AND GRANTS TO STATES...

  15. 45 CFR 400.13 - Cost allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... costs, both direct and indirect, appropriately between the Refugee Resettlement Program (RRP) and other programs which it administers. (b) Within the RRP, a State must allocate costs appropriately among its CMA grant, social services grant, and any other Refugee Resettlement Program (RRP) grants which it...

  16. Issues in organ procurement, allocation, and transplantation.

    PubMed

    Nierste, Deborah

    2013-01-01

    Organ transplantation extends lives and improves health but presents complex ethical dilemmas for nurses caring for donors, recipients, and their families. This article overviews organ procurement and allocation, discusses ethical dilemmas in transplantation, and offers strategies from professional and biblical perspectives for coping with moral distress and maintaining compassionate care. PMID:23607154

  17. 50 CFR 660.320 - Allocations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allocations. 660.320 Section 660.320 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED) FISHERIES OFF WEST COAST STATES West Coast Groundfish Fisheries §...

  18. 42 CFR 433.34 - Cost allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Department in accordance with the requirements contained in subpart E of 45 CFR part 95. Subpart E also sets... 42 Public Health 4 2010-10-01 2010-10-01 false Cost allocation. 433.34 Section 433.34 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES...

  19. 42 CFR 457.228 - Cost allocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accordance with the requirements contained in subpart E of 45 CFR part 95. Subpart E also sets forth the... 42 Public Health 4 2010-10-01 2010-10-01 false Cost allocation. 457.228 Section 457.228 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES...

  20. 20 CFR 631.82 - Substate allocation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... THE JOB TRAINING PARTNERSHIP ACT Disaster Relief Employment Assistance § 631.82 Substate allocation..., within such disaster areas. The remainder of such funds may be reserved by the Governor for use, in... with such major disaster. (b) The JTPA title III program substate grantee for the disaster area...

  1. 50 CFR 660.55 - Allocations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... whiting is described in paragraph (i) of this section and in the PCGFMP. Allocation of black rockfish is... Slope RF South of 40°10′ N. lat. 63 37 Dover Sole 95 5 English Sole 95 5 Petrale Sole 95 5 Arrowtooth... management measures process. (l) Black rockfish harvest guideline. The commercial tribal harvest...

  2. 50 CFR 660.55 - Allocations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... whiting is described in paragraph (i) of this section and in the PCGFMP. Allocation of black rockfish is... Dover Sole 95 5 English Sole 95 5 Petrale Sole 95 5 Arrowtooth Flounder 95 5 Starry Flounder 50 50 Other... through the biennial harvest specifications and management measures process. (l) Black rockfish...

  3. Resource Allocation in High Schools. Final Report.

    ERIC Educational Resources Information Center

    Hartman, William T.

    This study investigated the resource allocation process--how school administrators obtain the proper resources to operate their schools, distribute the available resources among the various school programs appropriately, and manage resources for effective educational results--in four high schools during the 1984-85 school year. Information was…

  4. 20 CFR 631.82 - Substate allocation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Substate allocation. 631.82 Section 631.82 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR PROGRAMS UNDER TITLE III OF THE JOB TRAINING PARTNERSHIP ACT Disaster Relief Employment Assistance § 631.82 Substate...

  5. Discrete Resource Allocation in Visual Working Memory

    ERIC Educational Resources Information Center

    Barton, Brian; Ester, Edward F.; Awh, Edward

    2009-01-01

    Are resources in visual working memory allocated in a continuous or a discrete fashion? On one hand, flexible resource models suggest that capacity is determined by a central resource pool that can be flexibly divided such that items of greater complexity receive a larger share of resources. On the other hand, if capacity in working memory is…

  6. 45 CFR 400.13 - Cost allocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Welfare Regulations Relating to Public Welfare OFFICE OF REFUGEE RESETTLEMENT, ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES REFUGEE RESETTLEMENT PROGRAM Grants to States for Refugee Resettlement Award of Grants to States § 400.13 Cost allocation. (a) A State must...

  7. Allocation of Resources. SPEC Kit 31.

    ERIC Educational Resources Information Center

    Association of Research Libraries, Washington, DC. Office of Management Studies.

    This kit on resource allocation in academic and research libraries contains nine primary source documents and a concise summary of a 1976 Association of Research Libraries (ARL) survey on management of fiscal spending activities in ARL libraries. Based on responses from 70 libraries, the summary discusses 3 specific subjects within the general…

  8. 50 CFR 648.87 - Sector allocation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... permit sanctions pursuant to 15 CFR part 904. If an ACE allocated to a sector is not exceeded in a given... CFR part 904, each sector, permit/vessel owner, and vessel operator participating in the sector may be..., the stock-level ACL plus the carryover amount, does not exceed the stock overfishing limit. Any...

  9. Principles of allocation of health care resources.

    PubMed Central

    Knox, E G

    1978-01-01

    The methods and principles of allocating centrally provided health care resources to regions and areas are reviewed using the report of the Resource Allocation Working Party (RAWP) (Department of Health and Social Security, 1976) and the consultative document (Department of Health and Social Security, 1976a) as a basis. A range of practical problems arising from these papers (especially the report of the RAWP) is described and traced to the terms of reference. It is concluded that the RAWP misinterpreted aspects of social and administrative reality, and it failed to recognise clearly that the several principles on which it had to work conflicted with each other and demanded decisions of priority. The consequential errors led to (a) an injudicious imposition of 'objectivity' at all levels of allocation, (b) an unjustified insistence that the same method be used at each administrative level in an additive and transitive manner, (c) the exclusion of general practitioner services from their considerations, (d) a failure to delineate those decisions which are in fact political decisions, thus to concatenate them, inappropriately, with technical and professional issues. The main requirement in a revised system is for a mechanism which allocates different priorities to different principles at each appropriate administrative and distributive level, and adapts the working methods of each tier to meet separately defined objectives. PMID:262585

  10. Ethics and resource allocation: an economist's view.

    PubMed

    McGuire, A

    1986-01-01

    This paper debates some of the issues involved in attempting to apply economic analysis to the health care sector when medical ethics plays such an important part in determining the allocation of resources in that sector. Two distinct ethical positions are highlighted as being fundamental to the understanding of resource allocation in this sector -- deontological and utilitarian theories of ethics. It is argued that medical ethics are often narrowly conceived in that there is a tendency for the individual, rather than society at large, to form the focal point of the production of the service 'health care'. Thus medical ethics have been dominated by individualistic ethical codes which do not fully consider questions relating to resource allocation at a social level. It is further argued that the structure of the health care sector augments these 'individualistic' ethics. It is also suggested that different actors in the health care sector address questions of resource allocation with respect to different time periods, and that this serves to further enhance the influence of 'individualistic' ethical codes in this sector.

  11. 15 CFR 923.110 - Allocation formula.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Allocation of Section 306 Program Administration... Coastal Zone Act Reauthorization Amendments of 1990, 1 to 1 for any fiscal year. (2) For programs...

  12. 15 CFR 923.110 - Allocation formula.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Allocation of Section 306 Program Administration... Coastal Zone Act Reauthorization Amendments of 1990, 1 to 1 for any fiscal year. (2) For programs...

  13. 15 CFR 923.110 - Allocation formula.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Allocation of Section 306 Program Administration... Coastal Zone Act Reauthorization Amendments of 1990, 1 to 1 for any fiscal year. (2) For programs...

  14. Resource Allocation Patterns and Student Achievement

    ERIC Educational Resources Information Center

    James, Lori; Pate, James; Leech, Donald; Martin, Ellice; Brockmeier, Lantry; Dees, Elizabeth

    2011-01-01

    This quantitative research study was designed to examine the relationship between system resource allocation patterns and student achievement, as measured by eighth grade Criterion-Referenced Competency Test (CRCT) mathematics, eighth grade CRCT reading, eleventh grade Georgia High School Graduation Test (GHSGT) mathematics, eleventh grade and…

  15. 23 CFR 660.107 - Allocations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Allocations. 660.107 Section 660.107 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS SPECIAL PROGRAMS... approved Federal Lands Coordinated Technology Implementation Program studies....

  16. 24 CFR 574.130 - Formula allocations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...-five percent is allocated among qualifying cities, but not States, where the per capita incidence of...'s with higher than average per capita incidence of AIDS. The high incidence factor is computed by... the high incidence factors for all EMSA's with higher than average per capita incidence of AIDS....

  17. 24 CFR 574.130 - Formula allocations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...-five percent is allocated among qualifying cities, but not States, where the per capita incidence of...'s with higher than average per capita incidence of AIDS. The high incidence factor is computed by... the high incidence factors for all EMSA's with higher than average per capita incidence of AIDS....

  18. Patient specific optimization-based treatment planning for catheter-based ultrasound hyperthermia and thermal ablation

    NASA Astrophysics Data System (ADS)

    Prakash, Punit; Chen, Xin; Wootton, Jeffery; Pouliot, Jean; Hsu, I.-Chow; Diederich, Chris J.

    2009-02-01

    A 3D optimization-based thermal treatment planning platform has been developed for the application of catheter-based ultrasound hyperthermia in conjunction with high dose rate (HDR) brachytherapy for treating advanced pelvic tumors. Optimal selection of applied power levels to each independently controlled transducer segment can be used to conform and maximize therapeutic heating and thermal dose coverage to the target region, providing significant advantages over current hyperthermia technology and improving treatment response. Critical anatomic structures, clinical target outlines, and implant/applicator geometries were acquired from sequential multi-slice 2D images obtained from HDR treatment planning and used to reconstruct patient specific 3D biothermal models. A constrained optimization algorithm was devised and integrated within a finite element thermal solver to determine a priori the optimal applied power levels and the resulting 3D temperature distributions such that therapeutic heating is maximized within the target, while placing constraints on maximum tissue temperature and thermal exposure of surrounding non-targeted tissue. This optimization-based treatment planning and modeling system was applied on representative cases of clinical implants for HDR treatment of cervix and prostate to evaluate the utility of this planning approach. The planning provided significant improvement in achievable temperature distributions for all cases, with substantial increase in T90 and thermal dose (CEM43T90) coverage to the hyperthermia target volume while decreasing maximum treatment temperature and reducing thermal dose exposure to surrounding non-targeted tissues and thermally sensitive rectum and bladder. This optimization-based treatment planning platform with catheter-based ultrasound applicators is a useful tool that has potential to significantly improve the delivery of hyperthermia in conjunction with HDR brachytherapy. The planning platform has been extended
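
    The constrained power-level selection can be illustrated, at a much reduced level of fidelity, as a linear program: assuming the steady-state temperature rise at each monitored point is approximately linear in the applied power of each transducer segment (rise ≈ A·p), one maximizes the mean rise over target points subject to upper bounds on the rise at non-target points and on the power per segment. The response matrices below are random stand-ins, and the linear model, bound values, and use of scipy.optimize.linprog are illustrative assumptions, not the finite element formulation used by the authors.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(2)
        n_seg = 8                                        # independently controlled transducer segments
        A_target = rng.uniform(0.5, 1.5, (20, n_seg))    # degC rise per watt at target points
        A_normal = rng.uniform(0.0, 0.6, (30, n_seg))    # degC rise per watt at non-target points

        max_rise_normal = 5.0          # cap on heating of surrounding tissue (degC)
        max_power = 4.0                # per-segment power limit (W)

        # linprog minimizes c @ p, so negate the mean target rise to maximize it.
        c = -A_target.mean(axis=0)
        res = linprog(c, A_ub=A_normal, b_ub=np.full(30, max_rise_normal),
                      bounds=[(0.0, max_power)] * n_seg, method="highs")

        p_opt = res.x
        print("optimal powers (W):", np.round(p_opt, 2))
        print("mean target rise (degC):", round(float(A_target.mean(axis=0) @ p_opt), 2))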

  19. High bit rate germanium single photon detectors for 1310nm

    NASA Astrophysics Data System (ADS)

    Seamons, J. A.; Carroll, M. S.

    2008-04-01

    There is increasing interest in development of high speed, low noise and readily fieldable near infrared (NIR) single photon detectors. InGaAs/InP Avalanche photodiodes (APD) operated in Geiger mode (GM) are a leading choice for NIR due to their preeminence in optical networking. After-pulsing is, however, a primary challenge to operating InGaAs/InP single photon detectors at high frequencies [1]. After-pulsing is the effect of charge being released from traps that trigger false ("dark") counts. To overcome this problem, hold-off times between detection windows are used to allow the traps to discharge to suppress after-pulsing. The hold-off time represents, however, an upper limit on detection frequency that shows degradation beginning at frequencies of ~100 kHz in InGaAs/InP. Alternatively, germanium (Ge) single photon avalanche photodiodes (SPAD) have been reported to have more than an order of magnitude smaller charge trap densities than InGaAs/InP SPADs [2], which allowed them to be successfully operated with passive quenching [2] (i.e., no gated hold-off times necessary), which is not possible with InGaAs/InP SPADs, indicating a much weaker dark count dependence on hold-off time consistent with fewer charge traps. Despite these encouraging results suggesting a possible higher operating frequency limit for Ge SPADs, little has been reported on Ge SPAD performance at high frequencies, presumably because previous work with Ge SPADs has been discouraged by a strong demand to work at 1550 nm. NIR SPADs require cooling, which in the case of Ge SPADs dramatically reduces the quantum efficiency of the Ge at 1550 nm. Recently, however, advantages to working at 1310 nm have been suggested which, combined with a need to increase quantum bit rates for quantum key distribution (QKD), motivates examination of Ge detectors' performance at very high detection rates where InGaAs/InP does not perform as well. Presented in this paper are measurements of a commercially available Ge APD

  20. Simulating POVMs on EPR pairs with 5.7 bits of expected communication

    NASA Astrophysics Data System (ADS)

    Méthot, A. A.

    2004-06-01

    We present a classical protocol for simulating correlations obtained by bipartite POVMs on an EPR pair. The protocol uses shared random variables (also known as local hidden variables) augmented by 5.7 bits of expected communication.