Sample records for processes modulo generator

  1. Rewriting Modulo SMT and Open System Analysis

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar

    2014-01-01

    This paper proposes rewriting modulo SMT, a new technique that combines the power of SMT solving, rewriting modulo theories, and model checking. Rewriting modulo SMT is ideally suited to model and analyze infinite-state open systems, i.e., systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism, which is proper to the system, and external non-determinism, which is due to the environment. In a reflective formalism, such as rewriting logic, rewriting modulo SMT can be reduced to standard rewriting. Hence, rewriting modulo SMT naturally extends rewriting-based reachability analysis techniques, which are available for closed systems, to open systems. The proposed technique is illustrated with the formal analysis of: (i) a real-time system that is beyond the scope of timed-automata methods and (ii) automatic detection of reachability violations in a synchronous language developed to support autonomous spacecraft operations.

  2. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
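
    A quick way to see the modulo-2π observation model at work is the following numpy sketch (an illustration only; it uses numpy's standard 1-D np.unwrap, not the paper's adaptive denoising or the 2-D PUMA algorithm):

    ```python
    import numpy as np

    # The absolute phase is only observed modulo 2π; np.unwrap recovers it
    # in 1-D when successive samples differ by less than π.
    t = np.linspace(0, 4, 200)
    absolute = 6 * t                           # true phase, grows well past 2π
    wrapped = np.angle(np.exp(1j * absolute))  # modulo-2π observation in (-π, π]
    recovered = np.unwrap(wrapped)

    print(np.allclose(recovered, absolute))    # True: unwrapping succeeded
    ```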

  3. An organization of a digital subsystem for generating spacecraft timing and control signals

    NASA Technical Reports Server (NTRS)

    Perlman, M.

    1972-01-01

    A modulo-M counter (of clock pulses) is decomposed into parallel modulo-m_i counters, where each m_i is a prime-power divisor of M. The modulo-p_i counters are feedback shift registers which cycle through p_i distinct states. By this organization, every possible nontrivial data frame subperiod and delayed subperiod may be derived. The number of clock pulses required to bring every modulo-p_i counter to a respective designated state or count is determined by the Chinese remainder theorem. This corresponds to the solution of simultaneous congruences over relatively prime moduli.
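
    The counting principle here is the Chinese remainder theorem; the following minimal Python sketch (with a toy modulus M = 360 = 8·9·5 invented for illustration, not taken from the report) shows how the states of the parallel counters determine the elapsed clock-pulse count:

    ```python
    from math import prod

    def crt(residues, moduli):
        """Solve x ≡ r_i (mod m_i) for pairwise coprime moduli m_i using
        the standard constructive Chinese-remainder formula."""
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the inverse mod m
        return x % M

    # Three parallel counters of prime-power moduli 8, 9 and 5 replace one
    # modulo-360 counter.  If they read 3, 7 and 2, the number of elapsed
    # clock pulses within the frame is uniquely determined modulo 360.
    print(crt([3, 7, 2], [8, 9, 5]))  # -> 187 (187 % 8 == 3, % 9 == 7, % 5 == 2)
    ```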

  4. The golden ratio and Loshu-Fibonacci Diagram: novel research view on relationship of Chinese medicine and modern biology.

    PubMed

    Chen, Zhao-xue; Huang, Yun-kun; Sun, Ying

    2014-02-01

    By associating the geometric arrangement of the 9 Loshu numbers modulo 5 and investigating properties of golden rectangles and characteristics of the Fibonacci sequence modulo 10, as well as the two subsequences of this modular sequence taken modulo 5, the Loshu-Fibonacci Diagram is created in this paper by strict logical deduction. The diagram discloses the inherent relationship among the Taiji sign, the Loshu, and the Fibonacci sequence modulo 10, and unites key ideas from Chinese medicine: holism, symmetry, holographic thought, and the pursuit of yin-yang balance. Based on further analysis and reasoning, the authors find that, taking the golden ratio and the Loshu-Fibonacci Diagram as a link, a profound and universal association exists between research in Chinese medicine and modern biology.

  5. System for generating timing and control signals

    NASA Technical Reports Server (NTRS)

    Perlman, M.; Rousey, W. J.; Messner, A. (Inventor)

    1975-01-01

    A system is presented that is capable of generating every possible data frame subperiod and delayed subperiod of a data frame of length M clock pulse intervals (CPIs), built from parallel modulo-m_i counters. Each m_i is a prime-power divisor of M, and each modulo-m_i counter is a cascade of α_i identical modulo-p_i counters. The modulo-p_i counters are feedback shift registers which cycle through p_i distinct states. Every possible nontrivial data frame subperiod and delayed subperiod is derived, and a specific CPI in the data frame is detected. The number of clock pulses required to bring every modulo-p_i counter to a respective designated state or count is determined by the Chinese remainder theorem. This corresponds to the solution of simultaneous congruences over relatively prime moduli.

  6. Graphene-assisted multiple-input high-base optical computing

    PubMed Central

    Hu, Xiao; Wang, Andong; Zeng, Mengqi; Long, Yun; Zhu, Long; Fu, Lei; Wang, Jian

    2016-01-01

    We propose graphene-assisted multiple-input high-base optical computing. We fabricate a nonlinear optical device based on a fiber pigtail cross-section coated with single-layer graphene grown by the chemical vapor deposition (CVD) method. An approach is presented to implementing modulo 4 operations of three-input hybrid addition and subtraction of quaternary base numbers in the optical domain, using multiple non-degenerate four-wave mixing (FWM) processes in the graphene-coated optical fiber device and (differential) quadrature phase-shift keying ((D)QPSK) signals. We demonstrate 10-Gbaud modulo 4 operations of three-input quaternary hybrid addition and subtraction (A + B − C, A + C − B, B + C − A) in the experiment. The measured optical signal-to-noise ratio (OSNR) penalties for these modulo 4 operations are less than 7 dB at a bit-error rate (BER) of 2 × 10⁻³. The BER performance as a function of the relative time offset between the three signals (signal offset) is also evaluated, showing favorable performance. PMID:27604866
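
    Stripped of the optics, the arithmetic realized by the FWM device is plain index arithmetic modulo 4 on the quaternary digits carried by the (D)QPSK phase levels; a short sketch (digit values invented for illustration):

    ```python
    # Three-input hybrid addition/subtraction of quaternary digits, modulo 4.
    for A, B, C in [(1, 3, 2), (0, 2, 3), (3, 3, 1)]:
        print((A + B - C) % 4, (A + C - B) % 4, (B + C - A) % 4)
    ```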

  7. Rewriting Modulo SMT

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar A.

    2013-01-01

    Combining symbolic techniques such as: (i) SMT solving, (ii) rewriting modulo theories, and (iii) model checking can enable the analysis of infinite-state systems outside the scope of each such technique. This paper proposes rewriting modulo SMT as a new technique combining the powers of (i)-(iii) and ideally suited to model and analyze infinite-state open systems; that is, systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism due to the system, and external non-determinism due to the environment. They are not amenable to finite-state model checking analysis because they typically are infinite-state. By being reducible to standard rewriting using reflective techniques, rewriting modulo SMT can both naturally model and analyze open systems without requiring any changes to rewriting-based reachability analysis techniques for closed systems. This is illustrated by the analysis of a real-time system beyond the scope of timed automata methods.

  8. How to Differentiate an Integer Modulo n

    ERIC Educational Resources Information Center

    Emmons, Caleb; Krebs, Mike; Shaheen, Anthony

    2009-01-01

    A number derivative is a numerical mapping that satisfies the product rule. In this paper, we determine all number derivatives on the set of integers modulo n. We also give a list of undergraduate research projects to pursue using these maps as a starting point.
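
    The defining identity is the product rule D(ab) = aD(b) + bD(a) over Z_n; a brute-force Python sketch (my illustration, not the paper's classification) can enumerate all such maps for a tiny modulus:

    ```python
    from itertools import product

    def number_derivatives(n):
        """Enumerate all maps D: Z_n -> Z_n satisfying the product rule
        D(ab) = a*D(b) + b*D(a) (mod n).  Feasible only for very small n."""
        found = []
        for values in product(range(n), repeat=n):
            D = dict(enumerate(values))
            if all((D[(a * b) % n] - (a * D[b] + b * D[a])) % n == 0
                   for a in range(n) for b in range(n)):
                found.append(D)
        return found

    # For n = 4 the search space has 4**4 = 256 candidate maps.
    for D in number_derivatives(4):
        print(D)   # four maps survive; all send 0 and 1 to 0
    ```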

  9. Binomial Coefficients Modulo a Prime--A Visualization Approach to Undergraduate Research

    ERIC Educational Resources Information Center

    Bardzell, Michael; Poimenidou, Eirini

    2011-01-01

    In this article we present, as a case study, results of undergraduate research involving binomial coefficients modulo a prime "p." We will discuss how undergraduates were involved in the project, even with a minimal mathematical background beforehand. There are two main avenues of exploration described to discover these binomial…

  10. Robust Modulo Remaindering and Applications in Radar and Sensor Signal Processing

    DTIC Science & Technology

    2015-08-27

    Chinese Remainder Theorem in FDD Systems,” Science China -- Information Sciences, vol. 55, no. 7, pp. 1605–1616, July 2012. 3) Y. Liu, X.-G. Xia, and H. L. Zhang, “Distributed Space-Time Coding for Full-Duplex Asynchronous

  11. Evidence for a hierarchical transcriptional circuit in Drosophila male germline involving testis-specific TAF and two gene-specific transcription factors, Mod and Acj6.

    PubMed

    Jiang, Mei; Gao, Zhengliang; Wang, Jian; Nurminsky, Dmitry I

    2018-01-01

    To analyze transcription factors involved in gene regulation by testis-specific TAF (tTAF), tTAF-dependent promoters were mapped and analyzed in silico. Core promoters show decreased AT content, paucity of classical promoter motifs, and enrichment with translation control element CAAAATTY. Scanning of putative regulatory regions for known position frequency matrices identified 19 transcription regulators possibly contributing to tTAF-driven gene expression. Decreased male fertility associated with mutation in one of the regulators, Acj6, indicates its involvement in male reproduction. Transcriptome study of testes from male mutants for tTAF, Acj6, and previously characterized tTAF-interacting factor Modulo implies the existence of a regulatory hierarchy of tTAF, Modulo and Acj6, in which Modulo and/or Acj6 regulate one-third of tTAF-dependent genes. © 2017 Federation of European Biochemical Societies.

  12. Concurrent remote entanglement with quantum error correction against photon losses

    NASA Astrophysics Data System (ADS)

    Roy, Ananda; Stone, A. Douglas; Jiang, Liang

    2016-09-01

    Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.

  13. [Application of individual light-curing resin tray as edge plastic material in complete denture modulo].

    PubMed

    Chai, Mei; Tang, Xuyan; Liang, Guangku

    2015-12-01

    To investigate the clinical effect of an individual light-curing resin tray used as the edge-shaping material in complete denture molding.
    A total of 30 patients with a poor mandibular alveolar ridge condition were selected; complete dentures were fabricated using either an individual light-curing resin tray for edge shaping or a traditional individual impression tray with edge-shaping compound. Operability during edge shaping was assessed, and a questionnaire covering denture retention, comfort, mucosal condition and chewing function was administered three months after the dentures were fitted.
    There was no significant difference in retention, comfort, mucosal condition or chewing function between the two mandibular denture impression methods. However, patients whose denture edges were shaped with the individual light-curing resin tray reported a better experience during the procedure than those treated with impression compound (P<0.05). Furthermore, manipulation with the light-curing resin tray is easier for the clinician.
    Although the clinical effect of the individual light-curing resin tray as edge-shaping material equals that of impression compound, it saves time and labor; moreover, it is better accepted by patients and can therefore be popularized in clinical practice.

  14. Standard random number generation for MBASIC

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the formula a_(m+532) = a_(m+37) + a_m (modulo 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
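
    The recurrence is easy to reproduce; a minimal Python sketch follows (the seed below is arbitrary, and this illustrates the recurrence rather than the exact MBASIC implementation):

    ```python
    def tausworthe_bits(seed_bits, n_bits):
        """Extend a bit stream by the linear recurrence
        a[m+532] = a[m+37] + a[m] (mod 2); the seed must supply
        532 bits that are not all zero."""
        assert len(seed_bits) == 532 and any(seed_bits)
        bits = list(seed_bits)
        for m in range(n_bits - 532):
            bits.append(bits[m + 37] ^ bits[m])  # XOR is addition modulo 2
        return bits

    def words_28(bits):
        """Pack the stream into nonoverlapping adjacent 28-bit words."""
        return [int("".join(map(str, bits[i:i + 28])), 2)
                for i in range(0, len(bits) - 27, 28)]

    seed = [1] + [0] * 531                      # any nonzero seed works here
    stream = tausworthe_bits(seed, 532 + 28 * 10)
    print(words_28(stream[-280:]))              # ten 28-bit pseudorandom words
    ```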

  15. Unitary circular code motifs in genomes of eukaryotes.

    PubMed

    El Soufi, Karim; Michel, Christian J

    A set X of 20 trinucleotides was identified in the genes of bacteria, eukaryotes, plasmids and viruses which has, on average, the highest occurrence in the reading frame compared to its two shifted frames (Michel, 2015; Arquès and Michel, 1996). This set X has an interesting mathematical property: X is a circular code (Arquès and Michel, 1996). Thus, the motifs from this circular code X, called X motifs, always retrieve, synchronize and maintain the reading frame in genes. The origin of this circular code X in genes has been an open problem since its discovery in 1996. Here, we first show that the unitary circular codes (UCC), i.e. sets of one word, generate unitary circular code motifs (UCC motifs), i.e. concatenations of the same motif (simple repeats) leading to low-complexity DNA. Three classes of UCC motifs are studied here: repeated dinucleotides (D+ motifs), repeated trinucleotides (T+ motifs) and repeated tetranucleotides. Thus, the dinucleotide, trinucleotide and tetranucleotide repeat motifs retrieve, synchronize and maintain a frame modulo 2, modulo 3 and modulo 4, respectively, and their shifted frames (1 modulo 2; 1 and 2 modulo 3; 1, 2 and 3 modulo 4, according to the C2, C3 and C4 properties, respectively) in DNA sequences. The statistical distribution of these three motif classes is analyzed in the genomes of eukaryotes. A UCC motif and its complementary UCC motif have the same distribution in the eukaryotic genomes. Furthermore, a UCC motif and its complementary UCC motif have occurrences that increase contrary to their number of hydrogen bonds, very significantly for the T+ motifs. The longest dinucleotide, trinucleotide and tetranucleotide repeat motifs in the studied eukaryotic genomes are also given. Surprisingly, a scarcity of repeated trinucleotides (T+ motifs) is observed in the large eukaryotic genomes compared to the dinucleotide and tetranucleotide repeat motifs. This result may be explained by two observations. Repeated trinucleotides (T+ motifs) are identified in the X motifs of low composition (cardinality less than 10) in the genomes of eukaryotes. Furthermore, identical trinucleotide pairs of the circular code X are preferentially used in the gene sequences of eukaryotes. These two results suggest that unitary circular codes of trinucleotides may have been involved in the formation of the trinucleotide circular code X. Indeed, repeated trinucleotides in the X motifs in the genomes of eukaryotes may represent an intermediate stage of evolution, from repeated trinucleotides of cardinality 1 (T+ motifs) in the genomes of eukaryotes up to the X motifs of cardinality 20 in the gene sequences of eukaryotes. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. The recurrence sequences via Sylvester matrices

    NASA Astrophysics Data System (ADS)

    Karaduman, Erdal; Deveci, Ömür

    2017-07-01

    In this work, we define the Pell-Jacobsthal-Sylvester sequence and the Jacobsthal-Pell-Sylvester sequence by using the Sylvester matrices obtained from the characteristic polynomials of the Pell and Jacobsthal sequences, and we then study these sequences modulo m. We also obtain cyclic groups and semigroups from the generating matrices of these sequences when read modulo m, and we derive relationships among the orders of the cyclic groups and the periods of the sequences. Furthermore, we redefine the Pell-Jacobsthal-Sylvester sequence and the Jacobsthal-Pell-Sylvester sequence by means of the elements of the groups and examine them in finite groups.

  17. Passive demodulation of miniature fiber-optic-based interferometric sensors using a time-multiplexing technique.

    PubMed

    Santos, J L; Jackson, D A

    1991-08-01

    A passive demodulation technique suitable for interferometric interrogation of short optical cavities is described. It is based on time multiplexing of two low-finesse Fabry-Perot interferometers subject to the same measurand and with a differential optical phase of π/2 (modulo 2π). Independently of the cavity length, two optical outputs in quadrature are generated, which permits signal reading free of fading. The concept is demonstrated for the measurement of vibration using a simple processing scheme.

  18. Security Analysis of Some Diffusion Mechanisms Used in Chaotic Ciphers

    NASA Astrophysics Data System (ADS)

    Zhang, Leo Yu; Zhang, Yushu; Liu, Yuansheng; Yang, Anjia; Chen, Guanrong

    As a variant of the substitution-permutation network, the permutation-diffusion structure has received extensive attention in the field of chaotic cryptography over the last three decades. Because of their high implementation speed and nonlinearity over GF(2), the Galois field of two elements, mixtures of modulo addition/multiplication and Exclusive OR have become very popular in various designs for achieving the desired diffusion effect. This paper reports that some diffusion mechanisms based on modulo addition/multiplication and Exclusive OR are not resistant to plaintext attacks as claimed. By cracking several recently proposed chaotic ciphers as examples, it is demonstrated that a good understanding of the strength and weakness of these crypto-primitives is crucial for designing more practical chaotic encryption algorithms in the future.
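
    For concreteness, the kind of crypto-primitive under discussion looks like the following toy diffusion layer (a generic illustration, not any of the ciphers cracked in the paper):

    ```python
    def diffuse(plaintext: bytes, keystream: bytes) -> bytes:
        """Toy diffusion: each byte is mixed with the keystream by addition
        modulo 256, then chained to the previous ciphertext byte by XOR."""
        prev, out = 0, []
        for p, k in zip(plaintext, keystream):
            c = ((p + k) % 256) ^ prev   # modulo addition mixed with XOR
            out.append(c)
            prev = c
        return bytes(out)

    print(diffuse(b"attack at dawn", bytes(range(14))).hex())
    ```

    The paper's point is precisely that such mixes, on their own, can leak enough structure to be peeled apart with chosen or known plaintexts.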

  19. Techniques for computing the discrete Fourier transform using the quadratic residue Fermat number systems

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1986-01-01

    The complex integer multiplier and adder over the direct sum of two copies of finite field developed by Cozzens and Finkelstein (1985) is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplication over the rings of integers modulo Fermat numbers can be performed by means of two integer multiplications, whereas the complex integer multiplication requires three integer multiplications. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed to compute a systolic array of the DFT can be reduced substantially. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
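
    The two-multiplication trick can be sketched in a few lines. In the quadratic residue number system over a Fermat prime F, a square root of −1 exists (for F = 257, 16² = 256 ≡ −1), so a complex integer maps to two independent components that multiply separately; the example below (toy values, not from the paper) verifies (3 + 4i)(2 + 5i) = −14 + 23i modulo 257:

    ```python
    F = 257                  # the Fermat prime 2**8 + 1
    J = 16                   # 16**2 ≡ -1 (mod 257): a square root of -1

    def to_qrns(a, b):
        """Map a + bi to its quadratic-residue components (z, z*) mod F."""
        return ((a + J * b) % F, (a - J * b) % F)

    def from_qrns(z, zs):
        """Invert the map: a = (z + z*)/2, b = (z - z*)/(2J), all mod F."""
        return ((z + zs) * pow(2, -1, F) % F,
                (z - zs) * pow(2 * J, -1, F) % F)

    def qrns_mul(x, y):
        """Complex multiplication using only two integer multiplications."""
        (z1, zs1), (z2, zs2) = to_qrns(*x), to_qrns(*y)
        return from_qrns(z1 * z2 % F, zs1 * zs2 % F)

    # (3 + 4i)(2 + 5i) = -14 + 23i, and -14 ≡ 243 (mod 257)
    print(qrns_mul((3, 4), (2, 5)))  # -> (243, 23)
    ```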

  20. Implementacion de modulos constructivistas que atiendan "misconceptions" y lagunas conceptuales en temas de la fisica en estudiantes universitarios (Implementation of Constructivist Modules Addressing "Misconceptions" and Conceptual Gaps in Physics Topics among University Students)

    NASA Astrophysics Data System (ADS)

    Santacruz Sarmiento, Neida M.

    This study focused on "misconceptions" and conceptual gaps in fundamental physics topics, namely thermodynamic equilibrium and fluid statics. First, "misconceptions" and conceptual gaps were identified, and the way students construct their own theories of phenomena related to these topics was analyzed in detail. Given the complexity with which students assimilate physical concepts, a mixed-methods sequential explanatory design was used, with a quantitative stage followed by a qualitative one. The first stage comprised four phases: (1) administration of a diagnostic test to identify prior knowledge and conceptual gaps; (2) identification of "misconceptions" and conceptual gaps from that prior knowledge; (3) implementation of the intervention by means of modules on thermodynamic equilibrium and fluid statics; and (4) administration of a post-test to analyze the impact and effectiveness of the constructivist intervention. The second stage used a qualitative method: a semi-structured interview that began with the construction of a concept map and ended with a joint analysis of the data. The study revealed "misconceptions" and conceptual gaps in the participating students' prior knowledge of the topics, which were addressed through the various inquiry activities presented in the constructivist module. Marked differences between the pre- and post-tests were found for both topics, attributable to the abstract reasoning skills required for fluid statics and the intuitive grounding available for thermodynamic equilibrium, with better responses on the latter. Participants showed a marked evolution and/or change in their thinking structures; paired t-tests were significant for both modules even though not all participants reached the correct answer on the post-test. The qualitative analysis of the participants' responses confirmed the difficulty of removing "misconceptions" and conceptual gaps.

  1. Nonlocal conservation laws of the constant astigmatism equation

    NASA Astrophysics Data System (ADS)

    Hlaváč, Adam; Marvan, Michal

    2017-03-01

    For the constant astigmatism equation, we construct a system of nonlocal conservation laws (an abelian covering) closed under the reciprocal transformations. The corresponding potentials are functionally independent modulo a Wronskian type relation.

  2. The nucleoplasmin homolog NLP mediates centromere clustering and anchoring to the nucleolus.

    PubMed

    Padeken, Jan; Mendiburo, María José; Chlamydas, Sarantis; Schwarz, Hans-Jürgen; Kremmer, Elisabeth; Heun, Patrick

    2013-04-25

    Centromere clustering during interphase is a phenomenon known to occur in many different organisms and cell types, yet neither the factors involved nor their physiological relevance is well understood. Using Drosophila tissue culture cells and flies, we identified a network of proteins, including the nucleoplasmin-like protein (NLP), the insulator protein CTCF, and the nucleolus protein Modulo, to be essential for the positioning of centromeres. Artificial targeting further demonstrated that NLP and CTCF are sufficient for clustering, while Modulo serves as the anchor to the nucleolus. Centromere clustering was found to depend on centric chromatin rather than specific DNA sequences. Moreover, unclustering of centromeres results in the spatial destabilization of pericentric heterochromatin organization, leading to partial defects in the silencing of repetitive elements, defects during chromosome segregation, and genome instability. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Multi-Purpose Logistics Module (MPLM) Cargo Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Zampiceni, John J.; Harper, Lon T.

    2002-01-01

    This paper describes the new Shuttle Orbiter's Multi-Purpose Logistics Module (MPLM) Cargo Heat Exchanger (HX) and the associated MPLM cooling system. It presents the HX design and the performance characteristics of the system.

  4. Determining Diagonal Branches in Mine Ventilation Networks

    NASA Astrophysics Data System (ADS)

    Krach, Andrzej

    2014-12-01

    The present paper discusses determining diagonal branches in a mine ventilation network by means of a method based on the relationship A ⊗ Pᵀ(k, l) = M, which states that the nodal-branch incidence matrix A, modulo-2 multiplied by the transposed path matrix Pᵀ(k, l) from node k to node l, yields a matrix M whose rows k and l, corresponding to the start and end nodes, contain only ones, while all remaining rows contain only zeros. For a row of M to contain only zeros, the following condition must hold: after multiplying the elements of a row of A by the elements of a column of Pᵀ(k, l), i.e. by the elements of the corresponding row of P(k, l), the resulting row must contain no ones or an even number of ones, since only such a number of ones yields 0 when summed modulo 2. Because the rows of A correspond to the graph nodes, and the interior nodes of a path have degree 2 within the path (unlike the end nodes k and l, whose degree is 1), the number of ones in such a row must be 0 or 2. For the rows k and l of M to contain only ones, in turn, the corresponding multiplication must produce an odd number of ones, since only such a number yields 1 when summed modulo 2; as the path nodes k and l have degree 1, the number of ones in these rows must be exactly 1. The process of determining diagonal branches by this method is demonstrated using the example of a simple ventilation network with two upcast shafts and one downcast shaft.
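
    A minimal numpy sketch of the test A ⊗ Pᵀ(k, l) = M (on a toy three-node, three-branch network invented for illustration, not the example in the paper):

    ```python
    import numpy as np

    A = np.array([[1, 1, 0],    # node 0: incident to branches 0 and 1
                  [1, 0, 1],    # node 1: incident to branches 0 and 2
                  [0, 1, 1]])   # node 2: incident to branches 1 and 2

    P = np.array([[1, 0, 1]])   # path from node 0 to node 2 via branches 0 and 2

    M = (A @ P.T) % 2           # modulo-2 matrix product
    print(M.ravel())            # -> [1 0 1]: ones exactly at end nodes k=0, l=2
    ```

    The interior node of the path (node 1) meets the path in two branches, so its row sums to an even number and vanishes modulo 2, exactly as the condition above requires.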

  5. Automated absolute phase retrieval in across-track interferometry

    NASA Technical Reports Server (NTRS)

    Madsen, Soren N.; Zebker, Howard A.

    1992-01-01

    Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2π, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be determined. The underlying principles and the system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.

  6. Congruences for central factorial numbers modulo powers of prime.

    PubMed

    Wang, Haiqing; Liu, Guodong

    2016-01-01

    Central factorial numbers are more closely related to the Stirling numbers than the other well-known special numbers, and they play a major role in a variety of branches of mathematics. In the present paper we prove some interesting congruences for central factorial numbers.

  7. On a question of Brown, Douglas, and Fillmore

    NASA Astrophysics Data System (ADS)

    Kim, Jaewoong; Lee, Woo Young

    2007-12-01

    In this note we answer an old question of Brown, Douglas, and Fillmore [L. Brown, R.G. Douglas, P. Fillmore, Unitary equivalence modulo the compact operators and extensions of C*-algebras, in: Proc. Conf. Operator Theory, in: Lecture Notes in Math., vol. 345, Springer, Berlin, 1973, pp. 58-128].

  8. A Finite Abelian Group of Two-Letter Inversions

    ERIC Educational Resources Information Center

    Balbuena, Sherwin E.

    2015-01-01

    In abstract algebra, the study of concrete groups is fundamentally important to beginners. Most commonly used groups as examples are integer addition modulo n, real number addition and multiplication, permutation groups, and groups of symmetry. The last two examples are finite non-abelian groups and can be investigated with the aid of concrete…

  9. Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers

    NASA Technical Reports Server (NTRS)

    Bjorner, Nikolaj

    2010-01-01

    The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied to various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), and program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.

  10. On Fibonacci Numbers Which Are Elliptic Korselt Numbers

    DTIC Science & Technology

    2014-11-17

    …1, where (a|p) denotes the Legendre symbol of a with respect to p, then the order of the group of points on E modulo p, denoted #E(F_p), equals p + 1. In… “Fibonacci sequence, polynomials and the Euler function”, Indag. Math. (N.S.) 17 (2006), 611–625. [8] F. Luca and I. E. Shparlinski, “On the counting

  11. Analysis and Synthesis of Robust Data Structures

    DTIC Science & Technology

    1990-08-01

    …multiversion software, which is an adaptation of the N-modulo redundancy (NMR) technique; recovery blocks, which is an adaptation of… implementations using these features for such a hybrid approach. 1.3.2 Multiversion Software. Avizienis [AC77] was the first to adapt the NMR technique into…

  12. Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields

    NASA Astrophysics Data System (ADS)

    Díaz-Marín, Homero G.

    We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is to construct a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. We thus prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.

  13. Proof without Words: Squares Modulo 3

    ERIC Educational Resources Information Center

    Nelsen, Roger B.

    2013-01-01

    Using the fact that the sum of the first n odd numbers is n², we show visually that n² ≡ 0 (mod 3) when n ≡ 0 (mod 3), and n² ≡ 1 (mod 3) when n ≡ ±1 (mod 3).
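
    The algebra behind the picture takes one line per case; a worked form of the two congruences (standard reasoning, restated here rather than reproduced from the article):

    ```latex
    \begin{align*}
      (3k)^2       &= 9k^2            \equiv 0 \pmod{3},\\
      (3k \pm 1)^2 &= 9k^2 \pm 6k + 1 \equiv 1 \pmod{3}.
    \end{align*}
    ```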

  14. Satisfiability modulo theory and binary puzzle

    NASA Astrophysics Data System (ADS)

    Utomo, Putranto

    2017-06-01

    The binary puzzle is a sudoku-like puzzle with values in each cell taken from the set {0, 1}. We look at the mathematical theory behind it. A solved binary puzzle is an n × n binary array, where n is even, that satisfies the following conditions: (1) no three consecutive ones and no three consecutive zeros in any row or column; (2) every row and column is balanced, that is, the number of ones and zeros is equal in each row and in each column; (3) every two rows and every two columns are distinct. The binary puzzle has been proven to be an NP-complete problem [5]. Research concerning the satisfiability of formulas with respect to some background theory is called satisfiability modulo theories (SMT). An SMT solver is an extension of a satisfiability (SAT) solver. The notion of SMT can be used for solving various problems in mathematics and industry, such as formula verification and operations research [1, 7]. In this paper we apply SMT to solve binary puzzles. In addition, we experiment with puzzles of different sizes and with different numbers of blanks. We also compare our approach with two others, namely a SAT solver and exhaustive search.
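
    As a concrete illustration of the SMT encoding (a minimal sketch using the z3-solver Python bindings, assumed installed via `pip install z3-solver`; the paper's own encoding may differ), the three conditions translate directly into Boolean constraints:

    ```python
    from z3 import Bool, Solver, Or, Not, Sum, If, is_true, sat

    n = 4                                      # a tiny even board
    cells = [[Bool(f"c_{r}_{c}") for c in range(n)] for r in range(n)]
    s = Solver()

    def bit(b):
        return If(b, 1, 0)                     # count True as 1

    for i in range(n):
        # (2) balance: exactly n/2 ones in every row and column
        s.add(Sum([bit(cells[i][j]) for j in range(n)]) == n // 2)
        s.add(Sum([bit(cells[j][i]) for j in range(n)]) == n // 2)
        # (1) no three consecutive equal values in rows or columns
        for j in range(n - 2):
            for triple in ([cells[i][j + k] for k in range(3)],
                           [cells[j + k][i] for k in range(3)]):
                s.add(Or([Not(b) for b in triple]))   # not three ones
                s.add(Or(triple))                     # not three zeros
    # (3) pairwise distinct rows and columns
    for a in range(n):
        for b in range(a + 1, n):
            s.add(Or([cells[a][j] != cells[b][j] for j in range(n)]))
            s.add(Or([cells[j][a] != cells[j][b] for j in range(n)]))

    if s.check() == sat:
        m = s.model()
        for row in cells:
            print([1 if is_true(m.evaluate(v, model_completion=True)) else 0
                   for v in row])
    ```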

  15. Spin cat state generation for quadrupolar nuclei in semiconductor quantum dots or defect centers

    NASA Astrophysics Data System (ADS)

    Bulutay, Ceyhun

    Implementing spin-based quantum information encoding schemes in semiconductors has a high priority. The so-called cat codes offer a paradigm that enables hardware-efficient error correction. Their introduction into the semiconductor-based nuclear magnetic resonance framework hinges upon the realization of coherent spin states (CSS). In this work, we show how the crucial superpositions of CSS can be generated for nuclear spins. This is achieved through the intrinsic electric quadrupole interaction, in which a critical role is played by the biaxiality term that is readily available, as in strained heterostructures of semiconductors or in defect centers having nearby quadrupolar spins. The persistence of the cat states is achieved using a rotation pulse so as to harness the underlying fixed points of the classical Hamiltonian. We classify the two distinct types as polar- and equator-bound over the Bloch sphere with respect to the principal axes. Their optimal performance as well as their sensitivity under numerous parameter deviations are analyzed. Finally, we present how these modulo-2 cat states can be extended to modulo-4 by a three-pulse scheme. This work was supported by TUBITAK, The Scientific and Technological Research Council of Turkey, through project No. 114F409.

  16. One-way transformation of information

    DOEpatents

    Cooper, James A.

    1989-01-01

    Method and apparatus are provided for one-way transformation of data according to multiplication and/or exponentiation modulo a prime number. An implementation of the invention permits the one way residue transformation, useful in encryption and similar applications, to be implemented by n-bit computers substantially with no increase in difficulty or complexity over a natural transformation thereby, using a modulus which is a power of two.
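
    A hedged sketch of the underlying primitive (toy parameters chosen for illustration; the patent's specific power-of-two-modulus implementation trick is not reproduced here):

    ```python
    # Exponentiation modulo a large prime: cheap to compute, yet inverting
    # it (the discrete logarithm) is believed computationally infeasible.
    P = (1 << 127) - 1          # a Mersenne prime, 2**127 - 1
    G = 3                       # an assumed public base

    def one_way(x: int) -> int:
        return pow(G, x, P)     # fast built-in modular exponentiation

    secret = 0x1234567890ABCDEF
    print(hex(one_way(secret)))
    ```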

  17. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    DTIC Science & Technology

    2015-05-01

    …collaborative effort “Adiabatic Quantum Computing Applications Research” (14-RI-CRADA-02) between the Information Directorate and Lock-… Three methods are explored and consist of a parallel computing approach using Matlab, a quantum annealing approach using the D-Wave computer, and lastly satisfiability modulo theory (SMT) and corresponding SMT…

  18. Ideas, Creencias, Actitudes. Primer Modulo de una Serie para Maestros de Escuela Elemental (Ideas, Beliefs, Attitudes. First Module of a Series for Elementary Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide for teachers, in English and Spanish, examines the role of stereotypes within the context of contemporary beliefs, ideas, and attitudes. A pre-test and post-test are included to measure the user's awareness of stereotypes. Object lessons cover the following topics: (1) definition of stereotypes; (2) racial and ethnic stereotypes; (3)…

  19. Improved technique for one-way transformation of information

    DOEpatents

    Cooper, J.A.

    1987-05-11

    Method and apparatus are provided for one-way transformation of data according to multiplication and/or exponentiation modulo a prime number. An implementation of the invention permits the one way residue transformation, useful in encryption and similar applications, to be implemented by n-bit computers substantially with no increase in difficulty or complexity over a natural transformation thereby, using a modulus which is a power of two. 9 figs.

  20. Por Que Rosa No Es Valiente? Cuarto Modulo de una Serie para Maestros de Escuela Elemental (Why Isn't Rosie Brave? Fourth Module of a Series for Elementary School Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide in English and Spanish provides teachers with methods for identifying textbook bias and stereotyping. A pre-test and post-test designed to measure awareness of textbook stereotypes are included. Four object lessons discuss the function of repetition, cumulative effect, omission, and distortion in reinforcing stereotypes, especially…

  1. Remarks on Chern-Simons Invariants

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Mnëv, Pavel

    2010-02-01

    The perturbative Chern-Simons theory is studied in a finite-dimensional version or assuming that the propagator satisfies certain properties (as is the case, e.g., with the propagator defined by Axelrod and Singer). It turns out that the effective BV action is a function on cohomology (with shifted degrees) that solves the quantum master equation and is defined modulo certain canonical transformations that can be characterized completely. Out of it one obtains invariants.

  2. Three-dimensional surface contouring of macroscopic objects by means of phase-difference images.

    PubMed

    Velásquez Prieto, Daniel; Garcia-Sucerquia, Jorge

    2006-09-01

    We report a technique to determine the 3D contour of objects with dimensions at least 4 orders of magnitude larger than the illumination optical wavelength. Our proposal is based on the numerical reconstruction of the optical wave field of digitally recorded holograms. The modulo-2π phase map required in any contouring process is obtained by direct subtraction of two phase-contrast images under different illumination angles, creating a phase-difference image of a still object. Obtaining the phase-difference images is only possible by using the capability of numerical reconstruction of the complex optical field provided by digital holography. This unique characteristic leads to a robust, reliable, and fast procedure that requires only two images. A theoretical analysis of the contouring system is presented and verified by numerical and experimental results.

  3. Symbolically Modeling Concurrent MCAPI Executions

    NASA Technical Reports Server (NTRS)

    Fischer, Topher; Mercer, Eric; Rungta, Neha

    2011-01-01

    Improper use of Inter-Process Communication (IPC) within concurrent systems often creates data races which can lead to bugs that are challenging to discover. Techniques that use Satisfiability Modulo Theories (SMT) problems to symbolically model possible executions of concurrent software have recently been proposed for use in the formal verification of software. In this work we describe a new technique for modeling executions of concurrent software that use a message passing API called MCAPI. Our technique uses an execution trace to create an SMT problem that symbolically models all possible concurrent executions and follows the same sequence of conditional branch outcomes as the provided execution trace. We check if there exists a satisfying assignment to the SMT problem with respect to specific safety properties. If such an assignment exists, it provides the conditions that lead to the violation of the property. We show how our method models behaviors of MCAPI applications that are ignored in previously published techniques.

  4. Possible resonance effect of axionic dark matter in Josephson junctions.

    PubMed

    Beck, Christian

    2013-12-06

    We provide theoretical arguments that dark-matter axions from the galactic halo that pass through Earth may generate a small observable signal in resonant S/N/S Josephson junctions. The corresponding interaction process is based on the uniqueness of the gauge-invariant axion Josephson phase angle modulo 2π and is predicted to produce a small Shapiro steplike feature without externally applied microwave radiation when the Josephson frequency resonates with the axion mass. A resonance signal of so far unknown origin observed by C. Hoffmann et al. [Phys. Rev. B 70, 180503(R) (2004)] is consistent with our theory and can be interpreted in terms of an axion mass m_a c² = 0.11 meV and a local galactic axionic dark-matter density of 0.05 GeV/cm³. We discuss future experimental checks to confirm the dark-matter nature of the observed signal.

  5. Los Rizos y el Beisbol. Septimo Modulo de una Serie para Maestros de Escuela Elemental. (Curls and Baseball. Seventh Module of a Series for Elementary School Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide for teachers, in English and Spanish, examines how assigned sex roles affect grade school girls in competitive sports, simple games, pastimes, and other extracurricular activities. A pre-test and post-test are included to measure the user's awareness of sexual stereotypes. Five object lessons cover the following topics: (1) myths that…

  6. Viva La Diferencia! Segundo Modulo de una Serie para Maestros de Escuela Elemental (Long Live the Difference! Second Module of a Series for Elementary School Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide, in English and Spanish, is designed to provide teachers with a scientific basis for identifying myths and distortions about men and women. A pre-test and post-test are included to measure the user's awareness of stereotypes. Object lessons address the following areas: (1) common sexual stereotypes; (2) sexual functions; (3) the…

  7. Personality Module. Test Booklet. Test Items for Booklets 1, 2, 3=Modulo de personalidad. Libro de prueba. Itemes de prueba para los libros 1, 2, 3.

    ERIC Educational Resources Information Center

    California State Univ., Los Angeles. National Dissemination and Assessment Center.

    The booklet is part of a grade 10-12 social studies series produced for bilingual education. The series consists of six major thematic modules, with four to five booklets in each. The interdisciplinary modules are based on major ideas and designed to help students understand some major human problems and make sound, responsive decisions to improve…

  8. Certified Satisfiability Modulo Theories (SMT) Solving for System Verification

    DTIC Science & Technology

    2017-01-01

    …the compositionality of trustworthiness is also a critical capability: tools must be able to trust and use the results of other tools. One approach for…multiple reasoners to work together. …level of confidence in the results returned by the underlying SMT solver. Unfortunately, obtaining the high level of trust required for, e.g., safety

  9. Irregularities and Forecast Studies of Equatorial Spread F

    DTIC Science & Technology

    2016-07-13

    …less certain and requires investigation. It should be possible to observe the Faraday rotation of the signals received at Jicamarca. This is another…indication of the line-integrated electron number density. Like the phase delay, the Faraday…angle is a modulo-two-pi quantity that is best used to constrain the time evolution of the ionosphere. Both the Faraday angle and the phase delay are

  10. Dona Ana No Esta Aqui. Sexto Modulo de una Serie para Maestros de Escuela Elemental (Dona Ana Isn't Here. Sixth Module of a Series for Elementary School Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide in English and Spanish examines the roles assigned to women in social studies textbooks and the omission of women from history books. It analyzes the topics, textbooks, pictures, and narrations in use, and offers alternatives to these biased materials. A pre-test and post-test are included to measure the user's awareness of textbook…

  11. Monotonic sequences related to zeros of Bessel functions

    NASA Astrophysics Data System (ADS)

    Lorch, Lee; Muldoon, Martin

    2008-12-01

    In the course of their work on Salem numbers and uniform distribution modulo 1, A. Akiyama and Y. Tanigawa proved some inequalities concerning the values of the Bessel function J_0 at multiples of π, i.e., at the zeros of J_{1/2}. This raises the question of inequalities and monotonicity properties for the sequences of values of one cylinder function at the zeros of another such function. Here we derive such results by differential equations methods.

  12. AN/TAC-1 demultiplexer circuit card assembly

    NASA Astrophysics Data System (ADS)

    Krueger, Paul J.

    1989-01-01

    This report describes the design, operation, and testing of the AN/TAC-1 demultiplexer subassembly. It demultiplexes the 6144 kb/s digital data stream received over fiber-optic cable or tropo satellite support radio and converts it into 2 digital groups and 16 digital channels. Timing recovery is accomplished by generating an 18432 kHz master clock synchronized to the incoming data. This master clock is divided modulo two to generate the proper group and loop timing.

  13. Reduction of Flow Diagrams to Unfolded Form Modulo Snarls.

    DTIC Science & Technology

    1987-04-14

    …the English name of the Greek letter zeta.) 1) An unintelligent canonical method called the “3-level crossbar/pole” representation (3cp). This… Second, it will make these pictorial representations (all of which go by the name ζ…). Even though this is an abuse of language, it is in the spirit…received an M.S. degree in computer and communications sciences from the University of Michigan. He is currently teaching a course on assembly language…

  14. Economic Organization Module. Test Booklet. Test Items for Booklets 1, 2, 3=Libro de prueba. Modulo de organizacion economica. Itemes de prueba para los libros 1, 2, 3.

    ERIC Educational Resources Information Center

    California State Univ., Los Angeles. National Dissemination and Assessment Center.

    The booklet is part of a grade 10-12 social studies series produced for bilingual education. The series consists of six major thematic modules, with four to five booklets in each. The interdisciplinary modules are based on major ideas and designed to help students understand some major human problems and make sound, responsive decisions to improve…

  15. Environment Module. Test Booklet. Test Items for Booklets 1, 2, 3, 4=Libro de prueba. Modulo del medio ambiente. Itemes de prueba para los libros 1, 2, 3, 4.

    ERIC Educational Resources Information Center

    California State Univ., Los Angeles. National Dissemination and Assessment Center.

    The booklet is part of a grade 10-12 social studies series produced for bilingual education. The series consists of six major thematic modules, with four to five booklets in each. The interdisciplinary modules are based on major ideas and are designed to help students understand some major human problems and make sound, responsive decisions to…

  16. The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.

    PubMed

    Dias, José M B; Leitao, José M N

    2002-01-01

    This paper presents an effective algorithm for absolute phase (not simply modulo-2π) estimation from incomplete, noisy and modulo-2π observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2π-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (π-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2π-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.

  17. Performance Analysis of IEEE 802.11g Waveform Transmitted Over a Fading Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2006-06-01

    …called packet binary convolutional code (PBCC), was included as an option for performance at a rate of either 5.5 or 11 Mbps. The second offshoot…and the code rate is r = k/n. A general convolutional encoder can be implemented with k shift registers and n modulo-2 adders. Higher rates can be…derived from lower-rate codes by employing “puncturing.” Puncturing is a procedure for omitting some of the encoded bits in the transmitter (thus
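
    To make the shift-register picture concrete, here is a minimal sketch of a rate-1/2 convolutional encoder built from a 3-bit shift register and two modulo-2 adders (the generators 7 and 5 octal are a textbook choice for illustration, not the 802.11g/PBCC generators):

    ```python
    def conv_encode(bits, g=(0b111, 0b101)):
        """Rate-1/2 convolutional encoder: one input bit in, two coded
        bits out, each produced by a modulo-2 sum of tapped register bits."""
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111        # 3-bit shift register
            for gen in g:                             # one modulo-2 adder each
                out.append(bin(state & gen).count("1") % 2)
        return out

    print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    Puncturing would then delete an agreed subset of these output bits to raise the rate above 1/2.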

  18. Super-Laplacians and their symmetries

    NASA Astrophysics Data System (ADS)

    Howe, P. S.; Lindström, U.

    2017-05-01

    A super-Laplacian is a set of differential operators in superspace whose highest-dimensional component is given by the spacetime Laplacian. Symmetries of super-Laplacians are given by linear differential operators of arbitrary finite degree and are determined by superconformal Killing tensors. We investigate these in flat superspaces. The differential operators determining the symmetries give rise to algebras which can be identified in many cases with the tensor algebras of the relevant superconformal Lie algebras modulo certain ideals. They have applications to Higher Spin theories.

  19. Universal Relation among the Many-Body Chern Number, Rotation Symmetry, and Filling

    NASA Astrophysics Data System (ADS)

    Matsugatani, Akishi; Ishiguro, Yuri; Shiozaki, Ken; Watanabe, Haruki

    2018-03-01

    Understanding the interplay between the topological nature and the symmetry property of interacting systems has been a central matter of condensed matter physics in recent years. In this Letter, we establish nonperturbative constraints on the quantized Hall conductance of many-body systems with arbitrary interactions. Our results allow one to readily determine the many-body Chern number modulo a certain integer without performing any integrations, solely based on the rotation eigenvalues and the average particle density of the many-body ground state.

  20. Una Escoba para Ana, Cien Oficios para Juan. Quinto Modulo de una Serie para Maestros de Escuela Elemental. (A Broom for Anna, A Hundred Jobs for John. Fifth Module of a Series for Elementary School Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide for teachers, in English and Spanish, examines the stereotyped work roles assigned to men and women. The guide examines educational materials that perpetuate these roles and presents teaching alternatives which reinforce students' self esteem and confidence. A pre-test and post-test are included to measure the user's awareness of…

  1. Screenings and vertex operators of quantum superalgebra U_q(ŝl(N|1))

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, Takeo

    2012-08-15

    We construct the screening currents of the quantum superalgebra U_q(ŝl(N|1)) for an arbitrary level k ≠ −N + 1. We show that these screening currents commute with the superalgebra modulo a total difference. We propose bosonizations of the vertex operators by using the screening currents. We check that these vertex operators are the intertwiners among the Fock-Wakimoto representation and the typical representation for rank N ≤ 4.

  2. Por Que Mami No Puede Cambiar una Goma? Tercer Modulo de una Serie para Maestros de Escuela Elemental. (Why Can't Mommy Change a Flat Tire? Third Module of a Series for Elementary School Teachers).

    ERIC Educational Resources Information Center

    Molina, Carmen Eneida, Ed.; And Others

    This guide for teachers, in English and Spanish, examines the role parents play in the socialization of sex roles. A pre-test and post test are included to measure the user's awareness of sexual stereotyping. Five object lessons cover the following topics: (1) stereotypes which exist prior to a baby's birth; (2) behavioral standards on which…

  3. Culture and Social Organization Module. Test Booklet. Test Items for Booklets 1, 2, 3, 4=Libro de prueba. Modulo de cultura y organizacion social. Itemes de prueba para los libros 1, 2, 3, 4.

    ERIC Educational Resources Information Center

    California State Univ., Los Angeles. National Dissemination and Assessment Center.

    The booklet is part of a grade 10-12 social studies series produced for bilingual education. The series consists of six major thematic modules, with four to five booklets in each. The interdisciplinary modules are based on major ideas and are designed to help students understand some major human problems and make sound, responsive decisions to…

  4. Integer Flows and Circuit Covers of Graphs and Signed Graphs

    NASA Astrophysics Data System (ADS)

    Cheng, Jian

    The work in Chapter 2 is motivated by Tutte and Jaeger's pioneering work on converting modulo flows into integer-valued flows for ordinary graphs. For a signed graph (G, σ), we first prove that for each k ∈ {2, 3}, if (G, σ) is (k − 1)-edge-connected and contains an even number of negative edges when k = 2, then every modulo k-flow of (G, σ) can be converted into an integer-valued (k + 1)-flow with a larger or the same support. We also prove that if (G, σ) is odd-(2p+1)-edge-connected, then (G, σ) admits a modulo circular (2 + 1/p)-flow if and only if it admits an integer-valued circular (2 + 1/p)-flow, which improves all previous results by Xu and Zhang (DM2005), Schubert and Steffen (EJC2015), and Zhu (JCTB2015). The shortest circuit cover conjecture is one of the major open problems in graph theory. It states that every bridgeless graph G contains a set of circuits F such that each edge is contained in at least one member of F and the length of F is at most 7/5∥E(G)∥. This concept was recently generalized to signed graphs by Macajova et al. (JGT2015). In Chapter 3, we improve their upper bound from 11∥E(G)∥ to 14/3∥E(G)∥, and if G is 2-edge-connected and has even negativeness, then it can be further reduced to 11/3∥E(G)∥. Tutte's 3-flow conjecture has been studied by many graph theorists in the last several decades. As a new approach to this conjecture, DeVos and Thomassen considered vectors as flow values and found that there is a close relation between vector S¹-flows and integer 3-NZFs. Motivated by their observation, in Chapter 4, we prove that if a graph G admits a vector S¹-flow with rank at most two, then G admits an integer 3-NZF. The concept of even factors is highly related to the famous Four Color Theorem. We conclude this dissertation in Chapter 5 with an improvement of a recent result by Chen and Fan (JCTB2016) on the upper bound of even factors. We show that if a graph G contains an even factor, then it contains an even factor H with ∥E(H)∥ ≥ 4/7(∥E(G)∥ + 1) + 1/7∥V₂(G)∥, where V₂(G) is the set of vertices of degree two.

  5. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
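
    The modulo-sum divider at the heart of this scheme can be pictured as a phase accumulator: the remainder left in the accumulator after a known number of clocks measures the frequency error. The sketch below models only that counting mechanism, with an assumed register width, frequency index, and clock count.

        # Modulo-sum accumulator stage: add a frequency index modulo 2**N per clock.
        ACC_BITS = 16
        MOD = 2 ** ACC_BITS

        def accumulate(freq_index, n_clocks):
            acc = 0
            for _ in range(n_clocks):
                acc = (acc + freq_index) % MOD   # modulo-sum stage
            return acc

        # An on-frequency oscillator leaves the stored zero-error constant (here 0);
        # any nonzero remainder would call for a correction word.
        remainder = accumulate(freq_index=4096, n_clocks=16)   # 4096*16 = 2**16 = 0 mod 2**16
        print(remainder == 0)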

  6. Auxilio, Socorro! Salvame! Los Estereotipos de la Mujer en la Television. Octavo Modulo de una Serie para Maestros de Escuela Elemental. Para Usar con la Grabacion (Help! Help! Save me! Sexual Stereotyping of Women. Eighth Module of a Series for Elementary School Teachers. Audiotape Transcriptions).

    ERIC Educational Resources Information Center

    Garcia Ramis, Magali, Ed.; And Others

    This guide in English and Spanish provides information for teachers concerning the roles assigned to women in television, and the stereotypes on which these roles are based. The guide contains a pre-test and a post-test to measure the user's awareness of sexual stereotyping. Four object lessons examine: (1) the traditional role of women on…

  7. Simulated Assessment of Interference Effects in Direct Sequence Spread Spectrum (DSSS) QPSK Receiver

    DTIC Science & Technology

    2014-03-27

    bit error rate; BPSK, binary phase shift keying; CDMA, code division multiple access; CSI, comb spectrum interference; CW, continuous wave; DPSK, differential... CDMA) and GPS systems, which is a Gold code. This code is generated by a modulo-2 operation between two different preferred m-sequences. The preferred m... [Figure 3.26: Comparison of input SNR_Sim and output SNR_Out of the band-pass RF filter (SNR_RF).]
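
    The Gold-code construction mentioned here is compact enough to sketch: XOR (modulo-2 addition) of two preferred m-sequences from 5-stage registers. The tap sets below are a classic length-31 preferred pair used for illustration; the actual GPS codes use longer registers.

        # Gold code from the modulo-2 sum of two preferred m-sequences.
        def m_sequence(taps, length):
            state = [1] * max(taps)              # nonzero seed; taps are 1-indexed stages
            out = []
            for _ in range(length):
                out.append(state[-1])
                fb = 0
                for t in taps:
                    fb ^= state[t - 1]           # modulo-2 sum of tapped stages
                state = [fb] + state[:-1]
            return out

        seq1 = m_sequence(taps=[5, 2], length=31)
        seq2 = m_sequence(taps=[5, 4, 3, 2], length=31)
        gold = [a ^ b for a, b in zip(seq1, seq2)]   # one member of the Gold family
        print(gold)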

  8. Extended Closed-form Expressions for the Robust Symmetrical Number System Dynamic Range and an Efficient Algorithm for its Computation

    DTIC Science & Technology

    2014-01-01

    and distance between all of the vector ambiguity pairs for the combined N sequences. To simplify our derivation, we define the center of ambiguity (COA...modulo N. The resulting structure of the N sequences ensures that two successive RSNS vectors (paired terms from all N sequences) when considered...represented by a vector X_h = [x_{1,h}, x_{2,h}, ..., x_{N,h}]^T of N paired integers, one from each sequence at h. For example, a left-shifted, three-sequence

  9. Entanglement Equilibrium and the Einstein Equation.

    PubMed

    Jacobson, Ted

    2016-05-20

    A link between the semiclassical Einstein equation and a maximal vacuum entanglement hypothesis is established. The hypothesis asserts that entanglement entropy in small geodesic balls is maximized at fixed volume in a locally maximally symmetric vacuum state of geometry and quantum fields. A qualitative argument suggests that the Einstein equation implies the validity of the hypothesis. A more precise argument shows that, for first-order variations of the local vacuum state of conformal quantum fields, the vacuum entanglement is stationary if and only if the Einstein equation holds. For nonconformal fields, the same conclusion follows modulo a conjecture about the variation of entanglement entropy.

  10. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
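
    The arithmetic advantage can be made concrete with a small number-theoretic transform. The sketch below works modulo the Fermat number F_3 = 257, where ω = 2 is a primitive 16th root of unity, so every twiddle multiplication is by a power of 2 (a shift in hardware); the O(n²) form and the chosen length are illustrative assumptions, not the paper's design.

        # Length-16 number-theoretic transform modulo the Fermat number 257.
        # Requires Python 3.8+ for pow(x, -1, p).
        P, N, OMEGA = 257, 16, 2                 # 2**16 = 65536 = 255*257 + 1

        def ntt(x, root):
            return [sum(x[j] * pow(root, i * j, P) for j in range(N)) % P
                    for i in range(N)]

        x = list(range(N))
        X = ntt(x, OMEGA)                        # forward transform
        back = ntt(X, pow(OMEGA, -1, P))         # transform with the inverse root
        inv = [(v * pow(N, -1, P)) % P for v in back]
        print(inv == x)                          # True: exact reconstruction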

  11. Factoring 51 and 85 with 8 qubits

    PubMed Central

    Geller, Michael R.; Zhou, Zhongyuan

    2013-01-01

    We construct simplified quantum circuits for Shor's order-finding algorithm for composites N given by products of the Fermat primes 3, 5, 17, 257, and 65537. Such composites, including the previously studied case of 15, as well as 51, 85, 771, 1285, 4369, … have the simplifying property that the order of a modulo N for every base a coprime to N is a power of 2, significantly reducing the usual phase estimation precision requirement. Prime factorization of 51 and 85 can be demonstrated with only 8 qubits and a modular exponentiation circuit consisting of no more than four CNOT gates. PMID:24162074

  12. Factoring 51 and 85 with 8 qubits.

    PubMed

    Geller, Michael R; Zhou, Zhongyuan

    2013-10-28

    We construct simplified quantum circuits for Shor's order-finding algorithm for composites N given by products of the Fermat primes 3, 5, 17, 257, and 65537. Such composites, including the previously studied case of 15, as well as 51, 85, 771, 1285, 4369, … have the simplifying property that the order of a modulo N for every base a coprime to N is a power of 2, significantly reducing the usual phase estimation precision requirement. Prime factorization of 51 and 85 can be demonstrated with only 8 qubits and a modular exponentiation circuit consisting of no more than four CNOT gates.
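
    The quoted order property is small enough to check by brute force. A minimal verification, assuming nothing beyond integer arithmetic, computes the multiplicative order of every base coprime to N and tests that it is a power of 2:

        # Verify: orders of all units modulo 15, 51, 85 are powers of 2.
        from math import gcd

        def order(a, n):
            k, x = 1, a % n
            while x != 1:
                x = (x * a) % n
                k += 1
            return k

        for n in (15, 51, 85):
            orders = {order(a, n) for a in range(2, n) if gcd(a, n) == 1}
            print(n, sorted(orders), all(k & (k - 1) == 0 for k in orders))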

  13. ID-based encryption scheme with revocation

    NASA Astrophysics Data System (ADS)

    Othman, Hafizul Azrie; Ismail, Eddie Shahril

    2017-04-01

    In 2015, Meshram proposed an efficient ID-based cryptographic encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme was pairing-free and claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.

  14. Squeezed states: A geometric framework

    NASA Technical Reports Server (NTRS)

    Ali, S. T.; Brooke, J. A.; Gazeau, J.-P.

    1992-01-01

    A general definition of squeezed states is proposed and its main features are illustrated through a discussion of the standard optical coherent states represented by 'Gaussian pure states'. The set-up involves representations of groups on Hilbert spaces over homogeneous spaces of the group, and relies on the construction of a square integrable (coherent state) group representation modulo a subgroup. This construction depends upon a choice of a Borel section which has a certain permissible arbitrariness in its selection; this freedom is attributable to a squeezing of the defining coherent states of the representation, and corresponds in this way to a sort of gauging.

  15. Newman-Penrose constants of the Kerr-Newman metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong Xuefei; Shang Yu; Bai Shan

    The Newman-Unti formalism of the Kerr-Newman metric near future null infinity is developed, with which the Newman-Penrose constants for both the gravitational and electromagnetic fields of the Kerr-Newman metric are computed and shown to be zero. The multipole structure near future null infinity in the sense of Janis-Newman of the Kerr-Newman metric is then further studied. It is found that up to the 2^4-pole, modulo a constant dependent upon the order of the pole, these multipole moments agree with the Geroch-Hansen multipole moments defined at spatial infinity.

  16. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238

  17. Evidence for Direct CP Violation in the Measurement of the Cabibbo-Kobayashi-Maskawa Angle γ with B± → D(*)K(*)± Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amo Sanchez, P. del; Lees, J. P.; Poireau, V.

    2010-09-17

    We report the measurement of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through a Dalitz plot analysis of neutral D-meson decays to K_S^0 π+ π− and K_S^0 K+ K− produced in the processes B± → DK±, B± → D*K± with D* → Dπ0, Dγ, and B± → DK*± with K*± → K_S^0 π±, using 468 million BB̄ pairs collected by the BABAR detector at the PEP-II asymmetric-energy e+e− collider at SLAC. We measure γ = (68 ± 14 ± 4 ± 3)° (modulo 180°), where the first error is statistical, the second is the experimental systematic uncertainty, and the third reflects the uncertainty in the description of the neutral D decay amplitudes. This result is inconsistent with γ = 0 (no direct CP violation) with a significance of 3.5 standard deviations.

  18. Evidence for Direct CP Violation in the Measurement of the Cabibbo-Kobayashi-Maskawa Angle γ with B∓ → D(*)K(*)∓ Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Amo Sanchez, P.; Lees, J.P.; Poireau, V.

    2011-08-19

    We report the measurement of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through a Dalitz plot analysis of neutral D meson decays to K_S^0 π+ π− and K_S^0 K+ K− produced in the processes B∓ → DK∓, B∓ → D*K∓ with D* → Dπ0, Dγ, and B∓ → DK*∓ with K*∓ → K_S^0 π∓, using 468 million BB̄ pairs collected by the BABAR detector at the PEP-II asymmetric-energy e+e− collider at SLAC. We measure γ = (68 ± 14 ± 4 ± 3)° (modulo 180°), where the first error is statistical, the second is the experimental systematic uncertainty, and the third reflects the uncertainty in the description of the neutral D decay amplitudes. This result is inconsistent with γ = 0 (no direct CP violation) with a significance of 3.5 standard deviations.

  19. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
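
    The bandwidth saving of MPCM comes from transmitting each sample modulo a small base and letting the decoder restore the high-order bits from a prediction. The sketch below is our own minimal reading of that idea, with an assumed base and a previous-sample predictor; the real codec's parameters and predictor differ.

        # Modulo-PCM style coding: send s % M, decode against a prediction.
        M = 64  # send 6 bits instead of 8; works while successive samples
                # differ by less than M/2

        def encode(samples):
            return [s % M for s in samples]

        def decode(residues, first):
            rec, prev = [], first
            for r in residues:
                # choose the value congruent to r (mod M) nearest the prediction
                base = prev - (prev % M)
                candidates = (base + r - M, base + r, base + r + M)
                prev = min(candidates, key=lambda c: abs(c - prev))
                rec.append(prev)
            return rec

        signal = [100, 103, 110, 121, 118, 130, 141, 150]
        print(decode(encode(signal), first=signal[0]) == signal)   # True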

  20. Symmetrical treatment of "Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition", for major depressive disorders.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2016-01-01

    We previously presented a group theoretical model that describes psychiatric patient states or clinical data in a graded vector-like format based on modulo groups. Meanwhile, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, the current version), is frequently used for diagnosis in daily psychiatric treatments and biological research. The diagnostic criteria of DSM-5 contain simple binomial items relating to the presence or absence of specific symptoms. In spite of its simple form, the practical structure of the DSM-5 system is not sufficiently systematized for data to be treated in a more rationally sophisticated way. To view the disease states in terms of symmetry in the manner of abstract algebra is considered important for the future systematization of clinical medicine. We provide a simple idea for the practical treatment of the psychiatric diagnosis/score of DSM-5 using depressive symptoms in line with our previously proposed method. An expression is given employing modulo-2 and -7 arithmetic (in particular, additive group theory) for Criterion A of a 'major depressive episode' that must be met for the diagnosis of 'major depressive disorder' in DSM-5. For this purpose, the novel concept of an imaginary value 0 that can be recognized as an explicit 0 or implicit 0 was introduced to compose the model. The zeros allow the incorporation or deletion of an item between any other symptoms if they are ordered appropriately. Optionally, a vector-like expression can be used to rate/select only specific items when modifying the criterion/scale. Simple examples are illustrated concretely. Further development of the proposed method for the criteria/scale of a disease is expected to raise the level of formalism of clinical medicine to that of other fields of natural science.

  1. Tilt-effect of holograms and images displayed on a spatial light modulator.

    PubMed

    Harm, Walter; Roider, Clemens; Bernet, Stefan; Ritsch-Marte, Monika

    2015-11-16

    We show that a liquid crystal spatial light modulator (LCOS-SLM) can be used to display amplitude images, or phase holograms, which change in a pre-determined way when the display is tilted, i.e. observed under different angles. This is similar to the tilt-effect (also called "latent image effect") known from various security elements ("kinegrams") on credit cards or bank notes. The effect is achieved without any specialized optical components, simply by using the large phase shifting capability of a "thick" SLM, which extends over several multiples of 2π, in combination with the angular dependence of the phase shift. For hologram projection one can use the fact that the phase of a monochromatic wave is only defined modulo 2π. Thus one can design a phase pattern extending over several multiples of 2π, which transforms at different readout angles into different 2π-wrapped phase structures, due to the angular dependence of the modulo 2π operation. These different beams then project different holograms at the respective readout angles. In amplitude modulation mode (with inserted polarizer) the intensity of each SLM pixel oscillates over several periods when tuning its control voltage. Since the oscillation period depends on the readout angle, it is possible to find a certain control voltage which produces two (or more) selectable gray levels at a corresponding number of pre-determined readout angles. This is done with all SLM pixels individually, thus constructing different images for the selected angles. We experimentally demonstrate the reconstruction of multiple (Fourier- and Fresnel-) holograms, and of different amplitude images, by readout of static diffractive patterns in a variable angular range between 0° and 60°.

  2. Λ scattering equations

    NASA Astrophysics Data System (ADS)

    Gomez, Humberto

    2016-06-01

    The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.

  3. Importance of Broken Gauge Symmetry in Addressing Three, Key, Unanswered Questions Posed by Low Nuclear Reactions (LENR's)

    NASA Astrophysics Data System (ADS)

    Chubb, Scott

    2003-03-01

    Three key unanswered questions posed by LENRs are: 1. How do we explain the lack of high-energy particles (HEPs)? 2. Can we understand and prioritize the ways coupling can occur between nuclear and atomic length scales? 3. What are the roles of surface-like (SL), as opposed to bulk-like (BL), processes in triggering nuclear phenomena? One important source of confusion associated with each of these questions is the common perception that the quantum mechanical phases of different particles are not correlated with each other. When the momenta p of interacting particles are large, and reactions occur rapidly (between HEPs, for example), this is a valid assumption. But when the relative difference in p becomes vanishingly small, between one charge and many others, as a result of implicit electromagnetic coupling, each charge can share a common phase relative to the others, modulo 2nπ, where n is an integer, even when outside forces are introduced. The associated forms of broken gauge symmetry, which distinguish BL from SL phenomena at room temperature, also explain super- and normal conductivity in solids, and can be used to address the three key unanswered questions above.

  4. Category-theoretic models of algebraic computer systems

    NASA Astrophysics Data System (ADS)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the ring of integers modulo n, whose operations are implemented in conventional arithmetic processors, to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  5. Separating OR, SUM, and XOR Circuits.

    PubMed

    Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H

    2016-08-01

    Given a boolean n × n matrix A we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^{3/2}/log² n). We consider the task of rewriting a given OR-circuit as a XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis.

  6. Symmetric Trajectories for the 2N-Body Problem with Equal Masses

    NASA Astrophysics Data System (ADS)

    Terracini, Susanna; Venturelli, Andrea

    2007-06-01

    We consider the problem of 2N bodies of equal masses in ℝ³ for the Newtonian-like weak-force potential r^{−σ}, and we prove the existence of a family of collision-free nonplanar and nonhomographic symmetric solutions that are periodic modulo rotations. In addition, the rotation number with respect to the vertical axis ranges in a suitable interval. These solutions have the hip-hop symmetry, a generalization of that introduced in [19], for the case of many bodies and taking account of a topological constraint. The argument exploits the variational structure of the problem, and is based on the minimization of the Lagrangian action on a given class of paths.

  7. From grid cells and visual place cells to multimodal place cell: a new robotic architecture

    PubMed Central

    Jauffret, Adrien; Cuperlier, Nicolas; Gaussier, Philippe

    2015-01-01

    In the present study, a new architecture for the generation of grid cells (GC) was implemented on a real robot. In order to test this model, a simple place cell (PC) model merging visual PC activity and GC was developed. GC were first built from a simple "several-to-one" projection (similar to a modulo operation) performed on a neural field coding for path integration (PI). Robotics experiments raised several practical and theoretical issues. To limit the important angular drift of PI, head direction information was introduced in addition to the robot proprioceptive signal coming from the wheel rotation. Next, a simple associative learning between visual place cells and the neural field coding for the PI was used to recalibrate the PI and to limit its drift. Finally, the parameters controlling the shape of the PC built from the GC were studied. Increasing the number of GC obviously improves the shape of the resulting place field. Yet, other parameters such as the discretization factor of PI or the lateral interactions between GC can have an important impact on the place field quality and avoid the need for a very large number of GC. In conclusion, our results show that our GC model based on the compression of PI is congruent with neurobiological studies made on rodents. GC firing patterns can be the result of a modulo transformation of PI information. We argue that such a transformation may be a general property of the connectivity from the cortex to the entorhinal cortex. Our model predicts that the effect of similar transformations on other kinds of sensory information (visual, tactile, auditory, etc.) in the entorhinal cortex should be observed. Consequently, a given EC cell should react to non-contiguous input configurations in non-spatial conditions according to the projection from its different inputs. PMID:25904862
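
    The claim that grid-cell firing can arise as a modulo transformation of path integration admits a one-line caricature, sketched below with an assumed grid spacing and tuning width: activity peaks wherever the integrated distance is congruent to 0 modulo the spacing.

        # Grid-cell-like periodic response as a modulo compression of PI.
        import math

        SPACING = 40.0   # grid period in cm (an illustrative assumption)

        def grid_activity(distance_travelled, width=0.2):
            phase = (distance_travelled % SPACING) / SPACING   # in [0, 1)
            d = min(phase, 1.0 - phase)          # distance to the nearest grid node
            return math.exp(-(d / width) ** 2)   # bump at every multiple of SPACING

        for x in (0, 10, 20, 40, 80, 90):
            print(x, round(grid_activity(x), 3))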

  8. Periodic binary sequence generators: VLSI circuits considerations

    NASA Technical Reports Server (NTRS)

    Perlman, M.

    1984-01-01

    Feedback shift registers are efficient periodic binary sequence generators. Polynomials of degree r over the Galois field of characteristic 2, GF(2), characterize the behavior of shift registers with linear logic feedback. The algorithmic determination of the trinomial of lowest degree, when it exists, that contains a given irreducible polynomial over GF(2) as a factor is presented. This corresponds to embedding the behavior of an r-stage shift register with linear logic feedback into that of an n-stage shift register with a single two-input modulo-2 summer (i.e., Exclusive-OR gate) in its feedback. This leads to a Very Large Scale Integrated (VLSI) circuit architecture of maximal regularity (i.e., identical cells) with intercell communications serialized to a maximal degree.
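
    The search described above reduces to polynomial divisibility over GF(2). The sketch below encodes polynomials as bit masks and scans trinomials x^n + x^k + 1 for a given irreducible factor; the degree bound is an assumption for the demonstration.

        # Find the lowest-degree trinomial over GF(2) divisible by polynomial p.
        def gf2_mod(a, m):
            # remainder of polynomial a modulo m over GF(2); bit i encodes x**i
            dm = m.bit_length() - 1
            while a.bit_length() - 1 >= dm:
                a ^= m << (a.bit_length() - 1 - dm)
            return a

        def lowest_trinomial(p, max_n=64):
            for n in range(p.bit_length() - 1, max_n + 1):
                for k in range(1, n):
                    t = (1 << n) | (1 << k) | 1
                    if gf2_mod(t, p) == 0:
                        return n, k
            return None

        print(lowest_trinomial(0b10011))   # (4, 1): x^4 + x + 1 is itself a trinomial
        print(lowest_trinomial(0b11111))   # None: x^4+x^3+x^2+x+1 has no trinomial multiple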

  9. A Priori Bound on the Velocity in Axially Symmetric Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Lei, Zhen; Navas, Esteban A.; Zhang, Qi S.

    2016-01-01

    Let v be the velocity of Leray-Hopf solutions to the axially symmetric three-dimensional Navier-Stokes equations. Under suitable conditions on the initial values, we prove the following a priori bound: |v(x, t)| ≤ C |ln r|^{1/2} / r², for 0 < r ≤ 1/2, where r is the distance from x to the z axis, and C is a constant depending only on the initial value. This provides a pointwise upper bound (worst case scenario) for possible singularities, while the recent papers (Chiun-Chuan et al., Commun PDE 34(1-3):203-232, 2009; Koch et al., Acta Math 203(1):83-105, 2009) gave a lower bound. The gap is polynomial order 1 modulo a half log term.

  10. Reliable computation from contextual correlations

    NASA Astrophysics Data System (ADS)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.

  11. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations in a finite Galois field modulo q. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to the changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in memory and CPU requirements necessary for the algorithms. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.

  12. History dependent quantum walk on the cycle with an unbalanced coin

    NASA Astrophysics Data System (ADS)

    Krawec, Walter O.

    2015-06-01

    Recently, a new model of quantum walk, utilizing recycled coins, was introduced; however, little is yet known about its properties. In this paper, we study its behavior on the cycle graph. In particular, we consider its time-averaged distribution and how it is affected by the walk's "memory parameter", a real parameter between zero and eight which affects the walk's coin flip operator. Despite an infinite number of different parameters, our analysis provides evidence that only a few produce non-uniform behavior. Our analysis also shows that the initial state and the cycle size modulo four both affect the behavior of this walk. We also prove an interesting relationship between the recycled coin model and a different memory-based quantum walk recently proposed.

  13. Final Scientific/Technical Report: Breakthrough Design and Implementation of Many-Body Theories for Electron Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    So Hirata

    2012-01-03

    This report discusses the following highlights of the project: (1) grid-based Hartree-Fock equation solver; (2) explicitly correlated coupled-cluster and perturbation methods; (3) anharmonic vibrational frequencies and vibrationally averaged NMR and structural parameters of FHF; (4) anharmonic vibrational frequencies and vibrationally averaged structures of hydrocarbon combustion species; (5) anharmonic vibrational analysis of the guanine-cytosine base pair; (6) the nature of the Born-Oppenheimer approximation; (7) polymers and solids Brillouin-zone downsampling - the modulo MP2 method; (8) explicitly correlated MP2 for extended systems; (9) fast correlated method for molecular crystals - solid formic acid; and (10) fast correlated method for molecular crystals - solid hydrogen fluoride.

  14. Separating OR, SUM, and XOR Circuits☆

    PubMed Central

    Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H.

    2017-01-01

    Given a boolean n × n matrix A we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^{3/2}/log² n). We consider the task of rewriting a given OR-circuit as a XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis. PMID:28529379

  15. Statistical properties of filtered pseudorandom digital sequences formed from the sum of maximum-length sequences

    NASA Technical Reports Server (NTRS)

    Wallace, G. R.; Weathers, G. D.; Graf, E. R.

    1973-01-01

    The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.

  16. Secure multi-party quantum summation based on quantum Fourier transform

    NASA Astrophysics Data System (ADS)

    Yang, Hui-Yi; Ye, Tian-Yu

    2018-06-01

    In this paper, we propose a novel secure multi-party quantum summation protocol based on the quantum Fourier transform, where the traveling particles are transmitted in a tree-type mode. The party who prepares the initial quantum states is assumed to be semi-honest, which means that she may misbehave on her own but will not conspire with anyone. The proposed protocol can resist both outside attacks and participant attacks. Especially, one party cannot obtain other parties' private integer strings; and it is secure against the colluding attack performed by at most n − 2 parties, where n is the number of parties. In addition, the proposed protocol calculates addition modulo d and implements the calculation of addition in a secret-by-secret way rather than a bit-by-bit way.
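
    The correctness goal, a joint sum modulo d that reveals nothing about individual summands, has a simple classical analogue (additive masking), sketched below. This is emphatically not the quantum protocol; the modulus and secrets are illustrative assumptions.

        # Classical analogue of secure summation modulo d via shares of zero.
        import random

        d, secrets = 10, [7, 4, 9, 5]          # modulus and private integers (assumed)
        n = len(secrets)

        # random masks whose total is 0 modulo d
        masks = [random.randrange(d) for _ in range(n - 1)]
        masks.append((-sum(masks)) % d)

        masked = [(s + m) % d for s, m in zip(secrets, masks)]
        print(sum(masked) % d == sum(secrets) % d)   # True: only the sum survives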

  17. Existence, Uniqueness and Asymptotic Stability of Time Periodic Traveling Waves for a Periodic Lotka-Volterra Competition System with Diffusion

    PubMed Central

    Zhao, Guangyu; Ruan, Shigui

    2011-01-01

    We study the existence, uniqueness, and asymptotic stability of time periodic traveling wave solutions to a periodic diffusive Lotka-Volterra competition system. Under certain conditions, we prove that there exists a maximal wave speed c* such that for each wave speed c ≤ c*, there is a time periodic traveling wave connecting two semi-trivial periodic solutions of the corresponding kinetic system. It is shown that such a traveling wave is unique modulo translation and is monotone with respect to its co-moving frame coordinate. We also show that the traveling wave solutions with wave speed c < c* are asymptotically stable in certain sense. In addition, we establish the nonexistence of time periodic traveling waves for nonzero speed c > c*. PMID:21572575

  18. Concurrent error detecting codes for arithmetic processors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l − 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo-m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
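
    The residue-check principle for addition is short enough to sketch: the residue of a sum must equal the modulo-m sum of the operand residues, so a mismatch flags a fault. The sketch below uses the classic low-cost case l = 2, m = 3; the injected fault is an assumption for the demonstration.

        # Concurrent residue check with check base m = 2**l - 1.
        L = 2
        M = 2 ** L - 1   # check base 3

        def checked_add(a, b, fault=0):
            s = a + b + fault            # fault models a hardware error
            ok = s % M == (a % M + b % M) % M
            return s, ok

        print(checked_add(25, 38))            # (63, True): residues agree
        print(checked_add(25, 38, fault=1))   # (64, False): the check catches it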

  19. Kinetic Monte Carlo simulations of travelling pulses and spiral waves in the lattice Lotka-Volterra model.

    PubMed

    Makeev, Alexei G; Kurkina, Elena S; Kevrekidis, Ioannis G

    2012-06-01

    Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed persist sometimes for hundreds of rotations, but they are ultimately unstable and break-up (because of fluctuations and interactions between neighboring fronts) giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. It is interesting that travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, due to the non-local reaction kinetics.

  20. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and the computational complexity makes this research approach difficult using a standard brute-force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored, consisting of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly satisfiability modulo theories (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
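
    A minimal sketch of the SMT route on a toy instance, using the z3-solver Python package (our choice; the record does not name a specific solver). An identifying code C must intersect every closed neighborhood N[v], and the sets N[v] ∩ C must be pairwise distinct; minimizing |C| is delegated to the solver's optimizer.

        # assumes: pip install z3-solver
        from z3 import Bool, If, Optimize, Or, Sum, is_true

        n = 5
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # toy graph: 5-cycle
        nbhd = [{v} for v in range(n)]                      # closed neighborhoods
        for a, b in edges:
            nbhd[a].add(b)
            nbhd[b].add(a)

        x = [Bool(f"x{v}") for v in range(n)]               # is v in the code?
        opt = Optimize()
        for v in range(n):
            opt.add(Or([x[u] for u in nbhd[v]]))            # N[v] meets C
            for w in range(v + 1, n):
                sep = nbhd[v] ^ nbhd[w]                     # symmetric difference
                opt.add(Or([x[u] for u in sep]))            # C separates v and w
        opt.minimize(Sum([If(x[v], 1, 0) for v in range(n)]))

        opt.check()
        model = opt.model()
        print([v for v in range(n) if is_true(model[x[v]])])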

  1. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves

    PubMed Central

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes the Julia sets' parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modular arithmetic and a diffusion operation. In this method, only a few parameters are needed for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets' properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical properties, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. PMID:24404181

  2. A m-ary linear feedback shift register with binary logic

    NASA Technical Reports Server (NTRS)

    Perlman, M. (Inventor)

    1973-01-01

    A family of m-ary linear feedback shift registers with binary logic is disclosed. Each m-ary linear feedback shift register with binary logic generates a binary representation of a nonbinary recurring sequence, producible with an m-ary linear feedback shift register without binary logic in which m is greater than 2. The state table of an m-ary linear feedback shift register without binary logic, utilizing sum modulo m feedback, is first tabulated for a given initial state. The entries in the state table are coded in binary, and the binary entries are used to set the initial states of the stages of a plurality of binary shift registers. A single feedback logic unit is employed which provides a separate feedback binary digit to each binary register as a function of the states of corresponding stages of the binary registers.
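
    The underlying m-ary register (before any binary coding) is easy to tabulate. The sketch below runs a 3-stage register over Z_5 with sum-modulo-5 feedback; the modulus, length, and seed are illustrative assumptions.

        # m-ary linear feedback shift register with sum-modulo-m feedback.
        def mary_lfsr(m=5, state=(1, 0, 0), steps=12):
            state = list(state)
            out = []
            for _ in range(steps):
                out.append(state[-1])
                fb = sum(state) % m          # sum modulo m feedback
                state = [fb] + state[:-1]
            return out

        print(mary_lfsr())   # the nonbinary recurring sequence to be binary-coded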

  3. SimCheck: An Expressive Type System for Simulink

    NASA Technical Reports Server (NTRS)

    Roy, Pritam; Shankar, Natarajan

    2010-01-01

    MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.

  4. Taking a vector supermultiplet apart: Alternative Fayet-Iliopoulos-type terms

    NASA Astrophysics Data System (ADS)

    Kuzenko, Sergei M.

    2018-06-01

    Starting from an Abelian N = 1 vector supermultiplet V coupled to conformal supergravity, we construct from it a nilpotent real scalar Goldstino superfield 𝕍 of the type proposed in arXiv:1702.02423. It contains only two independent component fields, the Goldstino and the auxiliary D-field. The important properties of this Goldstino superfield are: (i) it is gauge invariant; and (ii) it is super-Weyl invariant. As a result, the gauge prepotential can be represented as V = v + 𝕍, where v contains only one independent component field, modulo gauge degrees of freedom, which is the gauge one-form. Making use of 𝕍 allows us to introduce new Fayet-Iliopoulos-type terms, which differ from the one proposed in arXiv:1712.08601 and share with the latter the property that gauged R-symmetry is not required.

  5. Semiempirical prediction of protein folds

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel; Colubri, Andrés; Appignanesi, Gustavo

    2001-08-01

    We introduce a semiempirical approach to predict ab initio expeditious pathways and native backbone geometries of proteins that fold under in vitro renaturation conditions. The algorithm is engineered to incorporate a discrete codification of local steric hindrances that constrain the movements of the peptide backbone throughout the folding process. Thus, the torsional state of the chain is assumed to be conditioned by the fact that hopping from one basin of attraction to another in the Ramachandran map (local potential energy surface) of each residue is energetically more costly than the search for a specific (Φ, Ψ) torsional state within a single basin. A combinatorial procedure is introduced to evaluate coarsely defined torsional states of the chain defined "modulo basins" and translate them into meaningful patterns of long range interactions. Thus, an algorithm for structure prediction is designed based on the fact that local contributions to the potential energy may be subsumed into time-evolving conformational constraints defining sets of restricted backbone geometries whereupon the patterns of nonbonded interactions are constructed. The predictive power of the algorithm is assessed by (a) computing ab initio folding pathways for mammalian ubiquitin that ultimately yield a stable structural pattern reproducing all of its native features, (b) determining the nucleating event that triggers the hydrophobic collapse of the chain, and (c) comparing coarse predictions of the stable folds of moderately large proteins (N~100) with structural information extracted from the protein data bank.

  6. A simple proof of a lemma of Coleman

    NASA Astrophysics Data System (ADS)

    Saikia, A.

    2001-03-01

    Let p be an odd prime. The results in this paper concern the units of the infinite extension of Q_p generated by all p-power roots of unity. Let Φ_n = Q_p(μ_{p^{n+1}}), where μ_{p^{n+1}} denotes the p^{n+1}-th roots of 1. Let 𝔭_n be the maximal ideal of the ring of integers of Φ_n and let U_n be the units congruent to 1 modulo 𝔭_n. Let ζ_n be a fixed primitive p^{n+1}-th root of unity such that ζ_n^p = ζ_{n−1} for all n ≥ 1. Put π_n = ζ_n − 1. Thus π_n is a local parameter for Φ_n. Let [formula]. Kummer already exploited the obvious fact that every u_0 ∈ U_0 can be written in the form u_0 = f_0(π_0), where f_0(T) is some power series in Z_p[[T

  7. CEBAF Superconducting Cavity RF Drive System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugitt, Jock; Moore, Thomas

    1987-03-01

    The CEBAF RF system consists of 418 individual RF amplifier chains. Each superconducting cavity is phase locked to the master drive reference line to within 1 degree, and the cavity field gradient is regulated to within 1 part in 10 by a state-of-the-art RF control module. Precision, continuously adjustable, modulo-360° phase shifters are used to generate the individual phase references, and a compensated RF detector is used for level feedback. The close-coupled digital system enhances system accuracy, provides self-calibration, and continuously checks the system for malfunction. Calibration curves, the operating program, and system history are stored in an on-board EEPROM. The RF power is generated by a 5 kW, water-cooled, permanent-magnet-focused klystron. The klystrons are clustered in groups of 8 and powered from a common supply. RF power is transmitted to the accelerator sections by semiflexible waveguide.

  8. Symmetric digit sets for elliptic curve scalar multiplication without precomputation

    PubMed Central

    Heuberger, Clemens; Mazzoli, Michela

    2014-01-01

    We describe a method to perform scalar multiplication on two classes of ordinary elliptic curves, namely E: y² = x³ + Ax in prime characteristic p ≡ 1 (mod 4), and E: y² = x³ + B in prime characteristic p ≡ 1 (mod 3). On these curves, the 4th and 6th roots of unity act as (computationally efficient) endomorphisms. In order to optimise the scalar multiplication, we consider a width-w NAF (Non-Adjacent Form) digit expansion of positive integers to the complex base τ, where τ is a zero of the characteristic polynomial x² − tx + p of the Frobenius endomorphism associated to the curve. We provide a precomputationless algorithm by means of a convenient factorisation of the unit group of residue classes modulo τ in the endomorphism ring, whereby we construct a digit set consisting of powers of subgroup generators, which are chosen as efficient endomorphisms of the curve. PMID:25190900
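
    For readers unfamiliar with the digit expansion involved, the sketch below computes an ordinary width-w NAF over the rational integers; the paper's expansion is to the complex base τ, so this only illustrates the digit-set idea (digits are odd residues chosen modulo 2^w, forcing runs of zeros).

        # Width-w NAF of an integer: digits in {0, ±1, ±3, ..., ±(2**(w-1) - 1)}.
        def wnaf(k, w=3):
            digits = []
            while k:
                if k % 2:
                    d = k % (1 << w)
                    if d >= (1 << (w - 1)):
                        d -= 1 << w          # pick the negative representative
                    k -= d
                else:
                    d = 0
                digits.append(d)             # least significant digit first
                k //= 2
            return digits

        ds = wnaf(2019)
        print(ds, sum(d * 2**i for i, d in enumerate(ds)) == 2019)   # ... True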

  9. A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing

    NASA Technical Reports Server (NTRS)

    Takaki, Mitsuo; Cavalcanti, Diego; Gheyi, Rohit; Iyoda, Juliano; dAmorim, Marcelo; Prudencio, Ricardo

    2009-01-01

    The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or some theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compare the effectiveness of a symbolic solver (CVC3), a random solver, three hybrid solvers (i.e., a mix of random and symbolic), and two heuristic search solvers. We evaluate the solvers on two benchmarks: one consisting of manually generated constraints and another generated with a concolic execution of 8 subjects. In addition to fully decidable constraints, the benchmarks include constraints with non-linear integer arithmetic, integer modulo and division, bitwise arithmetic, and floating-point arithmetic. As expected, symbolic solving (in particular, CVC3) subsumes the other solvers for the concolic execution of subjects that only generate decidable constraints. For the remaining subjects the solvers are complementary.

  10. Ka-Band Phased Array System Characterization

    NASA Technical Reports Server (NTRS)

    Acosta, R.; Johnson, S.; Sands, O.; Lambert, K.

    2001-01-01

    Phased Array Antennas (PAAs) using patch-radiating elements are projected to transmit data at rates several orders of magnitude higher than currently offered with reflector-based systems. However, there are a number of potential sources of degradation in the Bit Error Rate (BER) performance of the communications link that are unique to PAA-based links. Short spacing of radiating elements can induce mutual coupling between radiating elements; long spacing can induce grating lobes; modulo-2π phase errors can add to Inter-Symbol Interference (ISI); and the phase shifters and power divider network introduce losses into the system. This paper describes efforts underway to test and evaluate the effects of the performance-degrading features of phased-array antennas when used in a high data rate modulation link. The tests and evaluations described here uncover the interaction between the electrical characteristics of a PAA and the BER performance of a communication link.

  11. Duality-symmetric supersymmetric Yang-Mills theory in three dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishino, Hitoshi; Rajpoot, Subhash

    We formulate a duality-symmetric N=1 supersymmetric Yang-Mills theory in three dimensions. Our field content is (A_μ^I, λ^I, φ^I), where the index I is for the adjoint representation of an arbitrary gauge group G. Our Hodge duality symmetry is F_{μν}^I = +ε_{μν}^ρ D_ρ φ^I. Because of this relationship, the presence of two physical fields A_μ^I and φ^I within the same N=1 supermultiplet poses no problem. We can couple this multiplet to another vector multiplet (C_μ^I, χ^I; B_{μν}^I) with 1+1 physical degrees of freedom modulo dim G. Thanks to peculiar couplings and supersymmetry, the usual problem with an extra vector field in a nontrivial representation does not arise in our system.

  12. Topology Change and the Unity of Space

    NASA Astrophysics Data System (ADS)

    Callender, Craig; Weingard, Robert

    Must space be a unity? This question, which exercised Aristotle, Descartes and Kant, is a specific instance of a more general one; namely, can the topology of physical space change with time? In this paper we show how the discussion of the unity of space has been altered but survives in contemporary research in theoretical physics. With a pedagogical review of the role played by the Euler characteristic in the mathematics of relativistic spacetimes, we explain how classical general relativity (modulo considerations about energy conditions) allows virtually unrestrained spatial topology change in four dimensions. We also survey the situation in many other dimensions of interest. However, topology change comes with a cost: a famous theorem by Robert Geroch shows that, for many interesting types of such change, transitions of spatial topology imply the existence of closed timelike curves or temporal non-orientability. Ways of living with this theorem and of evading it are discussed.

  13. Scattering amplitudes from multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2012-11-01

    We show that the evaluation of scattering amplitudes can be formulated as a problem of multivariate polynomial division, with the components of the integration-momenta as indeterminates. We present a recurrence relation which, independently of the number of loops, leads to the multi-particle pole decomposition of the integrands of the scattering amplitudes. The recursive algorithm is based on the weak Nullstellensatz theorem and on the division modulo the Gröbner basis associated to all possible multi-particle cuts. We apply it to dimensionally regulated one-loop amplitudes, recovering the well-known integrand-decomposition formula. Finally, we focus on the maximum-cut, defined as a system of on-shell conditions constraining the components of all the integration-momenta. By means of the Finiteness Theorem and of the Shape Lemma, we prove that the residue at the maximum-cut is parametrized by a number of coefficients equal to the number of solutions of the cut itself.
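
    The key operation named above, division modulo a Gröbner basis, can be tried directly in SymPy (our choice of tool; the paper's recursive algorithm is far more elaborate). The remainder returned below is the canonical representative of the polynomial modulo the ideal generated by two illustrative polynomials standing in for cut conditions.

        # Multivariate polynomial division modulo a Groebner basis with SymPy.
        from sympy import groebner, symbols

        x, y = symbols("x y")
        G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order="lex")
        coeffs, remainder = G.reduce(x**3 * y + y)
        print(remainder)   # canonical form of x**3*y + y modulo the ideal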

  14. Diffeomorphic Sulcal Shape Analysis on the Cortex

    PubMed Central

    Joshi, Shantanu H.; Cabeen, Ryan P.; Joshi, Anand A.; Sun, Bo; Dinov, Ivo; Narr, Katherine L.; Toga, Arthur W.; Woods, Roger P.

    2014-01-01

    We present a diffeomorphic approach for constructing intrinsic shape atlases of sulci on the human cortex. Sulci are represented as square-root velocity functions of continuous open curves in ℝ3, and their shapes are studied as functional representations of an infinite-dimensional sphere. This spherical manifold has some advantageous properties – it is equipped with a Riemannian metric on the tangent space and facilitates computational analyses and correspondences between sulcal shapes. Sulcal shape mapping is achieved by computing geodesics in the quotient space of shapes modulo scales, translations, rigid rotations and reparameterizations. The resulting sulcal shape atlas preserves important local geometry inherently present in the sample population. The sulcal shape atlas is integrated in a cortical registration framework and exhibits better geometric matching compared to the conventional euclidean method. We demonstrate experimental results for sulcal shape mapping, cortical surface registration, and sulcal classification for two different surface extraction protocols for separate subject populations. PMID:22328177

  15. Non-Abelian sigma models from Yang-Mills theory compactified on a circle

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana A.; Lechtenfeld, Olaf; Popov, Alexander D.

    2018-06-01

    We consider SU(N) Yang-Mills theory on R^{2,1} × S^1, where S^1 is a spatial circle. In the infrared limit of a small circle radius the Yang-Mills action reduces to the action of a sigma model on R^{2,1} whose target space is a 2(N-1)-dimensional torus modulo the Weyl-group action. We argue that there is freedom in the choice of the framing of the gauge bundles, which leads to more general options. In particular, we show that this low-energy limit can give rise to a target space SU(N) × SU(N)/Z_N. The latter is the direct product of SU(N) and its Langlands dual SU(N)/Z_N, and it contains the above-mentioned torus as its maximal Abelian subgroup. An analogous result is obtained for any non-Abelian gauge group.

  16. New quantum codes derived from a family of antiprimitive BCH codes

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^{2m} + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
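
    For readers who want to experiment, a minimal sketch (assuming the standard definition of q^2-ary cyclotomic cosets, C_s = {s · q^{2j} mod n}) is given below; the parameters are illustrative and not taken from the paper.

        # Partition {0, ..., n-1} into cyclotomic cosets under x -> q2*x mod n.
        def cyclotomic_cosets(q2, n):
            seen, cosets = set(), []
            for s in range(n):
                if s in seen:
                    continue
                coset, x = [], s
                while x not in coset:
                    coset.append(x)
                    x = (x * q2) % n
                seen.update(coset)
                cosets.append(sorted(coset))
            return cosets

        # Example with q = 4 (so q^2 = 16) and m = 2, giving n = q^(2m) + 1 = 257.
        q, m = 4, 2
        n = q**(2 * m) + 1
        cosets = cyclotomic_cosets(q**2, n)
        print(f"n = {n}, number of cosets = {len(cosets)}")
        print("coset of 1:", cosets[1])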

  17. On classical de Sitter and Minkowski solutions with intersecting branes

    NASA Astrophysics Data System (ADS)

    Andriot, David

    2018-03-01

    Motivated by the connection of string theory to cosmology or particle physics, we study solutions of type II supergravities having a four-dimensional de Sitter or Minkowski space-time, with intersecting Dp-branes and orientifold Op-planes. Only a few such solutions are known, and we aim at a better characterisation. Modulo a few restrictions, we prove that there exists no classical de Sitter solution for any combination of D3/O3 and D7/O7, while we derive interesting constraints for intersecting D5/O5 or D6/O6, or combinations of D4/O4 and D8/O8. Concerning classical Minkowski solutions, we understand some typical features, and propose a solution ansatz. Overall, a central piece of information appears to be the way intersecting Dp/Op sources overlap each other, a point we focus on.

  18. A limit for large R-charge correlators in N = 2 theories

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Rodriguez-Gomez, Diego; Russo, Jorge G.

    2018-05-01

    Using supersymmetric localization, we study the sector of chiral primary operators (Tr φ^2)^n with large R-charge 4n in N=2 four-dimensional superconformal theories in the weak coupling regime g → 0, where λ ≡ g^2 n is kept fixed as n → ∞, g representing the gauge theory coupling(s). In this limit, correlation functions G_{2n} of these operators behave in a simple way, with an asymptotic behavior of the form G_{2n} ≈ F_∞(λ) (λ/2πe)^{2n} n^α, modulo O(1/n) corrections, with α = (1/2) dim(g) for a gauge algebra g and a universal function F_∞(λ). As a by-product we find several new formulas both for the partition function as well as for perturbative correlators in N=2 su(N) gauge theory with 2N fundamental hypermultiplets.

  19. Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.

    PubMed

    Strand, Jarle; Taxt, Torfinn

    2002-01-01

    The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise-robust estimation of partial derivatives, followed by a noise-robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2 pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS method (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR) and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate-gradient version of the MRM method (MRMCG) were all compared using a synthetic image with shear, 115 magnetic resonance images, and 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
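
    The congruence property is easy to demonstrate with numpy's basic one-dimensional unwrapper (a stand-in here, not the ADI algorithm): the unwrapped output differs from the wrapped input only by integer multiples of 2 pi.

        import numpy as np

        true_phase = np.linspace(0, 6 * np.pi, 200)    # smooth ramp exceeding 2*pi
        wrapped = np.angle(np.exp(1j * true_phase))    # wrap into (-pi, pi]
        unwrapped = np.unwrap(wrapped)                 # Itoh-style 1-D unwrapping

        # Congruence check: the two phase maps differ by multiples of 2*pi.
        residual = (unwrapped - wrapped) / (2 * np.pi)
        assert np.allclose(residual, np.round(residual))
        print("max deviation from true phase:", np.abs(unwrapped - true_phase).max())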

  20. An all digital phase locked loop for synchronization of a sinusoidal signal embedded in white Gaussian noise

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1973-01-01

    An all-digital phase-locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements, their functions, and the phase-lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities for the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase-error states, taken modulo 2 pi, form a finite-state Markov chain, which enables the calculation of steady-state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
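
    A toy simulation in the same spirit (our own sketch, not the paper's loop model) shows a two-level quantized correction driving a phase error, reduced modulo 2 pi, toward lock:

        import numpy as np

        rng = np.random.default_rng(0)
        delta, sigma, n_steps = 0.05, 0.5, 20000   # step size, noise std, iterations
        phi = 1.0                                  # initial phase error (radians)
        errors = np.empty(n_steps)

        for k in range(n_steps):
            measurement = np.sin(phi) + sigma * rng.standard_normal()
            phi -= delta * np.sign(measurement)        # two-level quantized correction
            phi = (phi + np.pi) % (2 * np.pi) - np.pi  # keep error in (-pi, pi]
            errors[k] = phi

        print("RMS phase error (steady state):", errors[n_steps // 2:].std())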

  1. Efficient Residue to Binary Conversion Based on a Modified Flexible Moduli Set

    NASA Astrophysics Data System (ADS)

    Molahosseini, Amir Sabbagh

    2011-09-01

    The Residue Number System (RNS) is a non-weighted number system which can perform addition (subtraction) and multiplication on residues without carry propagation, resulting in high-speed hardware implementations of computation systems. The problem of converting residue numbers to equivalent binary weighted form has attracted a lot of research for many years. Recently, some researchers proposed using flexible moduli sets instead of the traditional moduli sets to enhance the performance of residue-to-binary converters. This paper introduces the modified flexible moduli set {2^{2p+k}, 2^{2p}+1, 2^p+1, 2^p-1}, which is obtained from the flexible set {2^{p+k}, 2^{2p}+1, 2^p+1, 2^p-1} by enlarging the modulus 2^{p+k}. Next, the New Chinese Remainder Theorem I is used to design a simple and efficient residue-to-binary converter for this modified set, with better performance than the converter for the moduli set {2^{p+k}, 2^{2p}+1, 2^p+1, 2^p-1}.
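
    A plain Chinese-remainder reconstruction over this moduli set (a generic CRT, not the paper's New CRT-I converter architecture) can be sketched as follows; p and k are arbitrary illustrative values.

        from math import prod

        def crt(residues, moduli):
            """Reconstruct X from residues X mod m_i (moduli pairwise coprime)."""
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)   # pow(a, -1, m) = modular inverse
            return x % M

        p, k = 4, 2
        moduli = [2**(2*p + k), 2**(2*p) + 1, 2**p + 1, 2**p - 1]
        X = 123456
        residues = [X % m for m in moduli]
        assert crt(residues, moduli) == X
        print("moduli:", moduli, "-> reconstructed:", crt(residues, moduli))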

  2. Gauge backgrounds and zero-mode counting in F-theory

    NASA Astrophysics Data System (ADS)

    Bies, Martin; Mayrhofer, Christoph; Weigand, Timo

    2017-11-01

    Computing the exact spectrum of charged massless matter is a crucial step towards understanding the effective field theory describing F-theory vacua in four dimensions. In this work we further develop a coherent framework to determine the charged massless matter in F-theory compactified on elliptic fourfolds, and demonstrate its application in a concrete example. The gauge background is represented, via duality with M-theory, by algebraic cycles modulo rational equivalence. Intersection theory within the Chow ring allows us to extract coherent sheaves on the base of the elliptic fibration whose cohomology groups encode the charged zero-mode spectrum. The dimensions of these cohomology groups are computed with the help of modern techniques from algebraic geometry, which we implement in the software GAP. We exemplify this approach in models with an Abelian and non-Abelian gauge group and observe jumps in the exact massless spectrum as the complex structure moduli are varied. An extended mathematical appendix gives a self-contained introduction to the algebro-geometric concepts underlying our framework.

  3. Thermodynamics of Newman-Unti-Tamburino charged spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, Robert; Stelea, Cristian

    We discuss and compare at length the results of two methods used recently to describe the thermodynamics of Taub-Newman-Unti-Tamburino (NUT) solutions in a de Sitter background. In the first approach (C approach), one deals with an analytically continued version of the metric while in the second approach (R approach), the discussion is carried out using the unmodified metric with Lorentzian signature. No analytic continuation is performed on the coordinates and/or the parameters that appear in the metric. We find that the results of both these approaches are completely equivalent modulo analytic continuation and we provide the exact prescription that relates the results in both methods. The extension of these results to the AdS/flat cases aims to give a physical interpretation of the thermodynamics of NUT-charged spacetimes in the Lorentzian sector. We also briefly discuss the higher-dimensional spaces and note that, analogous with the absence of hyperbolic NUTs in AdS backgrounds, there are no spherical Taub-NUT-dS solutions.

  4. New variational bounds on convective transport. I. Formulation and analysis

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Souza, Andre N.; Doering, Charles R.

    2016-11-01

    We study the maximal rate of scalar transport between parallel walls separated by distance h, by an incompressible fluid with scalar diffusion coefficient κ. Given a velocity vector field u with intensity measured by the Péclet number Pe = h^2 ⟨|∇u|^2⟩^{1/2} / κ (where ⟨·⟩ is the space-time average), the challenge is to determine the largest enhancement of wall-to-wall scalar flux over purely diffusive transport, i.e., the Nusselt number Nu. Variational formulations of the problem are presented and it is determined that Nu ≤ c Pe^{2/3}, where c is an absolute constant, as Pe → ∞. Moreover, this scaling for optimal transport (possibly modulo logarithmic corrections) is asymptotically sharp: admissible steady flows with Nu ≥ c' Pe^{2/3} / [log Pe]^2 are constructed. The structure of (nearly) maximally transporting flow fields is discussed. Supported in part by National Science Foundation Graduate Research Fellowship DGE-0813964, awards OISE-0967140, PHY-1205219, DMS-1311833, and DMS-1515161, and the John Simon Guggenheim Memorial Foundation.

  5. The Crab pulsar in the visible and ultraviolet with 20 microsecond effective time resolution

    NASA Technical Reports Server (NTRS)

    Percival, J. W.; Biggs, J. D.; Dolan, J. F.; Robinson, E. L.; Taylor, M. J.; Bless, R. C.; Elliot, J. L.; Nelson, M. J.; Ramseyer, T. F.; Van Citters, G. W.

    1993-01-01

    Observations of PSR 0531+21 with the High Speed Photometer on the HST in the visible in October 1991 and in the UV in January 1992 are presented. The time resolution of the instrument was 10.74 microsec; the effective time resolution of the light curves folded modulo the pulsar period was 21.5 microsec. The main pulse arrival time is the same in the UV as in the visible and radio to within the accuracy of the establishment of the spacecraft clock, +/- 1.05 ms. The peak of the main pulse is resolved in time. Corrected for reddening, the intensity spectral index of the Crab pulsar from 1680 to 7400 A is 0.11 +/- 0.13. The pulsed flux has an intensity less than 0.9 percent of the peak flux just before the onset of the main pulse. The variations in intensity of individual main and secondary pulses are uncorrelated, even within the same rotational period.
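
    Folding a photon time series modulo the pulsar period, as done to produce these light curves, can be sketched with synthetic data (the period and pulse shape below are illustrative only):

        import numpy as np

        period = 0.0336                          # Crab-like period in seconds (assumed)
        rng = np.random.default_rng(1)
        # Synthetic photon arrival times: uniform background plus a pulsed component.
        background = rng.uniform(0.0, 10.0, 50000)
        pulsed = (np.floor(rng.uniform(0, 10.0 / period, 20000)) * period
                  + rng.normal(0.6 * period, 0.02 * period, 20000))
        times = np.concatenate([background, pulsed])

        phases = (times % period) / period       # fold: pulse phase in [0, 1)
        profile, edges = np.histogram(phases, bins=50, range=(0, 1))
        print("peak at phase ~", edges[np.argmax(profile)])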

  6. Algebraic cycles and local anomalies in F-theory

    NASA Astrophysics Data System (ADS)

    Bies, Martin; Mayrhofer, Christoph; Weigand, Timo

    2017-11-01

    We introduce a set of identities in the cohomology ring of elliptic fibrations which are equivalent to the cancellation of gauge and mixed gauge-gravitational anomalies in F-theory compactifications to four and six dimensions. The identities consist of (co)homological relations between complex codimension-two cycles. The same set of relations, once evaluated on elliptic Calabi-Yau three-folds and four-folds, is shown to universally govern the structure of anomalies and their Green-Schwarz cancellation in six- and four-dimensional F-theory vacua, respectively. We furthermore conjecture that these relations hold not only within the cohomology ring, but even at the level of the Chow ring, i.e. as relations among codimension-two cycles modulo rational equivalence. We verify this conjecture in non-trivial examples with Abelian and non-Abelian gauge group factors. Apart from governing the structure of local anomalies, the identities in the Chow ring relate different types of gauge backgrounds on elliptically fibred Calabi-Yau four-folds.

  7. Incremental Query Rewriting with Resolution

    NASA Astrophysics Data System (ADS)

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.

  8. VizieR Online Data Catalog: Detection of Kepler multiple M-star systems (Rappaport+, 2014)

    NASA Astrophysics Data System (ADS)

    Rappaport, S.; Swift, J.; Levine, A.; Joss, M.; Sanchis-Ojeda, R.; Barclay, T.; Still, M.; Handler, G.; Olah, K.; Muirhead, P. S.; Huber, D.; Vida, K.

    2017-07-01

    In all, we find that 297 of the 3897 targets exhibit the requisite significant Fourier transform (FT) signal comprising a base frequency plus its harmonic, with the base frequency exceeding 0.5 cycles/day (i.e., P_rot < 2 days). We believe that the majority of these periodicities are likely to be due to stellar rotation manifested via starspots, but a significant number may be due to planet transits and binary eclipses. The individual FTs for these systems were further examined to eliminate those which were clearly not due to rotating starspots. In all cases we folded the data modulo the detected fundamental period, and were readily able to rule out cases due to transiting planets by their characteristic sharp, relatively rectangular dipping profiles. We also checked the KOI list for matches. Any objects that appear in the Kepler eclipsing binary ("EB") star catalog (e.g., Matijevic et al. 2012AJ....143..123M) were likewise eliminated. (2 data files).
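
    A minimal sketch of this detection idea, on synthetic data with assumed parameters (not the catalog pipeline), looks for the base frequency in the Fourier transform and then folds modulo the corresponding period:

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n = 0.02043, 4096                       # Kepler-like cadence in days (assumed)
        t = np.arange(n) * dt
        p_rot = 1.37                                # spot rotation period in days (assumed)
        flux = (0.01 * np.sin(2 * np.pi * t / p_rot)
                + 0.004 * np.sin(4 * np.pi * t / p_rot)   # first harmonic
                + 0.002 * rng.standard_normal(n))

        ft = np.abs(np.fft.rfft(flux - flux.mean()))
        freqs = np.fft.rfftfreq(n, dt)              # cycles per day
        base = freqs[np.argmax(ft)]
        print(f"detected base frequency: {base:.3f} c/d (true {1/p_rot:.3f} c/d)")
        phase = (t * base) % 1.0                    # fold modulo the detected period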

  9. On the equivalence among stress tensors in a gauge-fluid system

    NASA Astrophysics Data System (ADS)

    Mitra, Arpan Krishna; Banerjee, Rabin; Ghosh, Subir

    2017-12-01

    In this paper, we bring out the subtleties involved in the study of a first-order relativistic field theory with auxiliary field variables playing an essential role. In particular, we discuss the nonisentropic Eulerian (or Hamiltonian) fluid model. Interactions are introduced by coupling the fluid to a dynamical Maxwell (U(1)) gauge field. This dynamical nature of the gauge field is crucial in showing the equivalence, on the physical subspace, of the stress tensor derived from two definitions, i.e. the canonical (Noether) one and the symmetric one. In the conventional equal-time formalism, we have shown that the generators of the space-time transformations obtained from these two definitions agree modulo the Gauss constraint. This equivalence in the physical sector has been achieved only because of the dynamical nature of the gauge fields. Subsequently, we have explicitly demonstrated the validity of the Schwinger condition. A detailed analysis of the model in lightcone formalism has also been done where several interesting features are revealed.

  10. Global and Local Translation Designs of Quantum Image Based on FRQI

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Tan, Canyun; Ian, Hou

    2017-04-01

    In this paper, two kinds of quantum image translation are designed based on FRQI: global translation and local translation. First, global translation is realized by employing an adder modulo N, where all pixels in the image are moved, and the circuit for right translation is designed. Left translation can then be implemented using right translation. Complexity analysis shows that the global-translation circuits in this paper have lower complexity and require fewer qubits. Second, local translation, consisting of single-column translation, multi-column translation, and translation in a restricted area, is designed by adopting Gray code. In local translation, any subset of pixels in the image can be translated while the other pixels remain unchanged. To lower the complexity when more than one column needs to be translated, multi-column translation is proposed, which has approximately the same complexity as single-column translation. To perform multi-column translation, three conditions must be satisfied. In addition, all translations in this paper are cyclic.
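
    The classical analogue of the cyclic global translation is easy to state: column indices are shifted by a constant modulo N, which is what the adder-modulo-N circuit implements on the position register of an FRQI state. A small numpy sketch (ours, purely classical):

        import numpy as np

        image = np.arange(16).reshape(4, 4)      # toy 4x4 "image", N = 4
        d = 1
        right_translated = np.roll(image, shift=d, axis=1)   # columns: j -> (j+d) mod N

        # Left translation by d is right translation by N - d (both are cyclic).
        left_translated = np.roll(image, shift=-d, axis=1)
        assert np.array_equal(left_translated, np.roll(image, shift=4 - d, axis=1))
        print(right_translated)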

  11. Are Khovanov-Rozansky polynomials consistent with evolution in the space of knots?

    NASA Astrophysics Data System (ADS)

    Anokhina, A.; Morozov, A.

    2018-04-01

    R-coloured knot polynomials for m-strand torus knots Torus[m,n] are described by the Rosso-Jones formula, which is an example of evolution in n with Lyapunov exponents labelled by Young diagrams from R^{⊗m}. This means that they satisfy a finite-difference equation (recursion) of finite degree. For the gauge group SL(N) only diagrams with no more than N lines can contribute, and the recursion degree is reduced. We claim that these properties (evolution/recursion and reduction) persist for Khovanov-Rozansky (KR) polynomials, obtained by additional factorization modulo 1 + t, which is not yet adequately described in quantum field theory. Also preserved is some weakened version of the differential expansion, which is responsible at least for a simple relation between reduced and unreduced Khovanov polynomials. However, in the KR case the evolution is incompatible with the mirror symmetry under the change n → -n, which may signal an ambiguity in the KR factorization even for torus knots.

  12. Optical modular arithmetic

    NASA Astrophysics Data System (ADS)

    Pavlichin, Dmitri S.; Mabuchi, Hideo

    2014-06-01

    Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2 pi and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource, and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g., statistical classification and some machine learning algorithms.
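
    Numerically, the operation the circuit computes is simple to state; the sketch below (our own, with made-up phase values) evaluates the selector/phase inner product modulo 2 pi and encodes it in the phase of a unit-amplitude field:

        import numpy as np

        stored_phases = np.array([1.2, 2.9, 0.4, 5.8])   # memory phases (radians)
        selector = np.array([1, 0, 1, 1])                # binary "selector" vector

        inner = np.dot(selector, stored_phases) % (2 * np.pi)
        field = np.exp(1j * inner)                       # result encoded in a phase
        print("readout phase:", np.angle(field) % (2 * np.pi))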

  13. Fuzzy Logic Controller Stability Analysis Using a Satisfiability Modulo Theories Approach

    NASA Technical Reports Server (NTRS)

    Arnett, Timothy; Cook, Brandon; Clark, Matthew A.; Rattan, Kuldip

    2017-01-01

    While many widely accepted methods and techniques exist for the validation and verification of traditional controllers, at this time no solutions have been accepted for Fuzzy Logic Controllers (FLCs). Due to the highly nonlinear nature of such systems, and the fact that developing a valid FLC does not require a mathematical model of the system, it is quite difficult to use conventional techniques to prove controller stability. Since safety-critical systems must be tested and verified to work as expected for all possible circumstances, the fact that FLCs cannot be tested to achieve such requirements poses limitations on the applications of this technology. Therefore, alternative methods for the verification and validation of FLCs need to be explored. In this study, a novel approach using formal verification methods to ensure the stability of an FLC is proposed. The main research challenges include the specification of requirements for a complex system, the conversion of a traditional FLC to a piecewise polynomial representation, and the use of a formal verification tool in a nonlinear solution space. Using the proposed architecture, the Fuzzy Logic Controller was found to always generate negative feedback, but the results were inconclusive for Lyapunov stability.

  14. Generalized continued fractions and ergodic theory

    NASA Astrophysics Data System (ADS)

    Pustyl'nikov, L. D.

    2003-02-01

    In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.
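
    The objects being estimated here, sums of Legendre symbols, are easy to compute directly for small primes; the sketch below uses Euler's criterion (standard number theory, independent of the paper's continued-fraction machinery):

        # Legendre symbol (a/p) via Euler's criterion: a^((p-1)/2) mod p.
        def legendre(a, p):
            ls = pow(a, (p - 1) // 2, p)
            return -1 if ls == p - 1 else ls   # 0, 1, or -1

        p = 23
        residues = [a for a in range(1, p) if legendre(a, p) == 1]
        print(f"quadratic residues mod {p}:", residues)
        print("sum of Legendre symbols over 1..p-1:",
              sum(legendre(a, p) for a in range(1, p)))   # always 0 for an odd prime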

  15. Towards a second law for Lovelock theories

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Sayantani; Haehl, Felix M.; Kundu, Nilay; Loganayagam, R.; Rangamani, Mukund

    2017-03-01

    In classical general relativity described by Einstein-Hilbert gravity, black holes behave as thermodynamic objects. In particular, the laws of black hole mechanics can be interpreted as laws of thermodynamics. The first law of black hole mechanics extends to higher derivative theories via the Noether charge construction of Wald. One also expects the statement of the second law, which in Einstein-Hilbert theory follows from Hawking's area theorem, to extend to higher derivative theories. To argue for this, however, one needs a notion of entropy for dynamical black holes, which the Noether charge construction does not provide. We propose such an entropy function for the family of Lovelock theories, treating the higher derivative terms as perturbations to the Einstein-Hilbert theory. Working around a dynamical black hole solution, and making no assumptions about the amplitude of departure from equilibrium, we construct a candidate entropy functional valid to all orders in the low energy effective field theory. This entropy functional satisfies a second law, modulo a certain subtle boundary term, which deserves further investigation in non-spherically symmetric situations.

  16. Improving Strategies via SMT Solving

    NASA Astrophysics Data System (ADS)

    Gawlitza, Thomas Martin; Monniaux, David

    We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations; (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of Gawlitza and Seidl [17]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, for contrived worst-case examples, performs exponentially many strategy-improvement steps; this is unsurprising, since we show that the associated abstract reachability problem is Π_2^P-complete.
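
    Such a strategy-improvement query is, at bottom, a satisfiability check over real linear arithmetic. A toy version using the z3-solver Python package (our illustration, not the authors' implementation) follows:

        from z3 import Reals, Solver, sat

        x, y = Reals('x y')
        s = Solver()
        # Is there a point satisfying this conjunction of linear constraints?
        s.add(x + 2 * y <= 10, x - y >= 1, y >= 0.5)

        if s.check() == sat:
            print("improvement exists, witness:", s.model())
        else:
            print("no improving strategy in this region")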

  17. GALFIT-CORSAIR: Implementing the Core-Sérsic Model Into GALFIT

    NASA Astrophysics Data System (ADS)

    Bonfini, Paolo

    2014-10-01

    We introduce GALFIT-CORSAIR: a publicly available, fully backward-compatible modification of the 2D fitting software GALFIT (v3) which adds an implementation of the core-Sérsic model. We demonstrate the software by fitting the images of NGC 5557 and NGC 5813, which have been previously identified as core-Sérsic galaxies by their 1D radial light profiles. These two examples are representative of different dust-obscuration conditions and of bulge/disk decomposition. To perform the analysis, we obtained deep Hubble Legacy Archive (HLA) mosaics in the F555W filter (~V band). We successfully reproduce the results of the previous 1D analysis, modulo the intrinsic differences between the 1D and the 2D fitting procedures. The code and the analysis procedure described here have been developed for the first coherent 2D analysis of a sample of core-Sérsic galaxies, which will be presented in a forthcoming paper. As the 2D analysis provides better constraints on multi-component fitting, and is fully seeing-corrected, it will yield complementary constraints on the missing mass in depleted galaxy cores.

  18. Svetlichny's inequality and genuine tripartite nonlocality in three-qubit pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajoy, Ashok; Rungta, Pranaw

    2010-05-15

    The violation of Svetlichny's inequality (SI) [Phys. Rev. D 35, 3066 (1987)] is sufficient but not necessary for genuine tripartite nonlocal correlations. Here we quantify the relationship between tripartite entanglement and the maximum expectation value of the Svetlichny operator (which is bounded from above by the inequality) for the two inequivalent subclasses of pure three-qubit states: the Greenberger-Horne-Zeilinger (GHZ) class and the W class. We show that the maximum for the GHZ-class states reduces to Mermin's inequality [Phys. Rev. Lett. 65, 1838 (1990)] modulo a constant factor, and although it is a function of the three-tangle and the residual concurrence, large numbers of states do not violate the inequality. We further show that by design SI is more suitable as a measure of genuine tripartite nonlocality between the three qubits in the W-class states, and the maximum is a certain function of the bipartite entanglement (the concurrence) of the three reduced states; only when their sum attains a certain threshold value do they violate the inequality.

  19. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework

    PubMed Central

    Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We overcome these shortcomings with novel tile unwrapping and merging algorithms, together with a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
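
    A drastically simplified one-dimensional sketch of the tile-based idea (ours, not the paper's framework) unwraps tiles independently and then merges each tile by the multiple of 2π that best matches its neighbour at the shared boundary:

        import numpy as np

        def unwrap_tiles_1d(wrapped, tile=64):
            """Unwrap a 1-D wrapped phase signal tile by tile, then merge."""
            out = np.empty_like(wrapped)
            offset = 0.0
            for start in range(0, wrapped.size, tile):
                seg = np.unwrap(wrapped[start:start + tile])   # unwrap one tile
                if start > 0:
                    # Choose the 2*pi shift aligning this tile with the previous one.
                    jump = out[start - 1] - seg[0]
                    offset = 2 * np.pi * np.round(jump / (2 * np.pi))
                out[start:start + tile] = seg + offset
            return out

        true_phase = np.cumsum(np.full(500, 0.2))              # steadily rising phase
        wrapped = np.angle(np.exp(1j * true_phase))
        assert np.allclose(unwrap_tiles_1d(wrapped), true_phase)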

  1. Nonisothermal Brownian motion: Thermophoresis as the macroscopic manifestation of thermally biased molecular motion.

    PubMed

    Brenner, Howard

    2005-12-01

    A quiescent single-component gravity-free gas subject to a small steady uniform temperature gradient ∇T, despite being at rest, is shown to experience a drift velocity U_D = -D* ∇ ln T, where D* is the gas's nonisothermal self-diffusion coefficient. D* is identified as being the gas's thermometric diffusivity α. The latter differs from the gas's isothermal isotopic self-diffusion coefficient D, albeit only slightly. Two independent derivations are given of this drift velocity formula, one kinematical and the other dynamical, both derivations being strictly macroscopic in nature. Within modest experimental and theoretical uncertainties, this virtual drift velocity U_D = -α ∇ ln T is shown to be constitutively and phenomenologically indistinguishable from the well-known experimental and theoretical formulas for the thermophoretic velocity U of a macroscopic (i.e., non-Brownian) non-heat-conducting particle moving under the influence of a uniform temperature gradient through an otherwise quiescent single-component rarefied gas continuum at small Knudsen numbers. Coupled with the size independence of the particle's thermophoretic velocity, the empirically observed equality U = U_D leads naturally to the hypothesis that these two velocities, the former real and the latter virtual, are, in fact, simply manifestations of the same underlying molecular phenomenon, namely the gas's Brownian movement, albeit biased by the temperature gradient. This purely hydrodynamic continuum-mechanical equality is confirmed by theoretical calculations effected at the kinetic-molecular level on the basis of an existing solution of the Boltzmann equation for a quasi-Lorentzian gas, modulo small uncertainties pertaining to the choice of collision model. Explicitly, this asymptotically valid molecular model allows the virtual drift velocity U_D of the light gas and the thermophoretic velocity U of the massive, effectively non-Brownian, particle, now regarded as the tracer particle of the light gas's drift velocity, to each be identified with the Chapman-Enskog "thermal diffusion velocity" of the quasi-Lorentzian gas, here designated by the symbol U_{M/M}, as calculated by de la Mora and Mercer. It is further pointed out that, modulo the collective uncertainties cited above, the common velocities U_D, U, and U_{M/M} are identical to the single-component gas's diffuse volume current j_v, the latter representing yet another, independent, strictly continuum-mechanical concept. Finally, comments are offered on the extension of the single-component drift velocity notion to liquids, and its application towards rationalizing Soret thermal-diffusion separation phenomena in quasi-Lorentzian liquid-phase binary mixtures composed of disparately sized solute and solvent molecules, with the massive Brownian solute molecules (e.g., colloidal particles) present in disproportionately small amounts relative to that of the solvent.

  2. Representation of natural numbers in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, Paul

    2001-03-01

    This paper represents one approach to making explicit some of the assumptions and conditions implied in the widespread representation of numbers by composite quantum systems. Any nonempty set and associated operations is a set of natural numbers or a model of arithmetic if the set and operations satisfy the axioms of number theory or arithmetic. This paper is limited to k-ary representations of length L and to the axioms for arithmetic modulo k^L. A model of the axioms is described based on an abstract L-fold tensor product Hilbert space H^{arith}. Unitary maps of this space onto a physical-parameter-based product space H^{phy} are then described. Each of these maps makes states in H^{phy}, and the induced operators, a model of the axioms. Consequences of the existence of many of these maps are discussed along with the dependence of Grover's and Shor's algorithms on these maps. The importance of the main physical requirement, that the basic arithmetic operations are efficiently implementable, is discussed. This condition states that there exist physically realizable Hamiltonians that can implement the basic arithmetic operations and that the space-time and thermodynamic resources required are polynomial in L.
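
    The arithmetic being modeled is ordinary modular arithmetic on length-L base-k strings; a classical sketch (ours) of the structure the quantum states are required to represent:

        k, L = 2, 5                       # binary representation, 5 "qubits"
        mod = k**L

        def to_digits(n):
            """Little-endian base-k digit string of length L."""
            return [(n // k**i) % k for i in range(L)]

        def add_mod(a, b):
            return (a + b) % mod          # the efficiently implementable operation

        a, b = 19, 22
        print(to_digits(a), "+", to_digits(b), "=", to_digits(add_mod(a, b)))
        # 19 + 22 = 41 = 9 (mod 32)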

  3. A preliminary evaluation of LANDSAT-4 thematic mapper data for their geometric and radiometric accuracies

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Bender, L. U.; Falcone, N.; Jones, O. D.

    1983-01-01

    Some LANDSAT thematic mapper data collected over the eastern United States were analyzed for their whole-scene geometric accuracy, band-to-band registration, and radiometric accuracy. Band-ratio images were created for a part of one scene in order to assess the capability of mapping geologic units with contrasting spectral properties. Systematic errors were found in the geometric accuracy of whole scenes, part of which was attributable to the film-writing device used to record the images to film. Band-to-band registration showed that bands 1 through 4 were registered to within one pixel. Likewise, bands 5 and 7 also were registered to within one pixel. However, bands 5 and 7 were misregistered with bands 1 through 4 by 1 to 2 pixels. Band 6 was misregistered by 4 pixels relative to bands 1 through 4. Radiometric analysis indicated two kinds of banding: a modulo-16 striping and an alternating light-dark pattern in groups of 16 scan lines. A color-ratio composite image consisting of TM band ratios 3/4, 5/2, and 5/7 showed limonitic clay-rich soils, limonitic clay-poor soils, and nonlimonitic materials as distinctly different colors on the image.
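
    Banding of this kind can be exposed by folding scan-line statistics modulo 16; a quick diagnostic sketch on synthetic data (ours, with an injected stripe) is:

        import numpy as np

        rng = np.random.default_rng(3)
        n_lines, n_cols = 512, 256
        image = 100 + rng.standard_normal((n_lines, n_cols))
        image += (np.arange(n_lines) % 16 == 5)[:, None] * 2.0   # inject a stripe

        line_means = image.mean(axis=1)
        folded = np.array([line_means[np.arange(n_lines) % 16 == r].mean()
                           for r in range(16)])
        print("banding profile (mean per line index mod 16):")
        print(np.round(folded - folded.mean(), 2))   # detector 5 stands out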

  4. Some Properties of Generalized Connections in Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Velhinho, J. M.

    2002-12-01

    Theories of connections play an important role in fundamental interactions, including Yang-Mills theories and gravity in the Ashtekar formulation. Typically in such cases, the classical configuration space A/G of connections modulo gauge transformations is an infinite-dimensional non-linear space of great complexity. Having in mind a rigorous quantization procedure, methods of functional calculus in an extension of A/G have been developed. For a compact gauge group G, the compact space bar(A/G) (⊃ A/G) introduced by Ashtekar and Isham using C*-algebraic methods is a natural candidate to replace A/G in the quantum context [1], allowing the construction of diffeomorphism-invariant measures [2,3,4]. Equally important is the space of generalized connections bar(A) introduced in a similar way by Baez [5]. bar(A) is particularly useful for the definition of vector fields in bar(A/G), fundamental in the construction of quantum observables [6]. These works crucially depend on the use of (generalized) Wilson variables associated to certain types of curves. We will consider the case of piecewise analytic curves [1,2,5], although most of the arguments apply equally to the piecewise smooth case [7,8]...

  5. Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2013-04-01

    We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using division modulo a Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.

  6. Pictorial depth probed through relative sizes

    PubMed Central

    Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J

    2011-01-01

    In the physical environment familiar size is an effective depth cue because the distance from the eye to an object equals the ratio of its physical size to its angular extent in the visual field. Such simple geometrical relations do not apply to pictorial space, since the eye itself is not in pictorial space, and consequently the notion “distance from the eye” is meaningless. Nevertheless, relative size in the picture plane is often used by visual artists to suggest depth differences. The depth domain has no natural origin, nor a natural unit; thus only ratios of depth differences could have an invariant significance. We investigate whether the pictorial relative size cue yields coherent depth structures in pictorial spaces. Specifically, we measure the depth differences for all pairs of points in a 20-point configuration in pictorial space, and we account for these observations through 19 independent parameters (the depths of the points modulo an arbitrary offset), with no meaningful residuals. We discuss a simple formal framework that allows one to handle individual differences. We also compare the depth scale obtained by way of this method with depth scales obtained in totally different ways, finding generally good agreement. PMID:23145258

  7. Interpretation for scales of measurement linking with abstract algebra

    PubMed Central

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: “Nominal”, “Ordinal”, “Interval” and “Ratio”. This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; ‘Abelian modulo additive group’ for “Ordinal scale” accompanied with ‘zero’, ‘Abelian additive group’ for “Interval scale”, and ‘field’ for “Ratio scale”. Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected. PMID:24987515
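
    As a concrete illustration of the 'Abelian modulo additive group' structure invoked for ordinal scales, the sketch below (ours; n = 5 is arbitrary) runs Abelian-group sanity checks for addition modulo n:

        n = 5                                   # ordinal scale with 5 levels, 0..4

        def combine(a, b):
            return (a + b) % n                  # group operation: addition mod n

        elems = range(n)
        assert all(combine(a, b) == combine(b, a) for a in elems for b in elems)  # Abelian
        assert all(combine(a, 0) == a for a in elems)                             # identity
        assert all(any(combine(a, b) == 0 for b in elems) for a in elems)         # inverses
        print("Z_5 under addition mod 5 is an Abelian group with identity 0")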

  8. Measurement of CP asymmetry in B_s^0 → D_s^∓ K^± decays

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Alfonso Albero, A.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Archilli, F.; d'Argent, P.; Arnau Romeu, J.; Artamonov, A.; Artuso, M.; Aslanides, E.; Atzeni, M.; Auriemma, G.; Baalouch, M.; Babuschkin, I.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baker, S.; Balagura, V.; Baldini, W.; Baranov, A.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Baryshnikov, F.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Beiter, A.; Bel, L. J.; Beliy, N.; Bellee, V.; Belloli, N.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Beranek, S.; Berezhnoy, A.; Bernet, R.; Berninghoff, D.; Bertholet, E.; Bertolin, A.; Betancourt, C.; Betti, F.; Bettler, M. O.; van Beuzekom, M.; Bezshyiko, Ia.; Bifani, S.; Billoir, P.; Birnkraut, A.; Bizzeti, A.; Bjørn, M.; Blake, T.; Blanc, F.; Blusk, S.; Bocci, V.; Boettcher, T.; Bondar, A.; Bondar, N.; Bordyuzhin, I.; Borghi, S.; Borisyak, M.; Borsato, M.; Bossu, F.; Boubdir, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Brodzicka, J.; Brundu, D.; Buchanan, E.; Burr, C.; Bursche, A.; Buytaert, J.; Byczynski, W.; Cadeddu, S.; Cai, H.; Calabrese, R.; Calladine, R.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D. H.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Cattaneo, M.; Cavallero, G.; Cenci, R.; Chamont, D.; Chapman, M. G.; Charles, M.; Charpentier, Ph.; Chatzikonstantinidis, G.; Chefdeville, M.; Chen, S.; Cheung, S. F.; Chitic, S.-G.; Chobanova, V.; Chrzaszcz, M.; Chubykin, A.; Ciambrone, P.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collins, P.; Colombo, T.; Comerma-Montells, A.; Contu, A.; Coombs, G.; Coquereau, S.; Corti, G.; Corvo, M.; Costa Sobral, C. M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Currie, R.; D'Ambrosio, C.; Da Cunha Marinho, F.; Da Silva, C. L.; Dall'Occo, E.; Dalseno, J.; Davis, A.; De Aguiar Francisco, O.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Serio, M.; De Simone, P.; Dean, C. T.; Decamp, D.; Del Buono, L.; Dembinski, H.-P.; Demmer, M.; Dendek, A.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Nezza, P.; Dijkstra, H.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Douglas, L.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Durante, P.; Durham, J. M.; Dutta, D.; Dzhelyadin, R.; Dziewiecki, M.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Farley, N.; Farry, S.; Fazzini, D.; Federici, L.; Ferguson, D.; Fernandez, G.; Fernandez Declara, P.; Fernandez Prieto, A.; Ferrari, F.; Ferreira Lopes, L.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fini, R. 
A.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fleuret, F.; Fontana, M.; Fontanelli, F.; Forty, R.; Franco Lima, V.; Frank, M.; Frei, C.; Fu, J.; Funk, W.; Furfaro, E.; Färber, C.; Gabriel, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Martin, L. M.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Gascon, D.; Gaspar, C.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianì, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gizdov, K.; Gligorov, V. V.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gorelov, I. V.; Gotti, C.; Govorkova, E.; Grabowski, J. P.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greim, R.; Griffith, P.; Grillo, L.; Gruber, L.; Gruberg Cazon, B. R.; Grünberg, O.; Gushchin, E.; Guz, Yu.; Gys, T.; Göbel, C.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hamilton, B.; Han, X.; Hancock, T. H.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Hasse, C.; Hatch, M.; He, J.; Hecker, M.; Heinicke, K.; Heister, A.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hopchev, P. H.; Hu, W.; Huang, W.; Huard, Z. C.; Hulsbergen, W.; Humair, T.; Hushchyn, M.; Hutchcroft, D.; Ibis, P.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jalocha, J.; Jans, E.; Jawahery, A.; Jiang, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Karacson, M.; Kariuki, J. M.; Karodia, S.; Kazeev, N.; Kecke, M.; Keizer, F.; Kelsey, M.; Kenzie, M.; Ketel, T.; Khairullin, E.; Khanji, B.; Khurewathanakul, C.; Kirn, T.; Klaver, S.; Klimaszewski, K.; Klimkovich, T.; Koliiev, S.; Kolpin, M.; Kopecna, R.; Koppenburg, P.; Kosmyntseva, A.; Kotriakhova, S.; Kozeiha, M.; Kravchuk, L.; Kreps, M.; Kress, F.; Krokovny, P.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; Leflat, A.; Lefrançois, J.; Lefèvre, R.; Lemaitre, F.; Lemos Cid, E.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, P.-R.; Li, T.; Li, Y.; Li, Z.; Liang, X.; Likhomanenko, T.; Lindner, R.; Lionetto, F.; Lisovskyi, V.; Liu, X.; Loh, D.; Loi, A.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Lyu, X.; Machefert, F.; Maciuc, F.; Macko, V.; Mackowiak, P.; Maddrell-Mander, S.; Maev, O.; Maguire, K.; Maisuzenko, D.; Majewski, M. W.; Malde, S.; Malecki, B.; Malinin, A.; Maltsev, T.; Manca, G.; Mancinelli, G.; Marangotto, D.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marinangeli, M.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurice, E.; Maurin, B.; Mazurov, A.; McCann, M.; McNab, A.; McNulty, R.; Mead, J. V.; Meadows, B.; Meaux, C.; Meier, F.; Meinert, N.; Melnychuk, D.; Merk, M.; Merli, A.; Michielin, E.; Milanes, D. A.; Millard, E.; Minard, M.-N.; Minzoni, L.; Mitzel, D. S.; Mogini, A.; Molina Rodriguez, J.; Mombächer, T.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morello, M. J.; Morgunova, O.; Moron, J.; Morris, A. 
B.; Mountain, R.; Muheim, F.; Mulder, M.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, T. D.; Nguyen-Mau, C.; Nieswand, S.; Niet, R.; Nikitin, N.; Nikodem, T.; Nogay, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Oldeman, R.; Onderwater, C. J. G.; Ossowska, A.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pais, P. R.; Palano, A.; Palutan, M.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Parker, W.; Parkes, C.; Passaleva, G.; Pastore, A.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Pereima, D.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petrov, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pietrzyk, G.; Pikies, M.; Pinci, D.; Pisani, F.; Pistone, A.; Piucci, A.; Placinta, V.; Playfer, S.; Plo Casasus, M.; Polci, F.; Poli Lener, M.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Pomery, G. J.; Ponce, S.; Popov, A.; Popov, D.; Poslavskii, S.; Potterat, C.; Price, E.; Prisciandaro, J.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Pullen, H.; Punzi, G.; Qian, W.; Qin, J.; Quagliani, R.; Quintana, B.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Ramos Pernas, M.; Rangel, M. S.; Raniuk, I.; Ratnikov, F.; Raven, G.; Ravonel Salzgeber, M.; Reboud, M.; Redi, F.; Reichert, S.; dos Reis, A. C.; Remon Alepuz, C.; Renaudin, V.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Robbe, P.; Robert, A.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Rogozhnikov, A.; Roiser, S.; Rollings, A.; Romanovskiy, V.; Romero Vidal, A.; Rotondo, M.; Rudolph, M. S.; Ruf, T.; Ruiz Valls, P.; Ruiz Vidal, J.; Saborido Silva, J. J.; Sadykhov, E.; Sagidova, N.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarpis, G.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schael, S.; Schellenberg, M.; Schiller, M.; Schindler, H.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schreiner, H. F.; Schubiger, M.; Schune, M. H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Sepulveda, E. S.; Sergi, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Simone, S.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, I. T.; Smith, J.; Smith, M.; Soares Lavra, l.; Sokoloff, M. D.; Soler, F. J. P.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefko, P.; Stefkova, S.; Steinkamp, O.; Stemmle, S.; Stenyakin, O.; Stepanova, M.; Stevens, H.; Stone, S.; Storaci, B.; Stracka, S.; Stramaglia, M. E.; Straticiuc, M.; Straumann, U.; Sun, J.; Sun, L.; Swientek, K.; Syropoulos, V.; Szumlak, T.; Szymanski, M.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Tellarini, G.; Teubert, F.; Thomas, E.; van Tilburg, J.; Tilley, M. J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Tourinho Jadallah Aoude, R.; Tournefier, E.; Traill, M.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tully, A.; Tuning, N.; Ukleja, A.; Usachov, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagner, A.; Vagnoni, V.; Valassi, A.; Valat, S.; Valenti, G.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vecchi, S.; van Veghel, M.; Velthuis, J. 
J.; Veltri, M.; Veneziano, G.; Venkateswaran, A.; Verlage, T. A.; Vernet, M.; Vesterinen, M.; Viana Barbosa, J. V.; Vieira, D.; Vieites Diaz, M.; Viemann, H.; Vilasis-Cardona, X.; Vitti, M.; Volkov, V.; Vollhardt, A.; Voneki, B.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Vázquez Sierra, C.; Waldi, R.; Walsh, J.; Wang, J.; Wang, Y.; Ward, D. R.; Wark, H. M.; Watson, N. K.; Websdale, D.; Weiden, A.; Weisser, C.; Whitehead, M.; Wicht, J.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Winn, M.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wyllie, K.; Xie, Y.; Xu, M.; Xu, Q.; Xu, Z.; Xu, Z.; Yang, Z.; Yang, Z.; Yao, Y.; Yin, H.; Yu, J.; Yuan, X.; Yushchenko, O.; Zarebski, K. A.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zheng, Y.; Zhu, X.; Zhukov, V.; Zonneveld, J. B.; Zucchelli, S.

    2018-03-01

    We report the measurements of the CP-violating parameters in B_s^0 → D_s^∓ K^± decays observed in pp collisions, using a data set corresponding to an integrated luminosity of 3.0 fb-1 recorded with the LHCb detector. We measure C_f = 0.73 ± 0.14 ± 0.05, A_f^{ΔΓ} = 0.39 ± 0.28 ± 0.15, A_{\overline{f}}^{ΔΓ} = 0.31 ± 0.28 ± 0.15, S_f = −0.52 ± 0.20 ± 0.07, and S_{\overline{f}} = −0.49 ± 0.20 ± 0.07, where the uncertainties are statistical and systematic, respectively. These parameters are used together with the world-average value of the B_s^0 mixing phase, −2β_s, to obtain a measurement of the CKM angle γ from B_s^0 → D_s^∓ K^± decays, yielding γ = (128^{+17}_{−22})° modulo 180°, where the uncertainty contains both statistical and systematic contributions. This corresponds to 3.8σ evidence for CP violation in the interference between decay and decay after mixing.

  9. Serre duality, Abel's theorem, and Jacobi inversion for supercurves over a thick superpoint

    NASA Astrophysics Data System (ADS)

    Rothstein, Mitchell J.; Rabin, Jeffrey M.

    2015-04-01

    The principal aim of this paper is to extend Abel's theorem to the setting of complex supermanifolds of dimension 1 | q over a finite-dimensional local supercommutative C-algebra. The theorem is proved by establishing a compatibility of Serre duality for the supercurve with Poincaré duality on the reduced curve. We include an elementary algebraic proof of the requisite form of Serre duality, closely based on the account of the reduced case given by Serre in Algebraic groups and class fields, combined with an invariance result for the topology on the dual of the space of répartitions. Our Abel map, taking Cartier divisors of degree zero to the dual of the space of sections of the Berezinian sheaf, modulo periods, is defined via Penkov's characterization of the Berezinian sheaf as the cohomology of the de Rham complex of the sheaf D of differential operators. We discuss the Jacobi inversion problem for the Abel map and give an example demonstrating that if n is an integer sufficiently large that the generic divisor of degree n is linearly equivalent to an effective divisor, this need not be the case for all divisors of degree n.

  10. Interpretation for scales of measurement linking with abstract algebra.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has played an important role in the composition and interpretation of scales. With this classification, levels of measurement appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences, and it may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme: 'Abelian modulo additive group' for the "Ordinal scale" accompanied with 'zero', 'Abelian additive group' for the "Interval scale", and 'field' for the "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data mining and data-set combination are possible at a higher level of abstract structure, based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy are expected.

  11. Unifying perspective: Solitary traveling waves as discrete breathers in Hamiltonian lattices and energy criteria for their stability

    NASA Astrophysics Data System (ADS)

    Cuevas-Maraver, Jesús; Kevrekidis, Panayotis G.; Vainchtein, Anna; Xu, Haitao

    2017-09-01

    In this work, we provide two complementary perspectives for the (spectral) stability of solitary traveling waves in Hamiltonian nonlinear dynamical lattices, of which the Fermi-Pasta-Ulam and the Toda lattice are prototypical examples. One is as an eigenvalue problem for a stationary solution in a cotraveling frame, while the other is as a periodic orbit modulo shifts. We connect the eigenvalues of the former with the Floquet multipliers of the latter and using this formulation derive an energy-based spectral stability criterion. It states that a sufficient (but not necessary) condition for a change in the wave stability occurs when the functional dependence of the energy (Hamiltonian) H of the model on the wave velocity c changes its monotonicity. Moreover, near the critical velocity where the change of stability occurs, we provide an explicit leading-order computation of the unstable eigenvalues, based on the second derivative of the Hamiltonian H''(c0) evaluated at the critical velocity c0. We corroborate this conclusion with a series of analytically and numerically tractable examples and discuss its parallels with a recent energy-based criterion for the stability of discrete breathers.

  12. Study of a twisted-pair micro cable for the communication and readout of the pixel module of the CMS experiment

    NASA Astrophysics Data System (ADS)

    Oliveros Tautiva, Sandra Jimena

    The Compact Muon Solenoid (CMS) is one of the two most important experiments at the Large Hadron Collider (LHC). The pixel detector is the component closest to the collision point in CMS, and it receives large doses of radiation which will affect its performance. The pixel detector will be replaced by a new one after four years. The aim is to reduce material in the sensitive zone of the new pixel detector, which leads to the implementation of a type of micro twisted-pair cable that will replace the existing kapton cables and eliminate some connections. The purpose of this work was to study the viability of using these micro twisted-pair cables in the existing 40 MHz analog readout. The electrical parameters of the micro cables were determined, and operational tests were performed on a module using these cables for communication and readout. Three different lengths of micro cable were used, 1.0, 1.5 and 2.0 m, in order to compare test results with those obtained using the kapton cable. It was found that the use of these cables does not affect the programming and reading of the pixels in one module, so the micro cables are a viable replacement for the kapton cables.

  13. Advances in Scanning Reflectarray Antennas Based on Ferroelectric Thin Film Phase Shifters for Deep Space Communications

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.

    2007-01-01

    Though there are a few examples of scanning phased array antennas that have flown successfully in space, the quest for low-cost, high-efficiency, large-aperture microwave phased arrays continues. Fixed and mobile applications that may be part of a heterogeneous exploration communication architecture will benefit from the agile (rapid) beam steering and graceful degradation afforded by phased array antennas. The reflectarray promises greater efficiency and economy compared to directly radiating varieties, but implementing a practical scanning version has proven elusive. The ferroelectric reflectarray, under development and described herein, involves phase shifters based on coupled microstrip patterned on Ba(x)Sr(1-x)TiO3 films that were laser-ablated onto LaAlO3 substrates. These devices outperform their semiconductor counterparts from X- through K-band frequencies. There are special issues associated with the implementation of a scanning reflectarray antenna, especially one realized with thin film ferroelectric phase shifters. This paper will discuss these issues, which include: relevance of phase shifter loss; modulo 2π effects and phase shifter transient effects on bit error rate; scattering from the ground plane; presentation of a novel hybrid ferroelectric-semiconductor phase shifter; and the effect of mild radiation exposure on phase shifter performance.

  14. Testing the Kerr metric with the iron line and the KRZ parametrization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ni, Yueying; Jiang, Jiachen; Bambi, Cosimo, E-mail: yyni13@fudan.edu.cn, E-mail: jcjiang12@fudan.edu.cn, E-mail: bambi@fudan.edu.cn

    The spacetime geometry around astrophysical black holes is supposed to be well approximated by the Kerr metric, but deviations from the Kerr solution are predicted in a number of scenarios involving new physics. Broad iron Kα lines are commonly observed in the X-ray spectrum of black holes and originate from X-ray fluorescence of the inner accretion disk. The profile of the iron line is sensitively affected by the spacetime geometry in the strong gravity region and can be used to test the Kerr black hole hypothesis. In this paper, we extend previous work in the literature. In particular: (i) as test metric, we employ the parametrization recently proposed by Konoplya, Rezzolla, and Zhidenko, which has a number of subtle advantages with respect to the existing approaches; (ii) we perform simulations with specific X-ray missions, and we consider NuSTAR as a prototype of current observational facilities and eXTP as an example of the next generation of X-ray observatories. We find a significant difference between the constraining power of NuSTAR and eXTP. With NuSTAR, it is difficult or impossible to constrain deviations from the Kerr metric. With eXTP, in most cases we can obtain quite stringent constraints (modulo having the correct astrophysical model).

  15. How well-connected is the surface of the global ocean?

    PubMed

    Froyland, Gary; Stuart, Robyn M; van Sebille, Erik

    2014-09-01

    The Ekman dynamics of the ocean surface circulation is known to contain attracting regions such as the great oceanic gyres and the associated garbage patches. Less well-known are the extents of the basins of attractions of these regions and how strongly attracting they are. Understanding the shape and extent of the basins of attraction sheds light on the question of the strength of connectivity of different regions of the ocean, which helps in understanding the flow of buoyant material like plastic litter. Using short flow time trajectory data from a global ocean model, we create a Markov chain model of the surface ocean dynamics. The surface ocean is not a conservative dynamical system as water in the ocean follows three-dimensional pathways, with upwelling and downwelling in certain regions. Using our Markov chain model, we easily compute net surface upwelling and downwelling, and verify that it matches observed patterns of upwelling and downwelling in the real ocean. We analyze the Markov chain to determine multiple attracting regions. Finally, using an eigenvector approach, we (i) identify the five major ocean garbage patches, (ii) partition the ocean into basins of attraction for each of the garbage patches, and (iii) partition the ocean into regions that demonstrate transient dynamics modulo the attracting garbage patches.
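
    As a rough numerical sketch of the Markov-chain step above (assuming an Ulam-type box discretization; the transition counts below are toy values of ours, not model output), the long-run surface mass, which concentrates on attracting regions, can be read off the leading left eigenvector:

        import numpy as np

        # Toy Ulam-type discretization: count tracer transitions between grid
        # boxes over a short flow time, then row-normalize to get a Markov chain.
        rng = np.random.default_rng(0)
        n_boxes = 6
        counts = rng.integers(1, 10, size=(n_boxes, n_boxes)) + 20 * np.eye(n_boxes, dtype=int)
        P = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

        # Leading left eigenvector (eigenvalue ~1) gives the long-run mass per box;
        # attracting regions such as garbage patches carry most of this mass.
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi = np.abs(pi) / np.abs(pi).sum()
        print("long-run mass per box:", np.round(pi, 3))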

  16. E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.

    PubMed

    Trapp, Philip; Echeveste, Rodrigo; Gros, Claudius

    2018-06-12

    Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has been considered hitherto as given, naturally arises in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.

  17. Bose–Einstein graviton condensate in a Schwarzschild black hole

    NASA Astrophysics Data System (ADS)

    Alfaro, Jorge; Espriu, Domènec; Gabbanelli, Luciano

    2018-01-01

    We analyze in detail a previous proposal by Dvali and Gómez that black holes could be treated as consisting of a Bose–Einstein condensate of gravitons. In order to do so we extend the Einstein–Hilbert action with a chemical potential-like term, thus placing ourselves in a grand-canonical ensemble. The form and characteristics of this chemical potential-like piece are discussed in some detail. We argue that the resulting equations of motion derived from the action could be interpreted as the Gross–Pitaevskii equation describing a graviton Bose–Einstein condensate trapped by the black hole gravitational field. After this, we proceed to expand the ensuing equations of motion up to second order around the classical Schwarzschild metric so that some non-linear terms in the metric fluctuation are kept. Next we search for solutions and, modulo some very plausible assumptions, we find that the condensate vanishes outside the horizon but is non-zero in its interior. Inspired by a linearized approximation around the horizon we are able to find an exact solution for the mean-field wave function describing the graviton Bose–Einstein condensate in the black hole interior. After this, we can rederive some of the relations involving the number of gravitons N and the black hole characteristics along the lines suggested by Dvali and Gómez.

  18. Nonstandard neutrino interactions at DUNE, T2HK and T2HKK

    DOE PAGES

    Liao, Jiajun; Marfatia, Danny; Whisnant, Kerry

    2017-01-17

    Here, we study the matter effect caused by nonstandard neutrino interactions (NSI) in the next generation long-baseline neutrino experiments, DUNE, T2HK and T2HKK. If multiple NSI parameters are nonzero, the potential of these experiments to detect CP violation, determine the mass hierarchy and constrain NSI is severely impaired by degeneracies between the NSI parameters and by the generalized mass hierarchy degeneracy. In particular, a cancellation between leading order terms in the appearance channels when ϵ_eτ = cot θ_23 ϵ_eμ strongly affects the sensitivities to these two NSI parameters at T2HK and T2HKK. We also study the dependence of the sensitivities on the true CP phase and the true mass hierarchy, and find that overall DUNE has the best sensitivity to the magnitude of the NSI parameters, while T2HKK has the best sensitivity to CP violation whether or not there are NSI. Furthermore, for T2HKK a smaller off-axis angle for the Korean detector is better overall. We find that due to the structure of the leading order terms in the appearance channel probabilities, the NSI sensitivities in a given experiment are similar for both mass hierarchies, modulo the phase change δ → δ + 180°.

  19. Emergent Ising degrees of freedom above a double-stripe magnetic ground state

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghua; Flint, Rebecca

    2017-12-01

    Double-stripe magnetism [Q = (π/2, π/2)] has been proposed as the magnetic ground state for both the iron-telluride and BaTi2Sb2O families of superconductors. Double-stripe order is captured within a J1-J2-J3 Heisenberg model in the regime J3 ≫ J2 ≫ J1. Intriguingly, besides breaking spin-rotational symmetry, the ground-state manifold has three additional Ising degrees of freedom associated with bond ordering. Via their coupling to the lattice, they give rise to an orthorhombic distortion and to two nonuniform lattice distortions with wave vector (π, π). Because the ground state is fourfold degenerate, modulo rotations in spin space, only two of these Ising bond order parameters are independent. Here, we introduce an effective field theory to treat all Ising order parameters, as well as magnetic order, and solve it within a large-N limit. All three transitions, corresponding to the condensations of two Ising bond order parameters and one magnetic order parameter, are simultaneous and first order in three dimensions, but lower dimensionality, or equivalently weaker interlayer coupling, and weaker magnetoelastic coupling can split the three transitions, and in some cases allow for two separate Ising phase transitions above the magnetic one.

  20. Measuring 3D point configurations in pictorial space

    PubMed Central

    Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J

    2011-01-01

    We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications. PMID:23145227
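
    The reconstruction step lends itself to a compact numerical illustration. The following toy least-squares sketch (our own, not the authors' procedure; depths and noise level are invented) recovers N depths, up to the arbitrary offset, from noisy ordered-pair depth differences:

        import numpy as np

        rng = np.random.default_rng(1)
        true_depth = np.array([0.0, 1.2, 0.5, 2.0])   # toy "pictorial" depths
        N = len(true_depth)
        pairs = [(i, j) for i in range(N) for j in range(N) if i != j]  # N(N-1) ordered pairs

        # Observed differences d_ij = z_j - z_i, with pointing noise.
        obs = np.array([true_depth[j] - true_depth[i] + 0.05 * rng.standard_normal()
                        for i, j in pairs])

        # Overdetermined linear system; fix z_0 = 0 to remove the depth offset.
        A = np.zeros((len(pairs), N))
        for row, (i, j) in enumerate(pairs):
            A[row, i], A[row, j] = -1.0, 1.0
        z, *_ = np.linalg.lstsq(A[:, 1:], obs, rcond=None)
        print("recovered depths (offset fixed):", np.round(np.concatenate(([0.0], z)), 3))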

  1. Generalized Riemann hypothesis and stochastic time series

    NASA Astrophysics Data System (ADS)

    Mussardo, Giuseppe; LeClair, André

    2018-06-01

    Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series and its control requires one to address a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. Those intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events which would have been responsible for a scaling behavior different from the universal law of random walks.

  2. The mass distribution of Population III stars

    NASA Astrophysics Data System (ADS)

    Fraser, M.; Casey, A. R.; Gilmore, G.; Heger, A.; Chan, C.

    2017-06-01

    Extremely metal-poor (EMP) stars are uniquely informative on the nature of massive Population III stars. Modulo a few elements that vary with stellar evolution, the present-day photospheric abundances observed in EMP stars are representative of their natal gas cloud composition. For this reason, the chemistry of EMP stars closely reflects the nucleosynthetic yields of supernovae from massive Population III stars. Here we collate detailed abundances of 53 EMP stars from the literature and infer the masses of their Population III progenitors. We fit a simple initial mass function (IMF) to a subset of 29 of the inferred Population III star masses, and find that the mass distribution is well represented by a power-law IMF with exponent α = 2.35^{+0.29}_{-0.24}. The inferred maximum progenitor mass for supernovae from massive Population III stars is M_{max} = 87^{+13}_{-33} M⊙, and we find no evidence in our sample for a contribution from stars with masses above ~120 M⊙. The minimum mass is strongly consistent with the theoretical lower mass limit for Population III supernovae. We conclude that the IMF for massive Population III stars is consistent with the IMF of present-day massive stars, and that stars well below the supernova mass limit may have formed and survived to the present day.

  3. Nonstandard neutrino interactions at DUNE, T2HK and T2HKK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Jiajun; Marfatia, Danny; Whisnant, Kerry

    Here, we study the matter effect caused by nonstandard neutrino interactions (NSI) in the next generation long-baseline neutrino experiments, DUNE, T2HK and T2HKK. If multiple NSI parameters are nonzero, the potential of these experiments to detect CP violation, determine the mass hierarchy and constrain NSI is severely impaired by degeneracies between the NSI parameters and by the generalized mass hierarchy degeneracy. In particular, a cancellation between leading order terms in the appearance channels when ϵ_eτ = cot θ_23 ϵ_eμ strongly affects the sensitivities to these two NSI parameters at T2HK and T2HKK. We also study the dependence of the sensitivities on the true CP phase and the true mass hierarchy, and find that overall DUNE has the best sensitivity to the magnitude of the NSI parameters, while T2HKK has the best sensitivity to CP violation whether or not there are NSI. Furthermore, for T2HKK a smaller off-axis angle for the Korean detector is better overall. We find that due to the structure of the leading order terms in the appearance channel probabilities, the NSI sensitivities in a given experiment are similar for both mass hierarchies, modulo the phase change δ → δ + 180°.

  4. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
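
    The expected-cost figure of merit can be illustrated with a short simulation. The sketch below is a fixed-horizon simplification (assumed i.i.d. toy outcome statistics and cost per call, not the paper's sequential stopping rule): it trades the expected best objective after k calls against the cumulative call cost and picks the cheapest k:

        import numpy as np

        rng = np.random.default_rng(2)
        c = 0.05                                           # assumed cost per solver call
        runs = rng.exponential(1.0, size=(100_000, 30))    # surrogate objective values per call
        best_so_far = np.minimum.accumulate(runs, axis=1)  # best value found after k calls
        expected_cost = best_so_far.mean(axis=0) + c * np.arange(1, 31)
        k_opt = int(np.argmin(expected_cost)) + 1
        print(f"optimal number of calls: {k_opt}, "
              f"expected total cost: {expected_cost[k_opt - 1]:.3f}")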

  5. Volume-preserving normal forms of Hopf-zero singularity

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh

    2013-10-01

    A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with first integral; this is whence our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite level volume-preserving parametric normal forms of any nondegenerate perturbation within the Lie algebra of any such system are computed, where it can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto-Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first level normal forms of these examples with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple.

  6. On the v-representability of ensemble densities of electron systems

    NASA Astrophysics Data System (ADS)

    Gonis, A.; Däne, M.

    2018-05-01

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines uniquely (and not just modulo a constant) the external potential acting on a system described by this thermodynamic potential or free energy. The paper describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. The main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.

  7. On the v-representability of ensemble densities of electron systems

    DOE PAGES

    Gonis, A.; Dane, M.

    2017-12-30

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines uniquely (and not just modulo a constant) the external potential acting on a system described by this thermodynamic potential or free energy. The study describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. Finally, the main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.

  8. Emergent Ising degrees of freedom above a double-stripe magnetic ground state [Emergent Ising degrees of freedom above double-stripe magnetism]

    DOE PAGES

    Zhang, Guanghua; Flint, Rebecca

    2017-12-27

    Here, double-stripe magnetism [Q = (π/2, π/2)] has been proposed as the magnetic ground state for both the iron-telluride and BaTi2Sb2O families of superconductors. Double-stripe order is captured within a J1–J2–J3 Heisenberg model in the regime J3 ≫ J2 ≫ J1. Intriguingly, besides breaking spin-rotational symmetry, the ground-state manifold has three additional Ising degrees of freedom associated with bond ordering. Via their coupling to the lattice, they give rise to an orthorhombic distortion and to two nonuniform lattice distortions with wave vector (π, π). Because the ground state is fourfold degenerate, modulo rotations in spin space, only two of these Ising bond order parameters are independent. Here, we introduce an effective field theory to treat all Ising order parameters, as well as magnetic order, and solve it within a large-N limit. All three transitions, corresponding to the condensations of two Ising bond order parameters and one magnetic order parameter, are simultaneous and first order in three dimensions, but lower dimensionality, or equivalently weaker interlayer coupling, and weaker magnetoelastic coupling can split the three transitions, and in some cases allow for two separate Ising phase transitions above the magnetic one.

  9. Renormalized Hamiltonian for a peptide chain: Digitalizing the protein folding problem

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel; Colubri, Andrés

    2000-05-01

    A renormalized Hamiltonian for a flexible peptide chain is derived to generate the long-time limit dynamics compatible with a coarsening of torsional conformation space. The renormalization procedure is tailored taking into account the coarse graining imposed by the backbone torsional constraints due to the local steric hindrance and the local backbone-side-group interactions. Thus, the torsional degrees of freedom for each residue are resolved modulo basins of attraction in its so-called Ramachandran map. This Ramachandran renormalization (RR) procedure is implemented so that the chain is energetically driven to form contact patterns as their respective collective topological constraints are fulfilled within the coarse description. In this way, the torsional dynamics are digitalized and become codified as an evolving pattern in a binary matrix. Each accepted Monte Carlo step in a canonical ensemble simulation is correlated with the real mean first passage time it takes to reach the destination coarse topological state. This real-time correlation enables us to test the RR dynamics by comparison with experimentally probed kinetic bottlenecks along the dominant folding pathway. Such intermediates are scarcely populated at any given time, but they determine the kinetic funnel leading to the active structure. This landscape region is reached through kinetically controlled steps needed to overcome the conformational entropy of the random coil. The results are specialized for the bovine pancreatic trypsin inhibitor, corroborating the validity of our method.

  10. An ultraviolet-optical flare from the tidal disruption of a helium-rich stellar core.

    PubMed

    Gezari, S; Chornock, R; Rest, A; Huber, M E; Forster, K; Berger, E; Challis, P J; Neill, J D; Martin, D C; Heckman, T; Lawrence, A; Norman, C; Narayan, G; Foley, R J; Marion, G H; Scolnic, D; Chomiuk, L; Soderberg, A; Smith, K; Kirshner, R P; Riess, A G; Smartt, S J; Stubbs, C W; Tonry, J L; Wood-Vasey, W M; Burgett, W S; Chambers, K C; Grav, T; Heasley, J N; Kaiser, N; Kudritzki, R-P; Magnier, E A; Morgan, J S; Price, P A

    2012-05-02

    The flare of radiation from the tidal disruption and accretion of a star can be used as a marker for supermassive black holes that otherwise lie dormant and undetected in the centres of distant galaxies. Previous candidate flares have had declining light curves in good agreement with expectations, but with poor constraints on the time of disruption and the type of star disrupted, because the rising emission was not observed. Recently, two 'relativistic' candidate tidal disruption events were discovered, each of whose extreme X-ray luminosity and synchrotron radio emission were interpreted as the onset of emission from a relativistic jet. Here we report a luminous ultraviolet-optical flare from the nuclear region of an inactive galaxy at a redshift of 0.1696. The observed continuum is cooler than expected for a simple accreting debris disk, but the well-sampled rise and decay of the light curve follow the predicted mass accretion rate and can be modelled to determine the time of disruption to an accuracy of two days. The black hole has a mass of about two million solar masses, modulo a factor dependent on the mass and radius of the star disrupted. On the basis of the spectroscopic signature of ionized helium from the unbound debris, we determine that the disrupted star was a helium-rich stellar core.

  11. Line of magnetic monopoles and an extension of the Aharonov–Bohm effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chee, J.; Lu, W.

    2016-10-15

    In the Landau problem on the two-dimensional plane, physical displacement of a charged particle (i.e., magnetic translation) can be induced by an in-plane electric field. The geometric phase accompanying such magnetic translation around a closed path differs from the topological phase of Aharonov and Bohm in two essential aspects: The particle is in direct contact with the magnetic field and the geometric phase has an opposite sign from the Aharonov–Bohm phase. We show that magnetic translation on the two-dimensional cylinder implemented by the Schrödinger time evolution truly leads to the Aharonov–Bohm effect. The magnetic field normal to the cylinder's surface corresponds to a line of magnetic monopoles of uniform density whose simulation is currently under investigation in cold atom physics. In order to characterize the quantum problem, one needs to specify the value of the magnetic flux (modulo the flux unit) that threads but is not in touch with the cylinder. A general closed path on the cylinder may enclose both the Aharonov–Bohm flux and the local magnetic field that is in direct contact with the charged particle. This suggests an extension of the Aharonov–Bohm experiment that naturally takes into account both the geometric phase due to local interaction with the magnetic field and the topological phase of Aharonov and Bohm.

  12. Array Phase Shifters: Theory and Technology

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.

    2007-01-01

    While there are a myriad of applications for microwave phase shifters in instrumentation and metrology, power combining, amplifier linearization, and so on, the most prevalent use is in scanning phased-array antennas. And while this market continues to be dominated by military radar and tracking platforms, many commercial applications have emerged in the past decade or so. These new and potential applications span low-Earth-orbit (LEO) communications satellite constellations and collision warning radar, an aspect of the Intelligent Vehicle Highway System or Automated Highway System. In any case, the phase shifters represent a considerable portion of the overall antenna cost, with some estimates approaching 40 percent for receive arrays. Ferrite phase shifters continue to be the workhorse in military phased arrays, and while there have been advances in thin film ferrite devices, the review of this device technology in the previous edition of this book is still highly relevant. This chapter will focus on three types of phase shifters that have matured in the past decade: GaAs MESFET monolithic microwave integrated circuit (MMIC), micro-electromechanical systems (MEMS), and thin film ferroelectric-based devices. A brief review of some novel devices including thin film ferrite phase shifters and superconducting switches for phase shifter applications will be provided. Finally, the effects of modulo 2π phase shift limitations, phase errors, and transient response on bit error rate degradation will be considered.

  13. Emergent Ising degrees of freedom above a double-stripe magnetic ground state [Emergent Ising degrees of freedom above double-stripe magnetism]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guanghua; Flint, Rebecca

    Here, double-stripe magnetism [Q = (π/2, π/2)] has been proposed as the magnetic ground state for both the iron-telluride and BaTi2Sb2O families of superconductors. Double-stripe order is captured within a J1–J2–J3 Heisenberg model in the regime J3 ≫ J2 ≫ J1. Intriguingly, besides breaking spin-rotational symmetry, the ground-state manifold has three additional Ising degrees of freedom associated with bond ordering. Via their coupling to the lattice, they give rise to an orthorhombic distortion and to two nonuniform lattice distortions with wave vector (π, π). Because the ground state is fourfold degenerate, modulo rotations in spin space, only two of these Ising bond order parameters are independent. Here, we introduce an effective field theory to treat all Ising order parameters, as well as magnetic order, and solve it within a large-N limit. All three transitions, corresponding to the condensations of two Ising bond order parameters and one magnetic order parameter, are simultaneous and first order in three dimensions, but lower dimensionality, or equivalently weaker interlayer coupling, and weaker magnetoelastic coupling can split the three transitions, and in some cases allow for two separate Ising phase transitions above the magnetic one.

  14. Towers of generalized divisible quantum codes

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^{ν+1} and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^{ν−1}), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
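
    The generalized divisibility condition is simple to check by enumeration. A minimal sketch on a small classical code (the [7,3] simplex code and the all-ones coefficient vector are stand-in choices of ours, not the paper's construction):

        from itertools import product

        # Every codeword c must satisfy  sum_i a_i * c_i == 0 (mod 2**nu)
        # for a fixed vector a of odd integers; a = (1,...,1) recovers the plain
        # "all weights divisible by 2**nu" notion of a divisible code.
        G = [(0, 0, 0, 1, 1, 1, 1),   # [7,3] simplex code: every nonzero
             (0, 1, 1, 0, 0, 1, 1),   # codeword has weight 4 = 2**2
             (1, 0, 1, 0, 1, 0, 1)]
        a, nu = (1,) * 7, 2

        def divisible(G, a, nu):
            for bits in product((0, 1), repeat=len(G)):
                c = [sum(b * g[i] for b, g in zip(bits, G)) % 2 for i in range(len(G[0]))]
                if sum(ai * ci for ai, ci in zip(a, c)) % 2**nu:
                    return False
            return True

        print(divisible(G, a, nu))    # True: the simplex code is divisible by 4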

  15. Geometry of Conservation Laws for a Class of Parabolic Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Clelland, Jeanne Nielsen

    1996-08-01

    I consider the problem of computing the space of conservation laws for a second-order, parabolic partial differential equation for one function of three independent variables. The PDE is formulated as an exterior differential system ℐ on a 12-manifold M, and its conservation laws are identified with the vector space of closed 3-forms in the infinite prolongation of ℐ modulo the so-called "trivial" conservation laws. I use the tools of exterior differential systems and Cartan's method of equivalence to study the structure of the space of conservation laws. My main result is: Theorem. Any conservation law for a second-order, parabolic PDE for one function of three independent variables can be represented by a closed 3-form in the differential ideal ℐ on the original 12-manifold M. I show that if a nontrivial conservation law exists, then ℐ has a deprolongation to an equivalent system 𝒥 on a 7-manifold N, and any conservation law for ℐ can be expressed as a closed 3-form on N which lies in 𝒥. Furthermore, any such system in the real analytic category is locally equivalent to a system generated by a (parabolic) equation of the form A(u_{xx}u_{yy} − u_{xy}^2) + B_1 u_{xx} + 2B_2 u_{xy} + B_3 u_{yy} + C = 0, where A, B_i, C are functions of x, y, t, u, u_x, u_y, u_t. I compute the space of conservation laws for several examples, and I begin the process of analyzing the general case using Cartan's method of equivalence. I show that the non-linearizable equation u_t = (1/2) e^{−u}(u_{xx} + u_{yy}) has an infinite-dimensional space of conservation laws. This stands in contrast to the two-variable case, for which Bryant and Griffiths showed that any equation whose space of conservation laws has dimension 4 or more is locally equivalent to a linear equation, i.e., is linearizable.
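
    As a quick symbolic sanity check of the quoted normal form (our annotation, with symbols as in the abstract), the non-linearizable example is the instance A = 0, B_1 = B_3 = e^{−u}/2, B_2 = 0, C = −u_t:

        import sympy as sp

        x, y, t = sp.symbols('x y t')
        u = sp.Function('u')(x, y, t)
        uxx, uxy, uyy, ut = u.diff(x, 2), u.diff(x, y), u.diff(y, 2), u.diff(t)

        # Quoted parabolic form: A(u_xx u_yy - u_xy^2) + B1 u_xx + 2 B2 u_xy + B3 u_yy + C = 0
        A, B1, B2, B3, C = 0, sp.exp(-u) / 2, 0, sp.exp(-u) / 2, -ut
        form = A * (uxx * uyy - uxy**2) + B1 * uxx + 2 * B2 * uxy + B3 * uyy + C

        example = sp.exp(-u) / 2 * (uxx + uyy) - ut   # u_t = (1/2) e^{-u} (u_xx + u_yy)
        print(sp.simplify(form - example) == 0)       # True: same equation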

  16. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo-two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
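
    The heart of the scheme is plain modulo-two (XOR) addition against a per-transmission initialization vector. A minimal sketch (the direct use of the IV as the key stream and all names are illustrative assumptions, not the proposed FDDI modification itself):

        import os

        def xor_bytes(data: bytes, key: bytes) -> bytes:
            # Bitwise modulo-two addition of message and key stream.
            return bytes(d ^ k for d, k in zip(data, key))

        message = b"confidential frame payload"
        iv = os.urandom(len(message))           # fresh IV per confidential transmission

        ciphertext = xor_bytes(message, iv)     # what travels on the ring
        recovered = xor_bytes(ciphertext, iv)   # receiver undoes the modulo-two addition
        assert recovered == message
        print(ciphertext.hex())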

  17. Measurement of the CKM angle γ from a combination of B±→Dh± analyses

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Abellan Beteta, C.; Adeva, B.; Adinolfi, M.; Adrover, C.; Affolder, A.; Ajaltouni, Z.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; Anderlini, L.; Anderson, J.; Andreassen, R.; Appleby, R. B.; Aquines Gutierrez, O.; Archilli, F.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Bachmann, S.; Back, J. J.; Baesso, C.; Balagura, V.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Bauer, Th.; Bay, A.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Belogurov, S.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bettler, M.-O.; van Beuzekom, M.; Bien, A.; Bifani, S.; Bird, T.; Bizzeti, A.; Bjørnstad, P. M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borgia, A.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Brambach, T.; van den Brand, J.; Bressieux, J.; Brett, D.; Britsch, M.; Britton, T.; Brook, N. H.; Brown, H.; Burducea, I.; Bursche, A.; Busetto, G.; Buytaert, J.; Cadeddu, S.; Callot, O.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carranza-Mejia, H.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Castillo Garcia, L.; Cattaneo, M.; Cauet, Ch.; Charles, M.; Charpentier, Ph.; Chen, P.; Chiapolini, N.; Chrzaszcz, M.; Ciba, K.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coca, C.; Coco, V.; Cogan, J.; Cogneras, E.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; David, P.; David, P. N. Y.; Davis, A.; De Bonis, I.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Silva, W.; De Simone, P.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Déléage, N.; Derkach, D.; Deschamps, O.; Dettori, F.; Di Canto, A.; Dijkstra, H.; Dogaru, M.; Donleavy, S.; Dordei, F.; Dosil Suárez, A.; Dossett, D.; Dovbnya, A.; Dupertuis, F.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; van Eijk, D.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Elsby, D.; Falabella, A.; Färber, C.; Fardell, G.; Farinelli, C.; Farry, S.; Fave, V.; Ferguson, D.; Fernandez Albor, V.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fitzpatrick, C.; Fontana, M.; Fontanelli, F.; Forty, R.; Francisco, O.; Frank, M.; Frei, C.; Frosini, M.; Furcas, S.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gandelman, M.; Gandini, P.; Gao, Y.; Garofoli, J.; Garosi, P.; Garra Tico, J.; Garrido, L.; Gaspar, C.; Gauld, R.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gibson, V.; Gligorov, V. V.; Göbel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gordon, H.; Grabalosa Gándara, M.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Griffith, P.; Grünberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hampson, T.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hartmann, T.; He, J.; Heijne, V.; Hennessy, K.; Henrard, P.; Hernando Morata, J. 
A.; van Herwijnen, E.; Hicheur, A.; Hicks, E.; Hill, D.; Hoballah, M.; Hombach, C.; Hopchev, P.; Hulsbergen, W.; Hunt, P.; Huse, T.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Iakovenko, V.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jans, E.; Jaton, P.; Jawahery, A.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Kaballo, M.; Kandybei, S.; Karacson, M.; Karbach, T. M.; Kenyon, I. R.; Kerzel, U.; Ketel, T.; Keune, A.; Khanji, B.; Kochebina, O.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Korolev, M.; Kozlinskiy, A.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Kucharczyk, M.; Kudryavtsev, V.; Kvaratskheliya, T.; La Thi, V. N.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lambert, R. W.; Lanciotti, E.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J.-P.; Lefèvre, R.; Leflat, A.; Lefrançois, J.; Leo, S.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, Y.; Li Gioi, L.; Liles, M.; Lindner, R.; Linn, C.; Liu, B.; Liu, G.; Lohn, S.; Longstaff, I.; Lopes, J. H.; Lopez Asamar, E.; Lopez-March, N.; Lu, H.; Lucchesi, D.; Luisier, J.; Luo, H.; Machefert, F.; Machikhiliyan, I. V.; Maciuc, F.; Maev, O.; Malde, S.; Manca, G.; Mancinelli, G.; Marconi, U.; Märki, R.; Marks, J.; Martellotti, G.; Martens, A.; Martín Sánchez, A.; Martinelli, M.; Martinez Santos, D.; Martins Tostes, D.; Massafferri, A.; Matev, R.; Mathe, Z.; Matteuzzi, C.; Maurice, E.; Mazurov, A.; Mc Skelly, B.; McCarthy, J.; McNab, A.; McNulty, R.; Meadows, B.; Meier, F.; Meissner, M.; Merk, M.; Milanes, D. A.; Minard, M.-N.; Molina Rodriguez, J.; Monteil, S.; Moran, D.; Morawski, P.; Morello, M. J.; Mountain, R.; Mous, I.; Muheim, F.; Müller, K.; Muresan, R.; Muryn, B.; Muster, B.; Naik, P.; Nakada, T.; Nandakumar, R.; Nasteva, I.; Needham, M.; Neufeld, N.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Nicol, M.; Niess, V.; Niet, R.; Nikitin, N.; Nikodem, T.; Nomerotski, A.; Novoselov, A.; Oblakowska-Mucha, A.; Obraztsov, V.; Oggero, S.; Ogilvy, S.; Okhrimenko, O.; Oldeman, R.; Orlandea, M.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pal, B. K.; Palano, A.; Palutan, M.; Panman, J.; Papanestis, A.; Pappagallo, M.; Parkes, C.; Parkinson, C. J.; Passaleva, G.; Patel, G. D.; Patel, M.; Patrick, G. N.; Patrignani, C.; Pavel-Nicorescu, C.; Pazos Alvarez, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perego, D. L.; Perez Trigo, E.; Pérez-Calero Yzquierdo, A.; Perret, P.; Perrin-Terrin, M.; Pessina, G.; Petridis, K.; Petrolini, A.; Phan, A.; Picatoste Olloqui, E.; Pietrzyk, B.; Pilař, T.; Pinci, D.; Playfer, S.; Plo Casasus, M.; Polci, F.; Polok, G.; Poluektov, A.; Polycarpo, E.; Popov, A.; Popov, D.; Popovici, B.; Potterat, C.; Powell, A.; Prisciandaro, J.; Pritchard, A.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Punzi, G.; Qian, W.; Rademacker, J. H.; Rakotomiaramanana, B.; Rama, M.; Rangel, M. S.; Raniuk, I.; Rauschmayr, N.; Raven, G.; Redford, S.; Reid, M. M.; dos Reis, A. C.; Ricciardi, S.; Richards, A.; Rinnert, K.; Rives Molina, V.; Roa Romero, D. A.; Robbe, P.; Rodrigues, E.; Rodriguez Perez, P.; Roiser, S.; Romanovsky, V.; Romero Vidal, A.; Rouvinet, J.; Ruf, T.; Ruffini, F.; Ruiz, H.; Ruiz Valls, P.; Sabatino, G.; Saborido Silva, J. 
J.; Sagidova, N.; Sail, P.; Saitta, B.; Salustino Guimaraes, V.; Salzmann, C.; Sanmartin Sedes, B.; Sannino, M.; Santacesaria, R.; Santamarina Rios, C.; Santovetti, E.; Sapunov, M.; Sarti, A.; Satriano, C.; Satta, A.; Savrie, M.; Savrina, D.; Schaack, P.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmidt, B.; Schneider, O.; Schopper, A.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Seco, M.; Semennikov, A.; Senderowska, K.; Sepp, I.; Serra, N.; Serrano, J.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shatalov, P.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, O.; Shevchenko, V.; Shires, A.; Silva Coutinho, R.; Skwarnicki, T.; Smith, N. A.; Smith, E.; Smith, M.; Sokoloff, M. D.; Soler, F. J. P.; Soomro, F.; Souza, D.; Souza De Paula, B.; Spaan, B.; Sparkes, A.; Spradlin, P.; Stagni, F.; Stahl, S.; Steinkamp, O.; Stoica, S.; Stone, S.; Storaci, B.; Straticiuc, M.; Straumann, U.; Subbiah, V. K.; Sun, L.; Swientek, S.; Syropoulos, V.; Szczekowski, M.; Szczypka, P.; Szumlak, T.; T'Jampens, S.; Teklishyn, M.; Teodorescu, E.; Teubert, F.; Thomas, C.; Thomas, E.; van Tilburg, J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tonelli, D.; Topp-Joergensen, S.; Torr, N.; Tournefier, E.; Tourneur, S.; Tran, M. T.; Tresch, M.; Tsaregorodtsev, A.; Tsopelas, P.; Tuning, N.; Ubeda Garcia, M.; Ukleja, A.; Urner, D.; Uwer, U.; Vagnoni, V.; Valenti, G.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vecchi, S.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Vesterinen, M.; Viaud, B.; Vieira, D.; Vilasis-Cardona, X.; Vollhardt, A.; Volyanskyy, D.; Voong, D.; Vorobyev, A.; Vorobyev, V.; Voß, C.; Voss, H.; Waldi, R.; Wallace, R.; Wandernoth, S.; Wang, J.; Ward, D. R.; Watson, N. K.; Webber, A. D.; Websdale, D.; Whitehead, M.; Wicht, J.; Wiechczynski, J.; Wiedner, D.; Wiggers, L.; Wilkinson, G.; Williams, M. P.; Williams, M.; Wilson, F. F.; Wishahi, J.; Witek, M.; Wotton, S. A.; Wright, S.; Wu, S.; Wyllie, K.; Xie, Y.; Xing, Z.; Yang, Z.; Young, R.; Yuan, X.; Yushchenko, O.; Zangoli, M.; Zavertyaev, M.; Zhang, F.; Zhang, L.; Zhang, W. C.; Zhang, Y.; Zhelezov, A.; Zhokhov, A.; Zhong, L.; Zvyagin, A.

    2013-10-01

    A combination of three LHCb measurements of the CKM angle γ is presented. The decays B±→DK± and B±→Dπ± are used, where D denotes an admixture of D0 and D̄0 mesons, decaying into K+K-, π+π-, K±π∓, K±π∓π±π∓, KS0π+π-, or KS0K+K- final states. All measurements use a dataset corresponding to 1.0 fb-1 of integrated luminosity. Combining results from B±→DK± decays alone, a best-fit value of γ = 72.0° is found, and the confidence intervals γ ∈ [56.4, 86.7]° at 68% CL and γ ∈ [42.6, 99.6]° at 95% CL are set. The best-fit value of γ found from a combination of results from B±→Dπ± decays alone is γ = 18.9°, and the confidence intervals γ ∈ [7.4, 99.2]° ∪ [167.9, 176.4]° at 68% CL are set, without constraint at 95% CL. The combination of results from B±→DK± and B±→Dπ± decays gives a best-fit value of γ = 72.6°, and the confidence intervals γ ∈ [55.4, 82.3]° at 68% CL and γ ∈ [40.2, 92.7]° at 95% CL are set. All values are expressed modulo 180°, and are obtained taking into account the effect of D0–D̄0 mixing.
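
    Since every quoted angle is defined only modulo 180°, reducing a value to a standard interval is a one-line operation; a trivial illustrative helper (ours, not from the analysis):

        def mod180(gamma_deg: float) -> float:
            """Reduce an angle in degrees to the interval [0, 180)."""
            return gamma_deg % 180.0

        # The two solutions of the 180-degree ambiguity reduce to the same value
        # (up to float rounding):
        print(mod180(72.6), mod180(72.6 + 180.0))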

  18. Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles

    NASA Astrophysics Data System (ADS)

    Anastopoulos, C.; Hu, B. L.

    2018-02-01

    We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.

  19. Metal Abundances of KISS Galaxies. VI. New Metallicity Relations for the KISS Sample of Star-forming Galaxies

    NASA Astrophysics Data System (ADS)

    Hirschauer, Alec S.; Salzer, John J.; Janowiecki, Steven; Wegner, Gary A.

    2018-02-01

    We present updated metallicity relations for the spectral database of star-forming galaxies (SFGs) found in the KPNO International Spectroscopic Survey (KISS). New spectral observations of emission-line galaxies obtained from a variety of telescope facilities provide oxygen abundance information. A nearly fourfold increase in the number of KISS objects with robust metallicities relative to our previous analysis provides for an empirical abundance calibration to compute self-consistent metallicity estimates for all SFGs in the sample with adequate spectral data. In addition, a sophisticated spectral energy distribution fitting routine has provided robust calculations of stellar mass. With these new and/or improved galaxy characteristics, we have developed luminosity–metallicity (L–Z) relations, mass–metallicity (M *–Z) relations, and the so-called fundamental metallicity relation (FMR) for over 1450 galaxies from the KISS sample. This KISS M *–Z relation is presented for the first time and demonstrates markedly lower scatter than the KISS L–Z relation. We find that our relations agree reasonably well with previous publications, modulo modest offsets due to differences in the strong emission line metallicity calibrations used. We illustrate an important bias present in previous L–Z and M *–Z studies involving direct-method (T e ) abundances that may result in systematically lower slopes in these relations. Our KISS FMR shows consistency with those found in the literature, albeit with a larger scatter. This is likely a consequence of the KISS sample being biased toward galaxies with high levels of activity.

  20. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15, with even-weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.
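
    The divisibility property quoted above (code word weights congruent to zero modulo four) is easy to verify mechanically on a small doubly-even self-dual code. The sketch below uses the extended Hamming [8,4,4] code as a stand-in, not the (48,24) or (72,36) codes of the paper:

    ```python
    import itertools

    # Generator matrix of the extended Hamming [8,4,4] code, a small
    # self-dual, doubly-even binary code (all weights divisible by 4).
    G = [
        [1,0,0,0,0,1,1,1],
        [0,1,0,0,1,0,1,1],
        [0,0,1,0,1,1,0,1],
        [0,0,0,1,1,1,1,0],
    ]

    def codeword(msg):
        # Encode a 4-bit message: GF(2) linear combination of rows of G.
        word = [0] * 8
        for bit, row in zip(msg, G):
            if bit:
                word = [w ^ r for w, r in zip(word, row)]
        return word

    for msg in itertools.product([0, 1], repeat=4):
        w = sum(codeword(msg))
        assert w % 4 == 0, (msg, w)  # every weight is congruent to 0 modulo 4
    print("all 16 codeword weights are multiples of 4")
    ```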

  1. X-Ray and UV Orbital Phase Dependence in LMC X-3

    NASA Technical Reports Server (NTRS)

    Dolan, Joseph F.; Boyd, P. T.; Smale, A. P.

    2001-01-01

    The black-hole binary LMC X-3 is known to be variable on time scales of days to years. We investigated X-ray and ultraviolet variability in the system as a function of the 1.7 d binary orbit using a 6.4 day observation with the Rossi X-ray Timing Explorer (RXTE) in 1998 December. An abrupt 14% flux decrease lasting nearly an entire orbit was followed by a return to previous flux levels. This behavior occurred twice at nearly the same binary phase, but is not present in consecutive orbits. When the X-ray flux is at lower intensity, a periodic amplitude modulation of 7% is evident in data folded modulo the orbital period. The higher intensity data show weaker correlation with phase. This is the first report of X-ray variability at the orbital period of LMC X-3. Archival RXTE observations of LMC X-3 during a high flux state in 1996 December show similar phase dependence. An ultraviolet light curve obtained with the High Speed Photometer (HSP) on the Hubble Space Telescope (HST) shows a phase-dependent variability consistent with that observed in the visible, ascribed to the ellipsoidal variation of the visible star. The X-ray spectrum of LMC X-3 is acceptably represented by a phenomenological disk black-body plus a power law. Changes in the spectrum of LMC X-3 during our observations are compatible with earlier observations during which variations in the 2-10 keV flux are closely correlated with the disk geometry spectral model parameter.
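
    "Folding modulo the orbital period" is the standard phase-folding operation on a light curve. A minimal numpy sketch with synthetic data (the 1.7 d period and 7% modulation mirror the numbers above; everything else is invented):

    ```python
    import numpy as np

    def fold_light_curve(t, flux, period, t0=0.0, nbins=20):
        """Fold a time series modulo `period` and average the flux in phase bins."""
        phase = ((t - t0) / period) % 1.0          # orbital phase in [0, 1)
        bins = np.linspace(0.0, 1.0, nbins + 1)
        idx = np.digitize(phase, bins) - 1
        folded = np.array([flux[idx == k].mean() for k in range(nbins)])
        return bins[:-1], folded

    # Toy data: a sinusoidal 7% modulation on a 1.7 d period, as in LMC X-3.
    t = np.linspace(0.0, 6.4, 2000)                # days, like the 6.4 d observation
    flux = 1.0 + 0.07 * np.sin(2 * np.pi * t / 1.7) + 0.01 * np.random.randn(t.size)
    phase, profile = fold_light_curve(t, flux, period=1.7)
    ```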

  2. Radiation damping and reciprocity in nuclear magnetic resonance: the replacement of the filling factor.

    PubMed

    Tropp, James; Van Criekinge, Mark

    2010-09-01

    The basic equation describing radiation damping in nuclear magnetic resonance (NMR) is rewritten by means of the reciprocity principle, to remove the dependence of the damping constant upon the filling factor - a parameter which is neither uniquely defined nor easily measured. The new equation uses instead the transceive efficiency, i.e. the peak amplitude of the radiofrequency B field in laboratory coordinates, divided by the square root of the resistance of the detection coil, for which a simple and direct means of measurement exists. We use the efficiency to define the intrinsic damping constant, i.e. that which obtains when both probe and preamplifier are perfectly matched to the system impedance. For imperfect matching of the preamp, it is shown that the damping constant varies with electrical distance to the probe, and equations are given and simulations performed to predict the distance dependence, which (for lossless lines) is periodic modulo a half wavelength. Experimental measurements of the radiation-damped free induction NMR signal of protons in neat water are performed at a static B field strength of 14.1 T, and an intrinsic damping constant is measured using the variable line method. For a sample of 5 mm diameter, in an inverse detection probe, we measure an intrinsic damping constant of 204 s⁻¹, corresponding to a damping linewidth of 65 Hz for small tip angles. The predicted intrinsic linewidth, based upon three separate measurements of the efficiency, is 52.3 Hz, or 80% of the measured value. (c) 2010 Elsevier Inc. All rights reserved.

  3. Beta Function Quintessence Cosmological Parameters and Fundamental Constants I: Power and Inverse Power Law Dark Energy Potentials

    NASA Astrophysics Data System (ADS)

    Thompson, Rodger I.

    2018-04-01

    This investigation explores using the beta function formalism to calculate analytic solutions for the observable parameters in rolling scalar field cosmologies. The beta function in this case is the derivative of the scalar φ with respect to the natural log of the scale factor a, β(φ) = dφ/d ln(a). Once the beta function is specified, modulo a boundary condition, the evolution of the scalar φ as a function of the scale factor is completely determined. A rolling scalar field cosmology is defined by its action, which can contain a range of physically motivated dark energy potentials. The beta function is chosen so that the associated "beta potential" is an accurate, but not exact, representation of the appropriate dark energy model potential. The basic concept is that the action with the beta potential is so similar to the action with the model potential that solutions using the beta action are accurate representations of solutions using the model action. The beta function provides an extra equation to calculate analytic functions of the cosmology's parameters, as functions of the scale factor, that are not calculable using only the model action. As an example, this investigation uses a quintessence cosmology to demonstrate the method for power and inverse power law dark energy potentials. An interesting result of the investigation is that the Hubble parameter H is almost completely insensitive to the power of the potentials and that ΛCDM is part of the family of quintessence cosmology power law potentials with a power of zero.
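
    The central mechanical step, integrating β(φ) = dφ/d ln(a) from a boundary condition, can be sketched in a few lines. The beta function below is illustrative only and is not taken from the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative beta function only (not the paper's choice): beta(phi) = -k/phi,
    # a generic form associated with inverse power law quintessence potentials.
    k = 0.1

    def dphi_dlna(lna, phi):
        return -k / phi            # beta(phi) = d phi / d ln(a)

    # Boundary condition phi(a=1) = 1, integrated backwards to a = 0.01.
    sol = solve_ivp(dphi_dlna, t_span=(0.0, np.log(0.01)), y0=[1.0], dense_output=True)
    a = np.logspace(-2, 0, 50)
    phi_of_a = sol.sol(np.log(a))[0]   # scalar field as a function of scale factor
    ```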

  4. The most distant, luminous, dusty star-forming galaxies: redshifts from NOEMA and ALMA spectral scans

    NASA Astrophysics Data System (ADS)

    Fudamoto, Y.; Ivison, R. J.; Oteo, I.; Krips, M.; Zhang, Z.-Y.; Weiss, A.; Dannerbauer, H.; Omont, A.; Chapman, S. C.; Christensen, L.; Arumugam, V.; Bertoldi, F.; Bremer, M.; Clements, D. L.; Dunne, L.; Eales, S. A.; Greenslade, J.; Maddox, S.; Martinez-Navajas, P.; Michalowski, M.; Pérez-Fournon, I.; Riechers, D.; Simpson, J. M.; Stalder, B.; Valiante, E.; van der Werf, P.

    2017-12-01

    We present 1.3- and/or 3-mm continuum images and 3-mm spectral scans, obtained using Northern Extended Millimeter Array (NOEMA) and Atacama Large Millimeter Array (ALMA), of 21 distant, dusty, star-forming galaxies. Our sample is a subset of the galaxies selected by Ivison et al. on the basis of their extremely red far-infrared (far-IR) colours and low Herschel flux densities; most are thus expected to be unlensed, extraordinarily luminous starbursts at z ≳ 4, modulo the considerable cross-section to gravitational lensing implied by their redshift. We observed 17 of these galaxies with NOEMA and four with ALMA, scanning through the 3-mm atmospheric window. We have obtained secure redshifts for seven galaxies via detection of multiple CO lines, one of them a lensed system at z = 6.027 (two others are also found to be lensed); a single emission line was detected in another four galaxies, one of which has been shown elsewhere to lie at z = 4.002. Where we find no spectroscopic redshifts, the galaxies are generally less luminous by 0.3-0.4 dex, which goes some way to explaining our failure to detect line emission. We show that this sample contains the most luminous known star-forming galaxies. Due to their extreme star-formation activity, these galaxies will consume their molecular gas in ≲ 100 Myr, despite their high molecular gas masses, and are therefore plausible progenitors of the massive, 'red-and-dead' elliptical galaxies at z ≈ 3.

  5. Fairy-Tale Physics Farewell to Reality Bankrupting Physics: Baggott-Unzicker-Jones Critiques Shame Physics' Shameless Media-Hype P.R. Spin-Doctoring Touting Sci-Fi Veracity-Abandonment ``Show-Biz'' Spectacle: Caveat Emptor!!!

    NASA Astrophysics Data System (ADS)

    Siegel, Edward

    2014-03-01

    Baggott[Farewell to Reality: How Fairy-Tale Physics Betrayed Search For Scientific Truth]-Unzicker [Bankrupting Physics: How Top Scientists Are Gambling Away Credibility] shame physics shameless rock-star media-hype P.R. spin-doctoring veracity-abandoning touting sci-fi show-biz aided by online proliferation of uncritical pop-sci science-writers verbal diarrhea, all spectacle vs little truth, lacking Kant-Popper skepticism/ falsification, lemming-like stampedes to truth abandonment, qualified by vague adverbs: might, could, should, may,...vs factual is! Physics, motivated by financial greed, swept up in its very own hype, touts whatever next big thing/cutting-edge bombast ad infinitum/ad nauseum, turning it into mere trendy carney sideshow, full of fury(FOF) but signifying absolutely nothing! Witness: GIGO claims string-theory holographic-universe causes cuprates optical conductivity; failed Anderson RVB cuprates theory vs. Keimer discovery all cuprates ``paramagnons'' bosons aka Overhauser SDWs; Overbye NYT holographic-universe jargonial-obfuscation comments including one from APS journals editor-in-chief re. its unintelligibility, FOF but signifying absolutely nothing INTELLIGIBLE!; Bak/BNL SOC tad late rediscovery of F =ma mere renaming of Siegel acoustic-emission!; 2007 physics Nobel-prize Fert-Gruenberg rediscovery of Siegel[JMMM 7,312(78); https://www.flickr.com/search/?q = GIANT-MAGNETORESISTANCE] GMR. Each trendy latest big thing modulo lack of prior attribution aka out and out bombastic chicanery! Siegel caveat emptor ``Buzzwordism, Bandwagonism, Sloganeering for Fun Profit Survival Ego'' sociological-dysfunctionality thrives!

  7. Fast-response IR spatial light modulators with a polymer network liquid crystal

    NASA Astrophysics Data System (ADS)

    Peng, Fenglin; Chen, Haiwei; Tripathi, Suvagata; Twieg, Robert J.; Wu, Shin-Tson

    2015-03-01

    Liquid crystals (LC) have widespread applications for amplitude modulation (e.g. flat panel displays) and phase modulation (e.g. beam steering). For phase modulation, a 2π phase change is required. To extend electro-optic applications into the infrared region (MWIR and LWIR), several key technical challenges have to be overcome: (1) low absorption loss, (2) high birefringence, (3) low operation voltage, and (4) fast response time. After three decades of extensive development, an increasing number of IR devices adopting LC technology have been demonstrated, such as liquid crystal waveguides, laser beam steering at 1.55 μm and 10.6 μm, spatial light modulators in the MWIR (3-5 μm) band, and dynamic scene projectors for infrared seekers in the LWIR (8-12 μm) band. However, several fundamental molecular vibration bands and overtones exist in the MWIR and LWIR regions, which contribute to a high absorption coefficient and hinder widespread application. Therefore, the inherent absorption loss becomes a major concern for IR devices. To suppress IR absorption, several approaches have been investigated: (1) employing a thin cell gap by choosing a high-birefringence liquid crystal mixture; (2) shifting the absorption bands outside the spectral region of interest by deuteration, fluorination and chlorination; (3) reducing the overlapping vibration bands by using shorter alkyl chain compounds. In this paper, we report some chlorinated LC compounds and mixtures with a low absorption loss in the near infrared and MWIR regions. To achieve fast response time, we have demonstrated a polymer network liquid crystal with 2π phase change at MWIR and response time less than 5 ms.
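
    The 2π requirement fixes the minimum cell gap through the standard retardation relation δ = 2πdΔn/λ, which is one reason high birefringence matters. A toy calculation with assumed values:

    ```python
    # Required LC cell gap for full 2*pi phase modulation: the maximum phase
    # retardation of a homogeneous cell is delta = 2*pi*d*dn/lam, so
    # delta >= 2*pi requires d >= lam/dn. The numbers below are illustrative.
    wavelength_um = 4.0        # MWIR wavelength (assumed)
    birefringence = 0.4        # high-birefringence LC mixture (assumed)
    min_gap_um = wavelength_um / birefringence
    print(f"minimum cell gap: {min_gap_um:.1f} um")   # 10.0 um
    ```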

  8. Beta function quintessence cosmological parameters and fundamental constants - I. Power and inverse power law dark energy potentials

    NASA Astrophysics Data System (ADS)

    Thompson, Rodger I.

    2018-07-01

    This investigation explores using the beta function formalism to calculate analytic solutions for the observable parameters in rolling scalar field cosmologies. The beta function in this case is the derivative of the scalar φ with respect to the natural log of the scale factor a, β(φ) = dφ/d ln(a). Once the beta function is specified, modulo a boundary condition, the evolution of the scalar φ as a function of the scale factor is completely determined. A rolling scalar field cosmology is defined by its action, which can contain a range of physically motivated dark energy potentials. The beta function is chosen so that the associated `beta potential' is an accurate, but not exact, representation of the appropriate dark energy model potential. The basic concept is that the action with the beta potential is so similar to the action with the model potential that solutions using the beta action are accurate representations of solutions using the model action. The beta function provides an extra equation to calculate analytic functions of the cosmology's parameters, as functions of the scale factor, that are not calculable using only the model action. As an example, this investigation uses a quintessence cosmology to demonstrate the method for power and inverse power law dark energy potentials. An interesting result of the investigation is that the Hubble parameter H is almost completely insensitive to the power of the potentials and that Λ cold dark matter is part of the family of quintessence cosmology power-law potentials with a power of zero.

  9. Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8

    NASA Astrophysics Data System (ADS)

    Sison, Virgilio; Remillion, Monica

    2017-10-01

    Let F2 be the binary field and ℤ2^r the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F2^4 and F2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u² = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ2^r + uℤ2^r, where u² = 0, for any positive integer r, using the generalized Gray map from ℤ2^r to F2^(2^(r-1)).
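
    The prototype of these weight-preserving maps is the classical Gray map from ℤ4 to F2², which carries the Lee weight to the Hamming weight; the ℤ4 + uℤ4 constructions compose maps of this kind. A small sketch verifying the r = 2 case:

    ```python
    # Gray map from Z4 to F2^2 and the Lee weight on Z4: the map is an isometry
    # (Lee weight on Z4 equals Hamming weight of the binary image).
    GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
    LEE  = {0: 0, 1: 1, 2: 2, 3: 1}

    def gray_image(word):
        """Binary image of a Z4 word under the component-wise Gray map."""
        return tuple(bit for x in word for bit in GRAY[x % 4])

    for x in range(4):
        assert LEE[x] == sum(GRAY[x])   # weight preservation symbol by symbol

    word = (1, 2, 3, 0)
    assert sum(LEE[x] for x in word) == sum(gray_image(word))  # isometry on words
    print(gray_image(word))  # (0, 1, 1, 1, 1, 0, 0, 0)
    ```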

  10. From non-preemptive to preemptive scheduling using synchronization synthesis.

    PubMed

    Černý, Pavol; Clarke, Edmund M; Henzinger, Thomas A; Radhakrishna, Arjun; Ryzhyk, Leonid; Samanta, Roopsha; Tarrach, Thorsten

    2017-01-01

    We present a computer-aided programming approach to concurrency. The approach allows programmers to program assuming a friendly, non-preemptive scheduler, and our synthesis procedure inserts synchronization to ensure that the final program works even with a preemptive scheduler. The correctness specification is implicit, inferred from the non-preemptive behavior. Let us consider sequences of calls that the program makes to an external interface. The specification requires that any such sequence produced under a preemptive scheduler should be included in the set of sequences produced under a non-preemptive scheduler. We guarantee that our synthesis does not introduce deadlocks and that the synchronization inserted is optimal w.r.t. a given objective function. The solution is based on a finitary abstraction, an algorithm for bounded language inclusion modulo an independence relation, and generation of a set of global constraints over synchronization placements. Each model of the global constraints set corresponds to a correctness-ensuring synchronization placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronization solution. We apply the approach to device-driver programming, where the driver threads call the software interface of the device and the API provided by the operating system. Our experiments demonstrate that our synthesis method is precise and efficient. The implicit specification helped us find one concurrency bug previously missed when model-checking using an explicit, user-provided specification. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronization placements are produced for our experiments, favoring a minimal number of synchronization operations or maximum concurrency, respectively.

  11. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  12. Vacuum stress energy density and its gravitational implications

    NASA Astrophysics Data System (ADS)

    Estrada, Ricardo; Fulling, Stephen A.; Kaplan, Lev; Kirsten, Klaus; Liu, Zhonghai; Milton, Kimball A.

    2008-04-01

    In nongravitational physics the local density of energy is often regarded as merely a bookkeeping device; only total energy has an experimental meaning, and even that only modulo an additive constant. But in general relativity the local stress-energy tensor is the source term in Einstein's equation. In closed universes, and those with Kaluza-Klein dimensions, theoretical consistency demands that quantum vacuum energy should exist and have gravitational effects, although there are no boundary materials giving rise to that energy by van der Waals interactions. In the lab there are boundaries, and in general the energy density has a nonintegrable singularity as a boundary is approached (for idealized boundary conditions). As pointed out long ago by Candelas and Deutsch, in this situation there is doubt about the viability of the semiclassical Einstein equation. Our goal is to show that the divergences in the linearized Einstein equation can be renormalized to yield a plausible approximation to the finite theory that presumably exists for realistic boundary conditions. For a scalar field with Dirichlet or Neumann boundary conditions inside a rectangular parallelepiped, we have calculated by the method of images all components of the stress tensor, for all values of the conformal coupling parameter and an exponential ultraviolet cutoff parameter. The qualitative features of contributions from various classes of closed classical paths are noted. Then the Estrada-Kanwal distributional theory of asymptotics, particularly the moment expansion, is used to show that the linearized Einstein equation with the stress-energy near a plane boundary as source converges to a consistent theory when the cutoff is removed. This paper reports work in progress on a project combining researchers in Texas, Louisiana and Oklahoma. It is supported by NSF Grants PHY-0554849 and PHY-0554926.

  13. The fourfold way of the genetic code.

    PubMed

    Jiménez-Montaño, Miguel Angel

    2009-11-01

    We describe a compact representation of the genetic code that factorizes the table in quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column-vector of bases and the corresponding row-vector V(T)=(C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the K×K group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision-tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
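
    The Klein-4 group structure mentioned above can be made concrete: the identity together with the three involutions that exchange bases by chemical class closes under composition. A sketch (the labels we give the three involutions are ours):

    ```python
    from itertools import product

    # The three non-identity base involutions; together with the identity they
    # form a Klein four-group acting on the RNA bases.
    I  = {'A': 'A', 'C': 'C', 'G': 'G', 'U': 'U'}    # identity
    CP = {'A': 'U', 'U': 'A', 'C': 'G', 'G': 'C'}    # complement (transversion)
    TR = {'A': 'G', 'G': 'A', 'C': 'U', 'U': 'C'}    # transition exchange
    KM = {'A': 'C', 'C': 'A', 'G': 'U', 'U': 'G'}    # remaining transversion

    group = [I, CP, TR, KM]

    def compose(f, g):
        return {b: f[g[b]] for b in 'ACGU'}

    # Closure and self-inverse checks: every product of two elements is again
    # in the group, and every element squared is the identity (Klein-4).
    for f, g in product(group, repeat=2):
        assert compose(f, g) in group
    for f in group:
        assert compose(f, f) == I
    print("Klein four-group verified on {A, C, G, U}")
    ```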

  14. Bulk-edge correspondence, spectral flow and Atiyah-Patodi-Singer theorem for the Z2-invariant in topological insulators

    NASA Astrophysics Data System (ADS)

    Yu, Yue; Wu, Yong-Shi; Xie, Xincheng

    2017-03-01

    We study the bulk-edge correspondence in topological insulators by taking the Fu-Kane spin pumping model as an example. We show that the Kane-Mele invariant in this model is the spectral flow, modulo 2, of a single-parameter family of 1 + 1-dimensional Dirac operators with a global boundary condition induced by the Kramers degeneracy of the system. This spectral flow is defined as an integer which counts the difference between the number of eigenvalues of the Dirac operator family that flow from negative to non-negative and the number of eigenvalues that flow from non-negative to negative. Since the bulk states of the insulator are completely gapped, and the ground state is assumed to have no degeneracy beyond the Kramers degeneracy, they do not contribute to the spectral flow; only the edge states contribute. The parity of the number of Kramers pairs of gapless edge states is exactly the parity of the spectral flow. This reveals the origin of the edge-bulk correspondence, i.e., why the edge states can be used to characterize the topological insulators. Furthermore, the spectral flow is related to the reduced η-invariant and thus counts both the discrete ground state degeneracy and the continuous gapless excitations, which distinguishes the topological insulator from the conventional band insulator even if the edge states open a gap due to a strong interaction between edge modes. We emphasize that these results remain valid even for a weakly disordered and/or weakly interacting system. A higher spectral flow to categorize higher-dimensional topological insulators is expected.
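
    For a finite-dimensional family with no eigenvalues entering or leaving the spectrum, the net spectral flow defined above reduces to counting non-negative eigenvalues at the endpoints. A toy sketch with an illustrative Hermitian family, not the Dirac operators of the paper:

    ```python
    import numpy as np

    def spectral_flow(ts, H):
        """Net spectral flow of the Hermitian family H(t): eigenvalues ending
        non-negative minus eigenvalues starting non-negative, which counts all
        zero crossings with sign for a finite-dimensional family."""
        n_start = np.sum(np.linalg.eigvalsh(H(ts[0])) >= 0)
        n_end = np.sum(np.linalg.eigvalsh(H(ts[-1])) >= 0)
        return int(n_end - n_start)

    # Illustrative family: one level flows up through zero, the rest stay gapped.
    family = lambda t: np.diag([t - 0.5, -2.0, 2.0])
    flow = spectral_flow(np.linspace(0.0, 1.0, 11), family)
    print(flow, flow % 2)   # 1, and 1 mod 2 would signal a Z2-nontrivial phase
    ```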

  15. Weak-Lensing Determination of the Mass in Galaxy Halos

    NASA Astrophysics Data System (ADS)

    Smith, D. R.; Bernstein, G. M.; Fischer, P.; Jarvis, M.

    2001-04-01

    We detect the weak gravitational lensing distortion of 450,000 background galaxies (20

  16. Long T2 suppression in native lung 3-D imaging using k-space reordered inversion recovery dual-echo ultrashort echo time MRI.

    PubMed

    Gai, Neville D; Malayeri, Ashkan A; Bluemke, David A

    2017-08-01

    Long T2 species can interfere with visualization of short T2 tissues. For example, visualization of lung parenchyma can be hindered by breathing artifacts, primarily from fat in the chest wall. The purpose of this work was to design and evaluate a scheme for long T2 species suppression in lung parenchyma imaging using 3-D inversion recovery double-echo ultrashort echo time imaging with a k-space reordering scheme for artifact suppression. A hyperbolic secant (HS) pulse was evaluated for different tissues (T1/T2). Bloch simulations were performed with the inversion pulse followed by segmented UTE acquisition. The point spread function (PSF) was simulated for a standard interleaved acquisition order and a modulo 2 forward-reverse acquisition order. Phantom and in vivo images (eight volunteers) were acquired with both acquisition orders. Contrast-to-noise ratio (CNR) was evaluated in in vivo images prior to and after introduction of the long T2 suppression scheme. The PSF as well as phantom and in vivo images demonstrated reduction in artifacts arising from k-space modulation after using the reordering scheme. CNR measured between lung and fat and between lung and muscle increased from -114 and -148.5 to +12.5 and +2.8 after use of the IR-DUTE sequence. A paired t test between the CNRs obtained from UTE and IR-DUTE showed significant positive change (p < 0.001 for lung-fat CNR and p = 0.03 for lung-muscle CNR). Full 3-D lung parenchyma imaging with improved positive contrast between lung and other long T2 tissue types can be achieved robustly in a clinically feasible time using IR-DUTE with image subtraction when segmented radial acquisition with k-space reordering is employed.

  17. Analysis of Interferometric Synthetic Aperture Radar Phase Data at Brady Hot Springs, Nevada, USA Using Prior Information

    NASA Astrophysics Data System (ADS)

    Reinisch, E. C.; Ali, S. T.; Cardiff, M. A.; Morency, C.; Kreemer, C.; Feigl, K. L.; Team, P.

    2016-12-01

    Time-dependent deformation has been observed at Brady Hot Springs using interferometric synthetic aperture radar (InSAR) [Ali et al. 2016, http://dx.doi.org/10.1016/j.geothermics.2016.01.008]. Our goal is to evaluate multiple competing hypotheses to explain the observed deformation at Brady. To do so requires statistical tests that account for uncertainty. Graph theory is useful for such an analysis of InSAR data [Reinisch, et al. 2016, http://dx.doi.org/10.1007/s00190-016-0934-5]. In particular, the normalized edge Laplacian matrix calculated from the edge-vertex incidence matrix of the graph of the pair-wise data set represents its correlation and leads to a full data covariance matrix in the weighted least squares problem. This formulation also leads to the covariance matrix of the epoch-wise measurements, representing their relative uncertainties. While the formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, the modulo-2π ambiguity of wrapped phase renders the problem non-linear. The conventional practice is to unwrap InSAR phase before modeling, which can introduce mistakes without increasing the corresponding measurement uncertainty. To address this issue, we are applying Bayesian inference. To build the likelihood, we use three different observables: (a) wrapped phase [e.g., Feigl and Thurber 2009, http://dx.doi.org/10.1111/j.1365-246X.2008.03881.x]; (b) range gradients, as defined by Ali and Feigl [2012, http://dx.doi.org/10.1029/2012GC004112]; and (c) unwrapped phase, i.e. range change in mm, which we validate using GPS data. We apply our method to InSAR data taken over Brady Hot Springs geothermal field in Nevada as part of a project entitled "Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology" (PoroTomo) [ http://geoscience.wisc.edu/feigl/porotomo].
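
    The modulo-2π ambiguity referred to here is just that only the wrapped value of the phase is observed. A minimal sketch of the wrapping operator and the resulting integer ambiguity (function names are ours):

    ```python
    import numpy as np

    def wrap(phi):
        """Wrap phase into [-pi, pi): the only quantity actually observed."""
        return (phi + np.pi) % (2.0 * np.pi) - np.pi

    # True range change corresponds to a smoothly growing phase...
    true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
    observed = wrap(true_phase)      # ...but the data are ambiguous modulo 2*pi

    # The integer ambiguity is what any unwrapping/inference scheme must resolve:
    assert np.allclose(wrap(true_phase - observed), 0.0)  # differ by multiples of 2*pi
    ```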

  18. Measurement of CP observables in B± → D_CP K± decays and constraints on the CKM angle γ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amo Sanchez, P. del; Lees, J. P.; Poireau, V.

    Using the entire sample of 467×10⁶ Υ(4S) → BB̄ decays collected with the BABAR detector at the PEP-II asymmetric-energy B factory at the SLAC National Accelerator Laboratory, we perform an analysis of B± → DK± decays, using decay modes in which the neutral D meson decays to either CP-eigenstates or non-CP-eigenstates. We measure the partial decay rate charge asymmetries for CP-even and CP-odd D final states to be A_CP+ = 0.25 ± 0.06 ± 0.02 and A_CP- = -0.09 ± 0.07 ± 0.02, respectively, where the first error is the statistical and the second is the systematic uncertainty. The parameter A_CP+ is different from zero with a significance of 3.6 standard deviations, constituting evidence for direct CP violation. We also measure the ratios of the charge-averaged B partial decay rates in CP and non-CP decays, R_CP+ = 1.18 ± 0.09 ± 0.05 and R_CP- = 1.07 ± 0.08 ± 0.04. We infer frequentist confidence intervals for the angle γ of the unitarity triangle, for the strong phase difference δ_B, and for the amplitude ratio r_B, which are related to the B⁻ → DK⁻ decay amplitude by r_B e^(i(δ_B-γ)) = A(B⁻ → D̄0K⁻)/A(B⁻ → D0K⁻). Including statistical and systematic uncertainties, we obtain 0.24

  19. F Ring Core Stability: Corotation Resonance Plus Antiresonance

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Marouf, Essam; French, Richard; Jacobson, Robert

    2014-01-01

    The decades-or-longer stability of the narrow F Ring core in a sea of orbital chaos appears to be due to an unusual combination of traditional corotation resonance and a novel kind of "antiresonance". At a series of specific locations in the F Ring region, apse precession between synodic encounters with Prometheus allows semimajor axis perturbations to promptly cancel before significant orbital period changes can occur. This cancellation fails for particles that encounter Prometheus when it is near its apoapse, especially during periods of antialignment of its apse with that of the F Ring. At these times, the strength of the semimajor axis perturbation is large (tens of km) and highly nonsinusoidal in encounter longitude, making it impossible to cancel promptly on a subsequent encounter and leading to chaotic orbital diffusion. Only particles that consistently encounter Prometheus away from its apoapse can use antiresonance to maintain stable orbits, implying that the true mean motion nF of the stable core must be defined by a corotational resonance of the form nF = nP - κP/m, where (nP, κP) are Prometheus' mean motion and epicycle frequency. To test this hypothesis we used the fact that Cassini RSS occultations only sporadically detect a "massive" F Ring core, composed of several-cm-and-larger particles. We regressed the inertial longitudes of 24 Cassini RSS (and VGR) detections and 43 nondetections to a common epoch, using a comb of candidate nP, and then folded them modulo the anticipated m-number of the corotational resonance (Prometheus m = 110 outer CER), to see if clustering appears. We find the "true F Ring core" is actually arranged in a series of short longitudinal arcs separated by nearly empty longitudes, orbiting at a well determined semimajor axis of 140222.4 km (from 2005-2012 at least). Small particles seen by imaging and stellar occultations spread quickly in azimuth and obscure this clumpy structure. Small chaotic variations in the mean motion and/or apse longitude of Prometheus quickly become manifest in the F Ring core, and we suggest that the core must adapt to these changes for the F Ring to maintain stability over timescales of decades and longer.
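
    The folding step is ordinary modular arithmetic: with an m-fold corotation pattern, each site spans 360/m degrees, and detection longitudes are reduced modulo that span. A toy sketch with synthetic longitudes (the m = 110 value is from the abstract; the data are invented):

    ```python
    import numpy as np

    def fold_longitudes(lon_deg, m=110):
        """Fold inertial longitudes modulo the m-fold corotation pattern (360/m deg)."""
        pattern_period = 360.0 / m          # one corotation site spans 360/m degrees
        return np.asarray(lon_deg) % pattern_period

    # Toy example: detections clustered near the same offset within each site.
    rng = np.random.default_rng(0)
    sites = rng.integers(0, 110, size=24) * (360.0 / 110.0)
    detections = sites + 1.0 + 0.3 * rng.standard_normal(24)   # ~1 deg into each site
    folded = fold_longitudes(detections)
    print(folded.round(2))   # clustering near ~1 deg reveals the m = 110 pattern
    ```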

  20. New developments for determination of uncertainty in phase evaluation

    NASA Astrophysics Data System (ADS)

    Liu, Sheng

    Phase evaluation exists mostly in, but is not limited to, interferometric applications that utilize coherent multidimensional signals to modulate the physical quantity of interest into a nonlinear form, represented by the phase modulo 2π radians. In order to estimate the underlying physical quantity, the wrapped phase has to be unwrapped by an evaluation procedure which is usually called phase unwrapping. The procedure of phase unwrapping will obviously face the challenge of inconsistent phase, which could bring errors into phase evaluation. The main objectives of this research include addressing the problem of inconsistent phase in phase unwrapping and applications in modern optical techniques. In this research, a new phase unwrapping algorithm is developed. The creative idea of doing phase unwrapping between regions has an advantage over conventional pixel-to-pixel unwrapping methods because the unwrapping result is more consistent by using a voting mechanism based on all 2π-discontinuity hints. Furthermore, a systematic sequence of regional unwrapping is constructed in order to achieve a globally consistent result. An implementation of the idea is illustrated in detail with step-by-step pseudocode. The performance of the algorithm is demonstrated on real-world applications. In order to solve a phase unwrapping problem which is caused by depth discontinuities in 3D shape measurement, a new absolute phase coding strategy is developed. The algorithm presented has two merits: it effectively extends the coding range and preserves the measurement sensitivity. The performance of the proposed absolute coding strategy is proved by results of 3D shape measurement for objects with surface discontinuities. As a powerful tool for real-world applications, a universal software package, Optical Measurement and Evaluation Software (OMES), is designed for the purposes of automatic measurement and quantitative evaluation in 3D shape measurement and laser interferometry. Combined with different sensors or setups, OMES has been successfully applied in industry, for example at GM Powertrain, Corning, and the Ford Optical Lab, and used for various applications such as shape measurement, deformation/displacement measurement, strain/stress analysis, non-destructive testing, vibration/modal analysis, and biomechanics analysis.
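
    The pixel-to-pixel baseline that the regional method improves upon can be stated in a few lines: add or subtract 2π whenever the jump between neighbours exceeds π. A consistent, noise-free 1-D sketch using numpy's built-in unwrapper:

    ```python
    import numpy as np

    # Baseline pixel-to-pixel unwrapping along one dimension. Region-based
    # methods such as the one described above aim to do better when the data
    # are noisy or inconsistent; this is only the noise-free baseline.
    true_phase = np.linspace(0.0, 4.0 * np.pi, 100)
    wrapped = np.angle(np.exp(1j * true_phase))      # wrap into (-pi, pi]
    unwrapped = np.unwrap(wrapped)                   # numpy's 1-D unwrapper
    assert np.allclose(unwrapped - unwrapped[0], true_phase - true_phase[0])
    ```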

  1. The Geometry of Quadratic Polynomial Differential Systems with a Finite and an Infinite Saddle-Node (C)

    NASA Astrophysics Data System (ADS)

    Artés, Joan C.; Rezende, Alex C.; Oliveira, Regilene D. S.

    Planar quadratic differential systems occur in many areas of applied mathematics. Although more than one thousand papers have been written on these systems, a complete understanding of this family is still missing. Classical problems, and in particular, Hilbert's 16th problem [Hilbert, 1900, 1902], are still open for this family. Our goal is to make a global study of the family QsnSN of all real quadratic polynomial differential systems which have a finite semi-elemental saddle-node and an infinite saddle-node formed by the collision of two infinite singular points. This family can be divided into three different subfamilies, all of them with the finite saddle-node at the origin of the plane, with the eigenvectors on the axes and with the eigenvector associated with the zero eigenvalue on the horizontal axis: (A) with the infinite saddle-node on the horizontal axis, (B) with the infinite saddle-node on the vertical axis, and (C) with the infinite saddle-node on the bisector of the first and third quadrants. These three subfamilies, modulo the action of the affine group and time homotheties, are three-dimensional, and we give the bifurcation diagram of their closure with respect to specific normal forms, in the three-dimensional real projective space. The subfamilies (A) and (B) have already been studied [Artés et al., 2013b] and in this paper we provide the complete study of the geometry of the last family (C). The bifurcation diagram for the subfamily (C) yields 371 topologically distinct phase portraits with and without limit cycles for systems in the closure of QsnSN(C) within the representatives of QsnSN(C) given by a chosen normal form. Algebraic invariants are used to construct the bifurcation set. The phase portraits are represented on the Poincaré disk. The bifurcation set of the closure of QsnSN(C) is not only algebraic due to the presence of some surfaces found numerically. All points in these surfaces correspond to either connections of separatrices, or the presence of a double limit cycle.

  2. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in Shortwave Radiative Transfer: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Buldyrev, S.; Davis, A.; Marshak, A.; Stanley, H. E.

    2001-12-01

    Two-stream radiation transport models, as used in all current GCM parameterization schemes, are mathematically equivalent to ``standard'' diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. The space/time spread (technically, the Green function) of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the radiative transfer equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the ``1-g'' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converge to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as ``anomalous'' diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α = 2⁻. Lévy transport theory has been successfully used in the statistical physics literature to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of Lévy/anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1⁺ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM radiation parameterization.
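
    The qualitative difference between the Gaussian and Lévy step distributions is easy to see numerically. A sketch drawing both kinds of steps (the α value is illustrative; this is not the radiative transfer calculation itself):

    ```python
    import numpy as np
    from scipy.stats import levy_stable, norm

    rng = np.random.default_rng(1)
    n_steps, alpha = 10_000, 1.2        # 0 < alpha < 2: heavy-tailed Levy regime

    gauss_steps = norm.rvs(size=n_steps, random_state=rng)
    levy_steps = levy_stable.rvs(alpha, beta=0.0, size=n_steps, random_state=rng)

    # Heavy tails: a few giant steps dominate the Levy walk, unlike the Gaussian.
    print("largest |step|, Gaussian:", np.abs(gauss_steps).max())
    print("largest |step|, Levy    :", np.abs(levy_steps).max())
    ```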

  3. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the minimum amount of time. Given a list of numbers, try to find one or more solutions in which, if each number is compressed by use of the modulo function by some value, then a unique value is generated.
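
    The second type of subalgorithm, compression by the modulo function, amounts to searching for a modulus under which all keys hash uniquely. A minimal brute-force sketch (not the synthesized code itself):

    ```python
    def find_collision_free_modulus(keys):
        """Smallest m such that k % m is unique for every key: the kind of
        modulo-based compression the synthesis subalgorithms search for."""
        n = len(set(keys))
        m = n
        while True:
            if len({k % m for k in keys}) == n:
                return m
            m += 1

    keys = [17, 133, 986, 41, 2025]
    m = find_collision_free_modulus(keys)
    table = {k % m: k for k in keys}          # constant-time membership lookups
    print(m, table)
    assert all(table.get(k % m) == k for k in keys)
    ```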

  4. A symmetry model for genetic coding via a wallpaper group composed of the traditional four bases and an imaginary base E: towards category theory-like systematization of molecular/genetic biology.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-05-07

    Previously, we suggested prototypal models that describe some clinical states based on group postulates. Here, we demonstrate a group/category theory-like model for molecular/genetic biology as an alternative application of our previous model. Specifically, we focus on deoxyribonucleic acid (DNA) base sequences. We construct a wallpaper pattern based on a five-letter cruciform motif with letters C, A, T, G, and E. Whereas the first four letters represent the standard DNA bases, the fifth is introduced for ease in formulating group operations that reproduce insertions and deletions of DNA base sequences. A basic group Z5 = {r, u, d, l, n} of operations is defined for the wallpaper pattern, with which a sequence of points can be generated corresponding to changes of a base in a DNA sequence by following the orbit of a point of the pattern under operations in group Z5. Other manipulations of DNA sequence can be treated using a vector-like notation 'Dj' corresponding to a DNA sequence but based on the five-letter base set; also, 'Dj's are expressed graphically. Insertions and deletions of a series of letters 'E' are admitted to assist in describing DNA recombination. Likewise, a vector-like notation Rj can be constructed for sequences of ribonucleic acid (RNA). The wallpaper group B = {Z5×∞, ●} (an ∞-fold Cartesian product of Z5) acts on Dj (or Rj) yielding changes to Dj (or Rj) denoted by 'Dj◦B(j→k) = Dk' (or 'Rj◦B(j→k) = Rk'). Based on the operations of this group, two types of groups, a modulo 5 linear group and a rotational group over the Gaussian plane, acting on the five bases, are linked as parts of the wallpaper group for broader applications. As a result, changes, insertions/deletions and DNA (RNA) recombination (partial/total conversion) are described. As an exploratory study, a notation for the canonical "central dogma" via a category theory-like way is presented for future developments. Despite the large incompleteness of our methodology, there is fertile ground to consider a symmetry model for genetic coding based on our specific wallpaper group. A more integrated formulation containing "central dogma" for future molecular/genetic biology remains to be explored.

  5. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm quantum subroutines occur with the structure F U F⁻¹, where F is a unitary transform and U performs a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F⁻¹ can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F⁻¹ have the same hardware errors, a symmetry boost in the fidelity of the combined F U F⁻¹ quantum operation results when compared to the case in which the errors in F and F⁻¹ are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU(N) errors (N = 2, 4, 8), that the boost factor scales like a Lorentzian in δ/σ, where σ and δ are the error strengths of the diagonal over- and under-rotation errors and the off-diagonal SU(N) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ, it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results that show that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and under-rotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.
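
    For context, the modular arithmetic these quantum adders serve: Shor's algorithm finds the multiplicative order r of a modulo N, from which factors follow by gcd. A classical sketch for a toy N (this illustrates the number theory, not the paper's fidelity analysis):

    ```python
    from math import gcd

    def order(a, N):
        """Multiplicative order of a modulo N (the quantity Shor's algorithm finds)."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    # Tiny classical walk-through of the factoring step for N = 15, a = 7:
    N, a = 15, 7
    r = order(a, N)                   # r = 4, and 7**2 = 49 = 4 (mod 15)
    p = gcd(a ** (r // 2) - 1, N)     # gcd(48, 15) = 3
    q = gcd(a ** (r // 2) + 1, N)     # gcd(50, 15) = 5
    print(r, p, q)                    # 4 3 5
    ```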

  6. F Ring Core Stability: Corotation Resonance Plus Antiresonance

    NASA Astrophysics Data System (ADS)

    Cuzzi, Jeffrey N.; Marouf, Essam; French, Richard; Jacobson, Robert

    2014-11-01

    The decades-or-longer stability of the narrow F Ring core in a sea of orbital chaos appears to be due to an unusual combination of traditional corotation resonance and a novel kind of “antiresonance”. At a series of specific locations in the F Ring region, apse precession between synodic encounters with Prometheus allows semimajor axis perturbations to promptly cancel before significant orbital period changes can occur (Cuzzi et al. 2014, Icarus 232, 157-175). This cancellation fails for particles that encounter Prometheus when it is near its apoapse, especially during periods of antialignment of its apse with that of the F Ring. At these times, the strength of the semimajor axis perturbation is large (tens of km) and highly nonsinusoidal in encounter longitude, making it impossible to cancel promptly on a subsequent encounter and leading to chaotic orbital diffusion. Only particles that consistently encounter Prometheus away from its apoapse can use antiresonance to maintain stable orbits, implying that the true mean motion nF of the stable core must be defined by a corotational resonance of the form nF = nP-κP/m, where (nP, κP) are Prometheus’ mean motion and epicycle frequency. To test this hypothesis we used the fact that Cassini RSS occultations only sporadically detect a “massive” F Ring core, composed of several-cm-and-larger particles. We regressed the inertial longitudes of 24 Cassini RSS (and VGR) detections and 43 nondetections to a common epoch, using a comb of candidate nP, and then folded them modulo the anticipated m-number of the corotational resonance (Prometheus m=110 outer CER), to see if clustering appears. We find the “true F Ring core” is actually arranged in a series of short longitudinal arcs separated by nearly empty longitudes, orbiting at a well determined semimajor axis of 140222.4km (from 2005-2012 at least). Small particles seen by imaging and stellar occultations spread quickly in azimuth and obscure this clumpy structure. Small chaotic variations in the mean motion and/or apse longitude of Prometheus quickly become manifest in the F Ring core, and we suggest that the core must adapt to these changes for the F Ring to maintain stability over timescales of decades and longer.

  7. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in SW-RT: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Pfeilsticker, K.; Davis, A.; Marshak, A.; Suszcynsky, D. M.; Buldryrev, S.; Barker, H.

    2001-12-01

    2-stream RT models, as used in all current GCMs, are mathematically equivalent to standard diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. In other words, after the conventional van de Hulst rescaling by 1/(1-g) in R3 and also by (1-g) in t, solar photons follow convoluted fractal trajectories in the atmosphere. For instance, we know that transmitted light is typically scattered about (1-g)τ² times while reflected light is scattered on average about τ times, where τ is the optical depth of the column. The space/time spread of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the RT equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the '1-g' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converge to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as anomalous diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α = 2⁻. Lévy transport theory has been successfully used in statistical physics to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1⁺ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM RT parameterization.

  8. A symmetry model for genetic coding via a wallpaper group composed of the traditional four bases and an imaginary base E: Towards category theory-like systematization of molecular/genetic biology

    PubMed Central

    2014-01-01

    Background: Previously, we suggested prototypal models that describe some clinical states based on group postulates. Here, we demonstrate a group/category theory-like model for molecular/genetic biology as an alternative application of our previous model. Specifically, we focus on deoxyribonucleic acid (DNA) base sequences. Results: We construct a wallpaper pattern based on a five-letter cruciform motif with letters C, A, T, G, and E. Whereas the first four letters represent the standard DNA bases, the fifth is introduced for ease in formulating group operations that reproduce insertions and deletions of DNA base sequences. A basic group Z5 = {r, u, d, l, n} of operations is defined for the wallpaper pattern, with which a sequence of points can be generated corresponding to changes of a base in a DNA sequence by following the orbit of a point of the pattern under operations in group Z5. Other manipulations of DNA sequence can be treated using a vector-like notation ‘Dj’ corresponding to a DNA sequence but based on the five-letter base set; also, ‘Dj’s are expressed graphically. Insertions and deletions of a series of letters ‘E’ are admitted to assist in describing DNA recombination. Likewise, a vector-like notation Rj can be constructed for sequences of ribonucleic acid (RNA). The wallpaper group B = {Z5×∞, ●} (an ∞-fold Cartesian product of Z5) acts on Dj (or Rj) yielding changes to Dj (or Rj) denoted by ‘Dj◦B(j→k) = Dk’ (or ‘Rj◦B(j→k) = Rk’). Based on the operations of this group, two types of groups—a modulo 5 linear group and a rotational group over the Gaussian plane, acting on the five bases—are linked as parts of the wallpaper group for broader applications. As a result, changes, insertions/deletions and DNA (RNA) recombination (partial/total conversion) are described. As an exploratory study, a notation for the canonical “central dogma” via a category theory-like way is presented for future developments. Conclusions: Despite the large incompleteness of our methodology, there is fertile ground to consider a symmetry model for genetic coding based on our specific wallpaper group. A more integrated formulation containing “central dogma” for future molecular/genetic biology remains to be explored. PMID:24885369
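
    A minimal sketch of the group action described above (the assignment of the operations {r, u, d, l, n} to shifts modulo 5 is our illustrative assumption, not the paper's exact wallpaper construction):

```python
# A cyclic group of order 5 acting on the extended base set {C, A, T, G, E}.
# The mapping of operations to shift amounts modulo 5 is assumed here.
BASES = "CATGE"
OPS = {"n": 0, "r": 1, "u": 2, "d": 3, "l": 4}

def act(op: str, base: str) -> str:
    """Apply one group operation to a single base (a shift modulo 5)."""
    return BASES[(BASES.index(base) + OPS[op]) % 5]

def act_seq(ops: list[str], seq: str) -> str:
    """Apply one operation per position of a sequence D_j, yielding D_k."""
    return "".join(act(op, b) for op, b in zip(ops, seq))

# The orbit of 'C' under repeated 'r' visits all five letters:
orbit, b = [], "C"
for _ in range(5):
    orbit.append(b)
    b = act("r", b)
print(orbit)                            # ['C', 'A', 'T', 'G', 'E']

# Pointwise change of a DNA 'vector'; 'E' models insertion/deletion slots.
print(act_seq(["n", "r", "n", "d"], "CATG"))
```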

  9. A Virtual Observatory Census to Address Dwarfs Origins (AVOCADO). I. Science goals, sample selection, and analysis tools

    NASA Astrophysics Data System (ADS)

    Sánchez-Janssen, R.; Amorín, R.; García-Vargas, M.; Gomes, J. M.; Huertas-Company, M.; Jiménez-Esteban, F.; Mollá, M.; Papaderos, P.; Pérez-Montero, E.; Rodrigo, C.; Sánchez Almeida, J.; Solano, E.

    2013-06-01

    Context. Even though they are by far the most abundant of all galaxy types, the detailed properties of dwarf galaxies are still only poorly characterised - especially because of the observational challenge that their intrinsic faintness and weak clustering properties represent. Aims: AVOCADO aims at establishing firm conclusions on the formation and evolution of dwarf galaxies by constructing and analysing a homogeneous, multiwavelength dataset for a statistically significant sample of approximately 6500 nearby dwarfs (Mi - 5 log h100 > -18 mag). The sample is selected to lie within the 20 < D < 60 h100^-1 Mpc volume covered by the SDSS-DR7 footprint, and is thus volume-limited for Mi - 5 log h100 < -16 mag dwarfs - but includes ≈1500 fainter systems. We will investigate the roles of mass and environment in determining the current properties of the different dwarf morphological types - including their structure, their star formation activity, their chemical enrichment history, and a breakdown of their stellar, dust, and gas content. Methods: We present the sample selection criteria and describe the suite of analysis tools, some of them developed in the framework of the Virtual Observatory. We use optical spectra and UV-to-NIR imaging of the dwarf sample to derive star formation rates, stellar masses, ages, and metallicities - which are supplemented with structural parameters that are used to classify them morphologically. This unique dataset, coupled with a detailed characterisation of each dwarf's environment, allows for a fully comprehensive investigation of their origins and enables us to track the (potential) evolutionary paths between the different dwarf types. Results: We characterise the local environment of all dwarfs in our sample, paying special attention to trends with current star formation activity. We find that virtually all quiescent dwarfs are located in the vicinity (projected distances ≲ 1.5 h100^-1 Mpc) of ≳ L∗ companions, consistent with recent results. While star-forming dwarfs are preferentially found at separations of the order of 1 h100^-1 Mpc, there appears to be a tail towards low separations (≲ 100 h100^-1 kpc) in the distribution of projected distances. We speculate that, modulo projection effects, this probably represents a genuine population of late-type dwarfs caught upon first infall onto their host and before environmental quenching has fully operated. In this context, these results suggest that internal mechanisms - such as gas exhaustion via star formation or feedback effects - are not sufficient to completely halt the star formation activity in dwarf galaxies, and that becoming the satellite of a massive central galaxy appears to be a necessary condition to create a quiescent dwarf.

  10. TaylUR 3, a multivariate arbitrary-order automatic differentiation package for Fortran 95

    NASA Astrophysics Data System (ADS)

    von Hippel, G. M.

    2010-03-01

    This new version of TaylUR is based on a completely new core, which is now able to compute the numerical values of all of a complex-valued function's partial derivatives up to an arbitrary order, including mixed partial derivatives. New version program summary. Program title: TaylUR Catalogue identifier: ADXR_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXR_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 No. of lines in distributed program, including test data, etc.: 6750 No. of bytes in distributed program, including test data, etc.: 19 162 Distribution format: tar.gz Programming language: Fortran 95 Computer: Any computer with a conforming Fortran 95 compiler Operating system: Any system with a conforming Fortran 95 compiler Classification: 4.12, 4.14 Catalogue identifier of previous version: ADXR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 710 Does the new version supersede the previous version?: Yes Nature of problem: Problems that require potentially high orders of partial derivatives with respect to several variables or derivatives of complex-valued functions, such as, e.g., momentum or mass expansions of Feynman diagrams in perturbative QFT, and which previous versions of TaylUR [1,2] cannot handle due to their lack of support for mixed partial derivatives. Solution method: Arithmetic operators and Fortran intrinsics are overloaded to act correctly on objects of a defined type taylor, which encodes a function along with its first few partial derivatives with respect to the user-defined independent variables. Derivatives of products and composite functions are computed using multivariate forms [3] of Leibniz's rule, D^ν(fg) = ∑_{μ≤ν} ν!/(μ!(ν-μ)!) D^μ f D^{ν-μ} g, where ν = (ν_1,…,ν_d), |ν| = ∑_{j=1}^d ν_j, ν! = ∏_{j=1}^d ν_j!, D^ν f = ∂^{|ν|} f/(∂x_1^{ν_1}⋯∂x_d^{ν_d}), and μ < ν iff either |μ| < |ν|, or |μ| = |ν| and μ_1 = ν_1, …, μ_k = ν_k, μ_{k+1} < ν_{k+1} for some k ∈ {0,…,d-1}; and of Faà di Bruno's formula, D^ν(f∘g) = ∑_{p=1}^{|ν|} (f^{(p)}∘g) ∑_{s=1}^{|ν|} ∑_{(k_1,…,k_s;λ_1,…,λ_s)} ν!/(∏_{j=1}^s k_j! (λ_j!)^{k_j}) ∏_{j=1}^s (D^{λ_j} g)^{k_j}, where the inner sum is over {(k_1,…,k_s;λ_1,…,λ_s) : k_i > 0, 0 < λ_1 < ⋯ < λ_s, ∑_{i=1}^s k_i = p, ∑_{i=1}^s k_i λ_i = ν}. An indexed storage system is used to store the higher-order derivative tensors in a one-dimensional array. The relevant indices (k_1,…,k_s;λ_1,…,λ_s) and the weights occurring in the sums in Leibniz's and Faà di Bruno's formulae are precomputed at startup and stored in static arrays for later use. Reasons for new version: The earlier version lacked support for mixed partial derivatives, but a number of projects of interest required them. Summary of revisions: The internal representation of a taylor object has changed to a one-dimensional array which contains the partial derivatives in ascending order, and in lexicographic order of the corresponding multiindex within the same order. The necessary mappings between multiindices and indices into the taylor objects' internal array are computed at startup. To support the change to a genuinely multivariate taylor type, the DERIVATIVE function is now implemented via an interface that accepts both the older format derivative(f,mu,n) = ∂^n f/∂x_μ^n and also a new format derivative(f,mu(:)) = D^μ f that allows access to mixed partial derivatives. Another related extension to the functionality of the module is the HESSIAN function that returns the Hessian matrix of second derivatives of its argument. Since the calculation of all mixed partial derivatives can be very costly, and in many cases only some subset is actually needed, a masking facility has been added. 
Calling the subroutine DEACTIVATE_DERIVATIVE with a multiindex as an argument will deactivate the calculation of the partial derivative belonging to that multiindex, and of all partial derivatives it can feed into. Similarly, calling the subroutine ACTIVATE_DERIVATIVE will activate the calculation of the partial derivative belonging to its argument, and of all partial derivatives that can feed into it. Moreover, it is possible to turn off the computation of mixed derivatives altogether by setting Diagonal_taylors to .TRUE.. It should be noted that any change of Diagonal_taylors or Taylor_order invalidates all existing taylor objects. To aid the better integration of TaylUR into the HPSrc library [4], routines SET_DERIVATIVE and SET_ALL_DERIVATIVES are provided as a means of manually constructing a taylor object with given derivatives. Restrictions: Memory and CPU time constraints may restrict the number of variables and Taylor expansion order that can be achieved. Loss of numerical accuracy due to cancellation may become an issue at very high orders. Unusual features: These are the same as in previous versions, but are enumerated again here for clarity. The complex conjugation operation assumes all independent variables to be real. The functions REAL and AIMAG do not convert to real type, but return a result of type taylor (with the real/imaginary part of each derivative taken) instead. The user-defined functions VALUE, REALVALUE and IMAGVALUE, which return the value of a taylor object as a complex number, and the real and imaginary part of this value, respectively, as a real number are also provided. Fortran 95 intrinsics that are defined only for arguments of real type ( ACOS, AINT, ANINT, ASIN, ATAN, ATAN2, CEILING, DIM, FLOOR, INT, LOG10, MAX, MAXLOC, MAXVAL, MIN, MINLOC, MINVAL, MOD, MODULO, NINT, SIGN) will silently take the real part of taylor-valued arguments unless the module variable Real_args_warn is set to .TRUE., in which case they will return a quiet NaN value (if supported by the compiler) when called with a taylor argument whose imaginary part exceeds the module variable Real_args_tol. In those cases where the derivative of a function becomes undefined at certain points (as for ABS, AINT, ANINT, MAX, MIN, MOD, and MODULO), while the value is well defined, the derivative fields will be filled with quiet NaN values (if supported by the compiler). Additional comments: This version of TaylUR is released under the second version of the GNU General Public License (GPLv2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, it is requested that any publications including results from the use of TaylUR or any modification derived from it cite Refs. [1,2] as well as this paper. Finally, users are also requested to communicate to the author details of such publications, as well as of any bugs found or of required or useful modifications made or desired by them. Running time: The running time of TaylUR operations grows rapidly with both the number of variables and the Taylor expansion order. Judicious use of the masking facility to drop unneeded higher derivatives can lead to significant accelerations, as can activation of the Diagonal_taylors variable whenever mixed partial derivatives are not needed. Acknowledgments: The author thanks Alistair Hart for helpful comments and suggestions. This work is supported by the Deutsche Forschungsgemeinschaft in the SFB/TR 09. References:G.M. 
von Hippel, TaylUR, an arbitrary-order diagonal automatic differentiation package for Fortran 95, Comput. Phys. Comm. 174 (2006) 569. G.M. von Hippel, New version announcement for TaylUR, an arbitrary-order diagonal automatic differentiation package for Fortran 95, Comput. Phys. Comm. 176 (2007) 710. G.M. Constantine, T.H. Savits, A multivariate Faa di Bruno formula with applications, Trans. Amer. Math. Soc. 348 (2) (1996) 503. A. Hart, G.M. von Hippel, R.R. Horgan, E.H. Müller, Automated generation of lattice QCD Feynman rules, Comput. Phys. Comm. 180 (2009) 2698, doi:10.1016/j.cpc.2009.04.021, arXiv:0904.0375.
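
    The multivariate Leibniz convolution quoted in the summary above translates directly into a truncated-Taylor product; the following Python sketch (our illustration, not the package's Fortran internals) stores coefficients in a dict keyed by multi-index:

```python
import itertools
from math import comb

D, ORDER = 2, 2   # number of variables and truncation order (illustrative)

def indices():
    """All multi-indices nu with |nu| <= ORDER."""
    return [nu for nu in itertools.product(range(ORDER + 1), repeat=D)
            if sum(nu) <= ORDER]

def product(f, g):
    """Leibniz: D^nu(fg) = sum_{mu<=nu} binom(nu,mu) D^mu f D^(nu-mu) g."""
    h = {}
    for nu in indices():
        total = 0.0
        for mu in itertools.product(*(range(n + 1) for n in nu)):
            c = 1
            for nj, mj in zip(nu, mu):
                c *= comb(nj, mj)
            total += c * f[mu] * g[tuple(n - m for n, m in zip(nu, mu))]
        h[nu] = total
    return h

# x and y as taylor-like objects: values 3 and 5, unit first derivatives
x = {nu: 0.0 for nu in indices()}; x[(0, 0)] = 3.0; x[(1, 0)] = 1.0
y = {nu: 0.0 for nu in indices()}; y[(0, 0)] = 5.0; y[(0, 1)] = 1.0
xy = product(x, y)
print(xy[(0, 0)], xy[(1, 0)], xy[(0, 1)], xy[(1, 1)])   # 15.0 5.0 3.0 1.0
```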

  11. Anisotropic elastoplasticity of metals at large strains (Elastoplasticidad anisotropa de metales en grandes deformaciones)

    NASA Astrophysics Data System (ADS)

    Caminero Torija, Miguel Angel

    The objective of this work is the development of numerical models and algorithms that simulate material behavior under these conditions in the context of finite element programs, yielding more accurate predictions of forming processes and of plastic deformation in general. To achieve this goal, several tasks were carried out to improve the predictions in three fundamental respects. The first is an improved description of anisotropic kinematic hardening at small strains, accomplished through implicit multisurface models and algorithms. The consistency of this type of model has been studied both when based on an implicit rule similar to that of Mroz and when based on Prager's rule. In addition, the Lamba and Sidebottom experiments were simulated, obtaining, contrary to general belief, very good predictions with Prager's rule. These models could be extended relatively easily to large strains through logarithmic-strain procedures similar to those developed in this thesis and detailed below. The second aspect is the description of initial elastoplastic anisotropy. This has been achieved through the development of models and algorithms for anisotropic plasticity at large strains, either ignoring possible elastic anisotropy or considering it simultaneously with plastic anisotropy. To that end it was first necessary to develop a new consistently linearized small-strain anisotropic elastoplasticity algorithm that neglects no terms, so that the quadratic convergence of Newton methods is preserved. This small-strain algorithm serves as the plastic correction of two large-strain algorithms. The first of these algorithms is a variation of the classical Eterovic and Bathe algorithm that includes the possibility of anisotropic plasticity with mixed hardening. This first algorithm is restricted to cases of elastic isotropy. Elastic isotropy is a fairly common hypothesis in anisotropic plasticity and has the advantage of permitting the use of mixed u/p formulations. The second, more complex and general algorithm includes the possibility of anisotropic elasticity, anisotropic plasticity and mixed hardening. This algorithm constitutes an important contribution since it is based on hypotheses commonly accepted and used in isotropic elastoplasticity: multiplicative decomposition of the deformation gradient into elastic and plastic parts, a simple hyperelastic description in terms of logarithmic strains, and volume-preserving exponential integration. Moreover, the final structure of the algorithm is modular and relatively simple, consisting of a geometric pre- and postprocessor and a plastic correction performed at small strains. The algorithm is consistently linearized to preserve the asymptotic quadratic convergence of Newton methods, and the final form taken by this linearization is similar to the implemented case of elastoplastic isotropy; it consists of the small-strain algorithmic tangent modulus to which a transformation is applied to convert it into the large-strain one. 
All these models have been implemented in an in-house finite element code named DULCINEA, which has total and updated Lagrangian formulations for large strains. One of the tasks necessary to carry out the simulations was the study and implementation of elements that do not suffer from the severe volumetric locking observed in standard displacement-based formulations. This locking is due to the quasi-incompressibility condition imposed by deviatoric plasticity models, and consists of an excessively stiff response of the solution obtained by the standard finite element method. Among the implemented elements, the one based on the mixed u/p formulation stands out; it contains an additional interpolation of pressure degrees of freedom. These additional degrees of freedom are usually internal to the element in solid mechanics. In this work a family of three-dimensional mixed elements for large strains has been developed and implemented in DULCINEA; it includes the particular case BMIX 27/27/4, based on the u/p formulation, consisting of 27 nodes, with 27 standard integration points and 4 pressure degrees of freedom, which passes the Inf-Sup (Babuska-Brezzi) condition. However, it has been observed that the u/p formulation presents certain limitations under the joint hypotheses of elastic anisotropy and plastic anisotropy. (Abstract shortened by UMI.)

  12. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Tkachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. 
In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction and decomposition based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are: - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure. - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution. - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theories (SMT) solvers, e.g., STP [7] and Z3 [6]. - An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
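
    The final equivalence check reduces to a satisfiability query; here is a minimal sketch with the Z3 Python bindings (the two version summaries below are hypothetical, not taken from the paper's artifacts):

```python
from z3 import Ints, And, Implies, Not, Solver, unsat

# Encode one (impacted) behavior summary per program version as
# path-condition => output effect, then ask the solver for any input on
# which the two versions disagree.
x, out1, out2 = Ints("x out1 out2")

v1 = And(Implies(x > 0, out1 == 2 * x), Implies(x <= 0, out1 == 0))
v2 = And(Implies(x > 0, out2 == x + x), Implies(x <= 0, out2 == 0))

s = Solver()
s.add(v1, v2, Not(out1 == out2))        # search for a diverging input
print("equivalent" if s.check() == unsat else "not equivalent")
```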

  13. Introduction

    NASA Astrophysics Data System (ADS)

    Cohen, E. G. D.

    Lecture notes are organized around the key word dissipation, while focusing on a presentation of modern theoretical developments in the study of irreversible phenomena. A broad cross-disciplinary perspective towards non-equilibrium statistical mechanics is backed by the general theory of nonlinear and complex dynamical systems. The classical-quantum intertwine and the semiclassical dissipative borderline issue (decoherence, "classical out of quantum") are included here. Special emphasis is put on links between the theory of classical and quantum dynamical systems (temporal disorder, dynamical chaos and transport processes) and central problems of non-equilibrium statistical mechanics, e.g., the connection between dynamics and thermodynamics, relaxation towards equilibrium states, and mechanisms capable of driving and then maintaining a physical system far from equilibrium, in a non-equilibrium steady (stationary) state. The notion of an equilibrium state - towards which a system naturally evolves if left undisturbed - is a fundamental concept of equilibrium statistical mechanics. It is taken as a primitive point of reference that allows one to give an unambiguous status to near-equilibrium and far-from-equilibrium systems, together with the dynamical notion of a relaxation (decay) towards a prescribed asymptotic invariant measure or probability distribution (properties of ergodicity and mixing are implicit). A related issue is to keep under control the process of driving a physical system away from an initial state of equilibrium and either keeping it in another (non-equilibrium) steady state or allowing it to restore the initial data (return, relax). To this end various models of the environment (heat bath, reservoir, thermostat, measuring instrument, etc.) and of the environment-system coupling are analyzed. The central theme of the book is the dynamics of dissipation and the various mechanisms responsible for the irreversible behaviour (transport properties) of open systems on the classical and quantum levels of description. A distinguishing feature of these lecture notes is that microscopic foundations of irreversibility are investigated basically in terms of "small" systems, where the "system" and/or "environment" may have a finite (and small) number of degrees of freedom and may be bounded. This is to be contrasted with the customary understanding of statistical mechanics, which is regarded as referring to systems with a very large number of degrees of freedom. In fact, it is commonly accepted that the accumulation of effects due to many particles (of the order of the Avogadro number) is required for statistical mechanics reasoning. Yet those large numbers alone are not sufficient to account for transport properties. A helpful hint towards this conceptual turnover comes from the observation that for chaotic dynamical systems the random time evolution proves to be compatible with the underlying purely deterministic laws of motion. Chaotic features of the classical dynamics already appear in systems with two degrees of freedom, and such systems need to be described in statistical terms if we wish to quantify the dynamics of relaxation towards an invariant ergodic measure. The relaxation towards equilibrium finds a statistical description through an analysis of statistical ensembles. This entails an extension of the range of validity of statistical mechanics to small classical systems. 
On the other hand, the dynamics of fluctuations in macroscopic dissipative systems (due to their molecular composition and thermal mobility) may render a characterization of such systems as being chaotic. That motivates attempts at understanding the role of microscopic chaos and various "chaotic hypotheses" - the dynamical systems approach is being pushed down to the level of atoms, molecules and complex matter constituents, whose natural substitute are low-dimensional model subsystems (encompassing as well the mesoscopic "quantum chaos") - in non-equilibrium transport phenomena. On the way a number of questions are addressed, e.g.: is there, or what is the nature of, a connection between chaos (the modern theory of dynamical systems) and irreversible thermodynamics; can quantum chaos really explain some peculiar features of quantum transport? The answer in both cases is positive, modulo a careful discrimination between viewing dynamical chaos as a necessary or sufficient basis for irreversibility. In those dynamical contexts, another key term, dynamical semigroups, refers to major technical tools appropriate for the "dissipative mathematics", modelling irreversible behaviour on the classical and quantum levels of description. Dynamical systems theory and "quantum chaos" research involve both a high level of mathematical sophistication and heavy computer "experimentation". One of the present volume's specific flavors is tutorial access to quite advanced mathematical tools. They gradually penetrate the classical and quantum dynamical semigroup description, culminating in the noncommutative Brillouin zone construction as a prerequisite to understanding transport in aperiodic solids. Lecture notes are structured into chapters to give a better insight into major conceptual streamlines. Chapter I is devoted to a discussion of non-equilibrium steady states and, through the so-called chaotic hypothesis combined with suitable fluctuation theorems, elucidates the role of the Sinai-Ruelle-Bowen distribution in both equilibrium and non-equilibrium statistical physics frameworks (E. G. D. Cohen). Links between dynamics and statistics (Boltzmann versus Tsallis) are also discussed. Fluctuation relations and a survey of deterministic thermostats are given in the context of non-equilibrium steady states of fluids (L. Rondoni). Response of systems driven far from equilibrium is analyzed on the basis of a central assertion about the existence of the statistical representation in terms of an ensemble of dynamical realizations of the driving process. A non-equilibrium work relation is deduced for irreversible processes (C. Jarzynski). The survey of non-equilibrium steady states in statistical mechanics of classical and quantum systems employs heat bath models and random matrix theory input. The quantum heat bath analysis and derivation of fluctuation-dissipation theorems is performed by means of the influence functional technique adopted to solve quantum master equations (D. Kusnezov). Chapter II deals with the issue of relaxation and its dynamical theory in both classical and quantum contexts. The Pollicott-Ruelle resonance background for the exponential decay scenario is discussed for irreversible processes of diffusion in the Lorentz gas and multibaker models (P. Gaspard). 
The Pollicott-Ruelle theory reappears as a major inspiration in the survey of the behaviour of ensembles of chaotic systems, with a focus on model systems for which no rigorous results concerning the exponential decay of correlations in time are available (S. Fishman). The observation that non-equilibrium transport processes in simple classical chaotic systems can be described in terms of fractal structures developing in the system phase space links their formation and properties with the entropy production in the course of diffusion processes displaying a low-dimensional deterministic (chaotic) origin (J. R. Dorfman). Chapter III offers an introduction to the theory of dynamical semigroups. Asymptotic properties of Markov operators and Markov semigroups acting in the set of probability densities (the statistical ensemble notion is implicit) are analyzed. Ergodicity, mixing, strong (complete) mixing and sweeping are discussed in the familiar setting of "noise, chaos and fractals" (R. Rudnicki). The next step comprises a passage to quantum dynamical semigroups and completely positive dynamical maps, with the ultimate goal of introducing a consistent framework for the analysis of irreversible phenomena in open quantum systems, where dissipation and decoherence are crucial concepts (R. Alicki). Friction and damping in classical and quantum mechanics of finite dissipative systems is analyzed by means of Markovian quantum semigroups with special emphasis on the issue of complete positivity (M. Fannes). Specific two-level model systems of elementary particle physics (kaons) and rudiments of neutron interferometry are employed to elucidate a distinction between positivity and complete positivity (F. Benatti). Quantization of the dynamics of stochastic models related to equilibrium Gibbs states results in dynamical maps which form quantum stochastic dynamical semigroups (W. A. Majewski). Chapter IV addresses diverse but deeply interrelated features of driven chaotic (mesoscopic) classical and quantum systems, their dissipative properties, and the notions of quantum irreversibility, entanglement, dephasing and decoherence. A survey of non-perturbative quantum effects for open quantum systems is concluded by outlining the discrepancies between random matrix theory and non-perturbative semiclassical predictions (D. Cohen). As a useful supplement to the subject of bounded open systems, methods of quantum state control in a cavity (coherent versus incoherent dynamics and dissipation) are described for low-dimensional quantum systems (A. Buchleitner). The dynamics of open quantum systems can alternatively be described by means of a non-Markovian stochastic Schrödinger equation, jointly for an open system and its environment, which moves us beyond the Lindblad evolution scenario of Markovian dynamical semigroups. The quantum Brownian motion is considered (W. Strunz). Chapter V enforces a conceptual transition from "small" to "large" systems with emphasis on irreversible thermodynamics of quantum transport. Typical features of the statistical mechanics of infinitely extended systems and the dynamical (small) systems approach are described by means of representative examples of relaxation towards asymptotic steady states: a quantum one-dimensional lattice conductor and an open multibaker map (S. Tasaki). Dissipative transport in aperiodic solids is reviewed by invoking methods of noncommutative geometry. The anomalous Drude formula is derived. The occurrence of quantum chaos is discussed together with its main consequences (J. 
Bellissard). The chapter is concluded by a survey of scaling limits of the N-body Schrödinger quantum dynamics, where classical evolution equations of irreversible statistical mechanics (linear Boltzmann, Hartree, Vlasov) emerge "out of quantum". In particular, a scaling limit of one-body quantum dynamics with impurities (static random potential) and that of quantum dynamics with weakly coupled phonons are shown to yield the linear Boltzmann equation (L. Erdös). Various interrelations between chapters and individual lectures, plus detailed, fine-tuned information about the subject matter coverage of the volume, can be recovered by examining an extensive index.

  14. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
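
    A toy projection in this spirit (the transformation rules below are a simplification for illustration, not the paper's MDA rule set): keep only the interactions a role participates in, mapping them to send/receive activities while preserving the global ordering.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    sender: str
    receiver: str
    message: str

# A global collaborative process as an ordered list of interactions.
collaborative = [
    Interaction("Buyer", "Supplier", "PurchaseOrder"),
    Interaction("Supplier", "Buyer", "OrderConfirmation"),
    Interaction("Supplier", "Carrier", "ShippingRequest"),
]

def interface_process(role: str) -> list[str]:
    """Derive one enterprise's interface process from the global model."""
    steps = []
    for i in collaborative:
        if i.sender == role:
            steps.append(f"send {i.message} to {i.receiver}")
        elif i.receiver == role:
            steps.append(f"receive {i.message} from {i.sender}")
    return steps   # ordering is inherited from the collaborative model

print(interface_process("Supplier"))
```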

  15. Extensible packet processing architecture

    DOEpatents

    Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.

    2013-08-20

    A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signify to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines and the processed packets are output on a time-shared, arbitrated data bus coupled to the plurality of processing engines.
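
    A rough model of the flow-claiming idea (the hash-based assignment policy and the string-processing stand-in are our assumptions, not the patent's mechanism):

```python
# Engines in series on a flow-through bus: the first engine to want an
# unclaimed flow marks its packets; downstream engines pass them along.
class Engine:
    def __init__(self, eid: int, n_engines: int):
        self.eid, self.n = eid, n_engines

    def wants(self, flow_id: str) -> bool:
        return hash(flow_id) % self.n == self.eid    # assumed policy

    def handle(self, packet: dict) -> dict:
        if packet["claimed_by"] is None and self.wants(packet["flow"]):
            packet["claimed_by"] = self.eid          # claim the flow
        if packet["claimed_by"] == self.eid:
            packet["payload"] = packet["payload"].upper()  # stand-in work
        return packet                                # pass along the bus

engines = [Engine(i, 3) for i in range(3)]
pkt = {"flow": "10.0.0.1:443", "payload": "hello", "claimed_by": None}
for e in engines:                                    # serial traversal
    pkt = e.handle(pkt)
print(pkt)
```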

  16. Processed and ultra-processed foods are associated with lower-quality nutrient profiles in children from Colombia.

    PubMed

    Cornwell, Brittany; Villamor, Eduardo; Mora-Plazas, Mercedes; Marin, Constanza; Monteiro, Carlos A; Baylin, Ana

    2018-01-01

    Objective: To determine if processed and ultra-processed foods consumed by children in Colombia are associated with lower-quality nutrition profiles than less processed foods. Design: We obtained information on sociodemographic and anthropometric variables and dietary information through dietary records and 24 h recalls from a convenience sample of the Bogotá School Children Cohort. Foods were classified into three categories: (i) unprocessed and minimally processed foods, (ii) processed culinary ingredients and (iii) processed and ultra-processed foods. We also examined the combination of unprocessed foods and processed culinary ingredients. Setting: Representative sample of children from low- to middle-income families in Bogotá, Colombia. Subjects: Children aged 5-12 years in the 2011 Bogotá School Children Cohort. Results: We found that processed and ultra-processed foods are of lower dietary quality in general. Nutrients that were lower in processed and ultra-processed foods following adjustment for total energy intake included: n-3 PUFA, vitamins A, B12, C and E, Ca and Zn. Nutrients that were higher in energy-adjusted processed and ultra-processed foods compared with unprocessed foods included: Na, sugar and trans-fatty acids, although we also found that some healthy nutrients, including folate and Fe, were higher in processed and ultra-processed foods compared with unprocessed and minimally processed foods. Conclusions: Processed and ultra-processed foods generally have unhealthy nutrition profiles. Our findings suggest the categorization of foods based on processing characteristics is promising for understanding the influence of food processing on children's dietary quality. More studies accounting for the type and degree of food processing are needed.

  17. Dynamic control of remelting processes

    DOEpatents

    Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.

    2000-01-01

    An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.
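
    A skeletal sketch of the estimate-and-correct structure described (one variable only; the gains, units, and values are hypothetical, and the real controller spans many more variables):

```python
# Model-based control loop: a process model predicts a variable (here an
# electrode gap), measurement corrects the estimate, and feedback drives
# the estimate toward a setpoint by adjusting an input (here current).
def control_step(measured_gap, gap_setpoint, current, model_gap,
                 k_obs=0.5, k_fb=10.0):
    est_gap = model_gap + k_obs * (measured_gap - model_gap)  # observer blend
    current += k_fb * (gap_setpoint - est_gap)                # feedback action
    return est_gap, current

est, amps = 12.0, 5000.0
for meas in (11.0, 10.5, 10.2, 10.0):     # hypothetical gap readings (mm)
    est, amps = control_step(meas, 10.0, amps, est)
    print(f"estimated gap {est:5.2f} mm -> commanded current {amps:7.1f} A")
```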

  18. On Intelligent Design and Planning Method of Process Route Based on Gun Breech Machining Process

    NASA Astrophysics Data System (ADS)

    Hongzhi, Zhao; Jian, Zhang

    2018-03-01

    The paper presents an approach to the intelligent design and planning of process routes based on the gun breech machining process, addressing several problems: the complexity of gun breech machining, the tedium of route design, and the long cycle of traditional, hard-to-manage process routes. Based on the gun breech machining process, an intelligent process-route design and planning system is developed using DEST and VC++. The system includes two functional modules: intelligent process-route design and process-route planning. The intelligent design module, through analysis of the gun breech machining process, summarizes breech process knowledge to build the knowledge base and inference engine, and then outputs the gun breech process route intelligently. On the basis of the intelligent design module, the final process route is made, edited and managed in the process-route planning module.
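
    A toy knowledge base and inference engine in the spirit of the modules described (features, rules, and operations are invented for illustration):

```python
# Forward-chaining rules map part features to machining operations.
rules = [
    ({"forging"}, "rough milling"),
    ({"rough milling", "bore"}, "deep-hole boring"),
    ({"deep-hole boring"}, "semi-finish milling"),
    ({"semi-finish milling", "heat-treated"}, "finish grinding"),
]

def plan_route(facts: set[str]) -> list[str]:
    """Fire rules until a fixed point, collecting operations in order."""
    route, changed = [], True
    while changed:
        changed = False
        for conditions, operation in rules:
            if conditions <= facts and operation not in facts:
                facts.add(operation)
                route.append(operation)
                changed = True
    return route

print(plan_route({"forging", "bore", "heat-treated"}))
```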

  19. Gasoline from coal in the state of Illinois: feasibility study. Volume I. Design. [KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-01-01

    Volume 1 describes the proposed plant: the KBW gasification process, the ICI low-pressure methanol process and the Mobil M-gasoline process, together with ancillary processes such as the oxygen plant, shift process, RECTISOL purification process, sulfur recovery equipment and pollution control equipment. Numerous engineering diagrams are included. (LTN)

  20. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  1. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
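
    A serial Python model of the claimed copy-and-reduce pattern (buffer sizes and the plain sum reduction are illustrative stand-ins; the real mechanism reduces buffers element-wise across cores in parallel):

```python
import numpy as np

CHUNK, NCHUNKS = 4, 4
r0 = np.arange(CHUNK * NCHUNKS, dtype=float)          # reduction core 0 buffer
r1 = np.arange(CHUNK * NCHUNKS, dtype=float) * 10.0   # reduction core 1 buffer
net_w = np.ones(CHUNK * NCHUNKS)                      # network write core buffer
net_r = np.full(CHUNK * NCHUNKS, 2.0)                 # network read core buffer

# Copy the two reduction buffers in interleaved chunks to "shared memory".
shared = np.empty(2 * CHUNK * NCHUNKS)
for c in range(NCHUNKS):
    shared[2 * c * CHUNK:(2 * c + 1) * CHUNK] = r0[c * CHUNK:(c + 1) * CHUNK]
    shared[(2 * c + 1) * CHUNK:(2 * c + 2) * CHUNK] = r1[c * CHUNK:(c + 1) * CHUNK]

# Each reduction "core" sums every other interleaved chunk plus one copied
# network-core buffer (run serially here for clarity).
part0 = sum(shared[2 * c * CHUNK:(2 * c + 1) * CHUNK].sum() for c in range(NCHUNKS))
part1 = sum(shared[(2 * c + 1) * CHUNK:(2 * c + 2) * CHUNK].sum() for c in range(NCHUNKS))
print(part0 + net_w.sum() + part1 + net_r.sum())
```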

  2. Situation awareness acquired from monitoring process plants - the Process Overview concept and measure.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-07-01

    We introduce Process Overview, a situation awareness characterisation of the knowledge derived from monitoring process plants. Process Overview is based on observational studies of process control work in the literature. The characterisation is applied to develop a query-based measure called the Process Overview Measure. The goal of the measure is to improve coupling between situation and awareness according to process plant properties and operator cognitive work. A companion article presents the empirical evaluation of the Process Overview Measure in a realistic process control setting. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA based on data collected by process experts. Practitioner Summary: The Process Overview Measure is a query-based measure for assessing operator situation awareness from monitoring process plants in representative settings.

  3. 43 CFR 2804.19 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How will BLM process my Processing... process my Processing Category 6 application? (a) For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...

  4. 43 CFR 2804.19 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How will BLM process my Processing... process my Processing Category 6 application? (a) For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...

  5. Process Correlation Analysis Model for Process Improvement Identification

    PubMed Central

    Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment that identifies strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used as the base throughout the process of software process improvement. CMMI defines a set of process areas involved in software development and what is to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170

  6. Process correlation analysis model for process improvement identification.

    PubMed

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment that identifies strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used as the base throughout the process of software process improvement. CMMI defines a set of process areas involved in software development and what is to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  7. Cleanliness of Ti-bearing Al-killed ultra-low-carbon steel during different heating processes

    NASA Astrophysics Data System (ADS)

    Guo, Jian-long; Bao, Yan-ping; Wang, Min

    2017-12-01

    During the production of Ti-bearing Al-killed ultra-low-carbon (ULC) steel, two different heating processes were used when the converter tapping temperature or the molten steel temperature in the Ruhrstahl-Heraeus (RH) process was low: heating by Al addition during the RH decarburization process with final deoxidation at the end of the RH decarburization process (process-I), and increasing the oxygen content at the end of RH decarburization, with heating and final deoxidation by one-time Al addition (process-II). Temperature increases of 10°C by the different processes were studied; the results showed that the two heating processes could achieve the same heating effect. The T.[O] content in the slab and the refining process was better controlled by process-I than by process-II. Statistical analysis of inclusions showed that the numbers of inclusions in the slab obtained by process-I were substantially smaller than those in the slab obtained by process-II. For process-I, the Al2O3 inclusions produced by the Al added to induce heating were substantially removed by the end of decarburization. The amounts of inclusions were substantially greater for process-II than for process-I at different refining stages because of the higher dissolved oxygen concentration in process-II. Industrial test results showed that process-I was more beneficial for improving the cleanliness of molten steel.

  8. Application of agent-based system for bioprocess description and process improvement.

    PubMed

    Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J

    2010-01-01

    Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include describing the whole process behavior, evaluating process operating conditions, monitoring the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers

  9. Electricity from sunlight. [low cost silicon for solar cells

    NASA Technical Reports Server (NTRS)

    Yaws, C. L.; Miller, J. W.; Lutwack, R.; Hsu, G.

    1978-01-01

    The paper discusses a number of new unconventional processes proposed for the low-cost production of silicon for solar cells. Consideration is given to: (1) the Battelle process (Zn/SiCl4), (2) the Battelle process (SiI4), (3) the Silane process, (4) the Motorola process (SiF4/SiF2), (5) the Westinghouse process (Na/SiCl4), (6) the Dow Corning process (C/SiO2), (7) the AeroChem process (SiCl4/H atom), and (8) the Stanford process (Na/SiF4). Preliminary results indicate that neither the conventional process nor the SiI4 process can meet the project goal of $10/kg by 1986. Preliminary cost evaluation results for the Zn/SiCl4 process are favorable.

  10. Composing Models of Geographic Physical Processes

    NASA Astrophysics Data System (ADS)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central to geographic information science; yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed to form basic models of geographic physical processes, as shown by means of an example.
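
    As an example of the difference-equation form mentioned above, here is a one-dimensional diffusion process written as a partial difference equation (grid, coefficient, and initial condition are illustrative):

```python
import numpy as np

# u[t+1, i] = u[t, i] + k * (u[t, i-1] - 2 u[t, i] + u[t, i+1])
k, steps = 0.25, 50
u = np.zeros(41)
u[20] = 1.0                               # point release at the center
for _ in range(steps):
    u[1:-1] = u[1:-1] + k * (u[:-2] - 2.0 * u[1:-1] + u[2:])
# Mass stays ~1 until the pulse reaches the (absorbing) boundaries.
print(round(u.sum(), 6), round(u.max(), 4))
```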

  11. Process-based tolerance assessment of connecting rod machining process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, P. Srinivasa; Surendra Babu, B.

    2016-06-01

    Process tolerancing based on process capability studies is an optimistic and pragmatic approach to determining manufacturing process tolerances. On adopting the define-measure-analyze-improve-control approach, the process potential capability index (Cp) and the process performance capability index (Cpk) values of identified process characteristics of the connecting rod machining process are achieved to be greater than the industry benchmark of 1.33, i.e., four sigma level. The tolerance chain diagram methodology is applied to the connecting rod in order to verify the manufacturing process tolerances at various operations of the connecting rod manufacturing process. This paper bridges the gap between the existing dimensional tolerances obtained via tolerance charting and process capability studies of the connecting rod component. Finally, the process tolerancing comparison has been done by adopting tolerance capability expert software.
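
    The two indices cited above have standard definitions, Cp = (USL - LSL)/(6σ) and Cpk = min(USL - μ, μ - LSL)/(3σ); a small helper (specification limits and sample data invented) shows the 1.33 benchmark check:

```python
import statistics

def capability(samples, lsl, usl):
    """Return (Cp, Cpk) from sample statistics and spec limits."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

data = [49.98, 50.01, 50.02, 49.99, 50.00, 50.03, 49.97, 50.01]
cp, cpk = capability(data, lsl=49.90, usl=50.10)
print(f"Cp={cp:.2f} Cpk={cpk:.2f} meets 1.33: {min(cp, cpk) >= 1.33}")
```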

  12. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2014-01-07

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  13. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2013-07-23

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  14. Canadian Libraries and Mass Deacidification.

    ERIC Educational Resources Information Center

    Pacey, Antony

    1992-01-01

    Considers the advantages and disadvantages of six mass deacidification processes that libraries can use to salvage printed materials: the Wei T'o process, the Diethyl Zinc (DEZ) process, the FMC (Lithco) process, the Book Preservation Associates (BPA) process, the "Bookkeeper" process, and the "Lyophilization" process. The…

  15. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
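    The index can be illustrated with a toy Monte Carlo computation. In the hedged sketch below, every model, prior, and number is an invented stand-in (not the paper's synthetic study): a "process" is a pair (model choice, parameter), and its sensitivity is the share of output variance explained by fixing that whole pair.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two competing models per process, each with its own random parameter.
    def rch(choice, p):   return np.where(choice == 0, 0.2 * p, 0.1 * p ** 1.5)
    def cond(choice, k):  return np.where(choice == 0, np.exp(k), 1.0 + k ** 2)
    def head(r, c):       return r / c     # toy output of the system model

    N = 20_000
    rc, rp = rng.integers(0, 2, N), rng.uniform(5, 15, N)      # recharge process
    gc, gp = rng.integers(0, 2, N), rng.uniform(0.5, 1.5, N)   # geology process
    y = head(rch(rc, rp), cond(gc, gp))

    # PS(recharge) = Var over recharge realisations of E[y | recharge] / Var(y):
    # fix the recharge (model, parameter) pair and average over geology.
    fixed = [(m, p) for m in (0, 1) for p in np.linspace(5, 15, 50)]
    cond_mean = [np.mean(head(rch(m, p), cond(gc, gp))) for m, p in fixed]
    print("PS(recharge) ~", round(float(np.var(cond_mean) / np.var(y)), 2))
    ```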

  16. Depth-of-processing effects on priming in stem completion: tests of the voluntary-contamination, conceptual-processing, and lexical-processing hypotheses.

    PubMed

    Richardson-Klavehn, A; Gardiner, J M

    1998-05-01

    Depth-of-processing effects on incidental perceptual memory tests could reflect (a) contamination by voluntary retrieval, (b) sensitivity of involuntary retrieval to prior conceptual processing, or (c) a deficit in lexical processing during graphemic study tasks that affects involuntary retrieval. The authors devised an extension of incidental test methodology--making conjunctive predictions about response times as well as response proportions--to discriminate among these alternatives. They used graphemic, phonemic, and semantic study tasks, and a word-stem completion test with incidental, intentional, and inclusion instructions. Semantic study processing was superior to phonemic study processing in the intentional and inclusion tests, but semantic and phonemic study processing produced equal priming in the incidental test, showing that priming was uncontaminated by voluntary retrieval--a conclusion reinforced by the response-time data--and that priming was insensitive to prior conceptual processing. The incidental test nevertheless showed a priming deficit following graphemic study processing, supporting the lexical-processing hypothesis. Adding a lexical decision to the 3 study tasks eliminated the priming deficit following graphemic study processing, but did not influence priming following phonemic and semantic processing. The results provide the first clear evidence that depth-of-processing effects on perceptual priming can reflect lexical processes, rather than voluntary contamination or conceptual processes.

  17. Improving operational anodising process performance using simulation approach

    NASA Astrophysics Data System (ADS)

    Liong, Choong-Yeun; Ghazali, Syarah Syahidah

    2015-10-01

    The use of aluminium is very widespread, especially in transportation, electrical and electronics, architectural, automotive and engineering applications sectors. Therefore, the anodizing process is an important process for aluminium in order to make the aluminium durable, attractive and weather resistant. This research is focused on the anodizing process operations in manufacturing and supplying of aluminium extrusion. The data required for the development of the model is collected from the observations and interviews conducted in the study. To study the current system, the processes involved in the anodizing process are modeled by using Arena 14.5 simulation software. Those processes consist of five main processes, namely the degreasing process, the etching process, the desmut process, the anodizing process, the sealing process and 16 other processes. The results obtained were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on the comparisons that have been done between the improvement methods, the productivity could be increased by reallocating the workers and reducing loading time.
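    The same queueing logic can be sketched outside Arena. Below is a minimal SimPy analogue with toy step durations and single-tank capacities of my own choosing (not the study's model), useful mainly to show where bottlenecks and worker reallocation enter:

    ```python
    import random
    import simpy

    STEPS = [("degrease", 5), ("etch", 8), ("desmut", 3), ("anodise", 20), ("seal", 10)]

    def job(env, name, tanks):
        for (step, minutes), tank in zip(STEPS, tanks):
            with tank.request() as slot:       # queue for a free tank/worker
                yield slot
                yield env.timeout(random.expovariate(1.0 / minutes))
        print(f"{name} finished at t={env.now:.1f} min")

    def arrivals(env, tanks):
        for i in range(10):
            env.process(job(env, f"load-{i}", tanks))
            yield env.timeout(random.expovariate(1.0 / 15))   # ~1 load / 15 min

    random.seed(42)
    env = simpy.Environment()
    tanks = [simpy.Resource(env, capacity=1) for _ in STEPS]
    env.process(arrivals(env, tanks))
    env.run()
    ```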

  18. Value-driven process management: using value to improve processes.

    PubMed

    Melnyk, S A; Christensen, R T

    2000-08-01

    Every firm can be viewed as consisting of various processes. These processes affect everything that the firm does from accepting orders and designing products to scheduling production. In many firms, the management of processes often reflects considerations of efficiency (cost) rather than effectiveness (value). In this article, we introduce a well-structured process for managing processes that begins not with the process, but rather with the customer and the product and the concept of value. This process progresses through a number of steps which include issues such as defining value, generating the appropriate metrics, identifying the critical processes, mapping and assessing the performance of these processes, and identifying long- and short-term areas for action. What makes the approach presented in this article so powerful is that it explicitly links the customer to the process and that the process is evaluated in term of its ability to effectively serve the customers.

  19. Method for routing events from key strokes in a multi-processing computer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, D.A.; Rustici, E.; Carter, K.H.

    1990-01-23

    The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which processes is a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to processes assigned therewith after the supervisory process has completed operation.
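    A hedged, simplified paraphrase of this routing logic in Python (process handles are string stand-ins; here buffered keycodes are simply redelivered once the supervisor finishes, whereas the patent additionally feeds the buffer to the supervisory process while it runs):

    ```python
    from collections import deque

    routing = {"a": "editor", "b": "shell", "F1": "supervisor"}  # keycode -> process
    buffer, supervisor_active = deque(), False

    def deliver(keycode, process):
        print(f"{keycode!r} -> {process}")

    def route(keycode):
        global supervisor_active
        if supervisor_active:            # supervisor busy: park later keycodes
            buffer.append(keycode)
            return
        target = routing.get(keycode, "shell")
        if target == "supervisor":
            supervisor_active = True
        deliver(keycode, target)

    def supervisor_done():
        """Drain buffered keycodes to their assigned processes afterwards."""
        global supervisor_active
        supervisor_active = False
        while buffer and not supervisor_active:
            route(buffer.popleft())

    for key in ["a", "F1", "b", "a"]:
        route(key)
    supervisor_done()
    ```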

  20. Issues Management Process Course # 38401

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binion, Ula Marie

    The purpose of this training is to advise Issues Management Coordinators (IMCs) on the revised Contractor Assurance System (CAS) Issues Management (IM) process. Terminal Objectives: Understand the Laboratory’s IM process; Understand your role in the Laboratory’s IM process. Learning Objectives: Describe the IM process within the context of the CAS; Describe the importance of implementing an institutional IM process at LANL; Describe the process flow for the Laboratory’s IM process; Apply the definition of an issue; Use available resources to determine initial screening risk levels for issues; Describe the required major process steps for each risk level; Describe the personnel responsibilities for IM process implementation; Access available resources to support IM process implementation.

  1. Social network supported process recommender system.

    PubMed

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as an aid to process modeling. However, most of the existing technologies only use process structure analysis and do not take the social features of processes into account, although process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed. Finally, experimental evaluations and future work are introduced.

  2. [Definition and stabilization of processes I. Management processes and support in a Urology Department].

    PubMed

    Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Chiva, Vicente; Gamarra, Manuela

    2015-01-01

    The implementation of total quality management models in clinical departments can better adapt to the 2009 ISO 9004 model. An essential part of the implementation of these models is the establishment of processes and their stabilization. There are four types of processes: key, management, support and operative (clinical). Management processes have four parts: the process stabilization form, the process procedures form, the medical activities cost estimation form, and the process flow chart. In this paper we detail the creation of an essential process in a surgical department: the management of the surgery waiting list.

  3. T-Check in Technologies for Interoperability: Business Process Management in a Web Services Context

    DTIC Science & Technology

    2008-09-01

    [Search-result snippet; the excerpt mixes the report's list of figures with body text. Recoverable figure titles: Figure 3, BPMN Diagram of the Order Processing Business Process; Figure 4, T-Check Process for Technology Evaluation; Figure 5, Notional System Architecture; Figure 6, Flow Chart of the Order Processing Business Process; Figure 7, Order Processing Activities.] Figure 3 (created with Intalio BPMS Designer [Intalio 2008]) shows a BPMN view of the Order Processing business process that is used in the…

  4. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory documents, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, support the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determining the quality of any production process. This tool is new for the pharmaceutical tablet production process, where the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normal probability distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process capability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process as the candidate for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
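    As a concrete illustration of the control-charting step, the individuals chart below uses invented tablet-weight data (not the study's measurements): sigma is estimated from the average moving range (MR-bar / d2, with d2 = 1.128 for subgroups of two) and points beyond the 3-sigma limits are flagged.

    ```python
    # Individuals control chart on illustrative tablet weights (mg).
    weights = [249.8, 250.1, 250.0, 249.9, 250.3, 250.2, 249.7, 252.9, 250.0]

    mu = sum(weights) / len(weights)
    mr = [abs(b - a) for a, b in zip(weights, weights[1:])]   # moving ranges
    sigma = (sum(mr) / len(mr)) / 1.128                       # d2 for n = 2
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

    for i, w in enumerate(weights):
        flag = "  <-- out of control" if not lcl <= w <= ucl else ""
        print(f"batch {i}: {w:.1f} mg{flag}")
    print(f"CL={mu:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
    ```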

  5. A practical approach for exploration and modeling of the design space of a bacterial vaccine cultivation process.

    PubMed

    Streefland, M; Van Herpen, P F G; Van de Waterbeemd, B; Van der Pol, L A; Beuvery, E C; Tramper, J; Martens, D E; Toft, M

    2009-10-15

    A licensed pharmaceutical process is required to be executed within the validated ranges throughout the lifetime of product manufacturing. Changes to the process, especially for processes involving biological products, usually require the manufacturer to demonstrate through new or additional clinical testing that the safety and efficacy of the product remain unchanged. Recent changes in the regulations for pharmaceutical processing allow broader ranges of process settings to be submitted for regulatory approval, the so-called process design space, which means that a manufacturer can optimize the process within the submitted ranges after the product has entered the market, allowing more flexible processes. In this article, the applicability of the process design space concept is investigated for the cultivation step of a vaccine against whooping cough. An experimental design (DoE) is applied to investigate the ranges of critical process parameters that still result in a product that meets specifications. The on-line process data, including near infrared spectroscopy, are used to build a descriptive model of the processes used in the experimental design. Finally, the data of all processes are integrated in a multivariate batch monitoring model that represents the investigated process design space. This article demonstrates how the general principles of PAT and process design space can be applied to an undefined biological product such as a whole cell vaccine. The approach chosen for model development described here allows on-line monitoring and control of cultivation batches in order to assure in real time that a process is running within the process design space.

  6. Processing approaches to cognition: the impetus from the levels-of-processing framework.

    PubMed

    Roediger, Henry L; Gallo, David A; Geraci, Lisa

    2002-01-01

    Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.

  7. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, the identification is usually based on deterministic process conceptualization that uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased identification, in that the identified important processes may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, our new method evaluates the variance change when a process is fixed at each of its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.

  8. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  9. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  10. Social Network Supported Process Recommender System

    PubMed Central

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as an aid to process modeling. However, most of the existing technologies only use process structure analysis and do not take the social features of processes into account, although process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed. Finally, experimental evaluations and future work are introduced. PMID:24672309

  11. A model for process representation and synthesis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thomas, R. H.

    1971-01-01

    The problem of representing groups of loosely connected processes is investigated, and a model for process representation useful for synthesizing complex patterns of process behavior is developed. There are three parts. The first part isolates the concepts which form the basis for the process representation model by focusing on questions such as: What is a process? What is an event? Should one process be able to restrict the capabilities of another? The second part develops a model for process representation which captures the concepts and intuitions developed in the first part. The model presented is able to describe both the internal structure of individual processes and the interface structure between interacting processes. Much of the model's descriptive power derives from its use of the notion of process state as a vehicle for relating the internal and external aspects of process behavior. The third part demonstrates by example that the model for process representation is a useful one for synthesizing process behavior patterns. In it, the model is used to define a variety of interesting process behavior patterns. The dissertation closes by suggesting how the model could be used as a semantic base for a very potent language extension facility.

  12. Process and Post-Process: A Discursive History.

    ERIC Educational Resources Information Center

    Matsuda, Paul Kei

    2003-01-01

    Examines the history of process and post-process in composition studies, focusing on ways in which terms such as "current-traditional rhetoric," "process," and "post-process" have contributed to the discursive construction of reality. Argues that use of the term post-process in the context of second language writing needs to be guided by a…

  13. Improving operational anodising process performance using simulation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liong, Choong-Yeun, E-mail: lg@ukm.edu.my; Ghazali, Syarah Syahidah, E-mail: syarah@gapps.kptm.edu.my

    The use of aluminium is very widespread, especially in transportation, electrical and electronics, architectural, automotive and engineering applications sectors. Therefore, the anodizing process is an important process for aluminium in order to make the aluminium durable, attractive and weather resistant. This research is focused on the anodizing process operations in manufacturing and supplying of aluminium extrusion. The data required for the development of the model is collected from the observations and interviews conducted in the study. To study the current system, the processes involved in the anodizing process are modeled by using Arena 14.5 simulation software. Those processes consist of five main processes, namely the degreasing process, the etching process, the desmut process, the anodizing process, the sealing process and 16 other processes. The results obtained were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on the comparisons that have been done between the improvement methods, the productivity could be increased by reallocating the workers and reducing loading time.

  14. Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.

    PubMed

    Böttcher, Björn

    2010-12-03

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous time Markov processes in applications. In recent years, Lévy processes, of which Brownian motion is a special case, have also become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes, and in particular Brownian motion, as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes.
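    The construction can be approximated numerically: over each small time step, draw the increment from a Lévy process whose parameters depend on the current state. The sketch below (all coefficients invented, not the paper's) mixes a state-dependent Brownian part with state-dependent Poisson jumps:

    ```python
    import math
    import random

    def increment(x, dt):
        sigma = 0.2 + 0.1 * math.tanh(x)        # state-dependent diffusion
        jump_rate = 0.5 / (1.0 + x * x)         # state-dependent jump intensity
        dx = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if random.random() < jump_rate * dt:    # at most one jump per step
            dx += random.gauss(0.0, 0.5)
        return dx

    def sample_path(x0=0.0, t=1.0, n=1000):
        x, dt, path = x0, t / n, [x0]
        for _ in range(n):
            x += increment(x, dt)
            path.append(x)
        return path

    random.seed(7)
    print(f"X(1) = {sample_path()[-1]:.3f}")
    ```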

  15. Feller Processes: The Next Generation in Modeling. Brownian Motion, Lévy Processes and Beyond

    PubMed Central

    Böttcher, Björn

    2010-01-01

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous time Markov processes in applications. In recent years, Lévy processes, of which Brownian motion is a special case, have also become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes, and in particular Brownian motion, as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes. PMID:21151931

  16. AIRSAR Automated Web-based Data Processing and Distribution System

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  17. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine a simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments, comparing with related studies, to show that the proposed method is more effective for streamlining the process.

  18. Electrotechnologies to process foods

    USDA-ARS?s Scientific Manuscript database

    Electrical energy is being used to process foods. In conventional food processing plants, electricity drives mechanical devices and controls the degree of process. In recent years, several processing technologies are being developed to process foods directly with electricity. Electrotechnologies use...

  19. Challenges associated with the implementation of the nursing process: A systematic review.

    PubMed

    Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan

    2015-01-01

    The nursing process is a scientific approach to the provision of qualified nursing care. However, in practice, the implementation of this process faces numerous challenges. With knowledge of the challenges associated with the implementation of the nursing process, nursing processes can be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. To retrieve and review related studies in this field, the databases Iran medix, SID, Magiran, PUBMED, Google scholar, and Proquest were assessed using the main keywords of nursing process and nursing process systematic review. The articles were retrieved in three steps including searching by keywords, review of the proceedings based on inclusion criteria, and final retrieval and assessment of available full texts. Systematic assessment of the articles showed different challenges in implementation of the nursing process. Intangible understanding of the concept of the nursing process, different views of the process, lack of knowledge and awareness among nurses related to the execution of the process, support from managing systems, and problems related to recording the nursing process were the main challenges extracted from the review of literature. On systematically reviewing the literature, intangible understanding of the concept of the nursing process was identified as the main challenge. To achieve the best strategy to minimize this challenge, in addition to preparing facilitators for implementation of the nursing process, addressing intangible understanding of the concept and different views of the process, and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses.

  20. Challenges associated with the implementation of the nursing process: A systematic review

    PubMed Central

    Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan

    2015-01-01

    Background: The nursing process is a scientific approach to the provision of qualified nursing care. However, in practice, the implementation of this process faces numerous challenges. With knowledge of the challenges associated with the implementation of the nursing process, nursing processes can be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. Materials and Methods: To retrieve and review related studies in this field, the databases Iran medix, SID, Magiran, PUBMED, Google scholar, and Proquest were assessed using the main keywords of nursing process and nursing process systematic review. The articles were retrieved in three steps including searching by keywords, review of the proceedings based on inclusion criteria, and final retrieval and assessment of available full texts. Results: Systematic assessment of the articles showed different challenges in implementation of the nursing process. Intangible understanding of the concept of the nursing process, different views of the process, lack of knowledge and awareness among nurses related to the execution of the process, support from managing systems, and problems related to recording the nursing process were the main challenges extracted from the review of literature. Conclusions: On systematically reviewing the literature, intangible understanding of the concept of the nursing process was identified as the main challenge. To achieve the best strategy to minimize this challenge, in addition to preparing facilitators for implementation of the nursing process, addressing intangible understanding of the concept and different views of the process, and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses. PMID:26257793

  1. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  2. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the defects are substantially reduced by the Taguchi method, a genetic algorithm technique is then applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
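    A toy version of that refinement step (objective function, bounds, and settings are invented for illustration, not the shock absorber response model): starting from a population around plausible settings, a small genetic algorithm evolves parameter vectors that minimize a defect-rate proxy.

    ```python
    import random

    BOUNDS = [(150, 200), (20, 40)]     # e.g. paint temperature, coating thickness

    def defects(p):                     # stand-in for the real defect response
        temp, thick = p
        return (temp - 180) ** 2 / 100 + (thick - 28) ** 2 / 10

    def ga(pop_size=30, gens=50, mut=0.2):
        pop = [[random.uniform(*b) for b in BOUNDS] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=defects)
            parents = pop[: pop_size // 2]              # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = [random.choice(genes) for genes in zip(a, b)]  # crossover
                if random.random() < mut:                              # mutation
                    i = random.randrange(len(child))
                    lo, hi = BOUNDS[i]
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, (hi - lo) / 20)))
                children.append(child)
            pop = parents + children
        return min(pop, key=defects)

    random.seed(1)
    print([round(v, 1) for v in ga()])
    ```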

  3. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  4. A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

    A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses the wavelet threshold denoising method to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying the signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can judge the endpoint of the Cu CMP process.
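    The two-stage signal chain can be re-created on synthetic data. In the hedged sketch below, the friction signal, wavelet family, and threshold are my own choices (not necessarily the paper's): wavelet soft-thresholding denoises the signal, and the innovation of a scalar random-walk Kalman filter spikes when the friction level shifts at the endpoint.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    signal = np.r_[np.full(300, 1.0), np.full(200, 0.8)] + rng.normal(0, 0.05, 500)

    # 1) Wavelet threshold denoising.
    coeffs = pywt.wavedec(signal, "db4", level=4)
    thr = 0.05 * np.sqrt(2 * np.log(len(signal)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: len(signal)]

    # 2) Scalar Kalman filter; the innovation is z_k minus the prediction.
    x, p, q, r = denoised[0], 1.0, 1e-5, 0.01
    innovations = []
    for z in denoised:
        p += q                       # predict (random-walk state model)
        innov = z - x
        k = p / (p + r)              # update
        x += k * innov
        p *= 1 - k
        innovations.append(innov)

    endpoint = int(np.argmax(np.abs(innovations[10:]))) + 10   # skip start-up
    print("endpoint detected near sample", endpoint)
    ```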

  5. Composite faces are not (necessarily) processed coactively: A test using systems factorial technology and logical-rule models.

    PubMed

    Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R

    2018-06-01

    Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed by pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when the top and bottom components of composite faces are processed in serial or in parallel, without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all four upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

  7. DESIGNING ENVIRONMENTAL, ECONOMIC AND ENERGY EFFICIENT CHEMICAL PROCESSES

    EPA Science Inventory

    The design and improvement of chemical processes can be very challenging. The earlier energy conservation, process economics and environmental aspects are incorporated into the process development, the easier and less expensive it is to alter the process design. Process emissio...

  8. Reversing the conventional leather processing sequence for cleaner leather production.

    PubMed

    Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari

    2006-02-01

    Conventional leather processing generally involves a combination of single and multistep processes that employs as well as expels various biological, inorganic, and organic materials. It involves 14-15 steps and discharges a huge amount of pollutants, primarily because conventional leather processing employs a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals, and the pH profiles of the process, have been judiciously used for reversing the process steps. This reversed process avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process achieves a significant reduction in COD and TS, by 53 and 79%, respectively. Water consumption and discharge are reduced by 65 and 64%, respectively. The process also benefits from significant reductions in chemicals, time, power, and cost compared to the conventional process.

  9. Group processing in an undergraduate biology course for preservice teachers: Experiences and attitudes

    NASA Astrophysics Data System (ADS)

    Schellenberger, Lauren Brownback

    Group processing is a key principle of cooperative learning in which small groups discuss their strengths and weaknesses and set group goals or norms. However, group processing has not been well-studied at the post-secondary level or from a qualitative or mixed methods perspective. This mixed methods study uses a phenomenological framework to examine the experience of group processing for students in an undergraduate biology course for preservice teachers. The effect of group processing on students' attitudes toward future group work and group processing is also examined. Additionally, this research investigated preservice teachers' plans for incorporating group processing into future lessons. Students primarily experienced group processing as a time to reflect on past performance. Also, students experienced group processing as a time to increase communication among group members and become motivated for future group assignments. Three factors directly influenced students' experiences with group processing: (1) previous experience with group work, (2) instructor interaction, and (3) gender. Survey data indicated that group processing had a slight positive effect on students' attitudes toward future group work and group processing. Participants who were interviewed felt that group processing was an important part of group work and that it had increased their group's effectiveness as well as their ability to work effectively with other people. Participants held positive views on group work prior to engaging in group processing, and group processing did not alter their attitude toward group work. Preservice teachers who were interviewed planned to use group work and a modified group processing protocol in their future classrooms. They also felt that group processing had prepared them for their future professions by modeling effective collaboration and group skills. Based on this research, a new model for group processing has been created which includes extensive instructor interaction and additional group processing sessions. This study offers a new perspective on the phenomenon of group processing and informs science educators and teacher educators on the effective implementation of this important component of small-group learning.

  10. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    [Search-result snippet; cleaned excerpt:] …and Lewis (1972) in their Berkeley Symposium paper, and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried… They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres… because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a…

  11. The Application of Six Sigma Methodologies to University Processes: The Use of Student Teams

    ERIC Educational Resources Information Center

    Pryor, Mildred Golden; Alexander, Christine; Taneja, Sonia; Tirumalasetty, Sowmya; Chadalavada, Deepthi

    2012-01-01

    The first student Six Sigma team (activated under a QEP Process Sub-team) evaluated the course and curriculum approval process. The goal was to streamline the process and thereby shorten process cycle time and reduce confusion about how the process works. Members of this team developed flowcharts on how the process is supposed to work (by…

  12. Impact of Radio Frequency Identification (RFID) on the Marine Corps’ Supply Process

    DTIC Science & Technology

    2006-09-01

    [Search-result snippet from the thesis's table of contents; recoverable headings: Hypothetical Improvement Using a Real-Time Order Processing System Vice a Batch Order Processing System; As-Is: The Current Process; Results; Simulation.]

  13. Global-local processing relates to spatial and verbal processing: implications for sex differences in cognition.

    PubMed

    Pletzer, Belinda; Scheuringer, Andrea; Scherndl, Thomas

    2017-09-05

    Sex differences have been reported for a variety of cognitive tasks and related to the use of different cognitive processing styles in men and women. It was recently argued that these processing styles share some characteristics across tasks, i.e., male approaches are oriented towards holistic stimulus aspects and female approaches are oriented towards stimulus details. In that respect, sex-dependent cognitive processing styles share similarities with attentional global-local processing. A direct relationship between cognitive processing and global-local processing has, however, not been previously established. In the present study, 49 men and 44 women completed a Navon paradigm and a Kimchi Palmer task, as well as a navigation task and a verbal fluency task, with the goal of relating the global advantage (GA) effect as a measure of global processing to holistic processing styles in both tasks. Indeed, participants with larger GA effects displayed more holistic processing during spatial navigation and phonemic fluency. However, the relationship to cognitive processing styles was modulated by the specific condition of the Navon paradigm, as well as the sex of participants. Thus, different types of global-local processing play different roles in cognitive processing in men and women.

  14. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  15. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  16. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  17. A mathematical study of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
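    The stated construction is easy to simulate. In the hedged sketch below, all three component processes are simple AR(1) stand-ins of my choosing (the report develops the general mathematics, not these particular choices); the product of a Gaussian process with a random amplitude yields the heavy-tailed, non-Gaussian behavior such turbulence models aim for.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ar1(n, phi, sigma):
        """First-order autoregression as a cheap correlated process."""
        x = np.empty(n)
        x[0] = rng.normal(0, sigma)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + rng.normal(0, sigma * np.sqrt(1 - phi ** 2))
        return x

    n = 2000
    gauss = ar1(n, phi=0.95, sigma=1.0)                     # local Gaussian process
    amplitude = np.abs(ar1(n, phi=0.999, sigma=0.5)) + 0.2  # slowly varying, positive
    mean = ar1(n, phi=0.9995, sigma=0.3)                    # independent mean process

    turbulence = amplitude * gauss + mean
    kurt = ((turbulence - turbulence.mean()) ** 4).mean() / turbulence.var() ** 2
    print(f"kurtosis = {kurt:.2f} (> 3 indicates non-Gaussian tails)")
    ```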

  18. Standard services for the capture, processing, and distribution of packetized telemetry data

    NASA Technical Reports Server (NTRS)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  19. Assessment of hospital processes using a process mining technique: Outpatient process analysis at a tertiary hospital.

    PubMed

    Yoo, Sooyoung; Cho, Minsu; Kim, Eunhye; Kim, Seok; Sim, Yerim; Yoo, Donghyun; Hwang, Hee; Song, Minseok

    2016-04-01

    Many hospitals are increasing their efforts to improve processes because processes play an important role in enhancing work efficiency and reducing costs. However, to date, a quantitative tool has not been available to examine the before and after effects of processes and environmental changes, other than the use of indirect indicators, such as mortality rate and readmission rate. This study used process mining technology to analyze process changes based on changes in the hospital environment, such as the construction of a new building, and to measure the effects of environmental changes in terms of consultation wait time, time spent per task, and outpatient care processes. Using process mining technology, electronic health record (EHR) log data of outpatient care before and after constructing a new building were analyzed, and the effectiveness of the technology in terms of the process was evaluated. Using the process mining technique, we found that the total time spent in outpatient care did not increase significantly compared to that before the construction of a new building, considering that the number of outpatients increased, and the consultation wait time decreased. These results suggest that the operation of the outpatient clinic was effective after changes were implemented in the hospital environment. We further identified improvements in processes using the process mining technique, thereby demonstrating the usefulness of this technique for analyzing complex hospital processes at a low cost. This study confirmed the effectiveness of process mining technology at an actual hospital site. In future studies, the use of process mining technology will be expanded by applying this approach to a larger variety of process change situations. Copyright © 2016. Published by Elsevier Ireland Ltd.
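    The core process-mining step can be shown in a few lines: derive a directly-follows graph, with mean transition times, from an event log. The log below is invented for illustration (not the hospital's EHR data):

    ```python
    from collections import defaultdict

    log = [  # (case id, activity, timestamp in minutes)
        (1, "registration", 0), (1, "consultation", 35), (1, "payment", 50),
        (2, "registration", 5), (2, "lab test", 20), (2, "consultation", 80),
        (2, "payment", 95),
    ]

    traces = defaultdict(list)
    for case, act, t in sorted(log, key=lambda e: (e[0], e[2])):
        traces[case].append((act, t))

    edges = defaultdict(list)   # (a, b) -> waiting times observed between them
    for events in traces.values():
        for (a, ta), (b, tb) in zip(events, events[1:]):
            edges[(a, b)].append(tb - ta)

    for (a, b), waits in edges.items():
        print(f"{a} -> {b}: {len(waits)}x, mean wait {sum(waits) / len(waits):.0f} min")
    ```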

  20. Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Miller, L.

    1974-01-01

    A two year study of the major process variables associated with the manufacturing process for sealed, nickel-cadmium, aerospace cells is summarized. Effort was directed toward identifying the major process variables associated with a manufacturing process, experimentally assessing each variable's effect, and imposing the necessary changes (optimization) and controls for the critical process variables to improve results and uniformity. A critical process variable associated with the sintered nickel plaque manufacturing process was identified as the manual forming operation. Critical process variables identified with the positive electrode impregnation/polarization process were impregnation solution temperature, free acid content, vacuum impregnation, and sintered plaque strength. Positive and negative electrodes were identified as a major source of carbonate contamination in sealed cells.

  1. Monitoring autocorrelated process: A geometric Brownian motion process approach

    NASA Astrophysics Data System (ADS)

    Li, Lee Siaw; Djauhari, Maman A.

    2013-09-01

    Autocorrelated process control is common in modern industrial process control practice. The current practice is to eliminate the autocorrelation by fitting an appropriate model, such as a Box-Jenkins model, and then to conduct process control based on the residuals. In this paper we show that many time series are governed by a geometric Brownian motion (GBM) process. In this case, by exploiting the properties of a GBM process, we only need an appropriate transformation of the data to satisfy the conditions required by traditional process control. An industrial example of a cocoa powder production process at a Malaysian company is presented and discussed to illustrate the advantages of the GBM approach.
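
    The key property exploited here is that the log-returns of a GBM series are i.i.d. normal, so after a log transformation a standard Shewhart chart applies. A minimal illustration on synthetic data (not the cocoa powder example):

```python
# Sketch: if S(t) follows a GBM, log-returns log(S[t+1]/S[t]) are
# i.i.d. normal, so a 3-sigma Shewhart chart applies to them directly.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.001, 0.02, 500
s = 100 * np.exp(np.cumsum(mu - 0.5 * sigma**2 + sigma * rng.standard_normal(n)))

r = np.diff(np.log(s))                   # transformed, now i.i.d. under GBM
center, spread = r.mean(), r.std(ddof=1)
ucl, lcl = center + 3 * spread, center - 3 * spread
signals = np.where((r > ucl) | (r < lcl))[0]
print("out-of-control samples:", signals)
```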

  2. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

    Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermodynamics have limitations in finding optimal operational regions due to the time-varying nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance; however, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem requires efficient heuristics. This dissertation sets out to solve these two challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
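
    A hedged sketch of the two-step, data-driven scheme described above: a stand-in process model plays the role of the knowledge identified from data, and a simple (mu + lambda) evolution strategy searches for the optimal operating region; both the model and the objective are toy assumptions:

```python
# Sketch: evolutionary search over a data-driven process model.
# The "identified" model below is a toy stand-in, not a real boiler model.
import numpy as np

rng = np.random.default_rng(1)

def efficiency(x):
    """Stand-in process model: efficiency vs. two operational settings."""
    return -(x[0] - 0.6) ** 2 - 2 * (x[1] - 0.3) ** 2

# (mu + lambda) evolution strategy over the operating region [0, 1]^2.
pop = rng.random((20, 2))
for _ in range(50):
    children = np.clip(pop + 0.05 * rng.standard_normal(pop.shape), 0, 1)
    both = np.vstack([pop, children])
    pop = both[np.argsort([-efficiency(x) for x in both])][:20]  # keep best 20
print("best operating point:", pop[0])
```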

  3. 5 CFR 1653.13 - Processing legal processes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...

  4. 5 CFR 1653.13 - Processing legal processes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...

  5. A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji

    The capabilities and complexity of manufacturing systems are increasing as they strive toward an integrated manufacturing environment. Availability of alternative process plans is a key factor for integrating design, process planning, and scheduling. This paper describes an algorithm for generating alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts. This makes the algorithm applicable to very large process plan networks, and it can also search wide areas of the network according to user requirements. The algorithm can generate alternative process plans and select a suitable one based on the objective functions.
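
    A minimal sketch of the lazy, incomplete search idea: nodes of the process plan network are generated on demand and the frontier is pruned to a beam, so the full network is never materialized. The successor function below is purely illustrative:

```python
# Sketch: incomplete best-first search with on-demand node expansion.
# successors() is a hypothetical stand-in for process plan generation.
import heapq

def successors(state):
    """Next machining operations and their cost (illustrative)."""
    if len(state) >= 3:
        return []
    return [(state + (op,), 1 + op * 0.1) for op in range(2)]

def best_first(start, is_goal, beam=10):
    frontier = [(0.0, start)]
    while frontier:
        cost, state = heapq.heappop(frontier)
        if is_goal(state):
            return state, cost
        for nxt, step in successors(state):
            heapq.heappush(frontier, (cost + step, nxt))
        frontier = heapq.nsmallest(beam, frontier)  # pruning: incomplete search
    return None

print(best_first((), lambda s: len(s) == 3))
```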

  6. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools.

    PubMed

    O'Callaghan, Sean; De Souza, David P; Isaac, Andrew; Wang, Qiao; Hodkinson, Luke; Olshansky, Moshe; Erwin, Tim; Appelbe, Bill; Tull, Dedreia L; Roessner, Ute; Bacic, Antony; McConville, Malcolm J; Likić, Vladimir A

    2012-05-30

    Gas chromatography-mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface, facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command-line tools suitable for scripting high-throughput, customized processing pipelines. PyMS comprises a library of functions for processing instrument GC-MS data, developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common-ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale across multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and for interactive/exploratory data analysis. In real-life GC-MS data processing scenarios, PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios that are simple to implement in PyMS yet difficult to achieve with many conventional GC-MS data processing packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface.
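
    For orientation, the following sketch reproduces the generic scripted steps the abstract lists (noise smoothing, baseline correction, peak detection) on a synthetic chromatogram using SciPy; it deliberately does not use the PyMS API itself:

```python
# Generic illustration of a scripted chromatogram pipeline with SciPy;
# not the PyMS API. The signal is synthetic.
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
tic = (np.exp(-((t - 3) ** 2) / 0.01) + 0.6 * np.exp(-((t - 7) ** 2) / 0.02)
       + 0.05 * t + 0.01 * rng.standard_normal(t.size))  # two peaks + drift + noise

smoothed = savgol_filter(tic, window_length=21, polyorder=3)  # noise smoothing
baseline = np.percentile(smoothed, 10)                        # crude baseline
peaks, _ = find_peaks(smoothed - baseline, height=0.2)        # peak detection
print("peak retention times:", t[peaks])
```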

  7. Processing mode during repetitive thinking in socially anxious individuals: evidence for a maladaptive experiential mode.

    PubMed

    Wong, Quincy J J; Moulds, Michelle L

    2012-12-01

    Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, in socially anxious individuals, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Mood, cognition, and behavioural measures were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination. Hence, these results are novel and will need to be replicated. These findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  9. Modeling interdependencies between business and communication processes in hospitals.

    PubMed

    Brigl, Birgit; Wendt, Thomas; Winter, Alfred

    2003-01-01

    The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, no tools are available that specialize in modeling information-driven business processes and their consequences for the communication between information-processing tools. Therefore, we present an approach that facilitates the representation and analysis of business processes and the resulting communication processes between application components, together with their interdependencies. This approach aims not only to visualize those processes but also to evaluate whether there are weaknesses in the information-processing infrastructure that hinder the smooth implementation of the business processes.

  10. Life cycle analysis within pharmaceutical process optimization and intensification: case study of active pharmaceutical ingredient production.

    PubMed

    Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick

    2014-12-01

    As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, George; Lee, Adam

    In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for integration and electrical interconnection of the fully-depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and developed the outline of a process option that includes a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz's CA18HJ process, using standard and circular-enclosed transistors.

  12. Face to face with emotion: holistic face processing is modulated by emotional state.

    PubMed

    Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa

    2012-01-01

    Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.

  13. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  14. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  15. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  16. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  17. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  18. 20 CFR 405.725 - Effect of expedited appeals process agreement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PROCESS FOR ADJUDICATING INITIAL DISABILITY CLAIMS Expedited Appeals Process for Constitutional Issues § 405.725 Effect of expedited appeals process agreement. After an expedited appeals process agreement is... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Effect of expedited appeals process agreement...

  19. Common and distinct networks for self-referential and social stimulus processing in the human brain.

    PubMed

    Herold, Dorrit; Spengler, Stephanie; Sajonz, Bastian; Usnich, Tatiana; Bermpohl, Felix

    2016-09-01

    Self-referential processing is a complex cognitive function involving a set of implicit and explicit processes, which complicates investigation of its distinct neural signature. The present study explores the functional overlap and dissociability of self-referential and social stimulus processing. We combined an established paradigm for explicit self-referential processing with an implicit social stimulus processing paradigm in one fMRI experiment to determine the neural effects of self-relatedness and social processing within a single study. Overlapping activations were found in the orbitofrontal cortex and in the intermediate part of the precuneus. Stimuli judged as self-referential specifically activated the posterior cingulate cortex; the ventral medial prefrontal cortex, extending into the anterior cingulate cortex and orbitofrontal cortex; the dorsal medial prefrontal cortex; the ventral and dorsal lateral prefrontal cortex; the left inferior temporal gyrus; and the occipital cortex. Social processing specifically involved the posterior precuneus and bilateral temporo-parietal junction. Taken together, our data show, first, common networks for both processes in the medial prefrontal and medial parietal cortex and, second, functional differentiations between self-referential and social processing: an anterior-posterior gradient for social versus self-referential processing within the medial parietal cortex, together with specific activations for self-referential processing in the medial and lateral prefrontal cortex and for social processing in the temporo-parietal junction.

  20. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    PubMed

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding of, systematically appraise, and identify areas of improvement in a business process. Unified Modelling Language (UML) is a general-purpose modelling technique suited to this task. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, the lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase - parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  1. Use of Analogies in the Study of Diffusion

    ERIC Educational Resources Information Center

    Letic, Milorad

    2014-01-01

    Emergent processes, such as diffusion, are considered more difficult to understand than direct processes. In physiology, most processes are presented as direct processes, so emergent processes, when encountered, are even more difficult to understand. It has been suggested that, when studying diffusion, misconceptions about random processes are the…

  2. Is Analytic Information Processing a Feature of Expertise in Medicine?

    ERIC Educational Resources Information Center

    McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.

    2008-01-01

    Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…

  3. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  4. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  5. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  6. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  7. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  8. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  9. Articulating the Resources for Business Process Analysis and Design

    ERIC Educational Resources Information Center

    Jin, Yulong

    2012-01-01

    Effective process analysis and modeling are important phases of the business process management lifecycle. When many activities and multiple resources are involved, it is very difficult to build a correct business process specification. This dissertation provides a resource perspective of business processes. It aims at a better process analysis…

  10. An Integrated Model of Emotion Processes and Cognition in Social Information Processing.

    ERIC Educational Resources Information Center

    Lemerise, Elizabeth A.; Arsenio, William F.

    2000-01-01

    Interprets literature on contributions of social cognitive and emotion processes to children's social competence in the context of an integrated model of emotion processes and cognition in social information processing. Provides neurophysiological and functional evidence for the centrality of emotion processes in personal-social decision making.…

  11. Data Processing and First Products from the Hyperspectral Imager for the Coastal Ocean (HICO) on the International Space Station

    DTIC Science & Technology

    2010-04-01

    ... NRL Stennis Space Center (NRL-SSC) for further processing using the NRL-SSC Automated Processing System (APS). APS was developed for processing ... have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch includes several ...

  12. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. ... Tables of the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the ...
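
    As a concrete aside, the geometric Poisson (Polya-Aeppli) distribution is a compound Poisson sum of i.i.d. geometric variables, which makes direct simulation straightforward; a small sketch:

```python
# Sketch: sample the geometric Poisson (Polya-Aeppli) distribution as
# a Poisson(lam) sum of i.i.d. geometric(p) variables (support 1, 2, ...).
import numpy as np

rng = np.random.default_rng(3)

def geometric_poisson(lam, p, size):
    n = rng.poisson(lam, size)  # number of geometric summands per draw
    return np.array([rng.geometric(p, k).sum() if k else 0 for k in n])

sample = geometric_poisson(lam=2.0, p=0.4, size=100_000)
print("sample mean:", sample.mean(), " theoretical mean:", 2.0 / 0.4)
```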

  13. Management of processes of electrochemical dimensional processing

    NASA Astrophysics Data System (ADS)

    Akhmetov, I. D.; Zakirova, A. R.; Sadykov, Z. B.

    2017-09-01

    In many industries, large numbers of high-precision parts are produced from scarce, hard-to-machine materials. Such parts can often only be formed by non-contact processing, or with minimal mechanical force, which is achievable, for example, by electrochemical machining. At the present stage of development of metal-working processes, the management of electrochemical machining and its automation are important issues. This article presents some indicators and factors of the electrochemical dimensional machining process.

  14. The Hyperspectral Imager for the Coastal Ocean (HICO): Sensor and Data Processing Overview

    DTIC Science & Technology

    2010-01-20

    ... backscattering coefficients, and others. Several of these software modules will be developed within the Automated Processing System (APS), a data ... NRL developed APS, which processes satellite data into ocean color data products. APS is a collection of methods ... used for ocean color processing which provide the tools for the automated processing of satellite imagery [1]. These tools are in the process of ...

  15. [Study on culture and philosophy of processing of traditional Chinese medicines].

    PubMed

    Yang, Ming; Zhang, Ding-Kun; Zhong, Ling-Yun; Wang, Fang

    2013-07-01

    From cultural and philosophical viewpoints, this paper studies the cultural origin, modes of thinking, core principles, and general rules and methods of the processing of traditional Chinese medicines. It traces the culture and history of processing, covering its generation and evolution, its accumulation and refinement of experience, and its core values, and it summarizes the basic principles of processing, which are guided by holistic, objective, dynamic, balanced, and appropriateness-oriented thinking. The aim is to propagate the cultural characteristics and philosophical wisdom of traditional Chinese medicine processing, to promote the inheritance and development of processing, and to ensure the maximum therapeutic value of Chinese medicines in clinical use.

  16. Containerless automated processing of intermetallic compounds and composites

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Joslin, S. M.; Reviere, R. D.; Oliver, B. F.; Noebe, R. D.

    1993-01-01

    An automated containerless processing system has been developed to directionally solidify high temperature materials, intermetallic compounds, and intermetallic/metallic composites. The system incorporates a wide range of ultra-high purity chemical processing conditions. The utilization of image processing for automated control negates the need for temperature measurements for process control. The list of recent systems that have been processed includes Cr, Mo, Mn, Nb, Ni, Ti, V, and Zr containing aluminides. Possible uses of the system, process control approaches, and properties and structures of recently processed intermetallics are reviewed.

  17. A continuous process for the development of Kodak Aerochrome Infrared Film 2443 as a negative

    NASA Astrophysics Data System (ADS)

    Klimes, D.; Ross, D. I.

    1993-02-01

    A process for the continuous dry-to-dry development of Kodak Aerochrome Infrared Film 2443 as a negative (CIR-neg) is described. The process is well suited for production processing of long film lengths. Chemicals from three commercial film processes are used with modifications. Sensitometric procedures are recommended for the monitoring of processing quality control. Sensitometric data and operational aerial exposures indicate that films developed in this process have approximately the same effective aerial film speed as films processed in the reversal process recommended by the manufacturer (Kodak EA-5). The CIR-neg process is useful when aerial photography is acquired for resources management applications which require print reproductions. Originals can be readily reproduced using conventional production equipment (electronic dodging) in black and white or color (color compensation).

  18. Antibiotics with anaerobic ammonium oxidation in urban wastewater treatment

    NASA Astrophysics Data System (ADS)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

    The biofilter process is an aerobic wastewater treatment process that builds on biological oxidation and borrows design ideas from rapid sand filters, integrating filtration, adsorption, and biological purification in a single unit. An engineering example shows that the process is well suited to treating low-concentration sewage and industrial wastewater. The anaerobic ammonium oxidation (anammox) process, owing to its high efficiency and low energy consumption, has broad application prospects in biological nitrogen removal from wastewater, and it has become a research hotspot in practical wastewater treatment at home and abroad. This paper reviews the habitats and species diversity of anammox bacteria and the diverse configurations of the anammox process, and compares the operating conditions of one-stage and two-stage (split) processes. It focuses on laboratory research and engineering applications of anammox technology for various types of wastewater, including sludge digestion liquor and press filtrate, landfill leachate, aquaculture wastewater, monosodium glutamate wastewater, municipal sewage, fecal sewage, and high-salinity wastewater, covering their water quality characteristics, research progress, and obstacles to application. Finally, we summarize the potential problems of the anammox process in treating actual wastewater and propose that future research focus on in-depth study of the water quality factors that hinder anammox and on their regulation, and, on this basis, on vigorously developing optimized and combined processes.

  19. Understanding scaling through history-dependent processes with collapsing sample space.

    PubMed

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

    History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or aging processes such as in fragmentation processes. SSR processes provide a new alternative to understand the origin of scaling in complex systems without the recourse to multiplicative, preferential, or self-organized critical processes.
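
    The SSR mechanism is simple enough to verify numerically: start at state N, jump uniformly to a strictly smaller state until state 1 is reached, and tally visits; the visit frequencies approach Zipf's law, p(x) ~ x^(-1). A minimal sketch:

```python
# Sketch: simulate a sample-space-reducing process and check that the
# visit frequencies of states follow Zipf's law, p(x) ~ 1/x.
import numpy as np

rng = np.random.default_rng(4)
N, runs = 1000, 20_000
visits = np.zeros(N + 1)
for _ in range(runs):
    x = N
    while x > 1:
        visits[x] += 1
        x = rng.integers(1, x)  # sample space shrinks at every step
    visits[1] += 1

# Relative visit frequencies of the first 10 states, normalized to
# state 1; these should be approximately 1, 1/2, 1/3, ...
print(np.round(visits[1:11] / visits[1], 3))
```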

  20. Effects of Processing Parameters on the Forming Quality of C-Shaped Thermosetting Composite Laminates in Hot Diaphragm Forming Process

    NASA Astrophysics Data System (ADS)

    Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.

    2013-10-01

    In this study, the effects of processing temperature and vacuum applying rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during the hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after the diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both the interlaminar slipping friction and the compaction behavior of the prepreg stacks were experimentally analyzed to show the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes in the defects before and after the cure process. The results show that C-shaped prepreg preforms with good forming quality can be achieved by increasing the processing temperature and reducing the vacuum applying rate, both of which promote interlaminar slipping of the prepreg. The processing temperature and forming rate in the hot diaphragm forming process strongly influence the prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. Autoclave processing is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.

  1. Assessment of Advanced Coal Gasification Processes

    NASA Technical Reports Server (NTRS)

    McCarthy, John; Ferrall, Joseph; Charng, Thomas; Houseman, John

    1981-01-01

    This report represents a technical assessment of the following advanced coal gasification processes: AVCO High Throughput Gasification (HTG) Process; Bell Single-Stage High Mass Flux (HMF) Process; Cities Service/Rockwell (CS/R) Hydrogasification Process; Exxon Catalytic Coal Gasification (CCG) Process. Each process is evaluated for its potential to produce SNG from a bituminous coal. In addition to identifying the new technology these processes represent, key similarities/differences, strengths/weaknesses, and potential improvements to each process are identified. The AVCO HTG and the Bell HMF gasifiers share similarities with respect to: short residence time (SRT), high throughput rate, slagging, and syngas as the initial raw product gas. The CS/R Hydrogasifier is also SRT but is non-slagging and produces a raw gas high in methane content. The Exxon CCG gasifier is a long-residence-time, catalytic, fluid-bed reactor producing all of the raw product methane in the gasifier. The report makes the following assessments: 1) while each process has significant potential as a coal gasifier, the CS/R and Exxon processes are better suited for SNG production; 2) the Exxon process is the closest to a commercial level for near-term SNG production; and 3) the SRT processes require significant development, including scale-up and turndown demonstration, char processing and/or utilization demonstration, and reactor control and safety features development.

  2. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
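
    A conceptual sketch of the Monte Carlo idea (not the authors' IPM; the unit-operation relations are invented toy models): parameter variation is propagated through two stacked unit operations and the probability of an out-of-specification CQA is estimated:

```python
# Sketch: Monte Carlo propagation of process-parameter variation
# through two stacked unit operations; all relations are toy models.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Unit operation 1: titer depends on pH (hypothetical relation).
ph = rng.normal(7.0, 0.05, n)
titer = 5.0 - 4.0 * (ph - 7.0) ** 2 + rng.normal(0, 0.1, n)

# Unit operation 2: purity depends on the upstream titer and on flow.
flow = rng.normal(1.0, 0.03, n)
purity = (0.98 - 0.02 * np.abs(titer - 5.0) - 0.05 * np.abs(flow - 1.0)
          + rng.normal(0, 0.002, n))

spec = 0.95  # hypothetical CQA specification limit
print(f"estimated P(OOS) = {np.mean(purity < spec):.4f}")
```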

  3. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    PubMed

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  4. PROCESSING ALTERNATIVES FOR DESTRUCTION OF TETRAPHENYLBORATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D; Thomas Peters, T; Samuel Fink, S

    Two processes were chosen in the 1980's at the Savannah River Site (SRS) to decontaminate the soluble High Level Waste (HLW). The In Tank Precipitation (ITP) process (1,2) was developed at SRS for the removal of radioactive cesium and actinides from the soluble HLW. Sodium tetraphenylborate was added to the waste to precipitate cesium and monosodium titanate (MST) was added to adsorb actinides, primarily uranium and plutonium. Two products of this process were a low activity waste stream and a concentrated organic stream containing cesium tetraphenylborate and actinides adsorbed on monosodium titanate (MST). A copper catalyzed acid hydrolysis process was built to process (3, 4) the Tank 48H cesium tetraphenylborate waste in the SRS's Defense Waste Processing Facility (DWPF). Operation of the DWPF would have resulted in the production of benzene for incineration in SRS's Consolidated Incineration Facility. This process was abandoned together with the ITP process in 1998 due to high benzene in ITP caused by decomposition of excess sodium tetraphenylborate. Processing in ITP resulted in the production of approximately 1.0 million liters of HLW. SRS has chosen a solvent extraction process combined with adsorption of the actinides to decontaminate the soluble HLW stream (5). However, the waste in Tank 48H is incompatible with existing waste processing facilities. As a result, a processing facility is needed to disposition the HLW in Tank 48H. This paper will describe the process for searching for processing options by SRS task teams for the disposition of the waste in Tank 48H. In addition, attempts to develop a caustic hydrolysis process for in tank destruction of tetraphenylborate will be presented. Lastly, the development of both a caustic and acidic copper catalyzed peroxide oxidation process will be discussed.

  5. Manufacturing Process Selection of Composite Bicycle’s Crank Arm using Analytical Hierarchy Process (AHP)

    NASA Astrophysics Data System (ADS)

    Luqman, M.; Rosli, M. U.; Khor, C. Y.; Zambree, Shayfull; Jahidi, H.

    2018-03-01

    The crank arm is one of the important parts of a bicycle and is an expensive product due to the high cost of material and production. This research aims to investigate potential manufacturing processes for fabricating a composite bicycle crank arm and to describe an approach based on the analytic hierarchy process (AHP) that assists decision makers or manufacturing engineers in determining the most suitable process at the early stage of product development, in order to reduce production cost. Four types of process were considered, namely resin transfer molding (RTM), compression molding (CM), vacuum bag molding, and filament winding (FW). The analysis ranks these four processes for their suitability in manufacturing the bicycle crank arm based on five main selection factors and 10 sub-factors. The most suitable manufacturing process was determined by following the AHP steps, and a consistency test was performed to make sure the judgements were consistent during the comparisons. The results indicated that compression molding was the most appropriate manufacturing process because it has the highest priority value (33.6%) among the alternatives.
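
    The core AHP computation is small enough to show directly: priorities are the principal eigenvector of a pairwise comparison matrix, and consistency is checked via the consistency ratio. The matrix below is illustrative, not the paper's data:

```python
# Sketch of the AHP core: priority weights from a pairwise comparison
# matrix plus a consistency check. Matrix values are illustrative.
import numpy as np

# Pairwise comparisons of 4 alternatives (e.g., CM, RTM, vacuum bag, FW).
A = np.array([[1,   3,   5,   4],
              [1/3, 1,   2,   2],
              [1/5, 1/2, 1,   1],
              [1/4, 1/2, 1,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
cr = ci / 0.90                            # random index RI = 0.90 for n = 4
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```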

  6. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes, each a dynamic system. The properties of a product passing through such a process chain are influenced by the transformation applied by each single process. Various methods exist for the control of individual processes, such as classical state controllers from cybernetics or function-mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at the end of a process despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but they play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) solving the optimization problem by dynamic programming to find the optimal control variable values of each process for any encountered end state of its predecessor, and (2) applying the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain, working backwards from predefined quality requirements for the final product. To demonstrate the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
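
    A toy sketch of phase (1): backward dynamic programming tabulates, for each discretized intermediate state, the control that minimizes the deviation from the final quality requirement; states, controls, and the transition function are illustrative assumptions:

```python
# Sketch: backward dynamic programming over a chain of processes.
# The state/control grids and the transition are toy assumptions.
import numpy as np

states = np.linspace(0, 1, 21)           # discretized product state
controls = np.linspace(-0.2, 0.2, 9)     # discretized control variable
target, n_stages = 0.8, 3

def step(x, u):
    """Toy transformation of one process in the chain."""
    return np.clip(x + u + 0.1 * x * u, 0, 1)

value = (states - target) ** 2           # terminal quality requirement
policy = []
for _ in range(n_stages):                # backward induction over the chain
    q = np.array([[value[np.abs(states - step(x, u)).argmin()]
                   for u in controls] for x in states])
    policy.append(controls[q.argmin(axis=1)])
    value = q.min(axis=1)
policy.reverse()                         # policy[k][i]: control at stage k, state i
print("optimal first-stage controls:", np.round(policy[0], 2))
```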

  7. Quantitative analysis of geomorphic processes using satellite image data at different scales

    NASA Technical Reports Server (NTRS)

    Williams, R. S., Jr.

    1985-01-01

    When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or through analysis of the landforms produced by inferred active or dormant processes, a number of limitations of such data must be considered. Active geomorphic processes work at different scales and rates; therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial resolution of the imaging system. Scale is an important factor in recording both continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered, or even suspected, in the analysis of orbital images. If a geomorphic process, or the landform change it causes, is smaller than 200 m in the x and y dimensions, it will not be recorded. Although the scale factor is critical in recording discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface must also be considered in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.

  8. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes an experiment assessing the influence of post-processing on image quality. The experiment comprises three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as input to the image processing are produced by this imaging system under the same parameters. The gathered optically sampled images, with their measured imaging parameters, are then subjected to three digital processes: calibration pre-processing, lossy compression at different compression ratios, and post-processing with different kernels. The image quality assessment method is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be determined. The six sets of JND subjective assessment data can be used to validate each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our post-processing method, image quality is better when the camera MTF lies within a small range.

  9. Microstructure and Texture of Al-2.5wt.%Mg Processed by Combining Accumulative Roll Bonding and Conventional Rolling

    NASA Astrophysics Data System (ADS)

    Gatti, J. R.; Bhattacharjee, P. P.

    2014-12-01

    Evolution of microstructure and texture during severe deformation and annealing was studied in an Al-2.5%Mg alloy processed by two different routes, namely, monotonic Accumulative Roll Bonding (ARB) and a hybrid route combining ARB and conventional rolling (CR). For this purpose, Al-2.5%Mg sheets were subjected to 5 cycles of monotonic ARB processing (equivalent strain ɛeq = 4.0), while in the hybrid route (ARB + CR) 3-cycle ARB-processed sheets were further deformed by conventional rolling to 75% reduction in thickness (ɛeq = 4.0). Although formation of an ultrafine structure was observed in both processing routes, the monotonic ARB-processed material showed a finer microstructure but weaker texture compared to the ARB + CR-processed material. After complete recrystallization, the ARB + CR-processed material showed a weak cube texture ({001}<100>), whereas the cube component was almost negligible in the monotonic ARB-processed material. However, the ND-rotated cube components were stronger in the monotonic ARB-processed material. The observed differences in microstructure and texture evolution during deformation and annealing could be explained by the characteristic differences between the two processing routes.

  10. Process Materialization Using Templates and Rules to Design Flexible Process Models

    NASA Astrophysics Data System (ADS)

    Kumar, Akhil; Yao, Wen

    The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
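
    A minimal sketch of materialization under these assumptions: a generic template with optional steps, rules expressed as plain Python predicates standing in for the Prolog rules, and case data selecting which steps survive in the instance:

```python
# Sketch: materialize a process instance from a generic template plus
# rules applied to case data. Steps, rules, and case fields are
# illustrative; the paper's rules are written in Prolog, not Python.
template = ["receive_order", "check_credit?", "pack", "express_ship?", "ship"]

rules = {
    "check_credit?": lambda case: case["amount"] > 1000,
    "express_ship?": lambda case: case["priority"] == "high",
}

def materialize(template, rules, case):
    steps = []
    for step in template:
        if step.endswith("?"):                 # optional step: a rule decides
            if rules[step](case):
                steps.append(step.rstrip("?"))
        else:                                  # mandatory step: always kept
            steps.append(step)
    return steps

print(materialize(template, rules, {"amount": 2500, "priority": "low"}))
# -> ['receive_order', 'check_credit', 'pack', 'ship']
```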

  11. HMI conventions for process control graphics.

    PubMed

    Pikaar, Ruud N

    2012-01-01

    Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, a 1:1 conversion of existing graphics is usually proposed. This paper suggests another approach, which efficiently leads to a reduced number of new, powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on have been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.

  12. A novel processed food classification system applied to Australian food composition databases.

    PubMed

    O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A

    2017-08-01

    The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as assess the variation in the levels of processing within food groups. A processed foods classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportions of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P), and Ultra Processed (ULP) foods by AUSNUT food group, and the overall proportions of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13, were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P, and ULP were 27%, 3%, 26%, and 44% for AUSNUT 2007 and 38%, 2%, 24%, and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.

  13. Process and domain specificity in regions engaged for face processing: an fMRI study of perceptual differentiation.

    PubMed

    Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E

    2012-12-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.

  14. Process- and Domain-Specificity in Regions Engaged for Face Processing: An fMRI Study of Perceptual Differentiation

    PubMed Central

    Collins, Heather R.; Zhu, Xun; Bhatt, Ramesh S.; Clark, Jonathan D.; Joseph, Jane E.

    2015-01-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. The present study parametrically varied demands on featural, first-order configural or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing) or reflected generalized perceptual differentiation (i.e. differentiation that crosses category and processing type boundaries). Regions of interest were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process-specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex, and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain-specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories. PMID:22849402

  15. Achieving Continuous Manufacturing for Final Dosage Formation: Challenges and How to Meet Them May 20-21 2014 Continuous Manufacturing Symposium.

    PubMed

    Byrn, Stephen; Futran, Maricio; Thomas, Hayden; Jayjock, Eric; Maron, Nicola; Meyer, Robert F; Myerson, Allan S; Thien, Michael P; Trout, Bernhardt L

    2015-03-01

    We describe the key issues and possibilities for continuous final dosage formation, otherwise known as downstream processing or drug product manufacturing. A distinction is made between heterogeneous processing and homogeneous processing, the latter of which is expected to add more value to continuous manufacturing. We also give the key motivations for moving to continuous manufacturing, some of the exciting new technologies, and the barriers to implementation of continuous manufacturing. Continuous processing of heterogeneous blends is the natural first step in converting existing batch processes to continuous. In heterogeneous processing, there are discrete particles that can segregate, whereas in homogeneous processing, components are blended and homogenized such that they do not segregate. Heterogeneous processing can incorporate technologies that are closer to existing technologies, whereas homogeneous processing necessitates the development and incorporation of new technologies. Homogeneous processing has the greatest potential for reaping the full rewards of continuous manufacturing, but it takes long-term vision and a more significant change in process development than heterogeneous processing. Heterogeneous processing has the detriment that, as the technologies are adopted rather than developed, there is a strong tendency to incorporate correction steps, what we call below "The Rube Goldberg Problem." Thus, although heterogeneous processing will likely play a major role in the near-term transformation from batch to continuous processing, it is expected that homogeneous processing is the next step that will follow. Specific action items for industry leaders are also provided. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  16. An Analysis of the Air Force Government Operated Civil Engineering Supply Store Logistic System: How Can It Be Improved?

    DTIC Science & Technology

    1990-09-01

    The report describes the GOCESS logistics system and its operation, including work order (WO) and job order (JO) processing to the Material Control Section, which are discussed separately; Figure 2 illustrates typical WO processing in a GOCESS operation, and Figure 3 illustrates typical JO processing, which is similar.

  17. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper, we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  18. Data processing system for the Sneg-2MP experiment

    NASA Technical Reports Server (NTRS)

    Gavrilova, Y. A.

    1980-01-01

    The data processing system for scientific experiments on stations of the "Prognoz" type provides for the processing sequence to be broken down into a number of consecutive stages: preliminary processing, primary processing, secondary processing. The tasks of each data processing stage are examined for an experiment designed to study gamma flashes of galactic origin and solar flares lasting from several minutes to seconds in the 20 kev to 1000 kev energy range.

  19. General RMP Guidance - Appendix D: OSHA Guidance on PSM

    EPA Pesticide Factsheets

    OSHA's Process Safety Management (PSM) Guidance on providing complete and accurate written information concerning process chemicals, process technology, and process equipment; including process hazard analysis and material safety data sheets.

  20. Elaboration Likelihood and the Counseling Process: The Role of Affect.

    ERIC Educational Resources Information Center

    Stoltenberg, Cal D.; And Others

    The role of affect in counseling has been examined from several orientations. The depth of processing model views the efficiency of information processing as a function of the extent to which the information is processed. The notion of cognitive processing capacity states that processing information at deeper levels engages more of one's limited…

  1. 5 CFR 582.202 - Service of legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...

  2. 5 CFR 582.202 - Service of legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...

  3. Information Processing Concepts: A Cure for "Technofright." Information Processing in the Electronic Office. Part 1: Concepts.

    ERIC Educational Resources Information Center

    Popyk, Marilyn K.

    1986-01-01

    Discusses the new automated office and its six major technologies (data processing, word processing, graphics, image, voice, and networking), the information processing cycle (input, processing, output, distribution/communication, and storage and retrieval), ergonomics, and ways to expand office education classes (versus class instruction). (CT)

  4. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  5. 40 CFR 65.62 - Process vent group determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., or Group 2B) for each process vent. Group 1 process vents require control, and Group 2A and 2B... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Process vent group determination. 65... (CONTINUED) CONSOLIDATED FEDERAL AIR RULE Process Vents § 65.62 Process vent group determination. (a) Group...

  6. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .../or Table 9 compounds are similar and often identical. (3) Biological treatment processes. Biological treatment processes in compliance with this section may be either open or closed biological treatment processes as defined in § 63.111. An open biological treatment process in compliance with this section need...

  7. 5 CFR 581.202 - Service of process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Service of process. 581.202 Section 581... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Service of Process § 581.202 Service of process. (a) A... facilitate proper service of process on its designated agent(s). If legal process is not directed to any...

  8. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  9. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  10. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  11. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  12. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  13. Processing Depth, Elaboration of Encoding, Memory Stores, and Expended Processing Capacity.

    ERIC Educational Resources Information Center

    Eysenck, Michael W.; Eysenck, M. Christine

    1979-01-01

    The effects of several factors on expended processing capacity were measured. Expended processing capacity was greater when information was retrieved from secondary memory than from primary memory, when processing was of a deep, semantic nature than when it was shallow and physical, and when processing was more elaborate. (Author/GDC)

  14. Speed isn’t everything: Complex processing speed measures mask individual differences and developmental changes in executive control

    PubMed Central

    Cepeda, Nicholas J.; Blackwell, Katharine A.; Munakata, Yuko

    2012-01-01

    The rate at which people process information appears to influence many aspects of cognition across the lifespan. However, many commonly accepted measures of “processing speed” may require goal maintenance, manipulation of information in working memory, and decision-making, blurring the distinction between processing speed and executive control and resulting in overestimation of processing-speed contributions to cognition. This concern may apply particularly to studies of developmental change, as even seemingly simple processing speed measures may require executive processes to keep children and older adults on task. We report two new studies and a re-analysis of a published study, testing predictions about how different processing speed measures influence conclusions about executive control across the life span. We find that the choice of processing speed measure affects the relationship observed between processing speed and executive control, in a manner that changes with age, and that choice of processing speed measure affects conclusions about development and the relationship among executive control measures. Implications for understanding processing speed, executive control, and their development are discussed. PMID:23432836

  15. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    NASA Astrophysics Data System (ADS)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-05-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  16. A new class of random processes with application to helicopter noise

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.; Miamee, A. G.

    1989-01-01

    The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density S_x(omega_1, omega_2) is shown to take a particularly simple form, being non-zero only on the lines omega_1 - omega_2 = +/- r_k, where the r_k are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.

  17. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    PubMed Central

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling. PMID:27157858

  18. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  19. Rapid Automatized Naming in Children with Dyslexia: Is Inhibitory Control Involved?

    PubMed

    Bexkens, Anika; van den Wildenberg, Wery P M; Tijms, Jurgen

    2015-08-01

    Rapid automatized naming (RAN) is widely seen as an important indicator of dyslexia. The nature of the cognitive processes involved in rapid naming is, however, still a topic of controversy. We hypothesized that in addition to the involvement of phonological processes and processing speed, RAN is a function of inhibition processes, in particular of interference control. A total of 86 children with dyslexia and 31 normal readers were recruited. Our results revealed that in addition to phonological processing and processing speed, interference control predicts rapid naming in dyslexia, but in contrast to these other two cognitive processes, inhibition is not significantly associated with their reading and spelling skills. After variance in reading and spelling associated with processing speed, interference control and phonological processing was partialled out, naming speed was no longer consistently associated with the reading and spelling skills of children with dyslexia. Finally, dyslexic children differed from normal readers on naming speed, literacy skills, phonological processing and processing speed, but not on inhibition processes. Both theoretical and clinical interpretations of these results are discussed. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Feasibility of using continuous chromatography in downstream processing: Comparison of costs and product quality for a hybrid process vs. a conventional batch process.

    PubMed

    Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian

    2017-10-10

    The protein A capture step is the main cost-driver in downstream processing, with high attrition costs, especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged, to minimize the process adaptations required compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in comparable product quality and step yield compared to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, cost of goods for the protein A step were compared between the hybrid process and the batch process, showing a 300% cost reduction depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Non-Conscious Perception of Emotions in Psychiatric Disorders: The Unsolved Puzzle of Psychopathology.

    PubMed

    Lee, Seung A; Kim, Chai-Youn; Lee, Seung-Hwan

    2016-03-01

    Psychophysiological and functional neuroimaging studies have frequently and consistently shown that emotional information can be processed outside of the conscious awareness. Non-conscious processing comprises automatic, uncontrolled, and fast processing that occurs without subjective awareness. However, how such non-conscious emotional processing occurs in patients with various psychiatric disorders requires further examination. In this article, we reviewed and discussed previous studies on the non-conscious emotional processing in patients diagnosed with anxiety disorder, schizophrenia, bipolar disorder, and depression, to further understand how non-conscious emotional processing varies across these psychiatric disorders. Although the symptom profile of each disorder does not often overlap with one another, these patients commonly show abnormal emotional processing based on the pathology of their mood and cognitive function. This indicates that the observed abnormalities of emotional processing in certain social interactions may derive from a biased mood or cognition process that precedes consciously controlled and voluntary processes. Since preconscious forms of emotional processing appear to have a major effect on behaviour and cognition in patients with these disorders, further investigation is required to understand these processes and their impact on patient pathology.

  3. Empirical evaluation of the Process Overview Measure for assessing situation awareness in process plants.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-03-01

    The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.

  4. A Framework for Business Process Change Requirements Analysis

    NASA Astrophysics Data System (ADS)

    Grover, Varun; Otim, Samuel

    The ability to quickly and continually adapt business processes to accommodate evolving requirements and opportunities is critical for success in competitive environments. Without appropriate linkage between redesign decisions and strategic inputs, identifying processes that need to be modified will be difficult. In this paper, we draw attention to the analysis of business process change requirements in support of process change initiatives. Business process redesign is a multifaceted phenomenon involving processes, organizational structure, management systems, human resource architecture, and many other aspects of organizational life. To be successful, the business process initiative should focus not only on identifying the processes to be redesigned, but also pay attention to various enablers of change. Above all, a framework is just a blueprint; management must lead change. We hope our modest contribution will draw attention to the broader framing of requirements for business process change.

  5. Process Analytical Technology (PAT): batch-to-batch reproducibility of fermentation processes by robust process operational design and control.

    PubMed

    Gnoth, S; Jenzsch, M; Simutis, R; Lübbert, A

    2007-10-31

    The Process Analytical Technology (PAT) initiative of the FDA is a reaction to the growing discrepancy between current possibilities in process supervision and control of pharmaceutical production processes and their current application in industrial manufacturing. With rigid approval practices based on standard operational procedures, adaptation of production reactors towards the state of the art was more or less inhibited for many years. Now PAT paves the way for continuous process and product improvements through improved process supervision based on knowledge-based data analysis, "Quality-by-Design" concepts, and, finally, feedback control. Examples of up-to-date implementations of this concept are presented. They are taken from one key group of processes in recombinant pharmaceutical protein manufacturing: the cultivation of genetically modified Escherichia coli bacteria.

  6. When teams shift among processes: insights from simulation and optimization.

    PubMed

    Kennedy, Deanna M; McComb, Sara A

    2014-09-01

    This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zacarro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.
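    The genetic-algorithm search for optimal simulated teams is only named, not specified, in the abstract above. The toy sketch below shows the general shape of such a search, with an invented encoding (one bit per time slot: transition vs. action process) and an invented fitness function standing in for the authors' team-performance simulation.

```python
# Toy genetic-algorithm sketch in the spirit of the optimal-team search
# described above; encoding and fitness are invented stand-ins, not the
# authors' team-performance model.
import random

random.seed(0)
SLOTS = 12            # time slots in an episode
TARGET_SHIFT = 4      # toy optimum: shift from transition (0) to action (1) at slot 4

def fitness(pattern):
    # Reward patterns that do transition work first, action work after.
    ideal = [0] * TARGET_SHIFT + [1] * (SLOTS - TARGET_SHIFT)
    return sum(p == i for p, i in zip(pattern, ideal))

def evolve(pop_size=30, generations=40, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(SLOTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SLOTS)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```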

  7. Nitrous oxide and methane emissions from different treatment processes in full-scale municipal wastewater treatment plants.

    PubMed

    Rena, Y G; Wang, J H; Li, H F; Zhang, J; Qi, P Y; Hu, Z

    2013-01-01

    Nitrous oxide (N2O) and methane (CH4) are two important greenhouse gases (GHG) emitted from biological nutrient removal (BNR) processes in municipal wastewater treatment plants (WWTP). In this study, three typical biological wastewater treatment processes were studied in WWTP of Northern China: the pre-anaerobic carrousel oxidation ditch (A+OD) process, the pre-anoxic anaerobic-anoxic-oxic (A-A/A/O) process and the reverse anaerobic-anoxic-oxic (r-A/A/O) process. The N2O and CH4 emissions from these three different processes were measured in every processing unit of each WWTP. Results showed that N2O and CH4 were mainly discharged during the nitrification/denitrification process and the anaerobic/anoxic treatment process, respectively, and that the amounts of their formation and release were significantly influenced by the different BNR processes implemented in these WWTP. The N2O conversion ratio of the r-A/A/O process was the lowest among the three WWTP, being 10.9% and 18.6% lower than that of the A-A/A/O process and the A+OD process, respectively. Similarly, the CH4 conversion ratio of the r-A/A/O process was the lowest among the three WWTP, being 89.1% and 80.8% lower than that of the A-A/A/O process and the A+OD process, respectively. The factors influencing N2O and CH4 formation and emission in the three WWTP were investigated to explain the differences between these processes. The nitrite concentration and the oxidation-reduction potential (ORP) value were found to be the dominant factors affecting N2O and CH4 production, respectively. Flow-based emission factors of N2O and CH4 for the WWTP were determined for better quantification of GHG emissions and further technical assessment of mitigation options.

  8. Effects of children's working memory capacity and processing speed on their sentence imitation performance.

    PubMed

    Poll, Gerard H; Miller, Carol A; Mainela-Arnold, Elina; Adams, Katharine Donnelly; Misra, Maya; Park, Ji Sook

    2013-01-01

    More limited working memory capacity and slower processing for language and cognitive tasks are characteristics of many children with language difficulties. Individual differences in processing speed have not consistently been found to predict language ability or severity of language impairment. There are conflicting views on whether working memory and processing speed are integrated or separable abilities. To evaluate four models for the relations of individual differences in children's processing speed and working memory capacity in sentence imitation. The models considered whether working memory and processing speed are integrated or separable, as well as the effect of the number of operations required per sentence. The role of working memory as a mediator of the effect of processing speed on sentence imitation was also evaluated. Forty-six children with varied language and reading abilities imitated sentences. Working memory was measured with the Competing Language Processing Task (CLPT), and processing speed was measured with a composite of truth-value judgment and rapid automatized naming tasks. Mixed-effects ordinal regression models evaluated the CLPT and processing speed as predictors of sentence imitation item scores. A single mediator model evaluated working memory as a mediator of the effect of processing speed on sentence imitation total scores. Working memory was a reliable predictor of sentence imitation accuracy, but processing speed predicted sentence imitation only as a component of a processing speed by number of operations interaction. Processing speed predicted working memory capacity, and there was evidence that working memory acted as a mediator of the effect of processing speed on sentence imitation accuracy. The findings support a refined view of working memory and processing speed as separable factors in children's sentence imitation performance. Processing speed does not independently explain sentence imitation accuracy for all sentence types, but contributes when the task requires more mental operations. Processing speed also has an indirect effect on sentence imitation by contributing to working memory capacity. © 2013 Royal College of Speech and Language Therapists.
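    For readers unfamiliar with the single-mediator model used above, the following toy sketch (synthetic data; all coefficients invented) shows how the indirect effect of processing speed on sentence imitation via working memory is estimated as the product of the two path coefficients.

```python
# Toy single-mediator sketch matching the analysis described above:
# processing speed -> working memory -> sentence imitation, synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n = 46                                    # matches the sample size above
speed = rng.normal(0, 1, n)               # processing-speed composite (z-scored)
wm = 0.5 * speed + rng.normal(0, 1, n)    # a path: speed predicts working memory
imitation = 0.6 * wm + 0.1 * speed + rng.normal(0, 1, n)  # b and c' paths

def slope(y, *predictors):
    # Ordinary least squares; return the coefficient of the first predictor.
    X = np.column_stack([np.ones(n), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(wm, speed)             # effect of speed on the mediator
b = slope(imitation, wm, speed)  # effect of the mediator, controlling speed
print(f"indirect effect (a*b) = {a * b:.2f}")
```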

  9. Q-marker based strategy for CMC research of Chinese medicine: A case study of Panax Notoginseng saponins.

    PubMed

    Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu

    2018-01-31

    To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, the quality-marker (Q-marker) concept was proposed for establishing quality standards and quality analysis approaches for Chinese medicine, which sheds light on CMC study of Chinese medicine. Here the manufacturing process of Panax Notoginseng Saponins (PNS) is taken as a case study, and the present work establishes a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of PNS are selected and established by integrating the chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between Q-markers and critical process parameters (CPPs) of the key processes. Finally, the CPPs of the key processes are optimized in order to improve process efficiency. Among the 97 identified compounds, Notoginsenoside R1 and ginsenosides Rg1, Re, Rb1 and Rd are selected as the Q-markers of PNS. Our analysis of PNS manufacturing shows that the extraction process and the column chromatography process are the key processes. With the CPPs of each process as the inputs and the Q-markers' contents as the outputs, two process prediction models are built separately for the extraction process and the column chromatography process of Panax notoginseng, both of which possess good prediction ability. Based on the efficiency models we constructed for the extraction and column chromatography processes, the optimal CPPs of both processes are calculated. Our results show that the Q-marker derived CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine to assure product quality and promote key process efficiency simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.
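    As a rough sketch of the modelling-and-optimization step described above, the code below fits a response-surface model from invented CPPs (extraction temperature and time) to a simulated Rg1 yield and grid-searches the feasible region for the predicted optimum. It is illustrative only; the authors' modeling algorithms and data are not reproduced.

```python
# Toy CPP -> Q-marker model and CPP optimization; data, parameter names,
# and the quadratic model are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical historical batches: extraction temperature (C) and time (h).
temp = rng.uniform(60, 90, 40)
time = rng.uniform(1, 4, 40)
# Simulated ginsenoside Rg1 yield with an interior optimum plus noise.
rg1 = -0.01 * (temp - 80) ** 2 - 0.5 * (time - 2.5) ** 2 + 10 \
      + rng.normal(0, 0.1, 40)

# Quadratic response-surface fit via least squares.
X = np.column_stack([np.ones_like(temp), temp, time,
                     temp**2, time**2, temp * time])
coef, *_ = np.linalg.lstsq(X, rg1, rcond=None)

# Grid search over the feasible CPP region for the predicted optimum.
tg, hg = np.meshgrid(np.linspace(60, 90, 61), np.linspace(1, 4, 61))
G = np.column_stack([np.ones(tg.size), tg.ravel(), hg.ravel(),
                     tg.ravel()**2, hg.ravel()**2, tg.ravel() * hg.ravel()])
pred = G @ coef
i = np.argmax(pred)
print(f"predicted optimum: {tg.ravel()[i]:.1f} C, {hg.ravel()[i]:.2f} h")
```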

  10. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    PubMed Central

    2012-01-01

    Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios, PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios that are simple to implement in PyMS, yet difficult to achieve with many conventional GC-MS data processing software packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface. PMID:22647087
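    The pipeline stages named above (noise smoothing, baseline correction, peak detection) can be sketched generically as follows. This uses plain NumPy/SciPy on synthetic data and deliberately does not reproduce PyMS's actual API, whose function names differ.

```python
# Generic GC-MS-style pipeline sketch on synthetic data: smoothing,
# crude baseline correction, peak detection. Not PyMS's API.
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)                      # retention time (min)
baseline = 0.2 + 0.05 * t                         # slow instrumental drift
peaks_true = sum(np.exp(-((t - c) ** 2) / 0.002) for c in (2.0, 5.5, 7.3))
tic = peaks_true + baseline + rng.normal(0, 0.02, t.size)  # noisy total-ion current

smoothed = savgol_filter(tic, window_length=31, polyorder=3)      # noise smoothing
corrected = smoothed - np.polyval(np.polyfit(t, smoothed, 1), t)  # crude linear baseline
detected, _ = find_peaks(corrected, height=0.3)                   # peak detection
print("peak retention times:", np.round(t[detected], 2))
```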

  11. The Research Process on Converter Steelmaking Process by Using Limestone

    NASA Astrophysics Data System (ADS)

    Tang, Biao; Li, Xing-yi; Cheng, Han-chi; Wang, Jing; Zhang, Yun-long

    2017-08-01

    Compared with the traditional converter steelmaking process, the steelmaking process with limestone partly replaces lime with limestone. Many researchers have studied the new process, and there is much related research on material balance calculation, the behaviour of limestone in the slag, limestone powder injection in the converter, and the application of limestone in iron and steel enterprises. The results show that the surplus heat of the converter can meet the needs of limestone calcination, and that the new process can reduce energy loss across the whole steelmaking process, reduce carbon dioxide emissions, and improve the quality of the gas.

  12. Gas processing handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-04-01

    Brief details are given of processes including: BGC-Lurgi slagging gasification, COGAS, Exxon catalytic coal gasification, FW-Stoic 2-stage, GI two stage, HYGAS, Koppers-Totzek, Lurgi pressure gasification, Saarberg-Otto, Shell, Texaco, U-Gas, W-D.IGI, Wellman-Galusha, Westinghouse, and Winkler coal gasification processes; the Rectisol process; the Catacarb and the Benfield processes for removing CO2, H2S and COS from gases produced by the partial oxidation of coal; the Selectamine DD, Selexol solvent, and Sulfinol gas cleaning processes; the sulphur-tolerant shift (SSK) process; and the Super-meth process for the production of high-Btu gas from synthesis gas.

  13. Working on the Boundaries: Philosophies and Practices of the Design Process

    NASA Technical Reports Server (NTRS)

    Ryan, R.; Blair, J.; Townsend, J.; Verderaime, V.

    1996-01-01

    While the systems engineering process is a formal program management technique and is contractually binding, the design process is the informal practice of achieving the design project requirements throughout all design phases of the systems engineering process. The design process and organization are system and component dependent. Informal reviews include technical information meetings and concurrent engineering sessions, while formal technical discipline reviews are conducted through the systems engineering process. This paper discusses and references major philosophical principles in the design process, identifies its role in interacting systems and discipline analyses and integration, and illustrates the process application in experienced aerostructural designs.

  14. Chemical processing of lunar materials

    NASA Technical Reports Server (NTRS)

    Criswell, D. R.; Waldron, R. D.

    1979-01-01

    The paper highlights recent work on the general problem of processing lunar materials. The discussion covers lunar source materials, refined products, motivations for using lunar materials, and general considerations for a lunar or space processing plant. Attention is given to chemical processing through various techniques, including electrolysis of molten silicates, carbothermic/silicothermic reduction, carbo-chlorination process, NaOH basic-leach process, and HF acid-leach process. Several options for chemical processing of lunar materials are well within the state of the art of applied chemistry and chemical engineering to begin development based on the extensive knowledge of lunar materials.

  15. Coordination and organization of security software process for power information application environment

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    2017-09-01

    As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. Coordinating with the functional software, the process for security software and its testing are discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission, and maintenance. For each phase, the paper proposes subprocesses to support software security. As an example, the paper introduces the above process into the power information platform.

  16. Sensor-based atomic layer deposition for rapid process learning and enhanced manufacturability

    NASA Astrophysics Data System (ADS)

    Lei, Wei

    In the search for a sensor-based atomic layer deposition (ALD) process to accelerate process learning and enhance manufacturability, we have explored new reactor designs and applied in-situ process sensing to W and HfO2 ALD processes. A novel wafer-scale ALD reactor, which features fast gas switching, good process sensing compatibility and significant similarity to the real manufacturing environment, is constructed. The reactor has a unique movable reactor cap design that allows two possible operation modes: (1) steady-state flow with alternating gas species; or (2) fill-and-pump-out cycling of each gas, accelerating the pump-out by lifting the cap to employ the large chamber volume as ballast. Downstream quadrupole mass spectrometry (QMS) sampling is applied for in-situ process sensing of the tungsten ALD process. The QMS reveals essential surface reaction dynamics through real-time signals associated with byproduct generation as well as precursor introduction and depletion for each ALD half cycle, which are then used for process learning and optimization. More subtle interactions such as imperfect surface saturation and reactant dose interaction are also directly observed by QMS, indicating that the ALD process is more complicated than the suggested layer-by-layer growth. By integrating in real time the byproduct QMS signals over each exposure and plotting them against process cycle number, the deposition kinetics on the wafer is directly measured. For continuous ALD runs, the total integrated byproduct QMS signal in each ALD run is also linear in ALD film thickness, and therefore can be used for ALD film thickness metrology. The in-situ process sensing is also applied to a HfO2 ALD process carried out in a furnace-type ALD reactor. Precursor dose end-point control is applied to precisely control the precursor dose in each half cycle. Multiple process sensors, including a quartz crystal microbalance (QCM) and QMS, are used to provide real-time process information. The sensing results confirm the proposed surface reaction path and once again reveal the complexity of ALD processes. The impact of this work includes: (1) It explores new ALD reactor designs which enable the implementation of in-situ process sensors for rapid process learning and enhanced manufacturability; (2) It demonstrates for the first time that in-situ QMS can reveal detailed process dynamics and film growth kinetics in wafer-scale ALD processes, and thus can be used for ALD film thickness metrology; (3) Based on results from two different processes carried out in two different reactors, it is clear that ALD is a more complicated process than normally believed or advertised, but real-time observation of the operational chemistries in ALD by in-situ sensors provides critical insight into the process and the basis for more effective process control for ALD applications.
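    The thickness-metrology idea, integrating the byproduct QMS signal over each exposure and relating the cumulative integral to film growth, can be sketched as follows; the signal shapes, noise level, and calibration here are invented for illustration.

```python
# Sketch of the byproduct-integration metrology described above; synthetic
# QMS traces, so magnitudes and units are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
cycles, samples = 50, 200
t = np.linspace(0, 2.0, samples)                 # seconds within one exposure

per_cycle = []
for _ in range(cycles):
    burst = np.exp(-t / 0.3)                     # byproduct burst on precursor dose
    trace = burst + rng.normal(0, 0.01, samples) # noisy QMS partial-pressure trace
    per_cycle.append(np.trapz(trace, t))         # integrated byproduct per cycle

cumulative = np.cumsum(per_cycle)
# Linear fit of cumulative signal vs. cycle number -> growth-per-cycle proxy.
slope, intercept = np.polyfit(np.arange(1, cycles + 1), cumulative, 1)
print(f"integrated byproduct signal per cycle ~ {slope:.3f} (arb. units)")
# With a one-point thickness calibration, cumulative signal maps to thickness.
```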

  17. Implicit Processes, Self-Regulation, and Interventions for Behavior Change.

    PubMed

    St Quinton, Tom; Brunton, Julie A

    2017-01-01

    The ability to regulate and subsequently change behavior is influenced by both reflective and implicit processes. Traditional theories have focused on conscious processes by highlighting the beliefs and intentions that influence decision making. However, their success in changing behavior has been modest with a gap between intention and behavior apparent. Dual-process models have been recently applied to health psychology; with numerous models incorporating implicit processes that influence behavior as well as the more common conscious processes. Such implicit processes are theorized to govern behavior non-consciously. The article provides a commentary on motivational and volitional processes and how interventions have combined to attempt an increase in positive health behaviors. Following this, non-conscious processes are discussed in terms of their theoretical underpinning. The article will then highlight how these processes have been measured and will then discuss the different ways that the non-conscious and conscious may interact. The development of interventions manipulating both processes may well prove crucial in successfully altering behavior.

  18. All varieties of encoding variability are not created equal: Separating variable processing from variable tasks

    PubMed Central

    Huff, Mark J.; Bodner, Glen E.

    2014-01-01

    Whether encoding variability facilitates memory is shown to depend on whether item-specific and relational processing are both performed across study blocks, and whether study items are weakly versus strongly related. Variable-processing groups studied a word list once using an item-specific task and once using a relational task. Variable-task groups’ two different study tasks recruited the same type of processing each block. Repeated-task groups performed the same study task each block. Recall and recognition were greatest in the variable-processing group, but only with weakly related lists. A variable-processing benefit was also found when task-based processing and list-type processing were complementary (e.g., item-specific processing of a related list) rather than redundant (e.g., relational processing of a related list). That performing both item-specific and relational processing across trials, or within a trial, yields encoding-variability benefits may help reconcile decades of contradictory findings in this area. PMID:25018583

  19. Continuous welding of unidirectional fiber reinforced thermoplastic tape material

    NASA Astrophysics Data System (ADS)

    Schledjewski, Ralf

    2017-10-01

    Continuous welding techniques like thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality, and the effects of material imperfections or process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. Solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on target hardware are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.

  20. Economics of polysilicon process: A view from Japan

    NASA Technical Reports Server (NTRS)

    Shimizu, Y.

    1986-01-01

    The production process of solar grade silicon (SOG-Si) through trichlorosilane (TCS) was researched in a program sponsored by the New Energy Development Organization (NEDO). The NEDO process consists of the following two steps: TCS production from by-product silicon tetrachloride (STC) and SOG-Si formation from TCS using a fluidized bed reactor. Based on the data obtained during the research program, the manufacturing costs of the NEDO process and other polysilicon manufacturing processes were compared. The manufacturing cost was calculated on the basis of 1000 tons/year production. The cost estimate showed that the cost of producing silicon by all of the new processes is less than the cost of the conventional Siemens process. Using a new process, the cost of producing semiconductor grade silicon was found to be virtually the same for the TCS, dichlorosilane, and monosilane processes when by-products were recycled. The SOG-Si manufacturing process using the fluidized bed reactor, which needs further development, shows a greater probability of cost reduction than the filament processes.

  1. Autonomous Agents for Dynamic Process Planning in the Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Nik Nejad, Hossein Tehrani; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka

    Rapid changes of market demands and pressures of competition require manufacturers to maintain highly flexible manufacturing systems to cope with a complex manufacturing environment. This paper deals with the development of an agent-based architecture of dynamic systems for incremental process planning in manufacturing systems. In consideration of alternative manufacturing processes and machine tools, the process plans and the schedules of the manufacturing resources are generated incrementally and dynamically. A negotiation protocol is discussed in this paper for generating suitable process plans for the target products dynamically and in real time, based on the alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in a previous paper, and suitable process plans are searched for and generated to cope with both dynamic changes of the product specifications and disturbances of the manufacturing resources. We combine the heuristic search algorithms of the process plan networks with the negotiation protocols, in order to generate suitable process plans in the dynamic manufacturing environment.

  2. Implicit Processes, Self-Regulation, and Interventions for Behavior Change

    PubMed Central

    St Quinton, Tom; Brunton, Julie A.

    2017-01-01

    The ability to regulate and subsequently change behavior is influenced by both reflective and implicit processes. Traditional theories have focused on conscious processes by highlighting the beliefs and intentions that influence decision making. However, their success in changing behavior has been modest, with an apparent gap between intention and behavior. Dual-process models have recently been applied to health psychology, with numerous models incorporating implicit processes that influence behavior as well as the more common conscious processes. Such implicit processes are theorized to govern behavior non-consciously. The article provides a commentary on motivational and volitional processes and how interventions have combined the two in attempts to increase positive health behaviors. Following this, non-conscious processes are discussed in terms of their theoretical underpinning. The article then highlights how these processes have been measured and discusses the different ways that the non-conscious and conscious may interact. The development of interventions manipulating both processes may well prove crucial in successfully altering behavior. PMID:28337164

  3. Models of recognition: a review of arguments in favor of a dual-process account.

    PubMed

    Diana, Rachel A; Reder, Lynne M; Arndt, Jason; Park, Heekyeong

    2006-02-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models.

  4. Integrating Thermal Tools Into the Mechanical Design Process

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.; Siebes, Georg; Novak, Keith S.; Kinsella, Gary M.

    1999-01-01

    The intent of mechanical design is to deliver a hardware product that meets or exceeds customer expectations, while reducing cycle time and cost. To this end, an integrated mechanical design process enables the idea of parallel development (concurrent engineering). This represents a shift from the traditional mechanical design process. With such a concurrent process, there are significant issues that have to be identified and addressed before re-engineering the mechanical design process to facilitate concurrent engineering. These issues also assist in the integration and re-engineering of the thermal design sub-process since it resides within the entire mechanical design process. With these issues in mind, a thermal design sub-process can be re-defined in a manner that has a higher probability of acceptance, thus enabling an integrated mechanical design process. However, the actual implementation is not always problem-free. Experience in applying the thermal design sub-process to actual situations provides the evidence for improvement, but more importantly, for judging the viability and feasibility of the sub-process.

  5. [Monitoring method of extraction process for Schisandrae Chinensis Fructus based on near infrared spectroscopy and multivariate statistical process control].

    PubMed

    Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li

    2017-10-01

    To establish an on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, a formula medicinal material of Yiqi Fumai lyophilized injection, near infrared spectroscopy was combined with multivariate data analysis technology. The multivariate statistical process control (MSPC) model was established based on 5 normal batches in production, and 2 test batches were monitored by PC scores, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. The application of the MSPC model to the actual production process could effectively achieve on-line monitoring of the extraction process of Schisandrae Chinensis Fructus, and reflect changes in material properties in the production process in real time. This process monitoring method could provide a reference for the application of process analysis technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
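
    The control-chart logic behind this kind of MSPC monitoring can be sketched generically. Below is a minimal Hotelling T2 example, not the authors' implementation; the array names, the number of principal components and the alpha level are illustrative assumptions:

        # Hypothetical sketch of a Hotelling T^2 control chart of the kind used
        # for MSPC batch monitoring; names and data shapes are illustrative.
        import numpy as np
        from scipy.stats import f as f_dist

        def hotelling_t2_chart(X_normal, X_new, n_components=3, alpha=0.01):
            """Fit PCA on normal batches, score new observations with T^2."""
            mu = X_normal.mean(axis=0)
            Xc = X_normal - mu
            # PCA via SVD of the mean-centered calibration data
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            P = Vt[:n_components].T                              # loadings
            lam = (S[:n_components] ** 2) / (len(X_normal) - 1)  # score variances
            scores = (X_new - mu) @ P
            t2 = np.sum(scores ** 2 / lam, axis=1)
            # F-distribution-based control limit, a common MSPC choice
            n, a = len(X_normal), n_components
            limit = a * (n - 1) * (n + 1) / (n * (n - a)) * f_dist.ppf(1 - alpha, a, n - a)
            return t2, limit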

  6. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
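
    As a rough illustration of the summation step the claims describe, the sketch below compiles per-step data and derives aggregate process metrics. The field names and the particular metrics (lead time, value-added ratio, rolled throughput yield) are assumptions chosen for illustration, not the patent's specification:

        # Illustrative sketch: compile per-step data, then compute aggregate
        # process metrics by summation. Field names are assumptions.
        from dataclasses import dataclass

        @dataclass
        class ProcessStep:
            name: str
            cycle_time_s: float   # time to process one unit at this step
            queue_time_s: float   # waiting time before the step
            defect_rate: float    # fraction of units scrapped or reworked

        def process_metrics(steps):
            total_cycle = sum(s.cycle_time_s for s in steps)
            total_lead = sum(s.cycle_time_s + s.queue_time_s for s in steps)
            rty = 1.0
            for s in steps:
                rty *= (1.0 - s.defect_rate)   # rolled throughput yield
            return {
                "total_cycle_time_s": total_cycle,
                "lead_time_s": total_lead,
                "value_added_ratio": total_cycle / total_lead,
                "rolled_throughput_yield": rty,
            }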

  7. Process yield improvements with process control terminal for varian serial ion implanters

    NASA Astrophysics Data System (ADS)

    Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken

    Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, e.g. wrong dose or species, double implants and missed implants. Process Control Terminals (PCT) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with the use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced, and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact of a capable process control terminal is increased productivity and, hence, higher device yield.

  8. An Aspect-Oriented Framework for Business Process Improvement

    NASA Astrophysics Data System (ADS)

    Pourshahid, Alireza; Mussbacher, Gunter; Amyot, Daniel; Weiss, Michael

    Recently, many organizations invested in Business Process Management Systems (BPMSs) in order to automate and monitor their processes. Business Activity Monitoring is one of the essential modules of a BPMS as it provides the core monitoring capabilities. Although the natural step after process monitoring is process improvement, most of the existing systems do not provide the means to help users with the improvement step. In this paper, we address this issue by proposing an aspect-oriented framework that allows the impact of changes to business processes to be explored with what-if scenarios based on the most appropriate process redesign patterns among several possibilities. As the four cornerstones of a BPMS are process, goal, performance and validation views, these views need to be aligned automatically by any approach that intends to support automated improvement of business processes. Our framework therefore provides means to reflect process changes also in the other views of the business process. A health care case study presented as a proof of concept suggests that this novel approach is feasible.

  9. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
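
    For orientation, the two quantities can be written in their standard discrete-time forms (the paper works with continuous-time generalizations; the notation below is the conventional one, not quoted from the paper):

        h_\mu = \lim_{n \to \infty} \frac{H(X_0, \ldots, X_{n-1})}{n}, \qquad
        C_\mu = H[\mathcal{S}] = -\sum_{\sigma \in \mathcal{S}} \Pr(\sigma)\,\log_2 \Pr(\sigma)

    where \mathcal{S} is the set of causal states of the process's ɛ-machine: h_\mu gauges the intrinsic randomness, while C_\mu is the cost of predicting the process.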

  10. Combined mesophilic anaerobic and thermophilic aerobic digestion process for high-strength food wastewater to increase removal efficiency and reduce sludge discharge.

    PubMed

    Jang, H M; Park, S K; Ha, J H; Park, J M

    2014-01-01

    In this study, a process that combines the mesophilic anaerobic digestion (MAD) process with thermophilic aerobic digestion (TAD) for high-strength food wastewater (FWW) treatment was developed to examine the removal of organic matter and methane production. All effluent discharged from the MAD process was separated into solid and liquid portions. The liquid part was discarded and the sludge part was passed to the TAD process for further degradation. Then, the digested sludge from the TAD process was recycled back to the MAD unit to achieve low sludge discharge from the combined process. The reactor combination was operated in two phases: during Phase I, 40 d of total hydraulic retention time (HRT) was applied; during Phase II, 20 d was applied. HRT of the TAD process was fixed at 5 d. For a comparison, a control process (single-stage MAD) was operated with the same HRTs of the combined process. Our results indicated that the combined process showed over 90% total solids, volatile solids and chemical oxygen demand removal efficiencies. In addition, the combined process showed a significantly higher methane production rate than that of the control process. Consequently, the experimental data demonstrated that the combined MAD-TAD process was successfully employed for high-strength FWW treatment with highly efficient organic matter reduction and methane production.

  11. Leading processes of patient care and treatment in hierarchical healthcare organizations in Sweden--process managers' experiences.

    PubMed

    Nilsson, Kerstin; Sandoff, Mette

    2015-01-01

    The purpose of this study is to gain better understanding of the roles and functions of process managers by describing Swedish process managers' experiences of leading processes involving patient care and treatment when working in a hierarchical health-care organization. This study is based on an explorative design. The data were gathered from interviews with 12 process managers at three Swedish hospitals. These data underwent qualitative and interpretative analysis with a modified editing style. The process managers' experiences of leading processes in a hierarchical health-care organization are described under three themes: having or not having a mandate, exposure to conflict situations and leading process development. The results indicate a need for clarity regarding process manager's responsibility and work content, which need to be communicated to all managers and staff involved in the patient care and treatment process, irrespective of department. There also needs to be an emphasis on realistic expectations and orientation of the goals that are an intrinsic part of the task of being a process manager. Generalizations from the results of the qualitative interview studies are limited, but a deeper understanding of the phenomenon was reached, which, in turn, can be transferred to similar settings. This study contributes qualitative descriptions of leading care and treatment processes in a functional, hierarchical health-care organization from process managers' experiences, a subject that has not been investigated earlier.

  12. Information Technology Process Improvement Decision-Making: An Exploratory Study from the Perspective of Process Owners and Process Managers

    ERIC Educational Resources Information Center

    Lamp, Sandra A.

    2012-01-01

    There is information available in the literature that discusses information technology (IT) governance and investment decision making from an executive-level perception, yet there is little information available that offers the perspective of process owners and process managers pertaining to their role in IT process improvement and investment…

  13. 43 CFR 2884.17 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How will BLM process my Processing...-WAY UNDER THE MINERAL LEASING ACT Applying for MLA Grants or TUPs § 2884.17 How will BLM process my... written agreement that describes how BLM will process your application. The final agreement consists of a...

  14. 43 CFR 2884.17 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How will BLM process my Processing...-WAY UNDER THE MINERAL LEASING ACT Applying for MLA Grants or TUPs § 2884.17 How will BLM process my... written agreement that describes how BLM will process your application. The final agreement consists of a...

  15. 15 CFR 15.3 - Acceptance of service of process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Acceptance of service of process. 15.3... Process § 15.3 Acceptance of service of process. (a) Except as otherwise provided in this subpart, any... employee by law is to be served personally with process. Service of process in this case is inadequate when...

  16. Weaknesses in Applying a Process Approach in Industry Enterprises

    NASA Astrophysics Data System (ADS)

    Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena

    2012-12-01

    The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. The volume of sales, costs and profit levels are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are some weaknesses in applying the process approach in industrial practice; in many organizations in Slovakia it has often been only a formal relabeling of functional management as process management. For efficient process management it is essential that companies pay attention to how they organize their processes and seek their continuous improvement.

  17. Is Primary-Process Cognition a Feature of Hypnosis?

    PubMed

    Finn, Michael T; Goldman, Jared I; Lyon, Gyrid B; Nash, Michael R

    2017-01-01

    The division of cognition into primary and secondary processes is an important part of contemporary psychoanalytic metapsychology. Whereas primary processes are most characteristic of unconscious thought and loose associations, secondary processes generally govern conscious thought and logical reasoning. It has been theorized that an induction into hypnosis is accompanied by a predomination of primary-process cognition over secondary-process cognition. The authors hypothesized that highly hypnotizable individuals would demonstrate more primary-process cognition as measured by a recently developed cognitive-perceptual task. This hypothesis was not supported. In fact, low hypnotizable participants demonstrated higher levels of primary-process cognition. Exploratory analyses suggested a more specific effect: felt connectedness to the hypnotist seemed to promote secondary-process cognition among low hypnotizable participants.

  18. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  19. Object-processing neural efficiency differentiates object from spatial visualizers.

    PubMed

    Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria

    2008-11-19

    The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.

  20. Process simulation for advanced composites production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  1. CDO budgeting

    NASA Astrophysics Data System (ADS)

    Nesladek, Pavel; Wiswesser, Andreas; Sass, Björn; Mauermann, Sebastian

    2008-04-01

    The critical dimension off-target (CDO) is a key parameter for mask house customers, directly affecting the performance of the mask. The CDO is the difference between the feature size target and the measured feature size. The change of CD during the process is compensated either within the process or by data correction. These compensation methods are commonly called process bias and data bias, respectively. The difference between data bias and process bias in manufacturing results in a systematic CDO error; however, this systematic error does not take into account the instability of the process bias. This instability is a result of minor variations - instabilities of manufacturing processes and changes in materials and/or logistics. Using several masks, the CDO of the manufacturing line can be estimated. For systematic investigation of the unit-process contributions to CDO and analysis of the factors influencing the CDO contributors, a solid understanding of each unit process and a huge number of masks are necessary. Rough identification of contributing processes and splitting of the final CDO variation between processes can be done with approx. 50 masks with identical design, material and process. Such an amount of data allows us to identify the main contributors and estimate their effects by means of analysis of variance (ANOVA) combined with multivariate analysis. The analysis does not provide information about the root cause of the variation within a particular unit process; however, it provides a good estimate of the impact of the process on the stability of the manufacturing line. Additionally, this analysis can be used to identify possible interactions between processes, which cannot be investigated if only single processes are considered. The goal of this work is to evaluate the limits for CDO budgeting models given by the precision and the number of measurements, as well as to partition the variation within the manufacturing process. The CDO variation splits, according to the suggested model, into contributions from particular processes or process groups. Last but not least, the power of this method to determine the absolute strength of each parameter will be demonstrated. Identification of the root cause of this variation within the unit process itself is not in the scope of this work.
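
    Assuming, as the ANOVA treatment implies, that the unit-process contributions are independent, the budgeting model amounts to an additive variance decomposition (the labels here are illustrative, not the paper's notation):

        \sigma^2_{\mathrm{CDO}} \approx \sum_i \sigma^2_{\mathrm{process},\,i} + \sigma^2_{\mathrm{metrology}} + \sigma^2_{\mathrm{residual}}

    Each unit process is budgeted a variance share, and the measurement precision and the number of masks bound how finely those shares can be resolved.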

  2. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, viz. the Advanced Microscale Bioreactor (ambr15(TM)), to perform process characterization in less than a month and develop an input control strategy. As a prerequisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between the ambr and manufacturing scales, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study, and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data to set action limits for the critical controlled parameters (CCPs), which were comparable to those from the bench scale bioreactor data. In other words, the current work shows that the ambr15(TM) system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.
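
    One concrete element of such a scale-down model is the matched volumetric sparge rate mentioned above: gas flow scales with working volume so that vvm (gas volume per liquid volume per minute) stays constant. A minimal sketch, with invented numbers rather than the study's set points:

        # Hypothetical illustration of matching volumetric sparge rates across
        # scales by holding vvm constant. Numbers are examples, not the study's.
        def sparge_flow(vvm, working_volume_l):
            """Gas flow (L/min) needed to hold a given vvm at a given scale."""
            return vvm * working_volume_l

        vvm = 0.02  # assumed set point
        for volume_l in (0.015, 5.0, 15000.0):   # ambr15, bench, manufacturing
            print(f"{volume_l:>9} L -> {sparge_flow(vvm, volume_l):.4f} L/min")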

  3. Consumers' conceptualization of ultra-processed foods.

    PubMed

    Ares, Gastón; Vidal, Leticia; Allegue, Gimena; Giménez, Ana; Bandeira, Elisa; Moratorio, Ximena; Molina, Verónika; Curutchet, María Rosa

    2016-10-01

    Consumption of ultra-processed foods has been associated with low diet quality, obesity and other non-communicable diseases. This situation makes it necessary to develop educational campaigns to discourage consumers from substituting meals based on unprocessed or minimally processed foods by ultra-processed foods. In this context, the aim of the present work was to investigate how consumers conceptualize the term ultra-processed foods and to evaluate if the foods they perceive as ultra-processed are in concordance with the products included in the NOVA classification system. An online study was carried out with 2381 participants. They were asked to explain what they understood by ultra-processed foods and to list foods that can be considered ultra-processed. Responses were analysed using inductive coding. The great majority of the participants was able to provide an explanation of what ultra-processed foods are, which was similar to the definition described in the literature. Most of the participants described ultra-processed foods as highly processed products that usually contain additives and other artificial ingredients, stressing that they have low nutritional quality and are unhealthful. The most relevant products for consumers' conceptualization of the term were in agreement with the NOVA classification system and included processed meats, soft drinks, snacks, burgers, powdered and packaged soups and noodles. However, some of the participants perceived processed foods, culinary ingredients and even some minimally processed foods as ultra-processed. This suggests that in order to accurately convey their message, educational campaigns aimed at discouraging consumers from consuming ultra-processed foods should include a clear definition of the term and describe some of their specific characteristics, such as the type of ingredients included in their formulation and their nutritional composition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Rapid communication: Global-local processing affects recognition of distractor emotional faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2011-03-01

    Recent studies have shown links between happy faces and global, distributed attention as well as sad faces to local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with broad scope of attention facilitates recognition of happy faces, and local processing associated with narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results along with earlier complementary results on the effect of emotion on global-local processing support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.

  5. Tomographical process monitoring of laser transmission welding with OCT

    NASA Astrophysics Data System (ADS)

    Ackermann, Philippe; Schmitt, Robert

    2017-06-01

    Process control of laser processes still encounters many obstacles. Although these processes are stable, narrow process parameter windows and process deviations have led to an increase in the requirements for the process itself and for monitoring devices. Laser transmission welding, as a contactless and locally limited joining technique, is well established in a variety of demanding production areas. For example, sensitive parts demand a particle-free joining technique that does not affect the inner components. Inline integrated non-destructive optical measurement systems capable of providing non-invasive tomographical images of the transparent material, the weld seam and its surrounding areas with micron resolution would improve the overall process. Obtained measurement data enable qualitative feedback into the system to adapt parameters for a more robust process. Within this paper we present the inline monitoring device based on Fourier-domain optical coherence tomography developed within the European-funded research project "Manunet Weldable". This device, after adaptation to the laser transmission welding process, is optically and mechanically integrated into the existing laser system. The main target is inline process control designed to extract tomographical geometrical measurement data from the weld seam forming process. Usage of this technology makes offline destructive testing of produced parts obsolete.

  6. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for the refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and is therefore more lightweight than traditional quality management processes. It focuses on the quality criteria that are important at the given stage of the software life cycle and emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes along with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  7. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
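
    The gamma-Poisson construction at the heart of the NB process is easy to verify numerically. The following is a minimal finite-dimensional sketch of that marginalization (parameter values are arbitrary; this is the textbook construction, not the paper's full normalized process):

        # Minimal sketch of the gamma-Poisson construction behind the NB
        # process; parameter values are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        r, theta = 2.0, 3.0                    # gamma shape and scale (assumed)
        lam = rng.gamma(shape=r, scale=theta, size=100_000)  # random Poisson rates
        counts = rng.poisson(lam)              # marginally negative binomial

        # Marginal is NB with p = theta / (1 + theta); compare moments:
        p = theta / (1 + theta)
        print(counts.mean(), r * p / (1 - p))        # both ~= r * theta
        print(counts.var(), r * p / (1 - p) ** 2)    # both ~= r * theta * (1 + theta)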

  8. [Process management in the hospital pharmacy for the improvement of the patient safety].

    PubMed

    Govindarajan, R; Perelló-Juncá, A; Parès-Marimòn, R M; Serrais-Benavente, J; Ferrandez-Martí, D; Sala-Robinat, R; Camacho-Calvente, A; Campabanal-Prats, C; Solà-Anderiu, I; Sanchez-Caparrós, S; Gonzalez-Estrada, J; Martinez-Olalla, P; Colomer-Palomo, J; Perez-Mañosas, R; Rodríguez-Gallego, D

    2013-01-01

    To define a process management model for a hospital pharmacy in order to measure, analyse and make continuous improvements in patient safety and healthcare quality. In order to implement process management, Igualada Hospital was divided into different processes, one of which was the Hospital Pharmacy. A multidisciplinary management team was given responsibility for each process. For each sub-process one person was identified to be responsible, and a working group was formed under his/her leadership. With the help of each working group, a risk analysis using failure modes and effects analysis (FMEA) was performed, and the corresponding improvement actions were implemented. Sub-process indicators were also identified, and different process management mechanisms were introduced. The first risk analysis with FMEA produced more than thirty preventive actions to improve patient safety. Later, the weekly analysis of errors, as well as the monthly analysis of key process indicators, permitted us to monitor process results and, as each sub-process manager participated in these meetings, also to assume accountability and responsibility, thus consolidating the culture of excellence. The introduction of different process management mechanisms, with the participation of people responsible for each sub-process, introduces a participative management tool for the continuous improvement of patient safety and healthcare quality. Copyright © 2012 SECA. Published by Elsevier Espana. All rights reserved.
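
    The FMEA step can be illustrated with the usual risk priority number, RPN = severity x occurrence x detectability. The failure modes and 1-10 ratings below are invented examples, not items from the hospital's actual analysis:

        # Illustrative FMEA risk-priority scoring; modes and ratings invented.
        failure_modes = [
            # (description, severity, occurrence, detectability)
            ("wrong dose transcribed", 9, 3, 4),
            ("look-alike drug mix-up",  8, 2, 5),
            ("label printer failure",   4, 4, 2),
        ]

        for name, s, o, d in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
            print(f"RPN {s * o * d:>3}  {name}")  # highest risk first, for action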

  9. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network, as a new advent of technology, is a network in which each sensor node can capture video signals, process them, and communicate them to other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up the processing time, we have distributed the processing tasks between nodes. In this method, each sensor node processes part of the interpolation algorithm to generate the interpolated image with local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, which is an object-independent method based on MSE minimization using adaptive filtering. Two methods, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), were proposed for distributing the processing tasks and sharing image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.

  10. EEG alpha synchronization is related to top-down processing in convergent and divergent thinking

    PubMed Central

    Benedek, Mathias; Bergner, Sabine; Könen, Tanja; Fink, Andreas; Neubauer, Aljoscha C.

    2011-01-01

    Synchronization of EEG alpha activity has been referred to as being indicative of cortical idling, but according to more recent evidence it has also been associated with active internal processing and creative thinking. The main objective of this study was to investigate to what extent EEG alpha synchronization is related to internal processing demands and to specific cognitive process involved in creative thinking. To this end, EEG was measured during a convergent and a divergent thinking task (i.e., creativity-related task) which once were processed involving low and once involving high internal processing demands. High internal processing demands were established by masking the stimulus (after encoding) and thus preventing further bottom-up processing. Frontal alpha synchronization was observed during convergent and divergent thinking only under exclusive top-down control (high internal processing demands), but not when bottom-up processing was allowed (low internal processing demands). We conclude that frontal alpha synchronization is related to top-down control rather than to specific creativity-related cognitive processes. Frontal alpha synchronization, which has been observed in a variety of different creativity tasks, thus may not reflect a brain state that is specific for creative cognition but can probably be attributed to high internal processing demands which are typically involved in creative thinking. PMID:21925520

  11. Kennedy Space Center Payload Processing

    NASA Technical Reports Server (NTRS)

    Lawson, Ronnie; Engler, Tom; Colloredo, Scott; Zide, Alan

    2011-01-01

    This slide presentation reviews the payload processing functions at Kennedy Space Center. It details some of the payloads processed at KSC, the typical processing tasks, the facilities available for processing payloads, and the capabilities and customer services that are available.

  12. Improving the Document Development Process: Integrating Relational Data and Statistical Process Control.

    ERIC Educational Resources Information Center

    Miller, John

    1994-01-01

    Presents an approach to document numbering, document titling, and process measurement which, when used with fundamental techniques of statistical process control, reveals meaningful process-element variation as well as nominal productivity models. (SR)

  13. USE OF INDICATOR ORGANISMS FOR DETERMINING PROCESS EFFECTIVENESS

    EPA Science Inventory

    Wastewaters, process effluents and treatment process residuals contain a variety of microorganisms. Many factors influence their densities as they move through collection systems and process equipment. Biological treatment systems rely on the catabolic processes of such microor...

  14. Food processing by high hydrostatic pressure.

    PubMed

    Yamamoto, Kazutaka

    2017-04-01

    The high hydrostatic pressure (HHP) process, as a nonthermal process, can be used to inactivate microbes while minimizing chemical reactions in food. In this regard, an HHP level of 100 MPa (986.9 atm / 1019.7 kgf/cm2) or more is applied to food. Conventional thermal processing damages food components related to color, flavor, and nutrition via enhanced chemical reactions. HHP processing, however, minimizes this damage and inactivates microbes, enabling the processing of high-quality, safe foods. The first commercial HHP-processed foods were launched in 1990 as fruit products such as jams, and other products have since been commercialized: retort rice products (enhanced water impregnation), cooked hams and sausages (shelf life extension), soy sauce with minimized salt (short-time fermentation owing to enhanced enzymatic reactions), and beverages (shelf life extension). The characteristics of HHP food processing are reviewed from the viewpoints of nonthermal processing, history, research and development, physical and biochemical changes, and processing equipment.

  15. [Near infrared spectroscopy based process trajectory technology and its application in monitoring and controlling of traditional Chinese medicine manufacturing process].

    PubMed

    Li, Wen-Long; Qu, Hai-Bin

    2016-10-01

    In this paper, the principle of NIRS (near infrared spectroscopy)-based process trajectory technology is introduced. The main steps of the technique are: ① in-line collection of the process spectra of different techniques; ② unfolding of the 3-D process spectra; ③ determination of the process trajectories and their normal limits; ④ monitoring of new batches with the established MSPC (multivariate statistical process control) models. Applications of the technology to chemical and biological medicines are reviewed briefly. Through a comprehensive introduction of our feasibility research on monitoring traditional Chinese medicine manufacturing processes using NIRS-based multivariate process trajectories, several important practical problems that need urgent solutions are proposed, and the application prospects of NIRS-based process trajectory technology are discussed. Copyright© by the Chinese Pharmaceutical Association.
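
    Steps ② and ③ above can be sketched generically. In the following sketch the array shapes, the random data and the simple ±3σ limits are illustrative assumptions; a production implementation would typically use PCA-based MSPC statistics, as the abstract describes:

        # Sketch of unfolding 3-D process spectra and deriving a nominal
        # trajectory with normal limits. Shapes and data are assumptions.
        import numpy as np

        spectra = np.random.rand(5, 120, 256)   # (batches, time points, wavelengths)

        # Batch-wise unfolding: one row per batch, time x wavelength columns
        unfolded = spectra.reshape(spectra.shape[0], -1)

        # A simple "golden batch" trajectory with +/- 3 sigma limits per variable
        mean_traj = unfolded.mean(axis=0)
        sd_traj = unfolded.std(axis=0, ddof=1)
        upper, lower = mean_traj + 3 * sd_traj, mean_traj - 3 * sd_traj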

  16. Recollection is a continuous process: implications for dual-process theories of recognition memory.

    PubMed

    Mickes, Laura; Wais, Peter E; Wixted, John T

    2009-04-01

    Dual-process theory, which holds that recognition decisions can be based on recollection or familiarity, has long seemed incompatible with signal detection theory, which holds that recognition decisions are based on a singular, continuous memory-strength variable. Formal dual-process models typically regard familiarity as a continuous process (i.e., familiarity comes in degrees), but they construe recollection as a categorical process (i.e., recollection either occurs or does not occur). A continuous process is characterized by a graded relationship between confidence and accuracy, whereas a categorical process is characterized by a binary relationship such that high confidence is associated with high accuracy but all lower degrees of confidence are associated with chance accuracy. Using a source-memory procedure, we found that the relationship between confidence and source-recollection accuracy was graded. Because recollection, like familiarity, is a continuous process, dual-process theory is more compatible with signal detection theory than previously thought.

  17. A qualitative assessment of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
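
    In symbols (notation assumed here, not quoted from the report), the construction is

        x(t) = g_1(t)\, g_2(t) + g_3(t)

    where g_1, g_2 and g_3 are Gaussian processes: the product g_1 g_2 is the amplitude-modulated component and g_3 supplies the slowly varying random mean value. Although each factor is Gaussian, x(t) itself is not, which is what allows the model to reproduce statistics that a single Gaussian process cannot.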

  18. Metals Recovery from Artificial Ore in Case of Printed Circuit Boards, Using Plasmatron Plasma Reactor

    PubMed Central

    Szałatkiewicz, Jakub

    2016-01-01

    This paper presents the investigation of metals production from artificial ore, which consists of printed circuit board (PCB) waste, processed in a plasmatron plasma reactor. A test setup was designed and built that enabled research on plasma processing of PCB waste at a scale of more than 700 kg/day. The designed plasma process is presented and discussed. The process in tests consumed 2 kWh/kg of processed waste. An investigation of the process products is presented, with elemental analyses of the metals and slag. The average recovery of metals in the presented experiments is 76%. Metals recovered include: Ag, Au, Pd, Cu, Sn, Pb, and others. The chosen process parameters are presented: energy consumption, throughput, process temperatures, and air consumption. The presented technology allows processing of variable and hard-to-process printed circuit board waste that can reach up to 100% of the input mass. PMID:28773804

  19. Characterisation and Processing of Some Iron Ores of India

    NASA Astrophysics Data System (ADS)

    Krishna, S. J. G.; Patil, M. R.; Rudrappa, C.; Kumar, S. P.; Ravi, B. P.

    2013-10-01

    A lack of process characterization data on the ores (granulometry, texture, mineralogy, physical and chemical properties) and on the merits and limitations of the process, market and local conditions may mislead the mineral processing entrepreneur. Proper use of process characterization and geotechnical map data will result in optimized, sustainable utilization of the resource by processing. A few case studies of process characterization of some Indian iron ores are dealt with. The tentative ascending order of process refractoriness of iron ores is: massive hematite/magnetite < marine black iron oxide sands < laminated soft friable siliceous ore fines < massive banded magnetite quartzite < laminated soft friable clayey aluminous ore fines < massive banded hematite quartzite/jasper < massive clayey hydrated iron oxide ore < massive manganese-bearing iron ores < Ti-V bearing magnetite magmatic ore < ferruginous cherty quartzite. Based on diagnostic process characterization, the ores have been classified and generic processes have been adopted for some Indian iron ores.

  20. Measuring health care process quality with software quality measures.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2012-01-01

    Existing quality models focus on specific diseases, clinics or clinical areas. Although they contain structure, process, or output type measures, there is no model that measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of their processes internally and externally. To address these problems, a new model was developed from software quality measures. We adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Measurement results for the assessment (diagnosing) process are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information that hospitals can use to compare their processes with those of multiple organizations.

  1. Thermal Stir Welding: A New Solid State Welding Process

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey

    2003-01-01

    Thermal stir welding is a new welding process developed at NASA's Marshall Space Flight Center in Huntsville, AL. Thermal stir welding is similar to friction stir welding in that it joins similar or dissimilar materials without melting the parent material. However, unlike friction stir welding, the heating, stirring and forging elements of the process are all independent of each other and are separately controlled. Furthermore, the heating element of the process can be either a solid-state process (such as a thermal blanket, an induction-type process, etc.) or a fusion process (YAG laser, plasma torch, etc.). The separation of the heating, stirring and forging elements of the process allows more degrees of freedom for greater process control. This paper introduces the mechanics of the thermal stir welding process. In addition, weld mechanical property data is presented for selected alloys, as well as metallurgical analysis.

  2. Thermal Stir Welding: A New Solid State Welding Process

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    Thermal stir welding is a new welding process developed at NASA's Marshall Space Flight Center in Huntsville, AL. Thermal stir welding is similar to friction stir welding in that it joins similar or dissimilar materials without melting the parent material. However, unlike friction stir welding, the heating, stirring and forging elements of the process are all independent of each other and are separately controlled. Furthermore, the heating element of the process can be either a solid-state process (such as a thermal blanket, an induction-type process, etc.) or a fusion process (YAG laser, plasma torch, etc.). The separation of the heating, stirring and forging elements of the process allows more degrees of freedom for greater process control. This paper introduces the mechanics of the thermal stir welding process. In addition, weld mechanical property data is presented for selected alloys, as well as metallurgical analysis.

  3. Metals Recovery from Artificial Ore in Case of Printed Circuit Boards, Using Plasmatron Plasma Reactor.

    PubMed

    Szałatkiewicz, Jakub

    2016-08-10

    This paper presents the investigation of metals production from artificial ore, which consists of printed circuit board (PCB) waste, processed in a plasmatron plasma reactor. A test setup was designed and built that enabled research on plasma processing of PCB waste at a scale of more than 700 kg/day. The designed plasma process is presented and discussed. The process in tests consumed 2 kWh/kg of processed waste. An investigation of the process products is presented, with elemental analyses of the metals and slag. The average recovery of metals in the presented experiments is 76%. Metals recovered include: Ag, Au, Pd, Cu, Sn, Pb, and others. The chosen process parameters are presented: energy consumption, throughput, process temperatures, and air consumption. The presented technology allows processing of variable and hard-to-process printed circuit board waste that can reach up to 100% of the input mass.

  4. The origins of levels-of-processing effects in a conceptual test: evidence for automatic influences of memory from the process-dissociation procedure.

    PubMed

    Bergerbest, Dafna; Goshen-Gottstein, Yonatan

    2002-12-01

    In three experiments, we explored automatic influences of memory in a conceptual memory task, as affected by a levels-of-processing (LoP) manipulation. We also explored the origins of the LoP effect by examining whether the effect emerged only when participants in the shallow condition truncated the perceptual processing (the lexical-processing hypothesis) or even when the entire word was encoded in this condition (the conceptual-processing hypothesis). Using the process-dissociation procedure and an implicit association-generation task, we found that the deep encoding condition yielded higher estimates of automatic influences than the shallow condition. In support of the conceptual processing hypothesis, the LoP effect was found even when the shallow task did not lead to truncated processing of the lexical units. We suggest that encoding for meaning is a prerequisite for automatic processing on conceptual tests of memory.
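
    For reference, the estimates produced by the process-dissociation procedure follow Jacoby's standard equations. With I and E the probabilities of responding with a studied item under inclusion and exclusion instructions,

        I = R + A(1 - R), \qquad E = A(1 - R)
        \;\;\Rightarrow\;\; R = I - E, \qquad A = \frac{E}{1 - R}

    where R estimates the controlled (recollective) contribution and A the automatic contribution whose LoP sensitivity is at issue in these experiments.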

  5. Exploring business process modelling paradigms and design-time to run-time transitions

    NASA Astrophysics Data System (ADS)

    Caron, Filip; Vanthienen, Jan

    2016-09-01

    The business process management literature describes a multitude of approaches (e.g. imperative, declarative or event-driven) that each result in a different mix of process flexibility, compliance, effectiveness and efficiency. Although the use of a single approach over the process lifecycle is often assumed, transitions between approaches at different phases in the process lifecycle may also be considered. This article explores several business process strategies by analysing the approaches at different phases in the process lifecycle as well as the various transitions.

  6. System Engineering Concept Demonstration, Process Model. Volume 3

    DTIC Science & Technology

    1992-12-01

    Process or Process Model: The System Engineering process must be the enactment of the aforementioned definitions. Therefore, a process is an enactment of a... Prototype Tradeoff Scenario demonstrates six levels of abstraction in the Process Model. The Process Model symbology is explained within the "Help" icon...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eun, H.C.; Cho, Y.Z.; Choi, J.H.

    A regeneration process for LiCl-KCl eutectic waste salt generated from the pyrochemical processing of spent nuclear fuel has been studied. This regeneration process is composed of a chemical conversion process and a vacuum distillation process. Through the regeneration process, renewable salt can be recovered from the waste salt with high efficiency, and rare earth nuclides in the waste salt can be separated in oxide or phosphate forms. Thus, the regeneration process can contribute greatly to reducing the waste volume and creating durable final waste forms. (authors)

  8. An open system approach to process reengineering in a healthcare operational environment.

    PubMed

    Czuchry, A J; Yasin, M M; Norris, J

    2000-01-01

    The objective of this study is to examine the applicability of process reengineering in a healthcare operational environment. The intake process of a mental healthcare service delivery system is analyzed systematically to identify process-related problems. A methodology that couples an open system orientation with process reengineering is used to overcome operational and patient-related problems associated with the pre-reengineered intake process. The systematic redesign of the intake process resulted in performance improvements in terms of cost, quality, service and timing.

  9. Developing the JPL Engineering Processes

    NASA Technical Reports Server (NTRS)

    Linick, Dave; Briggs, Clark

    2004-01-01

    This paper briefly recounts the recent history of process reengineering at the NASA Jet Propulsion Laboratory, with a focus on the engineering processes. The JPL process structure is described and the process development activities of the past several years are outlined. The main focus of the paper is on the current process structure, the emphasis on the flight project life cycle, the governance approach that led to Flight Project Practices, and the remaining effort to capture process knowledge at the detail level of the work group.

  10. Water-saving liquid-gas conditioning system

    DOEpatents

    Martin, Christopher; Zhuang, Ye

    2014-01-14

    A method for treating a process gas with a liquid comprises contacting a process gas with a hygroscopic working fluid in order to remove a constituent from the process gas. A system for treating a process gas with a liquid comprises a hygroscopic working fluid comprising a component adapted to absorb or react with a constituent of a process gas, and a liquid-gas contactor for contacting the working fluid and the process gas, wherein the constituent is removed from the process gas within the liquid-gas contactor.

  11. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
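
    To make the simulation idea concrete, the following is a minimal sketch, in the spirit of the model described above but not the PATT implementation itself, of iterating a waterfall-style step sequence and tracking defect injection, detection, and escape across spiral iterations; all parameter values are illustrative assumptions.

    ```python
    import random

    def simulate_spiral(iterations=4, loc_per_iter=5000,
                        inject_per_kloc=20.0, detect_prob=0.7, seed=1):
        """Track defect injection, detection, and escape per spiral iteration."""
        random.seed(seed)
        escaped = 0  # defects carried over from the previous iteration
        history = []
        for i in range(iterations):
            # Each iteration is a compressed waterfall pass: requirements and
            # design, then coding (injects defects), then testing (detects some).
            injected = int(loc_per_iter / 1000 * inject_per_kloc)
            live = escaped + injected
            detected = sum(random.random() < detect_prob for _ in range(live))
            escaped = live - detected  # undetected defects flow into the next pass
            history.append({"iteration": i + 1, "injected": injected,
                            "detected": detected, "escaped": escaped})
        return history

    for row in simulate_spiral():
        print(row)
    ```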

  12. Magnitude processing of symbolic and non-symbolic proportions: an fMRI study.

    PubMed

    Mock, Julia; Huber, Stefan; Bloechle, Johannes; Dietrich, Julia F; Bahnmueller, Julia; Rennig, Johannes; Klein, Elise; Moeller, Korbinian

    2018-05-10

    Recent research indicates that processing proportion magnitude is associated with activation in the intraparietal sulcus. Thus, brain areas associated with the processing of numbers (i.e., absolute magnitude) were activated during the processing of symbolic fractions as well as non-symbolic proportions. Here, we systematically investigated the cognitive processing of symbolic (e.g., fractions and decimals) and non-symbolic proportions (e.g., dot patterns and pie charts) in a two-stage procedure. First, we investigated relative magnitude-related activations of proportion processing. Second, we evaluated whether symbolic and non-symbolic proportions share common neural substrates. We conducted an fMRI study using magnitude comparison tasks with symbolic and non-symbolic proportions, respectively. As an indicator of magnitude-related processing of proportions, the distance effect was evaluated. A conjunction analysis indicated joint activation of specific occipito-parietal areas including the right intraparietal sulcus (IPS) during proportion magnitude processing. More specifically, the results indicate that the IPS, which is commonly associated with absolute magnitude processing, is involved in processing relative magnitude information as well, irrespective of symbolic or non-symbolic presentation format. However, we also found distinct activation patterns for the magnitude processing of the different presentation formats. Our findings suggest that processing of the separate presentation formats is associated not only with magnitude manipulations in the IPS, but also with increasing demands on executive functions and strategy use associated with frontal brain regions, as well as visual attention and encoding in occipital regions. Thus, the magnitude processing of proportions may not exclusively reflect processing of number magnitude information but rather also domain-general processes.

  13. [Alcohol-purification technology and its particle sedimentation process in manufactory of Fufang Kushen injection].

    PubMed

    Liu, Xiaoqian; Tong, Yan; Wang, Jinyu; Wang, Ruizhen; Zhang, Yanxia; Wang, Zhimin

    2011-11-01

    Fufang Kushen injection was selected as the model drug in order to optimize its alcohol-purification process, characterize the particle sedimentation process, and investigate the feasibility of applying process analytical technology (PAT) to traditional Chinese medicine (TCM) manufacturing. Total alkaloids (calculated as matrine, oxymatrine, sophoridine and oxysophoridine) and macrozamin were selected as quality markers for optimizing the alcohol purification of Fufang Kushen injection. Process parameters of the particulates formed during alcohol purification, such as their number, density and sedimentation velocity, were also determined to define the sedimentation time and to better understand the process. The optimized purification process adds alcohol to the concentrated extract (drug material) in two stages, each followed by a settling period: settling in 60% alcohol for 36 hours, filtering, and then settling in 80%-90% alcohol for 6 hours. The content of total alkaloids decreased slightly during the settling process. The average settling times of particles with diameters of 10 and 25 microm were 157.7 and 25.2 h in the first alcohol-purification step, and 84.2 and 13.5 h in the second, respectively. The optimized alcohol-purification process retains the marker components better than the initial process and saves both time and cost. The manufacturing quality of TCM injections can be controlled through the process, and a PAT scheme must be designed on the basis of a thorough understanding of the TCM production process.

  14. Application of volume-retarded osmosis and low-pressure membrane hybrid process for water reclamation.

    PubMed

    Im, Sung-Ju; Choi, Jungwon; Lee, Jung-Gil; Jeong, Sanghyun; Jang, Am

    2018-03-01

    A new concept of a volume-retarded osmosis and low-pressure membrane (VRO-LPM) hybrid process was developed and evaluated for the first time in this study. Commercially available forward osmosis (FO) and ultrafiltration (UF) membranes were employed in the VRO-LPM hybrid process to overcome the energy limitations of draw solution (DS) regeneration and permeate production in the FO process. To evaluate its feasibility as a water reclamation process, and to optimize the operational conditions, cross-flow FO and dead-end mode UF processes were evaluated individually. For the FO process, a DS concentration of 0.15 g mL(-1) of poly(styrene sulfonate) (PSS) was determined to be optimal, giving a high flux with a low reverse salt flux. A UF membrane with a molecular weight cut-off of 1 kDa was chosen for its high PSS rejection in the LPM process. As a single process, UF (LPM) exhibited a higher flux than FO, but this could be balanced by adjusting the effective membrane areas of the FO and UF membranes in the VRO-LPM system. The VRO-LPM hybrid process requires only a circulation pump for the FO stage, which reduces its specific energy consumption for potable water production to a level similar to that of a single FO process. Therefore, the newly developed VRO-LPM hybrid process, with an appropriate DS selection, can be used as an energy-efficient water production method and can outperform conventional water reclamation processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Quality control process improvement of flexible printed circuit board by FMEA

    NASA Astrophysics Data System (ADS)

    Krasaephol, Siwaporn; Chutima, Parames

    2018-02-01

    This research focuses on improving the quality control process for Flexible Printed Circuit Boards (FPCB), centred on model 7-Flex, by using the Failure Mode and Effect Analysis (FMEA) method to decrease the proportion of defective finished goods found at the final inspection process. Because defective units are found only at final inspection, defects may escape to customers. The problem stems from a quality control process that is not efficient enough to filter out defective products in-process, since there is no In-Process Quality Control (IPQC) or sampling inspection within the process. Therefore, the quality control process has to be improved by setting inspection gates and IPQCs at critical processes in order to filter out defective products. The critical processes are identified with the FMEA method. IPQC is used to detect defective products and reduce the chance of defective finished goods escaping to customers. Reducing the proportion of defective finished goods also decreases scrap cost, because finished goods incur a higher scrap cost than work in process. Moreover, defects found during the process can reveal abnormal process steps, so engineers and operators can solve the problems in a timely manner. The improved quality control was implemented on the 7-Flex production lines from July 2017 to September 2017. The results show decreases in the average proportion of defective finished goods and in the average Customer Manufacturers Lot Reject Rate (%LRR of CMs) of 4.5% and 4.1%, respectively. Furthermore, the cost saving from this quality control process is about 100K Baht.
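
    For readers unfamiliar with how FMEA singles out critical processes, the sketch below computes the standard risk priority number, RPN = severity x occurrence x detection (each rated 1-10), and ranks process steps by it; the step names and ratings are hypothetical, not figures from the 7-Flex study.

    ```python
    # Hypothetical FPCB process steps with FMEA ratings on a 1-10 scale.
    failure_modes = [
        {"process": "lamination", "severity": 8, "occurrence": 5, "detection": 6},
        {"process": "etching",    "severity": 7, "occurrence": 4, "detection": 3},
        {"process": "drilling",   "severity": 6, "occurrence": 6, "detection": 7},
    ]

    for fm in failure_modes:
        fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

    # The highest-RPN steps are the candidates for an IPQC inspection gate.
    for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
        print(f"{fm['process']:<12} RPN = {fm['rpn']}")
    ```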

  16. Formulating poultry processing sanitizers from alkaline salts of fatty acids

    USDA-ARS?s Scientific Manuscript database

    Though some poultry processing operations remove microorganisms from carcasses, other processing operations cause cross-contamination that spreads microorganisms between carcasses, processing water, and processing equipment. One method used by commercial poultry processors to reduce microbial contam...

  17. Fabrication Process for Cantilever Beam Micromechanical Switches

    DTIC Science & Technology

    1993-08-01

    Beam Design... Chemistry and Materials Used in Cantilever Beam Process... Photomask levels and composite... Cantilever Beam Fabrication Process: The beam fabrication process incorporates four different photomasking levels with 62 processing...

  18. Reports of planetary geology program, 1983

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler)

    1984-01-01

    Several areas of the Planetary Geology Program were addressed including outer solar system satellites, asteroids, comets, Venus, cratering processes and landform development, volcanic processes, aeolian processes, fluvial processes, periglacial and permafrost processes, geomorphology, remote sensing, tectonics and stratigraphy, and mapping.

  19. Cognitive Processes in Discourse Comprehension: Passive Processes, Reader-Initiated Processes, and Evolving Mental Representations

    ERIC Educational Resources Information Center

    van den Broek, Paul; Helder, Anne

    2017-01-01

    As readers move through a text, they engage in various types of processes that, if all goes well, result in a mental representation that captures their interpretation of the text. With each new text segment the reader engages in passive and, at times, reader-initiated processes. These processes are strongly influenced by the readers'…

  20. The Use of Knowledge Based Decision Support Systems in Reengineering Selected Processes in the U. S. Marine Corps

    DTIC Science & Technology

    2001-09-01

    measurable benefit in terms of process efficiency and effectiveness, business process reengineering (BPR) is becoming increasingly important. BPR suggests... HOW MIGHT THE MILITARY BENEFIT FROM PROCESS REENGINEERING EFFORTS...

  1. 30 CFR 206.181 - How do I establish processing costs for dual accounting purposes when I do not process the gas?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 206.181 How do I establish processing costs for dual accounting purposes when I do not process the gas? Where accounting for comparison (dual accounting) is required for gas production...

  2. Conceptual models of information processing

    NASA Technical Reports Server (NTRS)

    Stewart, L. J.

    1983-01-01

    The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.

  3. Industrial application of semantic process mining

    NASA Astrophysics Data System (ADS)

    Espen Ingvaldsen, Jon; Atle Gulla, Jon

    2012-05-01

    Process mining relates to the extraction of non-trivial and useful information from information system event logs. It is a new research discipline that has evolved significantly since the early work on idealistic process logs. Over the last years, process mining prototypes have incorporated elements from semantics and data mining and targeted visualisation techniques that are more user-friendly to business experts and process owners. In this article, we present a framework for evaluating different aspects of enterprise process flows and address practical challenges of state-of-the-art industrial process mining. We also explore the inherent strengths of the technology for more efficient process optimisation.

  4. Reliability and performance of a system-on-a-chip by predictive wear-out based activation of functional components

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-01

    A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, the architectural characteristics and redundancy data, and the operating conditions data for the processing unit.
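
    As a purely illustrative sketch of the kind of combination the claim describes, the function below folds workload, operating temperature, and process variation into a single effective-age estimate; the weighting scheme and constants are assumptions for illustration, not the patented method.

    ```python
    def effective_age(base_hours, workload_factor, temp_c, process_variation=1.0,
                      ref_temp_c=60.0, accel_per_10c=2.0):
        """Estimate effective aging hours for a processing unit.

        accel_per_10c encodes the rule of thumb that thermally driven
        wear-out mechanisms roughly double for every 10 C above a
        reference temperature (an assumption, not the patent's formula).
        """
        thermal_accel = accel_per_10c ** ((temp_c - ref_temp_c) / 10.0)
        return base_hours * workload_factor * thermal_accel * process_variation

    # Example: 10,000 powered-on hours at 80% utilization and 75 C.
    print(effective_age(10_000, 0.8, 75.0))  # ~22,600 effective hours
    ```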

  5. Fuzzy control of burnout of multilayer ceramic actuators

    NASA Astrophysics Data System (ADS)

    Ling, Alice V.; Voss, David; Christodoulou, Leo

    1996-08-01

    To improve the yield and repeatability of the burnout process for multilayer ceramic actuators (MCAs), an intelligent processing of materials (IPM) based control system has been developed for the manufacture of MCAs. IPM involves the active (ultimately adaptive) control of a material process using empirical or analytical models and in situ sensing of critical process states (part features and process parameters) to modify the processing conditions in real time and achieve predefined product goals. Thus, the three enabling technologies for the IPM burnout control system are process modeling, in situ sensing and intelligent control. This paper presents the design of an IPM-based control strategy for the burnout process of MCAs.

  6. Direct access inter-process shared memory

    DOEpatents

    Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B

    2013-10-22

    A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
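
    The patent describes cross-mapping address spaces at the page-table level; the sketch below only illustrates the user-visible effect, two processes reading and writing one shared memory region, using Python's multiprocessing.shared_memory rather than the patented kernel mechanism.

    ```python
    from multiprocessing import Process, shared_memory

    def worker(name):
        shm = shared_memory.SharedMemory(name=name)  # attach to the existing region
        shm.buf[0] = 42                              # write visible to the parent
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=16)
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])  # -> 42, written by the other process
        shm.close()
        shm.unlink()
    ```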

  7. Biotechnology in Food Production and Processing

    NASA Astrophysics Data System (ADS)

    Knorr, Dietrich; Sinskey, Anthony J.

    1985-09-01

    The food processing industry is the oldest and largest industry using biotechnological processes. Further development of food products and processes based on biotechnology depends upon the improvement of existing processes, such as fermentation, immobilized biocatalyst technology, and production of additives and processing aids, as well as the development of new opportunities for food biotechnology. Improvements are needed in the characterization, safety, and quality control of food materials, in processing methods, in waste conversion and utilization processes, and in currently used food microorganism and tissue culture systems. Also needed are fundamental studies of the structure-function relationship of food materials and of the cell physiology and biochemistry of raw materials.

  8. What is a good public participation process? Five perspectives from the public.

    PubMed

    Webler, T; Tuler, S; Krueger, R

    2001-03-01

    It is now widely accepted that members of the public should be involved in environmental decision-making. This has inspired many to search for principles that characterize good public participation processes. In this paper we report on a study that identifies discourses about what defines a good process. Our case study was a forest planning process in northern New England and New York. We employed Q methodology to learn how participants characterize a good process differently, by selecting, defining, and privileging different principles. Five discourses, or perspectives, about good process emerged from our study. One perspective emphasizes that a good process acquires and maintains popular legitimacy. A second sees a good process as one that facilitates an ideological discussion. A third focuses on the fairness of the process. A fourth perspective conceptualizes participatory processes as a power struggle--in this instance a power play between local land-owning interests and outsiders. A fifth perspective highlights the need for leadership and compromise. Dramatic differences among these views suggest an important challenge for those responsible for designing and carrying out public participation processes. Conflicts may emerge about process designs because people disagree about what is good in specific contexts.

  9. Alternating event processes during lifetimes: population dynamics and statistical inference.

    PubMed

    Shinohara, Russell T; Sun, Yifei; Wang, Mei-Cheng

    2018-01-01

    In the literature studying recurrent event data, a large amount of work has focused on univariate recurrent event processes where the occurrence of each event is treated as a single point in time. There are many applications, however, in which univariate recurrent events are insufficient to characterize the process because patients experience nontrivial durations associated with each event. This results in an alternating event process where the disease status of a patient alternates between exacerbations and remissions. In this paper, we consider the dynamics of a chronic disease and its associated exacerbation-remission process over two time scales: calendar time and time-since-onset. In particular, over calendar time, we explore population dynamics and the relationship between incidence, prevalence and duration for such alternating event processes. We provide nonparametric estimation techniques for characteristic quantities of the process. In some settings, exacerbation processes are observed from an onset time until death; to account for the relationship between the survival and alternating event processes, nonparametric approaches are developed for estimating the exacerbation process over the lifetime. By understanding the population dynamics and the within-process structure, the paper provides a new and general way to study alternating event processes.
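
    For orientation, the calendar-time relationship among incidence, prevalence and duration that the authors study generalizes a classical stationary-population identity (stated here for context; it is not a formula from the paper):

    ```latex
    % With I the incidence rate of exacerbations and \bar{D} their mean
    % duration, the expected fraction of time spent in the exacerbation
    % state (the point prevalence) under stationarity is
    P \;=\; I \, \bar{D}
    ```

    The paper's nonparametric estimators are designed for the non-stationary setting, where this identity no longer holds exactly.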

  10. Process mining in oncology using the MIMIC-III dataset

    NASA Astrophysics Data System (ADS)

    Prima Kurniati, Angelina; Hall, Geoff; Hogg, David; Johnson, Owen

    2018-03-01

    Process mining is a data analytics approach to discover and analyse process models based on the real activities captured in information systems. There is a growing body of literature on process mining in healthcare, including oncology, the study of cancer. In earlier work we found 37 peer-reviewed papers describing process mining research in oncology, with a regular complaint being the limited availability and accessibility of datasets with suitable information for process mining. Publicly available datasets are one option, and this paper describes the potential to use MIMIC-III for process mining in oncology. MIMIC-III is a large open-access dataset of de-identified patient records. There are 134 publications listed as using the MIMIC dataset, but none of them have used process mining. The MIMIC-III dataset has 16 event tables which are potentially useful for process mining, and this paper demonstrates the opportunities to use MIMIC-III for process mining in oncology. Our research applied the L* lifecycle method to provide a worked example showing how process mining can be used to analyse cancer pathways. The results and data quality limitations are discussed, along with opportunities for further work and reflection on the value of MIMIC-III for reproducible process mining research.
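
    As a hint of what "potentially useful event tables" means in practice, the sketch below builds per-patient traces, the basic input of process-discovery algorithms, from a toy event table; the column names are assumptions, not the actual MIMIC-III schema.

    ```python
    import pandas as pd

    # Toy event table standing in for a MIMIC-like extract.
    events = pd.DataFrame({
        "patient_id": [1, 1, 1, 2, 2],
        "activity":   ["admission", "chemotherapy", "discharge",
                       "admission", "discharge"],
        "timestamp":  pd.to_datetime(["2012-01-01", "2012-01-03", "2012-01-10",
                                      "2012-02-01", "2012-02-04"]),
    })

    # Order events within each case and collapse them into traces.
    traces = (events.sort_values(["patient_id", "timestamp"])
                    .groupby("patient_id")["activity"]
                    .agg(list))
    print(traces)
    # patient 1: [admission, chemotherapy, discharge]
    # patient 2: [admission, discharge]
    ```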

  11. Research on the technique of large-aperture off-axis parabolic surface processing using tri-station machine and its applicability.

    PubMed

    Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun

    2015-09-01

    In order to process large-aperture aspherical mirrors, we designed and constructed a tri-station machine processing center with a three-station device, which provides vectored feed motion on up to 10 axes. Based on this processing center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes, while the stations are switchable, thus lowering cost and enhancing processing efficiency. The applicability of the tri-station machine is also analyzed. At the same time, a simple and efficient zero-calibration method for processing is proposed. To validate the processing model, using our processing center, we processed an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm. The experimental results indicate that, with a one-step iterative process, the peak-to-valley (PV) and root-mean-square (RMS) errors of the mirror converged from 3.441 and 0.5203 μm to 2.637 and 0.2962 μm, respectively, a 43% reduction in RMS. The validity and high accuracy of the model are thereby demonstrated.
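
    The quoted 43% reduction follows directly from the reported RMS values:

    ```latex
    \frac{0.5203 - 0.2962}{0.5203} \approx 0.431 \approx 43\%
    ```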

  12. Patterning of Indium Tin Oxide Films

    NASA Technical Reports Server (NTRS)

    Immer, Christopher

    2008-01-01

    A relatively rapid, economical process has been devised for patterning a thin film of indium tin oxide (ITO) that has been deposited on a polyester film. ITO is a transparent, electrically conductive substance made from a mixture of indium oxide and tin oxide that is commonly used in touch panels, liquid-crystal and plasma display devices, gas sensors, and solar photovoltaic panels. In a typical application, the ITO film must be patterned to form electrodes, current collectors, and the like. Heretofore it has been common practice to pattern an ITO film by means of either a laser ablation process or a photolithography/etching process. The laser ablation process includes the use of expensive equipment to precisely position and focus a laser. The photolithography/etching process is time-consuming. The present process is a variant of the direct toner process, an inexpensive but often highly effective process for patterning conductors for printed circuits. Relative to a conventional photolithography/etching process, this process is simpler, takes less time, and is less expensive. This process involves equipment that costs less than $500 (at 2005 prices) and enables patterning of an ITO film in a process time of less than about a half hour.

  13. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies play a significant role in investigating process variation, which is important for achieving the desired product quality characteristics. Capability indices measure the inherent variability of a process and thus support radical improvement of process performance. The main objective of this paper is to assess whether the output of a soft drinks processing unit, one of the premier brands marketed in India, is being produced within specification. A few selected critical parameters in soft drinks processing were considered for this study: gas volume concentration, brix concentration, and crown torque. Relevant statistical parameters were assessed from a process capability indices perspective: short-term capability and long-term capability. For the assessment we used real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India. The analysis suggested reasons for variation in the process, which were validated using ANOVA; a Taguchi cost function was also fitted and the associated monetary waste predicted, which the organization can use to improve its process parameters. This research work has substantially benefitted the organization in understanding the variation of the selected critical parameters for achieving zero rejection.
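
    For concreteness, the sketch below computes the two standard capability indices behind such a study: Cp (the potential capability of a centered process) and Cpk (the actual capability, penalizing off-center processes). The specification limits and brix readings are hypothetical, not the plant's data.

    ```python
    import statistics

    def cp_cpk(samples, lsl, usl):
        """Return (Cp, Cpk) for given lower/upper specification limits."""
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)  # overall std as a simple proxy;
                                           # short-term studies use within-subgroup sigma
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    brix = [10.2, 10.4, 10.3, 10.5, 10.1, 10.3, 10.4, 10.2]  # hypothetical readings
    print(cp_cpk(brix, lsl=9.8, usl=10.8))
    ```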

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dafler, J.R.; Sinnott, J.; Novil, M.

    The first phase of a study to identify candidate processes and products suitable for future exploitation using high-temperature solar energy is presented. This phase has been principally analytical, consisting of techno-economic studies, thermodynamic assessments of chemical reactions and processes, and the determination of market potentials for major chemical commodities that use significant amounts of fossil resources today. The objective was to identify energy-intensive processes that would be suitable for the production of chemicals and fuels using solar energy process heat. Of particular importance was the comparison of relative costs and energy requirements for the selected solar product versus costs for the product derived from conventional processing. The assessment methodology used a systems analytical approach to identify processes and products having the greatest potential for solar energy-thermal processing. This approach was used to establish the basis for work to be carried out in subsequent phases of development. It has been the intent of the program to divide the analysis and process identification into the following three distinct areas: (1) process selection, (2) process evaluation, and (3) ranking of processes. Four conventional processes were selected for assessment, namely methanol synthesis, styrene monomer production, vinyl chloride monomer production, and terephthalic acid production.

  15. An Application of X-Ray Fluorescence as Process Analytical Technology (PAT) to Monitor Particle Coating Processes.

    PubMed

    Nakano, Yoshio; Katakuse, Yoshimitsu; Azechi, Yasutaka

    2018-06-01

    An attempt was made to apply X-Ray Fluorescence (XRF) analysis as a Process Analytical Technology (PAT) to evaluate a small-particle coating process. XRF analysis was used to monitor the coating level in the small-particle coating process in an at-line manner. A small-particle coating process usually consists of multiple coating steps. This study used simple coated particles prepared by a first coating of a model compound (DL-methionine) and a second coating of talc on spherical microcrystalline cellulose cores. Particles with these two coating layers are sufficient to demonstrate the small-particle coating process. The results showed that the XRF signals for the first coating (layering) and the second coating (mask coating) tracked the extent of coating through different mechanisms. Furthermore, the coating of particles of different sizes was also investigated to evaluate the size effect in these coating processes. From these results, it was concluded that XRF can be used as a PAT tool for monitoring particle coating processes and can become a powerful tool in pharmaceutical manufacturing.

  16. Single-Run Single-Mask Inductively-Coupled-Plasma Reactive-Ion-Etching Process for Fabricating Suspended High-Aspect-Ratio Microstructures

    NASA Astrophysics Data System (ADS)

    Yang, Yao-Joe; Kuo, Wen-Cheng; Fan, Kuang-Chao

    2006-01-01

    In this work, we present a single-run single-mask (SRM) process for fabricating suspended high-aspect-ratio structures on standard silicon wafers using an inductively coupled plasma-reactive ion etching (ICP-RIE) etcher. This process eliminates extra fabrication steps which are required for structure release after trench etching. Released microstructures with 120 μm thickness are obtained by this process. The corresponding maximum aspect ratio of the trench is 28. The SRM process is an extended version of the standard process proposed by BOSCH GmbH (BOSCH process). The first step of the SRM process is a standard BOSCH process for trench etching, then a polymer layer is deposited on trench sidewalls as a protective layer for the subsequent structure-releasing step. The structure is released by dry isotropic etching after the polymer layer on the trench floor is removed. All the steps can be integrated into a single-run ICP process. Also, only one mask is required. Therefore, the process complexity and fabrication cost can be effectively reduced. Discussions on each SRM step and considerations for avoiding undesired etching of the silicon structures during the release process are also presented.

  17. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  18. Discrete State Change Model of Manufacturing Quality to Aid Assembly Process Design

    NASA Astrophysics Data System (ADS)

    Koga, Tsuyoshi; Aoyama, Kazuhiro

    This paper proposes a representation model of the quality state change in an assembly process that can be used in a computer-aided process design system. In order to formalize the state change of the manufacturing quality in the assembly process, the functions, operations, and quality changes in the assembly process are represented as a network model that can simulate discrete events. This paper also develops a design method for the assembly process. The design method calculates the space of quality state change and outputs a better assembly process (better operations and better sequences) that can be used to obtain the intended quality state of the final product. A computational redesigning algorithm of the assembly process that considers the manufacturing quality is developed. The proposed method can be used to design an improved manufacturing process by simulating the quality state change. A prototype system for planning an assembly process is implemented and applied to the design of an auto-breaker assembly process. The result of the design example indicates that the proposed assembly process planning method outputs a better manufacturing scenario based on the simulation of the quality state change.
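
    A minimal sketch of the paper's core idea, representing assembly operations as discrete changes of a quality state so that candidate sequences can be simulated and compared, is given below; the operations and states are invented for illustration and are not taken from the auto-breaker example.

    ```python
    # Each operation maps an input quality state to the resulting state.
    operations = {
        "press_fit": {"loose": "seated", "seated": "overstressed"},
        "fasten":    {"seated": "fixed"},
        "inspect":   {"fixed": "verified"},
    }

    def run_sequence(state, sequence):
        """Simulate a candidate assembly sequence; None means infeasible."""
        for op in sequence:
            state = operations[op].get(state)
            if state is None:
                return None
        return state

    print(run_sequence("loose", ["press_fit", "fasten", "inspect"]))  # verified
    print(run_sequence("loose", ["fasten", "inspect"]))               # None (bad plan)
    ```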

  19. Effect of simulated mechanical recycling processes on the structure and properties of poly(lactic acid).

    PubMed

    Beltrán, F R; Lorenzo, V; Acosta, J; de la Orden, M U; Martínez Urreaga, J

    2018-06-15

    The aim of this work is to study the effects of different simulated mechanical recycling processes on the structure and properties of PLA. A commercial grade of PLA was melt compounded and compression molded, then subjected to two different recycling processes. The first recycling process consisted of an accelerated ageing and a second melt processing step, while the other recycling process included an accelerated ageing, a demanding washing process and a second melt processing step. The intrinsic viscosity measurements indicate that both recycling processes produce a degradation in PLA, which is more pronounced in the sample subjected to the washing process. DSC results suggest an increase in the mobility of the polymer chains in the recycled materials; however the degree of crystallinity of PLA seems unchanged. The optical, mechanical and gas barrier properties of PLA do not seem to be largely affected by the degradation suffered during the different recycling processes. These results suggest that, despite the degradation of PLA, the impact of the different simulated mechanical recycling processes on the final properties is limited. Thus, the potential use of recycled PLA in packaging applications is not jeopardized. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Consumption of ultra-processed foods predicts diet quality in Canada.

    PubMed

    Moubarac, Jean-Claude; Batal, M; Louzada, M L; Martinez Steele, E; Monteiro, C A

    2017-01-01

    This study describes food consumption patterns in Canada according to the type of food processing, using the Nova classification, and investigates the association between consumption of ultra-processed foods and the nutrient profile of the diet. Dietary intakes of 33,694 individuals aged 2 years and above from the 2004 Canadian Community Health Survey were analyzed. Foods and drinks were classified using Nova into unprocessed or minimally processed foods, processed culinary ingredients, processed foods and ultra-processed foods. The average consumption (total daily energy intake) and relative consumption (% of total energy intake) provided by each of the food groups were calculated. Consumption of ultra-processed foods according to sex, age, education, residential location and relative family revenue was assessed. The mean nutrient content of ultra-processed foods and non-ultra-processed foods was compared, and the average nutrient content of the overall diet across quintiles of dietary share of ultra-processed foods was measured. In 2004, 48% of calories consumed by Canadians came from ultra-processed foods. Consumption of such foods was high amongst all socioeconomic groups, and particularly in children and adolescents. As a group, ultra-processed foods were grossly nutritionally inferior to non-ultra-processed foods. After adjusting for covariates, a significant positive relationship was found between the dietary share of ultra-processed foods and the content of carbohydrates, free sugars, total and saturated fats and energy density, while an inverse relationship was observed with the dietary content of protein, fiber, vitamins A, C, D, B6 and B12, niacin, thiamine, riboflavin, as well as zinc, iron, magnesium, calcium, phosphorus and potassium. Lowering the dietary share of ultra-processed foods and raising the consumption of hand-made meals from unprocessed or minimally processed foods would substantially improve the diet quality of Canadians. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Development of Statistical Process Control Methodology for an Environmentally Compliant Surface Cleaning Process in a Bonding Laboratory

    NASA Technical Reports Server (NTRS)

    Hutchens, Dale E.; Doan, Patrick A.; Boothe, Richard E.

    1997-01-01

    Bonding labs at both MSFC and the northern Utah production plant prepare bond test specimens which simulate or witness the production of NASA's Reusable Solid Rocket Motor (RSRM). The current process for preparing the bonding surfaces employs 1,1,1-trichloroethane vapor degreasing, which simulates the current RSRM process. Government regulations (e.g., the 1990 Amendments to the Clean Air Act) have mandated a production phase-out of a number of ozone depleting compounds (ODC), including 1,1,1-trichloroethane. In order to comply with these regulations, the RSRM Program is qualifying a spray-in-air (SIA) precision cleaning process using Brulin 1990, an aqueous blend of surfactants. Accordingly, surface preparation prior to bonding process simulation test specimens must reflect the new production cleaning process. The Bonding Lab Statistical Process Control (SPC) program monitors the progress of the lab and its capabilities, as well as certifies the bonding technicians, by periodically preparing D6AC steel tensile adhesion panels with EA-913NA epoxy adhesive using a standardized process. SPC methods are then used to ensure the process is statistically in control, thus producing reliable data for bonding studies, and to identify any problems which might develop. Since the specimen cleaning process is being changed, new SPC limits must be established. This report summarizes side-by-side testing of D6AC steel tensile adhesion witness panels and tapered double cantilevered beams (TDCBs) using both the current baseline vapor degreasing process and a lab-scale spray-in-air process. A Proceco 26 inch Typhoon dishwasher cleaned both tensile adhesion witness panels and TDCBs in a process which simulates the new production process. The tests were performed six times during 1995, and subsequent statistical analysis of the data established new upper control limits (UCL) and lower control limits (LCL). The data also demonstrated that the new process was equivalent to the vapor degreasing process.
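
    Re-establishing SPC limits after a process change typically means recomputing control limits from fresh data. The sketch below shows the standard individuals-chart calculation, with hypothetical tensile-strength values rather than the report's measurements.

    ```python
    import statistics

    def individuals_limits(x):
        """Individuals (X) chart limits from the average moving range."""
        moving_ranges = [abs(b - a) for a, b in zip(x, x[1:])]
        mr_bar = statistics.mean(moving_ranges)
        center = statistics.mean(x)
        # 2.66 = 3/d2 with d2 = 1.128 for a moving range of span 2
        return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

    strengths = [4150, 4230, 4190, 4075, 4210, 4160, 4240, 4120]  # psi, hypothetical
    lcl, center, ucl = individuals_limits(strengths)
    print(f"LCL={lcl:.0f}  center={center:.0f}  UCL={ucl:.0f}")
    ```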

  2. Evaluation of stabilization techniques for ion implant processing

    NASA Astrophysics Data System (ADS)

    Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Narcy, Mark E.; Livesay, William R.

    1999-06-01

    With the integration of high current ion implant processing into volume CMOS manufacturing, the need for photoresist stabilization to achieve a stable ion implant process is critical. This study compares electron beam stabilization, a non-thermal process, with more traditional thermal stabilization techniques such as hot plate baking and vacuum oven processing. The electron beam processing is carried out in a flood exposure system with no active heating of the wafer. These stabilization techniques are applied to typical ion implant processes that might be found in a CMOS production process flow. The stabilization processes are applied to a 1.1 micrometers thick PFI-38A i-line photoresist film prior to ion implant processing. Post stabilization CD variation is detailed with respect to wall slope and feature integrity. SEM photographs detail the effects of the stabilization technique on photoresist features. The thermal stability of the photoresist is shown for different levels of stabilization and post stabilization thermal cycling. Thermal flow stability of the photoresist is detailed via SEM photographs. A significant improvement in thermal stability is achieved with the electron beam process, such that photoresist features are stable to temperatures in excess of 200 degrees C. Ion implant processing parameters are evaluated and compared for the different stabilization methods. Ion implant system end-station chamber pressure is detailed as a function of ion implant process and stabilization condition. The ion implant process conditions are detailed for varying factors such as ion current, energy, and total dose. A reduction in the ion implant systems end-station chamber pressure is achieved with the electron beam stabilization process over the other techniques considered. This reduction in end-station chamber pressure is shown to provide a reduction in total process time for a given ion implant dose. Improvements in the ion implant process are detailed across several combinations of current and energy.

  3. The prevalence of medial coronoid process disease is high in lame large breed dogs and quantitative radiographic assessments contribute to the diagnosis.

    PubMed

    Mostafa, Ayman; Nolte, Ingo; Wefstaedt, Patrick

    2018-06-05

    Medial coronoid process disease is a common leading cause of thoracic limb lameness in dogs. Computed tomography and arthroscopy are superior to radiography to diagnose medial coronoid process disease, however, radiography remains the most available diagnostic imaging modality in veterinary practice. Objectives of this retrospective observational study were to describe the prevalence of medial coronoid process disease in lame large breed dogs and apply a novel method for quantifying the radiographic changes associated with medial coronoid process and subtrochlear-ulnar region in Labrador and Golden Retrievers with confirmed medial coronoid process disease. Purebred Labrador and Golden Retrievers (n = 143, 206 elbows) without and with confirmed medial coronoid process disease were included. The prevalence of medial coronoid process disease in lame large breed dogs was calculated. Mediolateral and craniocaudal radiographs of elbows were analyzed to assess the medial coronoid process length and morphology, and subtrochlear-ulnar width. Mean grayscale value was calculated for radial and subtrochlear-ulnar zones. The prevalence of medial coronoid process disease was 20.8%. Labrador and Golden Retrievers were the most affected purebred dogs (29.6%). Elbows with confirmed medial coronoid process disease had short (P < 0.0001) and deformed (∼95%) medial coronoid process, with associated medial coronoid process osteophytosis (7.5%). Subtrochlear-ulnar sclerosis was evidenced in ∼96% of diseased elbows, with a significant increase (P < 0.0001) in subtrochlear-ulnar width and standardized grayscale value. Radial grayscale value did not differ between groups. Periarticular osteophytosis was identified in 51.4% of elbows with medial coronoid process disease. Medial coronoid process length and morphology, and subtrochlear-ulnar width and standardized grayscale value varied significantly in dogs with confirmed medial coronoid process disease compared to controls. Findings indicated that medial coronoid process disease has a high prevalence in lame large breed dogs and that quantitative radiographic assessments can contribute to the diagnosis. © 2018 American College of Veterinary Radiology.

  4. The role of rational and experiential processing in influencing the framing effect.

    PubMed

    Stark, Emily; Baldwin, Austin S; Hertel, Andrew W; Rothman, Alexander J

    2017-01-01

    Research on individual differences and the framing effect has focused primarily on how variability in rational processing influences choice. However, we propose that measuring only rational processing presents an incomplete picture of how participants are responding to framed options, as orthogonal individual differences in experiential processing might be relevant. In two studies, we utilize the Rational Experiential Inventory, which captures individual differences in rational and experiential processing, to investigate how both processing types influence decisions. Our results show that differences in experiential processing, but not rational processing, moderated the effect of frame on choice. We suggest that future research should more closely examine the influence of experiential processing on making decisions, to gain a broader understanding of the conditions that contribute to the framing effect.

  5. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

    This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 30 Sep. 1995. The report deals with the development and investigation of the potential use of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. Then the data retrieval and processing were automated using a program written in C which is able to read the telemetry data and produce plots of time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.

  6. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.

  7. Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing.

    PubMed

    Schmithorst, Vincent J

    2005-04-01

    Music perception is a quite complex cognitive task, involving the perception and integration of various elements including melody, harmony, pitch, rhythm, and timbre. A preliminary functional MRI investigation of music perception was performed, using a simplified passive listening task. Group independent component analysis (ICA) was used to separate out various components involved in music processing, as the hemodynamic responses are not known a priori. Various components consistent with auditory processing, expressive language, syntactic processing, and visual association were found. The results are discussed in light of various hypotheses regarding modularity of music processing and its overlap with language processing. The results suggest that, while some networks overlap with ones used for language processing, music processing may involve its own domain-specific processing subsystems.
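
    To illustrate the analysis technique named above, the sketch below applies ICA to toy mixed signals standing in for voxel time courses; it uses scikit-learn's FastICA and is a simplification, not the study's group-ICA pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    s1 = np.sin(2 * t)                       # e.g. an "auditory" component
    s2 = np.sign(np.sin(3 * t))              # e.g. a "language" component
    S = np.c_[s1, s2] + 0.1 * rng.standard_normal((2000, 2))

    A = np.array([[1.0, 0.5], [0.5, 1.0]])   # unknown mixing of the sources
    X = S @ A.T                              # the observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)             # recovered independent components
    print(S_est.shape)                       # (2000, 2)
    ```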

  8. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to set up wafer processes correctly. The critical dimensions of components decrease following Moore's law, while the intra-wafer dispersion linked to the spatial non-uniformity of tool processes cannot decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the wafer fab to automatically adjust and tune wafer processing based on extensive process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This led us to bring spatial variability under control in real time with our SPC (Statistical Process Control) system. This paper will outline the architecture of an integrated process control system for shape monitoring in 3D, implemented in a wafer fab.

  9. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  10. Laser displacement sensor to monitor the layup process of composite laminate production

    NASA Astrophysics Data System (ADS)

    Miesen, Nick; Groves, Roger M.; Sinke, Jos; Benedictus, Rinze

    2013-04-01

    Several types of flaw can occur during the layup process of prepreg composite laminates. Quality control after the production process checks the end product by testing specimens for flaws introduced during the layup or curing process; however, by then these flaws are already irreversibly embedded in the laminate. This paper demonstrates the use of a laser displacement sensor technique applied during the layup process of prepreg laminates for in-situ detection of typical flaws that can occur during the composite production process. An incorrect number of layers and fibre wrinkling are dominant flaws during layup. These and other dominant flaws have been modeled to determine the requirements for in-situ monitoring during the layup process of prepreg laminates.

  11. Levels of integration in cognitive control and sequence processing in the prefrontal cortex.

    PubMed

    Bahlmann, Jörg; Korb, Franziska M; Gratton, Caterina; Friederici, Angela D

    2012-01-01

    Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex.

  13. Flow chemistry using milli- and microstructured reactors-from conventional to novel process windows.

    PubMed

    Illg, Tobias; Löb, Patrick; Hessel, Volker

    2010-06-01

    The terminology Novel Process Window unites different methods of improving existing processes by applying unconventional and harsh process conditions, such as process routes at greatly elevated pressure or temperature, or processing in a thermal runaway regime, to achieve a significant impact on process performance. This paper reviews parts of IMM's work, in particular the applicability of the above-mentioned Novel Process Windows to selected chemical reactions. First, general characteristics of microreactors are discussed, such as excellent mass and heat transfer and improved mixing quality. Different types of reactions are presented in which the use of microstructured devices led to increased process performance through Novel Process Windows. These examples were chosen to demonstrate how chemical reactions can benefit from the use of milli- and microstructured devices and how existing protocols can be changed toward process conditions hitherto not applicable in standard laboratory equipment. The milli- and microstructured reactors used can also offer advantages in other areas, for example, high-throughput screening of catalysts and better control of size distribution in particle synthesis through improved mixing. The chemical industry is continuously improving: much research is devoted to synthesizing high-value chemicals, optimizing existing processes with respect to process safety and energy consumption, and searching for new routes to produce such chemicals. Leitmotifs of such undertakings are often sustainable development(1) and Green Chemistry(2).

  14. Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline.

    PubMed

    Trewartha, Kevin M; Garcia, Angeles; Wolpert, Daniel M; Flanagan, J Randall

    2014-10-01

    Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly-and that has been linked to explicit memory-and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning but that aging effects on the slow process are independent of explicit memory declines. Copyright © 2014 the authors.

  15. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  16. OCLC-MARC Tape Processing: A Functional Analysis.

    ERIC Educational Resources Information Center

    Miller, Bruce Cummings

    1984-01-01

    Analyzes structure of, and data in, the OCLC-MARC record in the form delivered via OCLC's Tape Subscription Service, and outlines important processing functions involved: "unreadable tapes," duplicate records and deduping, match processing, choice processing, locations processing, "automatic" and "input" stamps,…

  17. 7 Processes that Enable NASA Software Engineering Technologies: Value-Added Process Engineering

    NASA Technical Reports Server (NTRS)

    Housch, Helen; Godfrey, Sally

    2011-01-01

    The presentation reviews Agency process requirements and the purpose, benefits, and experiences of seven software engineering processes. The processes include: product integration, configuration management, verification, software assurance, measurement and analysis, requirements management, and planning and monitoring.

  18. Risk-based Strategy to Determine Testing Requirement for the Removal of Residual Process Reagents as Process-related Impurities in Bioprocesses.

    PubMed

    Qiu, Jinshu; Li, Kim; Miller, Karen; Raghani, Anil

    2015-01-01

    The purpose of this article is to recommend a risk-based strategy for determining clearance testing requirements of the process reagents used in manufacturing biopharmaceutical products. The strategy takes account of four risk factors. Firstly, the process reagents are classified into two categories according to their safety profile and history of use: generally recognized as safe (GRAS) and potential safety concern (PSC) reagents. The clearance testing of GRAS reagents can be eliminated because of their historically safe use and the process capability to remove these reagents. An estimated safety margin (Se) value, the ratio of the exposure limit to the estimated maximum reagent amount, is then used to evaluate the necessity for testing the PSC reagents at an early development stage. The Se value is calculated from two risk factors: the starting PSC reagent amount per maximum product dose (Me) and the exposure limit (Le). A worst-case scenario is assumed to estimate the Me value, that is, the PSC reagent of interest is co-purified with the product and no clearance occurs throughout the entire purification process. No clearance testing is required for a PSC reagent if its Se value is ≥1; otherwise clearance testing is needed. Finally, the point at which the process reagent is introduced into the process is also considered in determining the necessity of clearance testing. How to use the measured safety margin as a criterion for determining PSC reagent testing at the process characterization, process validation, and commercial production stages is also described. A large number of process reagents are used in biopharmaceutical manufacturing to control process performance. Clearance testing for all of the process reagents would be an enormous analytical task. In this article, a risk-based strategy is described to eliminate unnecessary clearance testing for the majority of the process reagents using four risk factors. The risk factors included in the strategy are (i) the safety profile of the reagents, (ii) the starting amount of the process reagents used in the manufacturing process, (iii) the maximum dose of the product, and (iv) the point of introduction of the process reagents in the process. The implementation of the risk-based strategy can eliminate clearance testing for approximately 90% of the process reagents used in the manufacturing processes. This science-based strategy allows us to ensure patient safety and meet regulatory agency expectations throughout the product development life cycle. © PDA, Inc. 2015.
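
    Read literally, the clearance decision described above reduces to a little arithmetic on Se = Le/Me. The following is a minimal sketch of that rule, assuming a simple two-class reagent model; the function name and the example numbers are invented for illustration, not taken from the paper.

        # Hypothetical sketch of the risk-based clearance-testing rule.
        # Se = Le / Me; Se >= 1 means no clearance testing is required.
        def needs_clearance_testing(category, le, me):
            """category: 'GRAS' or 'PSC'; le: exposure limit per maximum
            product dose (Le); me: worst-case reagent amount per maximum
            product dose (Me), assuming no clearance during purification."""
            if category == "GRAS":
                return False           # historically safe; testing eliminated
            se = le / me               # estimated safety margin Se
            return se < 1              # test only when the margin is below 1

        print(needs_clearance_testing("PSC", le=5.0, me=2.0))   # False (Se = 2.5)
        print(needs_clearance_testing("PSC", le=0.5, me=2.0))   # True  (Se = 0.25)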

  19. Titania nanotube powders obtained by rapid breakdown anodization in perchloric acid electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Saima, E-mail: saima.ali@aalto.fi; Hannula, Simo-Pekka

    Titania nanotube (TNT) powders are prepared by rapid breakdown anodization (RBA) in a 0.1 M perchloric acid (HClO₄) solution (Process 1), and in an ethylene glycol (EG) mixture with HClO₄ and water (Process 2). A study of the as-prepared and calcined TNT powders obtained by both processes is implemented to evaluate and compare the morphology, crystal structure, specific surface area, and composition of the nanotubes. Longer TNTs are formed in Process 1, while comparatively larger pore diameter and wall thickness are obtained for the nanotubes prepared by Process 2. The TNTs obtained by Process 1 are converted to nanorods at 350 °C, while nanotubes obtained by Process 2 preserve their tubular morphology up to 350 °C. In addition, the TNTs prepared in the aqueous electrolyte have a crystalline structure, whereas the TNTs obtained by Process 2 are amorphous. Samples calcined up to 450 °C have XRD peaks from the anatase phase, while the rutile phase appears at 550 °C for the TNTs prepared by both processes. The Raman spectra also show clear anatase peaks for all samples except the as-prepared sample obtained by Process 2, thus supporting the XRD findings. FTIR spectra reveal the presence of O-H groups in the structure for the TNTs obtained by both processes; however, the presence is less prominent for annealed samples. Additionally, TNTs obtained by Process 2 have a carbonaceous impurity in the structure, attributed to the electrolyte used in that process. While a negligible weight loss is typical for TNTs prepared from aqueous electrolytes, a weight loss of 38.6% in the temperature range of 25–600 °C is found for TNTs prepared in the EG electrolyte (Process 2). A large specific surface area of 179.2 m² g⁻¹ is obtained for TNTs prepared by Process 1, whereas Process 2 produces nanotubes with a lower specific surface area. The difference appears to correspond to the dimensions of the nanotubes obtained by the two processes. - Graphical abstract: Titania nanotube powders prepared by Process 1 and Process 2 have different crystal structure and specific surface area. - Highlights: • Titania nanotube (TNT) powder is prepared in a low-water organic electrolyte. • Characterization of TNT powders prepared from aqueous and organic electrolytes. • TNTs prepared by Process 1 are crystalline with higher specific surface area. • TNTs obtained by Process 2 have carbonaceous impurities in the structure.

  20. A processing approach to the working memory/long-term memory distinction: evidence from the levels-of-processing span task.

    PubMed

    Rose, Nathan S; Craik, Fergus I M

    2012-07-01

    Recent theories suggest that performance on working memory (WM) tasks involves retrieval from long-term memory (LTM). To examine whether WM and LTM tests have common principles, Craik and Tulving's (1975) levels-of-processing paradigm, which is known to affect LTM, was administered as a WM task: Participants made uppercase, rhyme, or category-membership judgments about words, and immediate recall of the words was required after every 3 or 8 processing judgments. In Experiment 1, immediate recall did not demonstrate a levels-of-processing effect, but a subsequent LTM test (delayed recognition) of the same words did show a benefit of deeper processing. Experiment 2 showed that surprise immediate recall of 8-item lists did demonstrate a levels-of-processing effect, however. A processing account of the conditions in which levels-of-processing effects are and are not found in WM tasks was advanced, suggesting that the extent to which levels-of-processing effects are similar between WM and LTM tests largely depends on the amount of disruption to active maintenance processes. (c) 2012 APA, all rights reserved.

  1. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing

    PubMed Central

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I.; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word’s perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output. PMID:26217288

  2. Process Monitoring Evaluation and Implementation for the Wood Abrasive Machining Process

    PubMed Central

    Saloni, Daniel E.; Lemaster, Richard L.; Jackson, Steven D.

    2010-01-01

    Wood processing industries have continuously developed and improved technologies and processes to transform wood to obtain better final product quality and thus increase profits. Abrasive machining is one of the most important of these processes and therefore merits special attention and study. The objective of this work was to evaluate and demonstrate a process monitoring system for use in the abrasive machining of wood and wood-based products. The system developed increases the life of the belt by detecting (using process monitoring sensors) and removing (by cleaning) the abrasive loading during the machining process. This study focused on abrasive belt machining processes and included substantial background work, which provided a solid base for understanding the behavior of the abrasive and the different ways that the abrasive machining process can be monitored. In addition, the background research showed that abrasive belts can effectively be cleaned by the appropriate cleaning technique. The process monitoring system developed included acoustic emission sensors, which tended to be sensitive to belt wear and platen vibration but not loading, and optical sensors, which were sensitive to abrasive loading. PMID:22163477

  3. Adaptive memory: determining the proximate mechanisms responsible for the memorial advantages of survival processing.

    PubMed

    Burns, Daniel J; Burns, Sarah A; Hwang, Ana J

    2011-01-01

    J. S. Nairne, S. R. Thompson, and J. N. S. Pandeirada (2007) suggested that our memory systems may have evolved to help us remember fitness-relevant information and showed that retention of words rated for their relevance to survival is superior to that of words encoded under other deep processing conditions. The authors present 4 experiments that uncover the proximate mechanisms likely responsible. The authors obtained a recall advantage for survival processing compared with conditions that promoted only item-specific processing or only relational processing. This effect was eliminated when control conditions encouraged both item-specific and relational processing. Data from separate measures of item-specific and relational processing generally were consistent with the view that the memorial advantage for survival processing results from the encoding of both types of processing. Although the present study suggests the proximate mechanisms for the effect, the authors argue that survival processing may be fundamentally different from other memory phenomena for which item-specific and relational processing differences have been implicated. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  4. Implementation of quality by design toward processing of food products.

    PubMed

    Rathore, Anurag S; Kapoor, Gautam

    2017-05-28

    Quality by design (QbD) is a systematic approach that begins with predefined objectives and emphasizes product and process understanding and process control. It is an approach based on principles of sound science and quality risk management. As the food processing industry continues to embrace the idea of in-line, online, and/or at-line sensors and real-time characterization for process monitoring and control, the existing gaps with regard to our ability to monitor multiple parameters/variables associated with the manufacturing process will be alleviated over time. Investments made in developing tools and approaches that facilitate high-throughput analytical and process development, process analytical technology, design of experiments, risk analysis, knowledge management, and enhancement of process/product understanding would pave the way for operational and economic benefits later in the commercialization process and across other product pipelines. This article aims to achieve two major objectives: first, to review the progress that has been made in recent years on the topic of QbD implementation in the processing of food products, and second, to present a case study that illustrates the benefits of such QbD implementation.

  5. Energy saving processes for nitrogen removal in organic wastewater from food processing industries in Thailand.

    PubMed

    Johansen, N H; Suksawad, N; Balslev, P

    2004-01-01

    Nitrogen removal from organic wastewater is becoming a requirement in developed communities. The use of nitrite as an intermediate in the treatment of wastewater has been largely ignored, but it is actually a relevant energy-saving process compared to conventional nitrification/denitrification using nitrate as the intermediate. Full-scale and pilot-scale results using this process are presented. The process needs some additional process considerations and process control to be utilized. Especially under tropical conditions the nitritation process will occur easily, and it must be expected that many activated sludge (AS) treatment plants in the food industry already produce NO2-N. This uncontrolled nitrogen conversion can be the main cause of sludge bulking problems. It is expected that sludge bulking problems can in many cases be solved simply by changing the process control in order to run a more consistent nitritation. Theoretically, this process decreases the oxygen consumption for oxidation by 25%, and the consumption of carbon source for the reduction is decreased by 40% compared to the conventional process.
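
    The quoted 25% and 40% savings are consistent with textbook nitrogen stoichiometry; the check below is a back-of-the-envelope illustration of our own, not taken from the paper. Oxidizing ammonium only to nitrite consumes 1.5 mol O2 per mol N instead of 2, and denitrifying from nitrite transfers 3 electrons per N instead of 5.

        # Sanity check of the stated savings (illustrative, not from the paper).
        # Nitrification: NH4+ + 1.5 O2 -> NO2- (nitrite stop) vs. 2 O2 -> NO3-.
        o2_saving = 1 - 1.5 / 2.0        # 0.25 -> 25% less oxygen demand

        # Denitrification: NO2- (+3 -> 0) needs 3 e- per N; NO3- (+5 -> 0) needs 5 e-.
        carbon_saving = 1 - 3 / 5        # 0.40 -> 40% less electron donor (carbon)

        print(o2_saving, carbon_saving)  # 0.25 0.4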

  6. Application of Ozone MBBR Process in Refinery Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Lin, Wang

    2018-01-01

    The Moving Bed Biofilm Reactor (MBBR) is a sewage treatment technology based on a fluidized bed; it can also be regarded as an efficient new reactor type positioned between the activated sludge method and the biofilm method. The application of the ozone-MBBR process in refinery wastewater treatment is studied here, the key point being the design of a combined ozone + MBBR process based on the MBBR process. The ozone + MBBR process is used to treat the COD in the concentrate discharged from the refinery wastewater treatment plant. The experimental results show that the average removal rate of COD is 46.0%–67.3% in the treatment of reverse osmosis concentrate by the ozone-MBBR process, and the effluent can meet the relevant standard requirements. Compared with the traditional process, the ozone-MBBR process is more flexible. The investment for this process consists mainly of the ozone generator, blower, and similar items, whose prices are relatively low; these costs can be offset against the extra investment required by traditional activated sludge processes. At the same time, the ozone-MBBR process has obvious advantages in effluent quality, stability, and other aspects.

  7. Models of recognition: A review of arguments in favor of a dual-process account

    PubMed Central

    Diana, Rachel A.; Reder, Lynne M.; Arndt, Jason; Park, Heekyeong

    2008-01-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models. PMID:16724763

  8. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing.

    PubMed

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word's perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output.

  9. Techno-economic analysis of biocatalytic processes for production of alkene epoxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borole, Abhijeet P

    2007-01-01

    A techno-economic analysis of two different bioprocesses was conducted, one for the conversion of propylene to propylene oxide (PO) and the other for the conversion of styrene to styrene epoxide (SO). The first process was a lipase-mediated chemo-enzymatic reaction, whereas the second was a one-step enzymatic process using chloroperoxidase. The PO produced through the chemo-enzymatic process is a racemic product, whereas the latter process (based on chloroperoxidase) produces an enantio-pure product. The former process thus falls under the category of a high-volume commodity chemical (PO), whereas the latter is a low-volume, high-value product (SO). A simulation of the process was conducted using the bioprocess engineering software SuperPro Designer v6.0 (Intelligen, Inc., Scotch Plains, NJ) to determine the economic feasibility of the process. The purpose of the exercise was to compare biocatalytic processes with existing chemical processes for the production of alkene epoxides. The results show that further improvements in biocatalyst stability are needed to make these bioprocesses competitive with chemical processes.

  10. The representation of conceptual knowledge: visual, auditory, and olfactory imagery compared with semantic processing.

    PubMed

    Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti

    2014-05-01

    Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and to judge whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that the semantic processing was faster than the visual and the auditory imaginative processing, whereas no differentiation was possible between the semantic processing and the olfactory imaginative processing. Experiment 2 revealed that only the visual imaginative processing could be differentiated from the semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images strongly rely on semantic representations. By contrast, no differentiation is possible within the olfactory domain. The results are discussed within the framework of the imagery debate.

  11. Working memory load eliminates the survival processing effect.

    PubMed

    Kroneisen, Meike; Rummel, Jan; Erdfelder, Edgar

    2014-01-01

    In a series of experiments, Nairne, Thompson, and Pandeirada (2007) demonstrated that words judged for their relevance to a survival scenario are remembered better than words judged for a scenario not relevant on a survival dimension. They explained this survival-processing effect by arguing that nature "tuned" our memory systems to process and remember fitness-relevant information. Kroneisen and Erdfelder (2011) proposed that it may not be survival processing per se that facilitates recall but the richness and distinctiveness with which information is encoded. To further test this account, we investigated how the survival processing effect is affected by cognitive load. If the survival processing effect is due to automatic processes or, alternatively, if survival processing is routinely prioritized in dual-task contexts, we would expect this effect to persist under cognitive load conditions. If the effect relies on cognitively demanding processes like richness and distinctiveness of encoding, however, the survival processing benefit should be hampered by increased cognitive load during encoding. Results were in line with the latter prediction, that is, the survival processing effect vanished under dual-task conditions.

  12. E-learning process maturity level: a conceptual framework

    NASA Astrophysics Data System (ADS)

    Rahmah, A.; Santoso, H. B.; Hasibuan, Z. A.

    2018-03-01

    ICT advancement is certain to continue, with impacts in many domains, including learning in both formal and informal situations. It leads to a new mindset: we should not only utilize the given ICT to support the learning process, but also improve it gradually, involving many factors. This phenomenon is called e-learning process evolution. Accordingly, this study explores the maturity level concept to provide a direction for gradual improvement and progression monitoring for the individual e-learning process. An extensive literature review, observation, and construct formation were conducted to develop a conceptual framework for the e-learning process maturity level. The conceptual framework consists of the learner, the e-learning process, continuous improvement, the evolution of the e-learning process, technology, and learning objectives. The evolution of the e-learning process is depicted as the current versus expected conditions of the e-learning process maturity level. The study concludes that the e-learning process maturity level conceptual framework may guide the evolution roadmap for the e-learning process, accelerate the evolution, and decrease the negative impact of ICT. The conceptual framework will be verified and tested in a future study.

  13. Heat input and accumulation for ultrashort pulse processing with high average power

    NASA Astrophysics Data System (ADS)

    Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart

    2018-05-01

    Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even for the application of picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products, and a distinct fraction of the absorbed energy remains as residual heat in the processed workpiece. For low average powers and power densities, this heat is usually not relevant for the processing results and simply dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rates is discussed, as well as heat accumulation for materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.
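
    To see why repetition rate matters, the toy model below (our assumption for illustration, not the authors' model) lets each pulse leave a fixed residual temperature contribution that decays like an instantaneous point source in a semi-infinite solid, proportional to t^(-3/2), and sums the tails of all earlier pulses just before the next pulse arrives.

        # Toy heat-accumulation model: baseline temperature rise before pulse n,
        # summing the t^(-3/2) cooling tails of all earlier pulses.
        def accumulated_rise(n_pulses, rep_rate_hz, dt_at_1us=1.0):
            """dt_at_1us is the (arbitrary) single-pulse temperature rise
            measured 1 microsecond after a pulse, used only to normalize."""
            period = 1.0 / rep_rate_hz
            rise = 0.0
            for k in range(1, n_pulses):
                t = k * period                         # age of pulse k's heat
                rise += dt_at_1us * (1e-6 / t) ** 1.5  # point-source cooling
            return rise

        for f in (100e3, 1e6, 10e6):   # 100 kHz, 1 MHz, 10 MHz
            print(f, accumulated_rise(1000, f))

    In this model the sum converges for a fixed repetition rate but grows steeply with it (roughly as f^1.5), which matches the qualitative behavior the abstract attributes to high-average-power, high-repetition-rate processing.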

  14. Defining and reconstructing clinical processes based on IHE and BPMN 2.0.

    PubMed

    Strasser, Melanie; Pfeifer, Franz; Helm, Emmanuel; Schuler, Andreas; Altmann, Josef

    2011-01-01

    This paper describes the current status and results of our process management system for defining and reconstructing clinical care processes, which makes it possible to compare, analyze, and evaluate clinical processes and, further, to identify high-cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At the heart of the system is BPMN, a modeling notation and specification language that allows the definition and execution of clinical processes. The system provides functionality to define clinical core processes independently of the healthcare information system and to execute the processes in a workflow engine. Furthermore, the reconstruction of clinical processes is done by evaluating an IHE audit log database, which records patient movements within a health care facility. The main goal of the system is to assist hospital operators and clinical process managers in detecting discrepancies between defined and actual clinical processes, as well as in identifying the main causes of high medical costs. Beyond that, the system can potentially contribute to reconstructing and improving clinical processes and to enhancing cost control and patient care quality.
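
    The reconstruction step described above amounts to recovering, for each patient, the time-ordered sequence of recorded movements and comparing it with the BPMN-defined process. A minimal sketch under an invented event schema (real IHE audit records are considerably richer):

        # Group audit events by patient, sort by timestamp, and print the
        # actually executed pathway. The schema and data are illustrative only.
        from collections import defaultdict

        audit_log = [
            {"patient": "P1", "time": "2011-03-01T08:10", "station": "Admission"},
            {"patient": "P1", "time": "2011-03-01T09:40", "station": "Radiology"},
            {"patient": "P1", "time": "2011-03-01T08:55", "station": "Lab"},
        ]

        traces = defaultdict(list)
        for event in audit_log:
            traces[event["patient"]].append(event)

        for patient, events in traces.items():
            path = [e["station"] for e in sorted(events, key=lambda e: e["time"])]
            print(patient, " -> ".join(path))   # P1 Admission -> Lab -> Radiology

    Each reconstructed trace can then be checked against the defined BPMN model to flag deviations.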

  15. Process qualification and testing of LENS deposited AY1E0125 D-bottle brackets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atwood, Clinton J.; Smugeresky, John E.; Jew, Michael

    2006-11-01

    The LENS Qualification team had the goal of performing a process qualification for the Laser Engineered Net Shaping™ (LENS®) process. Process qualification requires that a part be selected for process demonstration. The AY1E0125 D-Bottle Bracket from the W80-3 was selected for this work. The repeatability of the LENS process was baselined to determine process parameters. Six D-Bottle brackets were deposited using LENS, machined to final dimensions, and tested in comparison to conventionally processed brackets. The tests, taken from ES1E0003, included a mass analysis and structural dynamic testing, including free-free and assembly-level modal tests and Haversine shock tests. The LENS brackets performed with very similar characteristics to the conventionally processed brackets. Based on the results of the testing, it was concluded that the performance of the brackets made them eligible for parallel path testing in subsystem-level tests. The testing results and process rigor qualified the LENS process as detailed in EER200638525A.

  16. Sustainability assessment of shielded metal arc welding (SMAW) process

    NASA Astrophysics Data System (ADS)

    Alkahla, Ibrahim; Pervaiz, Salman

    2017-09-01

    The shielded metal arc welding (SMAW) process is one of the most commonly employed material joining processes, utilized in various industrial sectors such as marine, ship-building, automotive, aerospace, construction, and petrochemicals. Increasing pressure on the manufacturing sector demands that the welding process be sustainable in nature. The SMAW process incorporates several types of input and output streams. The sustainability concerns associated with the SMAW process are linked to these streams: electrical energy requirements, input material consumption, slag formation, fume emission, and hazardous working conditions affecting human health and occupational safety. To enhance the environmental performance of SMAW welding, there is a need to characterize the process under the broad framework of sustainability. Most of the available literature focuses on the technical and economic aspects of the welding process; the environmental and social aspects are rarely addressed. This study reviews the SMAW process with respect to the triple bottom line (economic, environmental, and social) approach to sustainability. Finally, the study concludes with recommendations toward achieving an economical and sustainable SMAW welding process.

  17. Decontamination and disposal of PCB wastes.

    PubMed Central

    Johnston, L E

    1985-01-01

    Decontamination and disposal processes for PCB wastes are reviewed. Processes are classed as incineration, chemical reaction, or decontamination. Incineration technologies are not limited to the rigorous high-temperature type but include those with innovations in the use of oxidant, heat transfer, and residue recycling. Chemical processes include the sodium processes, radiant energy processes, and low-temperature oxidations. Typical processing rates and associated costs are provided where possible. PMID:3928363

  18. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials...inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation...process, the speed at which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in

  19. Definition and documentation of engineering processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, G.W.

    1997-11-01

    This tutorial is an extract of a two-day workshop developed under the auspices of the Quality Engineering Department at Sandia National Laboratories. The presentation starts with basic definitions and addresses why processes should be defined and documented. It covers three primary topics: (1) process considerations and rationale, (2) an approach to defining and documenting engineering processes, and (3) an IDEF0 model of the process for defining engineering processes.

  20. Method for enhanced atomization of liquids

    DOEpatents

    Thompson, Richard E.; White, Jerome R.

    1993-01-01

    In a process for atomizing a slurry or liquid process stream in which a slurry or liquid is passed through a nozzle to provide a primary atomized process stream, an improvement which comprises subjecting the liquid or slurry process stream to microwave energy as the liquid or slurry process stream exits the nozzle, wherein sufficient microwave heating is provided to flash vaporize the primary atomized process stream.
