The finite scaling for S = 1 XXZ chains with uniaxial single-ion-type anisotropy
NASA Astrophysics Data System (ADS)
Wang, Honglei; Xiong, Xingliang
2014-03-01
The scaling behavior of criticality for spin-1 XXZ chains with uniaxial single-ion-type anisotropy is investigated by employing the infinite matrix product state representation with the infinite time evolving block decimation method. At criticality, the accuracy of the ground state of a system is limited by the truncation dimension χ of the matrix product state representation. We present four pieces of evidence for the scaling of the entanglement entropy, the largest eigenvalue of the Schmidt decomposition, the correlation length, and the connection between the actual correlation length ξ and the energy. The results show that these finite scalings are governed by the central charge of the critical system. They also demonstrate that the infinite time evolving block decimation algorithm with the infinite matrix product state representation can be a quite accurate method for simulating critical properties.
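The finite-entanglement scaling described in this abstract can be illustrated with a short fit: given iTEBD data at several truncation dimensions χ, the correlation length grows as ξ ∝ χ^κ, and the half-chain entropy for a single cut in an infinite critical chain follows the Calabrese-Cardy form S = (c/6) ln ξ + const. The sketch below uses placeholder data and hypothetical variable names to show how κ and the central charge c would be extracted; it is not the authors' code.

```python
import numpy as np

# Hypothetical iTEBD outputs at criticality: truncation dimension chi,
# half-chain entanglement entropy S, and correlation length xi.
chi = np.array([8, 16, 32, 64, 128])
S   = np.array([0.91, 1.02, 1.13, 1.24, 1.35])   # placeholder values
xi  = np.array([11.0, 24.0, 52.0, 113.0, 245.0]) # placeholder values

# Finite-entanglement scaling: xi ~ chi**kappa, so log(xi) is linear in log(chi).
kappa, _ = np.polyfit(np.log(chi), np.log(xi), 1)

# Conformal scaling of the entropy with the correlation length,
# S = (c/6) ln(xi) + const for one cut in an infinite chain, yields c.
slope, const = np.polyfit(np.log(xi), S, 1)
c = 6.0 * slope

print(f"kappa ~ {kappa:.3f}, central charge c ~ {c:.3f}")
```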
Quasi-soliton scattering in quantum spin chains
NASA Astrophysics Data System (ADS)
Vlijm, R.; Ganahl, M.; Fioretto, D.; Brockmann, M.; Haque, M.; Evertz, H. G.; Caux, J.-S.
2015-12-01
The quantum scattering of magnon bound states in the anisotropic Heisenberg spin chain is shown to display features similar to the scattering of solitons in classical exactly solvable models. Localized colliding Gaussian wave packets of bound magnons are constructed from string solutions of the Bethe equations and subsequently evolved in time, relying on an algebraic Bethe ansatz based framework for the computation of local expectation values in real space-time. The local magnetization profile shows the trajectories of colliding wave packets of bound magnons, which obtain a spatial displacement upon scattering. Analytic predictions on the displacements for various values of anisotropy and string lengths are derived from scattering theory and Bethe ansatz phase shifts, matching time-evolution fits on the displacements. The time-evolved block decimation algorithm allows for the study of scattering displacements from spin-block states, showing similar scattering displacement features.
Entangled Dynamics in Macroscopic Quantum Tunneling of Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Alcala, Diego A.; Glick, Joseph A.; Carr, Lincoln D.
2017-05-01
Tunneling of a quasibound state is a nonsmooth process in the entangled many-body case. Using time-evolving block decimation, we show that repulsive (attractive) interactions speed up (slow down) tunneling. While the escape time scales exponentially with small interactions, the maximization time of the von Neumann entanglement entropy between the remaining quasibound and escaped atoms scales quadratically. Stronger interactions require higher-order corrections. Entanglement entropy is maximized when about half the atoms have escaped.
Entanglement dynamics in critical random quantum Ising chain with perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yichen, E-mail: ychuang@caltech.edu
We simulate the entanglement dynamics in a critical random quantum Ising chain with generic perturbations using the time-evolving block decimation algorithm. Starting from a product state, we observe super-logarithmic growth of entanglement entropy with time. The numerical result is consistent with the analytical prediction of Vosk and Altman using a real-space renormalization group technique.
Highlights:
• We study the dynamical quantum phase transition between many-body localized phases.
• We simulate the dynamics of a very long random spin chain with matrix product states.
• We observe numerically super-logarithmic growth of entanglement entropy with time.
NASA Astrophysics Data System (ADS)
Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone
2018-01-01
We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
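The randomized factorization this abstract refers to can be sketched with a standard Halko-Martinsson-Tropp randomized SVD, used as a drop-in replacement for the truncated SVD of a TEBD/DMRG truncation step. This is a generic illustration, not the authors' implementation; `theta` stands for a two-site tensor reshaped into a matrix.

```python
import numpy as np

def randomized_svd(theta, k, oversample=10, n_iter=2, rng=None):
    """Rank-k randomized SVD (Halko-Martinsson-Tropp style) of a matrix,
    usable in place of the deterministic truncated SVD in a TEBD- or
    DMRG-style truncation step. A sketch, not the paper's routine."""
    rng = np.random.default_rng(rng)
    m, n = theta.shape
    # Sample the range of theta with a Gaussian test matrix.
    Q = theta @ rng.standard_normal((n, k + oversample))
    # A few power iterations sharpen the spectrum near a critical point,
    # where singular values decay slowly.
    for _ in range(n_iter):
        Q = theta @ (theta.T @ Q)
    Q, _ = np.linalg.qr(Q)
    # SVD of the small projected matrix, lifted back to the full space.
    U_small, s, Vh = np.linalg.svd(Q.T @ theta, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vh[:k, :]

# Toy usage: truncate a random "two-site wavefunction" matrix to bond dimension 32.
theta = np.random.default_rng(0).standard_normal((200, 200))
U, s, Vh = randomized_svd(theta, k=32)
```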
NASA Astrophysics Data System (ADS)
Cha, Min-Chul; Chung, Myung-Hoon
2018-05-01
We study quantum phase transition of interacting fermions by measuring the local entanglement entropy in the one-dimensional Hubbard model. The reduced density matrices for blocks of a few sites are constructed from the ground state wave function in infinite systems by adopting the matrix product state representation where time-evolving block decimations are performed to obtain the lowest energy states. The local entanglement entropy, constructed from the reduced density matrices, as a function of the chemical potential shows clear signatures of the Mott transition. The value of the central charge, numerically determined from the universal properties of the local entanglement entropy, confirms that the transition is caused by the suppression of the charge degrees of freedom.
Phase diagram of the isotropic spin-3/2 model on the z=3 Bethe lattice
NASA Astrophysics Data System (ADS)
Depenbrock, Stefan; Pollmann, Frank
2013-07-01
We study an SU(2) symmetric spin-3/2 model on the z=3 Bethe lattice using the infinite time evolving block decimation (iTEBD) method. This model is shown to exhibit a rich phase diagram. We compute several order parameters which allow us to identify a ferromagnetic, a ferrimagnetic, an antiferromagnetic, as well as a dimerized phase. We calculate the entanglement spectra from which we conclude the existence of a symmetry protected topological phase that is characterized by S=1/2 edge spins. Details of the iTEBD algorithm used for the simulations are included.
Hybrid Semiclassical Theory of Quantum Quenches in One-Dimensional Systems
NASA Astrophysics Data System (ADS)
Moca, Cǎtǎlin Paşcu; Kormos, Márton; Zaránd, Gergely
2017-09-01
We develop a hybrid semiclassical method to study the time evolution of one-dimensional quantum systems in and out of equilibrium. Our method handles internal degrees of freedom completely quantum mechanically by a modified time-evolving block decimation method while treating orbital quasiparticle motion classically. We can follow dynamics up to time scales well beyond the reach of standard numerical methods to observe the crossover between preequilibrated and locally phase equilibrated states. As an application, we investigate the quench dynamics and phase fluctuations of a pair of tunnel-coupled one-dimensional Bose condensates. We demonstrate the emergence of soliton-collision-induced phase propagation, soliton-entropy production, and multistep thermalization. Our method can be applied to a wide range of gapped one-dimensional systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
... for Four Decimal Point Pricing for Block and Exchange for Physical (``EFPs'') Trades August 8, 2011... block trades and the futures component of EFP trades to be traded/priced in four decimal points. Regular trades (non-block or non-EFP) will continue to trade in only two decimal points. The text of the...
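For readers unfamiliar with tick-size conventions, the practical difference between four-decimal and two-decimal pricing is a quantization step. A minimal illustration with Python's decimal module; the prices and the rounding convention are assumptions of this sketch, not the rule text.

```python
from decimal import Decimal, ROUND_HALF_EVEN

TICK_BLOCK_EFP = Decimal("0.0001")  # four-decimal pricing for block/EFP trades
TICK_REGULAR = Decimal("0.01")      # regular trades stay at two decimals

def round_to_tick(price: str, tick: Decimal) -> Decimal:
    # Quantize a price to the allowed tick (rounding convention assumed here).
    return Decimal(price).quantize(tick, rounding=ROUND_HALF_EVEN)

print(round_to_tick("1.23456", TICK_BLOCK_EFP))  # 1.2346  (hypothetical price)
print(round_to_tick("1.23456", TICK_REGULAR))    # 1.23
```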
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
Classical simulation of infinite-size quantum lattice systems in two spatial dimensions.
Jordan, J; Orús, R; Vidal, G; Verstraete, F; Cirac, J I
2008-12-19
We present an algorithm to simulate two-dimensional quantum lattice systems in the thermodynamic limit. Our approach builds on the projected entangled-pair state algorithm for finite lattice systems [F. Verstraete and J. I. Cirac, arXiv:cond-mat/0407066] and the infinite time-evolving block decimation algorithm for infinite one-dimensional lattice systems [G. Vidal, Phys. Rev. Lett. 98, 070201 (2007)]. The present algorithm allows for the computation of the ground state and the simulation of time evolution in infinite two-dimensional systems that are invariant under translations. We demonstrate its performance by obtaining the ground state of the quantum Ising model and analyzing its second order quantum phase transition.
Macroscopic quantum tunneling escape of Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Zhao, Xinxin; Alcala, Diego A.; McLain, Marie A.; Maeda, Kenji; Potnis, Shreyas; Ramos, Ramon; Steinberg, Aephraim M.; Carr, Lincoln D.
2017-12-01
Recent experiments on macroscopic quantum tunneling reveal a nonexponential decay of the number of atoms trapped in a quasibound state behind a potential barrier. Through both experiment and theory, we demonstrate this nonexponential decay results from interactions between atoms. Quantum tunneling of tens of thousands of 87Rb atoms in a Bose-Einstein condensate is modeled by a modified Jeffreys-Wentzel-Kramers-Brillouin model, taking into account the effective time-dependent barrier induced by the mean field. Three-dimensional Gross-Pitaevskii simulations corroborate a mean-field result when compared with experiments. However, with one-dimensional modeling using time-evolving block decimation, we present an effective renormalized mean-field theory that suggests many-body dynamics for which a bare mean-field theory may not apply.
Quantum transverse-field Ising model on an infinite tree from matrix product states
NASA Astrophysics Data System (ADS)
Nagaj, Daniel; Farhi, Edward; Goldstone, Jeffrey; Shor, Peter; Sylvester, Igor
2008-06-01
We give a generalization to an infinite tree geometry of Vidal’s infinite time-evolving block decimation (iTEBD) algorithm [G. Vidal, Phys. Rev. Lett. 98, 070201 (2007)] for simulating an infinite line of quantum spins. We numerically investigate the quantum Ising model in a transverse field on the Bethe lattice using the matrix product state ansatz. We observe a second order phase transition, with certain key differences from the transverse field Ising model on an infinite spin chain. We also investigate a transverse field Ising model with a specific longitudinal field. When the transverse field is turned off, this model has a highly degenerate ground state as opposed to the pure Ising model whose ground state is only doubly degenerate.
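A minimal sketch of the underlying iTEBD update on an infinite line may help orient readers before the tree generalization: apply a two-site gate in Vidal's Γ-λ canonical form, truncate with an SVD, and repeat while alternating bonds. The tensor conventions, the bond-alternation loop, and the gate construction are assumptions of this sketch, not the paper's code.

```python
import numpy as np

def itebd_step(GA, GB, lA, lB, gate, chi_max):
    """One two-site iTEBD update (Vidal canonical form) on an infinite chain.
    GA, GB: Gamma tensors with indices (left, physical, right);
    lA, lB: Schmidt-value vectors on the A-B and B-A bonds;
    gate:   (d,d,d,d) two-site gate, e.g. a Trotter factor expm(-dt*h) reshaped.
    Returns updated Gammas and the new Schmidt vector on the A-B bond;
    for the other bond, call again with the roles of A and B swapped."""
    d = GA.shape[1]
    # Two-site wavefunction with the surrounding Schmidt weights attached.
    theta = np.einsum('a,aib,b,bjc,c->aijc', lB, GA, lA, GB, lB)
    # Apply the gate on the two physical legs.
    theta = np.einsum('ijkl,aklc->aijc', gate, theta)
    # SVD across the central bond and truncate to at most chi_max states.
    chiL, chiR = theta.shape[0], theta.shape[3]
    U, s, Vh = np.linalg.svd(theta.reshape(chiL * d, d * chiR),
                             full_matrices=False)
    chi = min(chi_max, int(np.sum(s > 1e-12)))
    s = s[:chi] / np.linalg.norm(s[:chi])
    U = U[:, :chi].reshape(chiL, d, chi)
    Vh = Vh[:chi, :].reshape(chi, d, chiR)
    # Strip the outer lB weights to recover Gamma tensors (lB assumed nonzero).
    GA_new = np.einsum('a,aib->aib', 1.0 / lB, U)
    GB_new = np.einsum('bic,c->bic', Vh, 1.0 / lB)
    return GA_new, GB_new, s

# Toy usage with a trivial (identity) gate on random tensors:
d, chi = 2, 8
rng = np.random.default_rng(0)
GA, GB = rng.standard_normal((chi, d, chi)), rng.standard_normal((chi, d, chi))
lA, lB = np.abs(rng.standard_normal(chi)), np.abs(rng.standard_normal(chi))
gate = np.eye(d * d).reshape(d, d, d, d)
GA, GB, lA = itebd_step(GA, GB, lA, lB, gate, chi_max=8)
```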
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
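The block-filtering economics discussed above rest on the polyphase identity: moving the downsampler through the filter lets every subfilter run at the low output rate. Below is a minimal single-channel polyphase decimator, the maximally decimated building block that under-decimated banks generalize, with a sanity check against direct filter-then-downsample; the filter length and decimation ratio are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def polyphase_decimate(x, h, M):
    """Decimate x by M with FIR filter h in polyphase form: all filtering
    runs at the low output rate, which is where the reduced working rate
    and parallelism discussed in the abstract come from."""
    n_out = len(x) // M
    y = np.zeros(n_out)
    for k in range(M):
        hk = h[k::M]                                        # k-th polyphase component of h
        xk = np.concatenate([np.zeros(k), x])[::M][:n_out]  # branch input x[nM - k]
        y += lfilter(hk, [1.0], xk)
    return y

# Sanity check against direct filter-then-downsample.
M = 4
h = firwin(64, 1.0 / M)                    # anti-alias prototype lowpass
x = np.random.default_rng(0).standard_normal(1024)
direct = lfilter(h, [1.0], x)[::M]
assert np.allclose(polyphase_decimate(x, h, M), direct[:len(x) // M])
```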
NASA Astrophysics Data System (ADS)
Nikmehr, Hooman; Phillips, Braden; Lim, Cheng-Chew
2005-02-01
Recently, decimal arithmetic has become attractive in the financial and commercial world including banking, tax calculation, currency conversion, insurance and accounting. Although computers are still carrying out decimal calculation using software libraries and binary floating-point numbers, it is likely that in the near future, all processors will be equipped with units performing decimal operations directly on decimal operands. One critical building block for some complex decimal operations is the decimal carry-free adder. This paper discusses the mathematical framework of the addition, introduces a new signed-digit format for representing decimal numbers and presents an efficient architectural implementation. Delay estimation analysis shows that the adder offers improved performance over earlier designs.
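The carry-free property comes from signed digits: each position sum is split into an interim sum and a transfer digit so that a carry can never propagate more than one place. The sketch below implements a generic textbook signed-digit decimal scheme, not the specific format introduced in the paper.

```python
def carry_free_add(a, b):
    """Carry-free addition of signed-digit decimal numbers, given as lists of
    digits in -9..9, least significant first. A generic textbook scheme,
    not the paper's proposed format."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    interim, transfer = [], []
    for x, y in zip(a, b):
        w = x + y                         # position sum, in -18..18
        if w >= 6:    t, s = 1, w - 10    # s in -4..8
        elif w <= -6: t, s = -1, w + 10   # s in -8..4
        else:         t, s = 0, w         # s in -5..5
        interim.append(s)
        transfer.append(t)
    # The transfer digit moves one place left and can never trigger another
    # carry, because |interim + transfer| <= 9 by construction.
    result = [interim[0]] + [interim[i] + transfer[i - 1] for i in range(1, n)]
    result.append(transfer[-1])
    return result

def to_int(digits):
    return sum(d * 10**i for i, d in enumerate(digits))

# 738 + 487, least-significant digit first:
assert to_int(carry_free_add([8, 3, 7], [7, 8, 4])) == 738 + 487
```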
Flux quench in a system of interacting spinless fermions in one dimension
NASA Astrophysics Data System (ADS)
Nakagawa, Yuya O.; Misguich, Grégoire; Oshikawa, Masaki
2016-05-01
We study a quantum quench in a one-dimensional spinless fermion model (equivalent to the XXZ spin chain), where a magnetic flux is suddenly switched off. This quench is equivalent to imposing a pulse of electric field and therefore generates an initial particle current. This current is not a conserved quantity in the presence of a lattice and interactions, and we investigate numerically its time evolution after the quench, using the infinite time-evolving block decimation method. For repulsive interactions or large initial flux, we find oscillations that are governed by excitations deep inside the Fermi sea. At long times we observe that the current remains nonvanishing in the gapless cases, whereas it decays to zero in the gapped cases. Although the linear response theory (valid for a weak flux) predicts the same long-time limit of the current for repulsive and attractive interactions (relation with the zero-temperature Drude weight), larger nonlinearities are observed in the case of repulsive interactions than in the attractive case.
Quantum phase transition modulation in an atomtronic Mott switch
NASA Astrophysics Data System (ADS)
McLain, Marie A.; Carr, Lincoln D.
2018-07-01
Mott insulators provide stable quantum states and long coherence times due to small number fluctuations, making them good candidates for quantum memory and atomic circuits. We propose a proof-of-principle for a 1D Mott switch using an ultracold Bose gas and optical lattice. With time-evolving block decimation simulations—efficient matrix product state methods—we design a means for transient parameter characterization via a local excitation for ease of engineering into more complex atomtronics. We perform the switch operation by tuning the intensity of the optical lattice, and thus the interaction strength, through a conductance transition due to the confined modifications of the ‘wedding cake’ Mott structure. We demonstrate the time-dependence of Fock state transmission and fidelity of the excitation as a means of tuning up the device in a double well and as a measure of noise performance. Two-point correlations via the g^(2) measure provide additional information regarding superfluid fragments on the Mott insulating background due to the confinement of the potential.
Unifying time evolution and optimization with matrix product states
NASA Astrophysics Data System (ADS)
Haegeman, Jutho; Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart; Verstraete, Frank
2016-10-01
We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we introduce a new integration scheme for studying time evolution, which can cope with arbitrary Hamiltonians, including those with long-range interactions. Rather than a Suzuki-Trotter splitting of the Hamiltonian, which is the idea behind the adaptive time-dependent density matrix renormalization group method or time-evolving block decimation, our method is based on splitting the projector onto the matrix product state tangent space as it appears in the Dirac-Frenkel time-dependent variational principle. We discuss how the resulting algorithm resembles the density matrix renormalization group (DMRG) algorithm for finding ground states so closely that it can be implemented by changing just a few lines of code and it inherits the same stability and efficiency. In particular, our method is compatible with any Hamiltonian for which ground-state DMRG can be implemented efficiently. In fact, DMRG is obtained as a special case of our scheme for imaginary time evolution with infinite time step.
Designing spin-channel geometries for entanglement distribution
NASA Astrophysics Data System (ADS)
Levi, E. K.; Kirton, P. G.; Lovett, B. W.
2016-09-01
We investigate different geometries of spin-1/2 nitrogen impurity channels for distributing entanglement between pairs of remote nitrogen vacancy centers (NVs) in diamond. To go beyond the system size limits imposed by directly solving the master equation, we implement a matrix product operator method to describe the open system dynamics. In so doing, we provide an early demonstration of how the time-evolving block decimation algorithm can be used for answering a problem related to a real physical system that could not be accessed by other methods. For a fixed NV separation there is an interplay between incoherent impurity spin decay and coherent entanglement transfer: Long-transfer-time, few-spin systems experience strong dephasing that can be overcome by increasing the number of spins in the channel. We examine how missing spins and disorder in the coupling strengths affect the dynamics, finding that in some regimes a spin ladder is a more effective conduit for information than a single-spin chain.
Decipipes: Helping Students to "Get the Point"
ERIC Educational Resources Information Center
Moody, Bruce
2011-01-01
Decipipes are a representational model that can be used to help students develop conceptual understanding of decimal place value. They provide a non-standard tool for representing length, which in turn can be represented using conventional decimal notation. They are conceptually identical to Linear Arithmetic Blocks. This article reviews theory…
Quantum many-body dynamics of dark solitons in optical lattices
NASA Astrophysics Data System (ADS)
Mishmash, R. V.; Danshita, I.; Clark, Charles W.; Carr, L. D.
2009-11-01
We present a fully quantum many-body treatment of dark solitons formed by ultracold bosonic atoms in one-dimensional optical lattices. Using time-evolving block decimation to simulate the single-band Bose-Hubbard Hamiltonian, we consider the quantum dynamics of density and phase engineered dark solitons as well as the quantum evolution of mean-field dark solitons injected into the quantum model. The former approach directly models how one may create quantum entangled dark solitons in experiment. While we have already presented results regarding the latter approach elsewhere [R. V. Mishmash and L. D. Carr, Phys. Rev. Lett. 103, 140403 (2009)], we expand upon those results in this work. In both cases, quantum fluctuations cause the dark soliton to fill in and may induce an inelasticity in soliton-soliton collisions. Comparisons are made to the Bogoliubov theory which predicts depletion into an anomalous mode that fills in the soliton. Our many-body treatment allows us to go beyond the Bogoliubov approximation and calculate explicitly the dynamics of the system’s natural orbitals.
Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems
NASA Astrophysics Data System (ADS)
Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric
We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.
NASA Astrophysics Data System (ADS)
Choi, Hwan Bin; Lee, Ji-Woo
2017-09-01
We study quantum phase transitions of the XXZ spin model with spin S = 1/2 and 1 in one dimension. The XXZ spin chain is one of the basic models for understanding various one-dimensional magnetic materials. To study this model, we construct an infinite-lattice matrix product state (iMPS), which is a tensor-product form for a one-dimensional many-body quantum wave function. By using the time-evolving block decimation (TEBD) method on the iMPS, we obtain the ground states of the XXZ model at zero temperature. Because this method is very delicate in calculating ground states, we developed a reliable procedure for finding the ground state with the dimension of entanglement coefficients up to 300, which is beyond previous works. By analyzing ground-state energies, half-chain entanglement entropies, and the entanglement spectrum, we found the signatures of quantum phase transitions between the ferromagnetic phase, XY phase, Haldane phase, and antiferromagnetic phase.
Quantum entanglement and criticality of the antiferromagnetic Heisenberg model in an external field.
Liu, Guang-Hua; Li, Ruo-Yan; Tian, Guang-Shan
2012-06-27
By Lanczos exact diagonalization and the infinite time-evolving block decimation (iTEBD) technique, the two-site entanglement as well as the bipartite entanglement, the ground state energy, the nearest-neighbor correlations, and the magnetization in the antiferromagnetic Heisenberg (AFH) model under an external field are investigated. With increasing external field, the small-size system shows some distinct upward magnetization stairsteps, accompanied synchronously by some downward two-site entanglement stairsteps. In the thermodynamic limit, the two-site entanglement, as well as the bipartite entanglement, the ground state energy, the nearest-neighbor correlations, and the magnetization are calculated, and the critical magnetic field h_c = 2.0 is determined exactly. Our numerical results show that the quantum entanglement is sensitive to subtle changes of the ground state, and can be used to describe the magnetization and quantum phase transition. Based on the discontinuous behavior of the first-order derivative of the entanglement entropy and fidelity per site, we conclude that the quantum phase transition in this model belongs to the second-order category. Furthermore, in the magnon existence region (h < 2.0), a logarithmically divergent behavior of the block entanglement, which can be described by a free bosonic field theory, is observed, and the central charge c is determined to be 1.
NASA Astrophysics Data System (ADS)
Lamolda, Héctor; Felpeto, Alicia; Bethencourt, Abelardo
2017-07-01
Between 2011 and 2014 there were at least seven episodes of magmatic intrusion at El Hierro Island, but only the first one led to a submarine eruption in 2011-2012. In order to study the relationship between GPS deformation and seismicity during these episodes, we compare the temporal evolution of the deformation with the cumulative seismic energy released. In some of the episodes both deformation and seismicity evolve in a very similar way, but in others a time lag appears between them, in which the deformation precedes the seismicity. Furthermore, a linear correlation between the decimal logarithm of the intruded magma volume and the decimal logarithm of the total seismic energy released has been observed across the different episodes. Therefore, if a future magmatic intrusion at El Hierro follows this behavior with an appropriate time lag, we could obtain an a priori estimate of the order of magnitude of the seismic energy to be released.
A New Property of Repeating Decimals
ERIC Educational Resources Information Center
Arledge, Jane; Tekansik, Sarah
2008-01-01
As extended by Ginsberg, Midy's theorem says that if the repeated section of a decimal expansion of a prime is split into appropriate blocks and these are added, the result is a string of nines. We show that if the expansion of 1/p^(n+1) is treated the same way, instead of being a string of nines, the sum is related to the period of…
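A worked instance of the classical statement is easy to check: the repetend of 1/7 is 142857, and 142 + 857 = 999. The short script below verifies the standard Midy case (it does not implement the paper's 1/p^(n+1) extension).

```python
def repetend(p):
    """Digits of the repeating block of 1/p, for p coprime to 10."""
    digits, r = [], 1
    while True:
        r *= 10
        digits.append(r // p)  # next decimal digit by long division
        r %= p
        if r == 1:             # remainder returns to 1: period complete
            return digits

d = repetend(7)                      # 1/7 = 0.(142857)
half = len(d) // 2                   # Midy's theorem needs an even period
a = int("".join(map(str, d[:half]))) # 142
b = int("".join(map(str, d[half:]))) # 857
print(a + b)                         # 999 -- a string of nines
```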
The analysis of decimation and interpolation in the linear canonical transform domain.
Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li
2016-01-01
Decimation and interpolation are the two basic building blocks in multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying the definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain, and can advance filter bank theory in the LCT domain.
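The two building blocks themselves are simple to state; the paper's contribution is their behavior under the LCT. For orientation, here are the classical decimator and expander together with the ordinary DTFT relations they obey (the LCT-domain analogues are what the paper derives):

```python
import numpy as np

def decimate(x, M):
    """Keep every M-th sample: y[n] = x[Mn]."""
    return x[::M]

def expand(x, L):
    """Insert L-1 zeros between samples (the interpolation building block);
    an interpolation filter would follow to remove the spectral images."""
    y = np.zeros(len(x) * L, dtype=x.dtype)
    y[::L] = x
    return y

# Classical DTFT relations these blocks obey:
#   expander:  Y(e^{jw}) = X(e^{jwL})
#   decimator: Y(e^{jw}) = (1/M) * sum_k X(e^{j(w - 2*pi*k)/M})
x = np.arange(8.0)
print(decimate(x, 2))    # [0. 2. 4. 6.]
print(expand(x[:4], 2))  # [0. 0. 1. 0. 2. 0. 3. 0.]
```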
Numerical analysis of spin-orbit-coupled one-dimensional Fermi gas in a magnetic field
NASA Astrophysics Data System (ADS)
Chan, Y. H.
2015-06-01
Based on the density-matrix renormalization group and the infinite time-evolving block decimation methods we study the interacting spin-orbit-coupled 1D Fermi gas in a transverse magnetic field. We find that the system with an attractive interaction can have a polarized insulator phase, a superconducting (SC) phase, a Luther-Emery (LE) phase, and a band insulator phase as we vary the chemical potential and the strength of the magnetic field. Spin-orbit coupling (SOC) enhances the triplet pairing order at zero momentum in both the SC and the LE phase, which leads to an algebraically decaying correlation with the same exponent as that of the singlet pairing one. In contrast to the Fulde-Ferrell-Larkin-Ovchinnikov phase found in the spin imbalanced system without SOC, pairings at finite momentum in these two phases have larger exponents hence do not dictate the long-range behavior. We also test for the presence of Majorana fermions in this system. Unlike results from the mean-field study, we do not find positive evidence of Majorana fermions.
Driven Bose-Hubbard model with a parametrically modulated harmonic trap
NASA Astrophysics Data System (ADS)
Mann, N.; Bakhtiari, M. Reza; Massel, F.; Pelster, A.; Thorwart, M.
2017-04-01
We investigate a one-dimensional Bose-Hubbard model in a parametrically driven global harmonic trap. The delicate interplay of both the local interaction of the atoms in the lattice and the driving of the global trap allows us to control the dynamical stability of the trapped quantum many-body state. The impact of the atomic interaction on the dynamical stability of the driven quantum many-body state is revealed in the regime of weak interaction by analyzing a discretized Gross-Pitaevskii equation within a Gaussian variational ansatz, yielding a Mathieu equation for the condensate width. The parametric resonance condition is shown to be modified by the atom interaction strength. In particular, the effective eigenfrequency is reduced for growing interaction in the mean-field regime. For a stronger interaction, the impact of the global parametric drive is determined by the numerically exact time-evolving block decimation scheme. When the trapped bosons in the lattice are in a Mott insulating state, the absorption of energy from the driving field is suppressed due to the strongly reduced local compressibility of the quantum many-body state. In particular, we find that the width of the local Mott region shows a breathing dynamics. Finally, we observe that the global modulation also induces an effective time-independent inhomogeneous hopping strength for the atoms.
Photon transport in a dissipative chain of nonlinear cavities
NASA Astrophysics Data System (ADS)
Biella, Alberto; Mazza, Leonardo; Carusotto, Iacopo; Rossini, Davide; Fazio, Rosario
2015-05-01
By means of numerical simulations and the input-output formalism, we study photon transport through a chain of coupled nonlinear optical cavities subject to uniform dissipation. Photons are injected from one end of the chain by means of a coherent source. The propagation through the array of cavities is sensitive to the interplay between the photon hopping strength and the local nonlinearity in each cavity. We characterize photon transport by studying the populations and the photon correlations as a function of the cavity position. When complemented with input-output theory, these quantities provide direct information about photon transmission through the system. The position of single-photon and multiphoton resonances directly reflects the structure of the many-body energy levels. This shows how a study of transport along a coupled cavity array can provide rich information about the strongly correlated (many-body) states of light even in the presence of dissipation. The numerical algorithm we use, based on the time-evolving block decimation scheme adapted to mixed states, allows us to simulate large arrays (up to 60 cavities). The scaling of photon transmission with the number of cavities does depend on the structure of the many-body photon states inside the array.
Quantum bright solitons in a quasi-one-dimensional optical lattice
NASA Astrophysics Data System (ADS)
Barbiero, Luca; Salasnich, Luca
2014-06-01
We study a quasi-one-dimensional attractive Bose gas confined in an optical lattice with a superimposed harmonic potential by analyzing the one-dimensional Bose-Hubbard Hamiltonian of the system. Starting from the three-dimensional many-body quantum Hamiltonian, we derive strong inequalities involving the transverse degrees of freedom under which the one-dimensional Bose-Hubbard Hamiltonian can be safely used. To have a reliable description of the one-dimensional ground state, which we call a quantum bright soliton, we use the density-matrix-renormalization-group (DMRG) technique. By comparing DMRG results with mean-field (MF) ones, we find that beyond-mean-field effects become relevant by increasing the attraction between bosons or by decreasing the frequency of the harmonic confinement. In particular, we find that, contrary to the MF predictions based on the discrete nonlinear Schrödinger equation, average density profiles of quantum bright solitons are not shape-invariant. We also use the time-evolving-block-decimation method to investigate the dynamical properties of bright solitons when the frequency of the harmonic potential is suddenly increased. This quantum quench induces a breathing mode whose period crucially depends on the final strength of the superimposed harmonic confinement.
Real-time decay of a highly excited charge carrier in the one-dimensional Holstein model
NASA Astrophysics Data System (ADS)
Dorfner, F.; Vidmar, L.; Brockt, C.; Jeckelmann, E.; Heidrich-Meisner, F.
2015-03-01
We study the real-time dynamics of a highly excited charge carrier coupled to quantum phonons via a Holstein-type electron-phonon coupling. This is a prototypical example for the nonequilibrium dynamics in an interacting many-body system where excess energy is transferred from electronic to phononic degrees of freedom. We use diagonalization in a limited functional space (LFS) to study the nonequilibrium dynamics on a finite one-dimensional chain. This method agrees with exact diagonalization and the time-evolving block-decimation method, in both the relaxation regime and the long-time stationary state, and among these three methods it is the most efficient and versatile one for this problem. We perform a comprehensive analysis of the time evolution by calculating the electron, phonon and electron-phonon coupling energies, and the electronic momentum distribution function. The numerical results are compared to analytical solutions for short times, for a small hopping amplitude and for a weak electron-phonon coupling. In the latter case, the relaxation dynamics obtained from the Boltzmann equation agrees very well with the LFS data. We also study the time dependence of the eigenstates of the single-site reduced density matrix, which defines the so-called optimal phonon modes. We discuss their structure in nonequilibrium and the distribution of their weights. Our analysis shows that the structure of optimal phonon modes contains very useful information for the interpretation of the numerical data.
Design of Cancelable Palmprint Templates Based on Look Up Table
NASA Astrophysics Data System (ADS)
Qiu, Jian; Li, Hengjian; Dong, Jiwen
2018-03-01
A novel cancelable palmprint template generation scheme is proposed in this paper. First, a Gabor filter and a chaotic matrix are used to extract palmprint features, which are then arranged into a row vector and divided into equal-size blocks. These blocks are converted to the corresponding decimals and mapped to look-up tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative representation-based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates can achieve very high performance and security levels. Meanwhile, the scheme can also satisfy the needs of real-time applications.
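The block-to-decimal-to-LUT mapping can be sketched in a few lines. Everything below (array sizes, the LUT, the chosen check bits, the function name) is a hypothetical stand-in for the paper's user-specific key material; the point is that reissuing a template only requires drawing a new table.

```python
import numpy as np

def cancelable_template(feature_bits, block_size, lut, check_bits):
    """Sketch of the block -> decimal -> look-up-table idea: binary feature
    blocks are read as integers that index a user-specific LUT, and only
    the selected check bits of each LUT entry are kept. 'lut' and
    'check_bits' stand in for the paper's user key."""
    blocks = feature_bits.reshape(-1, block_size)
    # Convert each binary block to its decimal value.
    weights = 1 << np.arange(block_size)[::-1]
    indices = blocks @ weights
    # Map through the LUT and keep only the selected check bits.
    return lut[indices][:, check_bits].ravel()

rng = np.random.default_rng(42)
feature_bits = rng.integers(0, 2, size=256)  # toy palmprint feature vector
lut = rng.integers(0, 2, size=(2**8, 8))     # user-specific random table
template = cancelable_template(feature_bits, 8, lut, check_bits=[0, 3, 5])
```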
Ammunition Resupply Model. Volume II. Programmers Manual.
1980-03-01
Excerpt from the subroutine documentation: PUTEVT places an event record in the queue in chronological order and updates the queue directory. If the placement is successful, the flag ICHECK is set equal to 1; ICHECK is 0 if there is no room on the file. The decimal portion of the event time is multiplied by 3600. COMMON BLOCKS: EVENTS. CALLS: none. IS CALLED BY: SCHED. LOCAL ARRAYS: JFORE (1024).
Ultra-fast relaxation, decoherence, and localization of photoexcited states in π-conjugated polymers
NASA Astrophysics Data System (ADS)
Mannouch, Jonathan R.; Barford, William; Al-Assam, Sarah
2018-01-01
The exciton relaxation dynamics of photoexcited electronic states in poly(p-phenylenevinylene) are theoretically investigated within a coarse-grained model, in which both the exciton and nuclear degrees of freedom are treated quantum mechanically. The Frenkel-Holstein Hamiltonian is used to describe the strong exciton-phonon coupling present in the system, while external damping of the internal nuclear degrees of freedom is accounted for by a Lindblad master equation. Numerically, the dynamics are computed using the time evolving block decimation and quantum jump trajectory techniques. The values of the model parameters physically relevant to polymer systems naturally lead to a separation of time scales, with the ultra-fast dynamics corresponding to energy transfer from the exciton to the internal phonon modes (i.e., the C-C bond oscillations), while the longer time dynamics correspond to damping of these phonon modes by the external dissipation. Associated with these time scales, we investigate the following processes that are indicative of the system relaxing onto the emissive chromophores of the polymer: (1) Exciton-polaron formation occurs on an ultra-fast time scale, with the associated exciton-phonon correlations present within half a vibrational time period of the C-C bond oscillations. (2) Exciton decoherence is driven by the decay in the vibrational overlaps associated with exciton-polaron formation, occurring on the same time scale. (3) Exciton density localization is driven by the external dissipation, arising from "wavefunction collapse" occurring as a result of the system-environment interactions. Finally, we show how fluorescence anisotropy measurements can be used to investigate the exciton decoherence process during the relaxation dynamics.
Understanding decimal numbers: a foundation for correct calculations.
Pierce, Robyn U; Steinle, Vicki A; Stacey, Kaye C; Widjaja, Wanty
2008-01-01
This paper reports on the effectiveness of an intervention designed to improve nursing students' conceptual understanding of decimal numbers. Results of recent intervention studies have indicated some success at improving nursing students' numeracy through practice in applying procedural rules for calculation and working in real or simulated practical contexts. However, in this study we identified a fundamental problem: a significant minority of students had an inadequate understanding of decimal numbers. The intervention aimed to improve nursing students' basic understanding of the size of decimal numbers, so that, firstly, calculation rules are more meaningful, and secondly, students can interpret decimal numbers (whether digital output or results of calculations) sensibly. A well-researched, time-efficient diagnostic instrument was used to identify individuals with an inadequate understanding of decimal numbers. We describe a remedial intervention that resulted in significant improvement on a delayed post-intervention test. We conclude that nurse educators should consider diagnosing and, as necessary, planning for remediation of students' foundational understanding of decimal numbers before teaching procedural rules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hastings, Matthew B
We show how to combine the light-cone and matrix product algorithms to simulate quantum systems far from equilibrium for long times. For the case of the XXZ spin chain at Δ = 0.5, we simulate to a time of ≈ 22.5. While part of the long simulation time is due to the use of the light-cone method, we also describe a modification of the infinite time-evolving block decimation algorithm with improved numerical stability, and we describe how to incorporate symmetry into this algorithm. While statistical sampling error means that we are not yet able to make a definite statement, the behavior of the simulation at long times indicates the appearance of either 'revivals' in the order parameter as predicted by Hastings and Levitov (e-print arXiv:0806.4283) or of a distinct shoulder in the decay of the order parameter.
39 CFR 3055.65 - Special Services.
Code of Federal Regulations, 2011-2014 CFR (annual editions)
... within the Special Services group, report the percentage of time (rounded to one decimal place) that each... report the percentage of time (rounded to one decimal place) that each service meets or exceeds its...) Additional reporting for Post Office Box Service. For Post Office Box Service, report the percentage of time...
NASA Astrophysics Data System (ADS)
Hauke, Philipp; Cucchietti, Fernando M.; Müller-Hermes, Alexander; Bañuls, Mari-Carmen; Cirac, J. Ignacio; Lewenstein, Maciej
2010-11-01
Systems with long-range interactions show a variety of intriguing properties: they typically accommodate many metastable states, they can give rise to spontaneous formation of supersolids, and they can lead to counterintuitive thermodynamic behavior. However, the increased complexity that comes with long-range interactions strongly hinders theoretical studies. This makes a quantum simulator for long-range models highly desirable. Here, we show that a chain of trapped ions can be used to quantum simulate a one-dimensional (1D) model of hard-core bosons with dipolar off-site interaction and tunneling, equivalent to a dipolar XXZ spin-1/2 chain. We explore the rich phase diagram of this model in detail, employing perturbative mean-field theory, exact diagonalization and quasi-exact numerical techniques (density-matrix renormalization group and infinite time-evolving block decimation). We find that the complete devil's staircase—an infinite sequence of crystal states existing at vanishing tunneling—spreads to a succession of lobes similar to the Mott lobes found in Bose-Hubbard models. Investigating the melting of these crystal states at increased tunneling, we do not find (contrary to similar 2D models) clear indications of supersolid behavior in the region around the melting transition. However, we find that inside the insulating lobes there are quasi-long-range (algebraic) correlations, as opposed to models with nearest-neighbor tunneling, that show exponential decay of correlations.
Quantum decimation in Hilbert space: Coarse graining without structure
NASA Astrophysics Data System (ADS)
Singh, Ashmeet; Carroll, Sean M.
2018-03-01
We present a technique to coarse grain quantum states in a finite-dimensional Hilbert space. Our method is distinguished from other approaches by not relying on structures such as a preferred factorization of Hilbert space or a preferred set of operators (local or otherwise) in an associated algebra. Rather, we use the data corresponding to a given set of states, either specified independently or constructed from a single state evolving in time. Our technique is based on principal component analysis (PCA), and the resulting coarse-grained quantum states live in a lower-dimensional Hilbert space whose basis is defined using the underlying (isometric embedding) transformation of the set of fine-grained states we wish to coarse grain. Physically, the transformation can be interpreted to be an "entanglement coarse-graining" scheme that retains most of the global, useful entanglement structure of each state, while needing fewer degrees of freedom for its reconstruction. This scheme could be useful for efficiently describing collections of states whose number is much smaller than the dimension of Hilbert space, or a single state evolving over time.
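The isometric embedding mentioned in the abstract can be illustrated with a plain SVD/PCA of the state set: the leading left singular vectors define the coarse-grained basis. A toy sketch under the assumption of pure states supplied as columns; this is not the authors' code.

```python
import numpy as np

def coarse_grain_states(states, k):
    """Coarse grain a set of pure states (columns of 'states', dimension N)
    into a k-dimensional effective Hilbert space: the leading k left
    singular vectors define an isometry V, and V.conj().T @ psi is the
    coarse-grained state."""
    # SVD of the N x M matrix whose columns are the fine-grained states.
    U, s, _ = np.linalg.svd(states, full_matrices=False)
    V = U[:, :k]                  # isometric embedding, V^dag V = I_k
    coarse = V.conj().T @ states  # k x M coarse-grained representatives
    return V, coarse

# Toy usage: 5 states in a 64-dimensional space compressed to 5 dimensions
# (exact here, since 5 states span at most a 5-dimensional subspace).
rng = np.random.default_rng(1)
psi = rng.standard_normal((64, 5)) + 1j * rng.standard_normal((64, 5))
psi /= np.linalg.norm(psi, axis=0)
V, coarse = coarse_grain_states(psi, k=5)
assert np.allclose(V @ coarse, psi)  # reconstruction is exact in this case
```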
Rapid Decimation for Direct Volume Rendering
NASA Technical Reports Server (NTRS)
Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane
1997-01-01
An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.
Effect of the image resolution on the statistical descriptors of heterogeneous media.
Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime
2018-02-01
The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of the image-size reduction, due to the progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image (reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function diverts from the reference function. Moreover, the equally weighted sum of the average of the squared difference, between the discrete correlation functions of the decimated images and the reference functions, leads to a definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.
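As a concrete instance of the descriptors involved, the sketch below estimates the two-point probability S2(r) of a toy binary image along the axes and applies one "random" decimation step; the lag-rescaling comment shows how decimated and original curves would be compared. This is a simplified illustration with hypothetical function names, not the paper's pipeline.

```python
import numpy as np

def two_point_correlation(img, max_r):
    """Two-point probability S2(r) = P(img[x] = 1 and img[x + r] = 1) of a
    binary image, estimated along the two axes (a common shortcut; the
    paper's descriptors are more general)."""
    img = img.astype(float)
    s2 = []
    for r in range(max_r):
        horiz = (img[:, :-r or None] * img[:, r:]).mean()
        vert = (img[:-r or None, :] * img[r:, :]).mean()
        s2.append(0.5 * (horiz + vert))
    return np.array(s2)

def decimate_random(img, step, rng):
    """'Random' decimation: keep one pixel per step x step cell."""
    i, j = rng.integers(0, step), rng.integers(0, step)
    return img[i::step, j::step]

rng = np.random.default_rng(0)
img = rng.random((512, 512)) < 0.4  # toy two-phase medium
s2_full = two_point_correlation(img, 32)
s2_half = two_point_correlation(decimate_random(img, 2, rng), 16)
# After decimation by 2, lag r spans 2r pixels of the original image,
# so compare s2_half[r] against s2_full[2 * r].
```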
Magnitude comparison with different types of rational numbers.
DeWolf, Melissa; Grounds, Margaret A; Bassok, Miriam; Holyoak, Keith J
2014-02-01
An important issue in understanding mathematical cognition involves the similarities and differences between the magnitude representations associated with various types of rational numbers. For single-digit integers, evidence indicates that magnitudes are represented as analog values on a mental number line, such that magnitude comparisons are made more quickly and accurately as the numerical distance between numbers increases (the distance effect). Evidence concerning a distance effect for compositional numbers (e.g., multidigit whole numbers, fractions and decimals) is mixed. We compared the patterns of response times and errors for college students in magnitude comparison tasks across closely matched sets of rational numbers (e.g., 22/37, 0.595, 595). In Experiment 1, a distance effect was found for both fractions and decimals, but response times were dramatically slower for fractions than for decimals. Experiments 2 and 3 compared performance across fractions, decimals, and 3-digit integers. Response patterns for decimals and integers were extremely similar but, as in Experiment 1, magnitude comparisons based on fractions were dramatically slower, even when the decimals varied in precision (i.e., number of place digits) and could not be compared in the same way as multidigit integers (Experiment 3). Our findings indicate that comparisons of all three types of numbers exhibit a distance effect, but that processing often involves strategic focus on components of numbers. Fractions impose an especially high processing burden due to their bipartite (a/b) structure. In contrast to the other number types, the magnitude values associated with fractions appear to be less precise, and more dependent on explicit calculation.
Greaves, Mel; Maley, Carlo C.
2012-01-01
Cancers evolve by a reiterative process of clonal expansion, genetic diversification and clonal selection within the adaptive landscapes of tissue ecosystems. The dynamics are complex with highly variable patterns of genetic diversity and resultant clonal architecture. Therapeutic intervention may decimate cancer clones, and erode their habitats, but inadvertently provides potent selective pressure for the expansion of resistant variants. The inherently Darwinian character of cancer lies at the heart of therapeutic failure but perhaps also holds the key to more effective control. PMID:22258609
GIFTS SM EDU Level 1B Algorithms
NASA Technical Reports Server (NTRS)
Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high-resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are corrected for detector nonlinearity distortion, followed by a complex filtering and decimation procedure. In the second block, a phase-correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random-noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. Correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we develop an efficient method of generating pixel performance assessments, together with a random pixel-selection scheme based on the pixel performance evaluation. Finally, in the fourth block, the single-pixel algorithms are applied to the entire FPA.
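The responsivity step rests on a standard two-point (ambient/hot blackbody) radiometric calibration. The sketch below is a hedged software analogue, not the GIFTS flight code: the function names, reference temperatures, and Planck helper are illustrative assumptions.

```python
import numpy as np

def planck_radiance(wn, T):
    """Blackbody radiance, mW/(m^2 sr cm^-1), at wavenumber wn (cm^-1), temperature T (K)."""
    c1, c2 = 1.191042e-5, 1.4387752   # first and second radiation constants
    return c1 * wn**3 / (np.exp(c2 * wn / T) - 1.0)

def calibrate(scene, abb, hbb, wn, t_abb=290.0, t_hbb=330.0):
    """Two-point calibration: map complex raw spectra to scene radiance."""
    b_abb, b_hbb = planck_radiance(wn, t_abb), planck_radiance(wn, t_hbb)
    responsivity = (hbb - abb) / (b_hbb - b_abb)   # instrument counts per radiance unit
    return ((scene - abb) / responsivity).real + b_abb
```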
The evolving block universe and the meshing together of times.
Ellis, George F R
2014-10-01
It has been proposed that spacetime should be regarded as an evolving block universe, bounded to the future by the present time, which continually extends to the future. This future boundary is defined at each time by measuring proper time along Ricci eigenlines from the start of the universe. A key point, then, is that physical reality can be represented at many different scales: hence, the passage of time may be seen as different at different scales, with quantum gravity determining the evolution of spacetime itself at the Planck scale, but quantum field theory and classical physics determining the evolution of events within spacetime at larger scales. The fundamental issue then arises as to how the effective times at different scales mesh together, leading to the concepts of global and local times. © 2014 New York Academy of Sciences.
2001-09-01
...oldz3 decimal(5,3), @sel1 decimal(5,3), @sel2 decimal(5,3), @sel3 decimal(5,3), @sel4 decimal(5,3), @sel5 decimal(5,3), @sel6 decimal(5,3)... tnpSchieffelinHistory WHERE EmployeeID = @s4 and CalendarYear = @year SET @sel4 = @z4formula + @oldsel4 UPDATE tnpSchieffelinHistory SET... SelectedOnBallotScore = @sel4 WHERE EmployeeID = @s4 and CalendarYear = @year END
Flexible and unique representations of two-digit decimals.
Zhang, Li; Chen, Min; Lin, Chongde; Szűcs, Denes
2014-09-01
We examined the representation of two-digit decimals by studying distance and compatibility effects in magnitude comparison tasks across four experiments. Using number pairs with different leftmost digits, we found both the second-digit distance effect and the compatibility effect with two-digit integers, but only the second-digit distance effect with two-digit pure decimals. This suggests that both integers and pure decimals are processed in a compositional manner. In contrast, neither the second-digit distance effect nor the compatibility effect was observed for two-digit mixed decimals, thereby showing no evidence for compositional processing of two-digit mixed decimals. However, when the relevance of rightmost-digit processing was increased by adding some decimal pairs with the same leftmost digits, both pure and mixed decimals produced the compatibility effect. Overall, the results suggest that the processing of decimals is flexible and depends on the relevance of individual digit positions. This processing mode differs from integer analysis in that two-digit mixed decimals demonstrate parallel compositional processing only when the rightmost digit is relevant. The findings suggest that people probably do not represent decimals by simply ignoring the decimal point and converting them to natural numbers. Copyright © 2014 Elsevier B.V. All rights reserved.
Elving, Josefine; Vinnerås, Björn; Albihn, Ann; Ottoson, Jakob R
2014-01-01
Thermal treatment at temperatures between 46.0°C and 55.0°C was evaluated as a method for sanitization of organic waste, a temperature interval less commonly investigated but important in connection with biological treatment processes. Samples of dairy cow feces inoculated with Salmonella Senftenberg W775, Enterococcus faecalis, bacteriophage ϕX174, and porcine parvovirus (PPV) were thermally treated using block thermostats at set temperatures in order to determine time-temperature regimes achieving sufficient bacterial and viral reduction, and to model the inactivation rate. Pasteurization at 70°C in saline solution was used as a comparison in terms of bacterial and viral reduction and proved effective in rapidly reducing all organisms with the exception of PPV (decimal reduction time of 1.2 h). The results presented here can be used to construct time-temperature regimes for bacterial inactivation, with D-values ranging from 0.37 h at 55.0°C to 22.5 h at 46.0°C for Salmonella Senftenberg W775 and from 0.45 h at 55.0°C to 14.5 h at 47.5°C for Enterococcus faecalis, and for relevant enteric viruses based on the ϕX174 phage, with decimal reduction times ranging from 1.5 h at 55.0°C to 16.5 h at 46.0°C. Hence, the study implies that treatment temperatures considerably lower than 70°C can be used to reach sufficient inactivation of bacterial pathogens and potential process indicator organisms such as the ϕX174 phage, and it raises the question of whether PPV is a valuable process indicator organism considering its extreme thermotolerance.
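Under the log-linear inactivation model implied by these D-values, treatment time is just the D-value times the target number of log10 reductions. A quick hedged sketch, using the abstract's reported values (the 5-log target is an illustrative choice, not the paper's):

```python
# Log-linear survival: log10 N(t) = log10 N0 - t / D, so a k-log10 reduction
# takes t = k * D.
def treatment_time(d_value_h, log_reductions):
    """Time (h) needed for the given number of decimal (log10) reductions."""
    return d_value_h * log_reductions

# e.g. a 5-log10 reduction of Salmonella Senftenberg W775:
print(treatment_time(0.37, 5))   # at 55.0 C -> ~1.9 h
print(treatment_time(22.5, 5))   # at 46.0 C -> ~112.5 h
```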
Clark, William John
2011-01-01
During the 20th century, functional appliances evolved from night-time wear, to more flexible appliances for increased daytime wear, to full-time wear with Twin Block appliances. The current trend is towards fixed functional appliances, and this paper introduces the Fixed Twin Block, bonded to the teeth to eliminate problems of compliance in functional therapy. TransForce lingual appliances are pre-activated and may be used in first-phase treatment for sagittal and transverse arch development. Alternatively, they may be integrated with fixed appliances at any stage of treatment.
A Classification Methodology and Retrieval Model to Support Software Reuse
1988-01-01
Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the... sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC)...
Loumann Knudsen, Lars
2003-08-01
To study reproducibility and biological variation of visual acuity in diabetic maculopathy, using two different visual acuity tests: the decimal progression chart and the Freiburg visual acuity test. Twenty-two eyes in 11 diabetic subjects were examined several times within a 12-month period using both visual acuity tests. The most commonly used visual acuity test in Denmark (the decimal progression chart) was compared to the Freiburg visual acuity test (automated testing) in a paired study. Correlation analysis revealed agreement between the two methods (r² = 0.79; slope = 0.82; y-intercept = 0.01). The mean visual acuity was 15% higher (P < 0.0001) with the decimal progression chart than with the Freiburg visual acuity test. The reproducibility was the same for both tests (coefficient of variation: 12% for each); however, the variation within the 12-month examination period differed significantly, with a coefficient of variation of 17% for the decimal progression chart and 35% for the Freiburg visual acuity test. The reproducibility of the two visual acuity tests is comparable under optimal testing conditions in diabetic subjects with macular oedema. However, the Freiburg visual acuity test appears significantly better for detection of biological variation.
Bit error rate tester using fast parallel generation of linear recurring sequences
Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.
2003-05-06
A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs), with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited for use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
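The decimation-matrix construction lends itself to a compact software check. Below is a minimal sketch, assuming the example polynomial x^4 + x + 1 and illustrative values of n and k (none of which come from the patent): build a companion matrix over GF(2) and raise it to the (n*k)th power by square-and-multiply.

```python
import numpy as np

def companion_matrix(taps, degree):
    """Companion matrix (mod 2) of a degree-`degree` feedback polynomial."""
    c = np.zeros((degree, degree), dtype=np.uint8)
    c[1:, :-1] = np.eye(degree - 1, dtype=np.uint8)   # sub-diagonal shift part
    for t in taps:
        c[t, -1] = 1                                  # feedback column
    return c

def mat_pow_gf2(m, e):
    """Square-and-multiply matrix exponentiation over GF(2)."""
    result = np.eye(m.shape[0], dtype=np.uint8)
    while e:
        if e & 1:
            result = (result @ m) % 2
        m = (m @ m) % 2
        e >>= 1
    return result

C = companion_matrix(taps=[0, 1], degree=4)   # x^4 + x + 1 (illustrative)
k, n = 4, 8                                   # 4 parallel LRSGs, 8 bits each (illustrative)
D = mat_pow_gf2(C, n * k)                     # decimation matrix
print(D)                                      # sparsity -> fewer XOR gates in hardware
```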
Research of future network with multi-layer IP address
NASA Astrophysics Data System (ADS)
Li, Guoling; Long, Zhaohua; Wei, Ziqiang
2018-04-01
The shortage of IP addresses and the scalability of routing systems [1] are challenges for the Internet. Dividing existing IP addresses between identities and locations is one of the important research directions. This paper proposes a new decimal network architecture based on IPv9 [11]. The IP address of the decimal network draws on the E.164 principle of the traditional telecommunication network: IP addresses are divided hierarchically, which helps to achieve the separation of identification and location, forms a multi-layer IP address network structure, eases the scalability problem of the routing system, and offers a way out of IPv4 address exhaustion. In addition to a simple modification of DNS [10] adding the function of a digital domain, forming a DDNS [12], a gateway device is added, namely the IPV9 gateway. The original backbone network and user networks are unchanged.
1974-11-01
..."Stream." UTME TP 6808, June 1968. 20. Davis, D. D., Jr. and Moore, Dewey. "Analytical Study of Blockage- and Lift-Interference..." ... The variables N and NM must be right-justified in their fields, and punched without a decimal point. The variables XLAM, UE, DO, BO, XMIN, and...
5 Indicators of Decimal Understandings
ERIC Educational Resources Information Center
Cramer, Kathleen; Monson, Debra; Ahrendt, Sue; Colum, Karen; Wiley, Bethann; Wyberg, Terry
2015-01-01
The authors of this article collaborated with fourth-grade teachers from two schools to support implementation of a research-based fraction and decimal curriculum (Rational Number Project: Fraction Operations and Initial Decimal Ideas). Through this study, they identified five indicators of rich conceptual understanding of decimals, which are…
Sharable Courseware Object Reference Model (SCORM), Version 1.0
2000-07-01
...or query tool may provide the top-level entries of a well-established classification (LOC, UDC, DDC, and so forth). ... 9.2.2 Taxon: This subcategory... YYYY/MM/DD. CMIFeedback: structured description of student response in an interaction. CMIDecimal: number which may have a decimal point. If not... Seconds shall contain 2 digits with an optional decimal point and additional digits. CMITimespan: a length of time in hours, minutes, and seconds.
Common magnitude representation of fractions and decimals is task dependent.
Zhang, Li; Fang, Qiaochu; Gabriel, Florence C; Szűcs, Denes
2016-01-01
Although several studies have compared the representation of fractions and decimals, no study has investigated whether fractions and decimals, as two types of rational numbers, share a common representation of magnitude. The current study aimed to answer the question of whether fractions and decimals share a common representation of magnitude and whether the answer depends on the task paradigm. All four experiments included two types of sequentially presented number pairs: fraction-decimal mixed pairs and decimal-fraction mixed pairs. Results showed that when the mixed pairs were numerically very close, with a distance of 0.1 or 0.3, there was a significant distance effect in the comparison task but not in the matching task. However, when the mixed pairs were numerically further apart, with a distance of 0.3 or 1.3, the distance effect appeared in the matching task regardless of the specific stimuli. We conclude that magnitudes of fractions and decimals can be represented in a common manner, but how they are represented depends on the given task. Fractions and decimals could be translated into a common representation of magnitude in the numerical comparison task. In the numerical matching task, fractions and decimals also shared a common representation; however, both were represented coarsely, leading to a weak distance effect. Specifically, fractions and decimals produced a significant distance effect only when the numerical distance was larger.
Rational-number comparison across notation: Fractions, decimals, and whole numbers.
Hurst, Michelle; Cordes, Sara
2016-02-01
Although fractions, decimals, and whole numbers can be used to represent the same rational-number values, it is unclear whether adults conceive of these rational-number magnitudes as lying along the same ordered mental continuum. In the current study, we investigated whether adults' processing of rational-number magnitudes in fraction, decimal, and whole-number notation show systematic ratio-dependent responding characteristic of an integrated mental continuum. Both reaction time (RT) and eye-tracking data from a number-magnitude comparison task revealed ratio-dependent performance when adults compared the relative magnitudes of rational numbers, both within the same notation (e.g., fractions vs. fractions) and across different notations (e.g., fractions vs. decimals), pointing to an integrated mental continuum for rational numbers across notation types. In addition, eye-tracking analyses provided evidence of an implicit whole-number bias when we compared values in fraction notation, and individual differences in this whole-number bias were related to the individual's performance on a fraction arithmetic task. Implications of our results for both cognitive development research and math education are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Himathongkham, S; Riemann, H; Bahari, S; Nuanualsuwan, S; Kass, P; Cliver, D O
2000-01-01
Exponential inactivation was observed for Salmonella typhimurium and Escherichia coli O157:H7 in poultry manure, with decimal reduction times ranging from half a day at 37°C to 1-2 wk at 4°C. There was no material difference in inactivation rates between S. typhimurium and E. coli O157:H7. Inactivation was slower in slurries made by mixing two parts of water with one part of manure; decimal reduction times (the time required for 90% destruction) ranged from 1-2 days at 37°C to 6-22 wk at 4°C. Escherichia coli O157:H7 consistently exhibited slightly slower inactivation than S. typhimurium. The log decimal reduction time for both strains was a linear function of storage temperature for manure and slurries. Chemical analysis indicated that accumulation of free ammonia in poultry manure was an important factor in inactivation of the pathogens. This finding was experimentally confirmed for S. typhimurium by adding ammonia directly to peptone water or to bovine manure, which was naturally low in ammonia, and adjusting pH to achieve predetermined levels of free ammonia.
2015-08-01
...activities for DDG 51, AMDR, Aegis, and other related programs, such as the Evolved Sea Sparrow Missile. We also reviewed DOD studies and past GAO... systems—from initial SPY-6 radar detection of a target, such as an anti-ship cruise missile, through target interception by an Evolved Sea Sparrow... required to accredit the Aegis modeling and simulation capability, (2) the Evolved Sea Sparrow Missile Block 2—a key element of Flight III's self...
ERIC Educational Resources Information Center
Roche, Anne
2005-01-01
The author cites research from students' misconceptions of decimal notation that indicates that many students treat decimals as another whole number to the right of the decimal point. This "whole number thinking" leads some students to believe, in the context of comparing decimals, that "longer is larger" (for example, 0.45 is larger than 0.8…
Schmidt, S; Walter, G H
2014-05-01
A calibrated phylogeny of the family Pergidae indicates that the major lineages within the family evolved during the fragmentation of the Gondwanan supercontinent. The split between the Pergidae and its sister group Argidae is estimated at about 153 Myr ago. No central dichotomous division between Australian and South American pergid sawflies was observed, showing that the major lineages within this group had already evolved by the time Australia had become completely isolated from Antarctica. The molecular dating analysis strongly indicates a co-radiation of Australian pergid sawflies with their Myrtaceae hosts and suggests that the two eucalypt-feeding clades, pergines and pterygophorines, colonised their eucalypt host plants independently during the Palaeocene, at the time when their hosts appear to have started radiating. The present analysis includes representatives of 13 of the 14 currently recognised subfamilies of Pergidae, almost all of which are supported by the molecular data presented here. Exceptions include the Euryinae (paraphyletic with respect to Perreyiinae), the Acordulecerinae (paraphyletic with respect to the Perginae), and the Australian Phylacteophaginae (placed within the Neotropical Acordulecerinae). The break-up of Gondwana and the timing of the subsequent climatic change in Australia, leading from vegetation adapted to seasonally wet conditions to the arid-adapted sclerophyll vegetation typical of Australia, suggest that the species-poor subfamilies occurring in rainforests represent remnants of more diverse groups that were decimated through loss of habitat or host species. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kumar, Santosh
2017-07-01
Binary to binary-coded-decimal (BCD) converters are basic building blocks for BCD processing. The last few decades have witnessed an exponential rise in applications of binary-coded data processing in the field of optical computing, and with it an increasing demand for a suitable hardware platform. With this approach in mind, a novel design exploiting the salient features of the Mach-Zehnder interferometer (MZI) is presented in this paper. Here, an optical 4-bit binary to binary-coded-decimal (BCD) converter utilizing the electro-optic effect in lithium-niobate-based MZIs is demonstrated. The MZI exhibits the property of switching the optical signal from one port to the other when an appropriate voltage is applied to its electrodes. The projected scheme is implemented using combinations of cascaded electro-optic (EO) switches. A theoretical description along with a mathematical formulation of the device is provided, and the operation is analyzed through the finite-difference beam propagation method (FD-BPM). The fabrication techniques to develop the device are also discussed.
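For reference, the logical function realized optically here can be sketched in software. The snippet below is a hedged analogue, not the paper's design: the classic shift-and-add-3 ("double dabble") binary-to-BCD conversion, written generically for any bit width.

```python
def binary_to_bcd(value, bits=4):
    """Return BCD digits (most significant first) via shift-and-add-3."""
    ndigits = (bits + 2) // 3 + 1      # enough BCD digits for `bits` binary bits
    digits = [0] * ndigits             # digits[0] is least significant
    for i in range(bits - 1, -1, -1):
        for d in range(ndigits):       # pre-correct: add 3 to any digit >= 5
            if digits[d] >= 5:
                digits[d] += 3
        carry = (value >> i) & 1       # shift in the next binary bit
        for d in range(ndigits):
            digits[d] = (digits[d] << 1) | carry
            carry = digits[d] >> 4     # overflow bit moves to the next digit
            digits[d] &= 0xF
    return list(reversed(digits))

for v in range(16):                    # the 4-bit range handled by the converter
    print(v, binary_to_bcd(v))         # e.g. 15 -> [0, 1, 5]
```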
ERIC Educational Resources Information Center
Malone, Amelia Schneider; Loehr, Abbey M.; Fuchs, Lynn S.
2017-01-01
The purpose of the study was to determine whether individual differences in at-risk 4th graders' language comprehension, nonverbal reasoning, concept formation, working memory, and use of decimal labels (i.e., place value, point, incorrect place value, incorrect fraction, or whole number) are related to their decimal magnitude understanding.…
Surface smoothing, decimation, and their effects on 3D biological specimens.
Veneziano, Alessio; Landi, Federica; Profico, Antonio
2018-06-01
Smoothing and decimation filters are commonly used to restore the realistic appearance of virtual biological specimens, but they can cause a loss of topological information of unknown extent. In this study, we analyzed the effect of smoothing and decimation on a 3D mesh to highlight the consequences of an inappropriate use of these filters. Topological noise was simulated on four anatomical regions of the virtual reconstruction of an orangutan cranium. Sequential levels of smoothing and decimation were applied, and their effects were analyzed on the overall topology of the 3D mesh and on linear and volumetric measurements. Different smoothing algorithms affected mesh topology and measurements differently, although the influence on the latter was generally low. Decimation always produced detrimental effects on both topology and measurements. The application of smoothing and decimation, both separate and combined, is capable of recovering topological information. Based on the results, objective guidelines are provided to minimize information loss when using smoothing and decimation on 3D meshes. © 2018 Wiley Periodicals, Inc.
Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.
1983-06-01
Decimation in Frequency Form, Fast Inverse Transform; Part of Decimation in Time Form, Fast Inverse Transform; Intermediate Variables in a Fast Inverse Transform. ...component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients. The inverse transform of...
Digital Filter ASIC for NASA Deep Space Radio Science
NASA Technical Reports Server (NTRS)
Kowalski, James E.
1995-01-01
This paper describes the implementation of an 80 MHz, 16-bit, multistage digital filter that decimates by 1600, providing a 50 kHz output with passband ripple of less than ±0.1 dB. The chip uses two decimate-by-five units and six decimations by two executed by a single decimate-by-two unit. The six decimations by two consist of six halfband filters, five having 30 taps and one having 51 taps. Use of a 16x16 register file for the digital delay lines enables implementation in a Vitesse 350K gate array.
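The stage factorization is easy to verify: 5 × 5 × 2^6 = 1600, so 80 MHz / 1600 = 50 kHz. A hedged sketch using SciPy's generic decimator in place of the custom halfband hardware (the test tone and lengths are illustrative):

```python
import numpy as np
from scipy.signal import decimate

fs = 80e6
t = np.arange(2**20) / fs
x = np.sin(2 * np.pi * 10e3 * t)       # 10 kHz test tone within the output band

y = decimate(x, 5, ftype="fir")        # first decimate-by-5 stage
y = decimate(y, 5, ftype="fir")        # second decimate-by-5 stage
for _ in range(6):                     # six decimate-by-2 (halfband-style) stages
    y = decimate(y, 2, ftype="fir")

print(fs / 1600)                       # 50000.0 -> 50 kHz output rate
print(len(x) / len(y))                 # ~1600 overall decimation factor
```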
A Fully Integrated Sensor SoC with Digital Calibration Hardware and Wireless Transceiver at 2.4 GHz
Kim, Dong-Sun; Jang, Sung-Joon; Hwang, Tae-Ho
2013-01-01
A single-chip sensor system-on-a-chip (SoC) that implements a 2.4 GHz radio, a complete digital baseband physical layer (PHY), a 10-bit sigma-delta analog-to-digital converter, and dedicated sensor calibration hardware for industrial sensing systems has been proposed and integrated in a 0.18-μm CMOS technology. The transceiver's building blocks include a low-noise amplifier, mixer, channel filter, receiver signal-strength indicator, frequency synthesizer, voltage-controlled oscillator, and power amplifier. The digital building blocks consist of offset quadrature phase-shift keying (OQPSK) modulation and demodulation, carrier frequency offset compensation, auto-gain control, digital MAC functions, sensor calibration hardware, and an embedded 8-bit microcontroller. The digital MAC function supports cyclic redundancy check (CRC), inter-symbol timing check, MAC frame control, and automatic retransmission. The embedded sensor signal processing block consists of a calibration coefficient calculator, a sensing-data calibration mapper, and a sigma-delta analog-to-digital converter with a digital decimation filter. The sensitivity of the overall receiver and the error vector magnitude (EVM) of the overall transmitter are −99 dBm and 18.14%, respectively. The proposed calibration scheme reduces errors by about 45.4% compared with the improved progressive polynomial calibration (PPC) method, and the maximum current consumption of the SoC is 16 mA. PMID:23698271
FPGA-based Fused Smart Sensor for Real-Time Plant-Transpiration Dynamic Estimation
Millan-Almaraz, Jesus Roberto; de Jesus Romero-Troncoso, Rene; Guevara-Gonzalez, Ramon Gerardo; Contreras-Medina, Luis Miguel; Carrillo-Serrano, Roberto Valentin; Osornio-Rios, Roque Alfredo; Duarte-Galvan, Carlos; Rios-Alcaraz, Miguel Angel; Torres-Pacheco, Irineo
2010-01-01
Plant transpiration is considered one of the most important physiological functions because it constitutes the plant's evolved adaptation for exchanging moisture with a dry atmosphere, which can dehydrate or eventually kill the plant. Due to the importance of transpiration, accurate measurement methods are required; therefore, a smart sensor is proposed that fuses five primary sensors measuring air temperature, leaf temperature, air relative humidity, plant-out relative humidity, and ambient light. A field-programmable-gate-array-based unit is used to apply signal processing algorithms, such as average decimation and infinite impulse response filters, to the primary sensor readings in order to reduce the signal noise and improve its quality. Once the primary sensor readings are filtered, transpiration dynamics such as transpiration, stomatal conductance, leaf-air temperature difference, and vapor pressure deficit are calculated in real time by the smart sensor. This permits the user to observe different primary and calculated measurements at the same time, and the relationships between them, which is very useful in precision agriculture for the detection of abnormal conditions. Finally, transpiration-related stress conditions can be detected in real time thanks to online processing and embedded communications capabilities. PMID:22163656
Mazzocco, Michèle M M; Devlin, Kathleen T
2008-09-01
Many middle-school students struggle with decimals and fractions, even if they do not have a mathematical learning disability (MLD). In the present longitudinal study, we examined whether children with MLD have weaker rational number knowledge than children whose difficulty with rational numbers occurs in the absence of MLD. We found that children with MLD failed to accurately name decimals, to correctly rank order decimals and/or fractions, and to identify equivalent ratios (e.g. 0.5 = 1/2); they also 'identified' incorrect equivalents (e.g. 0.05 = 0.50). Children with low math achievement but no MLD accurately named decimals and identified equivalent pairs, but failed to correctly rank order decimals and fractions. Thus failure to accurately name decimals was an indicator of MLD; but accurate naming was no guarantee of rational number knowledge - most children who failed to correctly rank order fractions and decimals tests passed the naming task. Most children who failed the ranking tests at 6th grade also failed at 8th grade. Our findings suggest that a simple task involving naming and rank ordering fractions and decimals may be a useful addition to in-class assessments used to determine children's learning of rational numbers.
Multi-DSP and FPGA based Multi-channel Direct IF/RF Digital receiver for atmospheric radar
NASA Astrophysics Data System (ADS)
Yasodha, Polisetti; Jayaraman, Achuthan; Kamaraj, Pandian; Durga rao, Meka; Thriveni, A.
2016-07-01
Modern phased array radars depend heavily on digital signal processing (DSP) to extract the echo signal information and to achieve reliability along with programmability and flexibility. The advent of ASIC technology has made it possible to realize various digital signal processing steps in one DSP chip, which can be programmed as per the application and can handle high data rates, for use in the radar receiver to process the received signal. Furthermore, present-day field programmable gate array (FPGA) chips, which can be re-programmed, also present an opportunity to process the radar signal. A multi-channel direct IF/RF digital receiver (MCDRx) has been developed at NARL, taking advantage of high-speed ADCs and high-performance DSP chips/FPGAs, for use in atmospheric radars working in the HF/VHF bands. Multiple channels allow the radar to be operated in multi-receiver modes and to obtain the wind vector with improved time resolution, without switching the antenna beam. MCDRx has six channels, implemented on a custom-built digital board realized using six ADCs for simultaneous processing of the six input signals, a Xilinx Virtex-5 FPGA, a Spartan-6 FPGA, and two ADSP-TS201 DSP chips, each of which performs one phase of processing. The MCDRx unit interfaces with the data storage/display computer via two gigabit Ethernet (GbE) links. One of the six channels is dedicated to Doppler beam swinging (DBS) mode, and the other five channels to multi-receiver mode operations. Each channel has (i) an ADC block to digitize the RF/IF signal, (ii) a DDC block for digital down-conversion of the digitized signal, (iii) a decoding block to decode the phase-coded signal, and (iv) a coherent integration block for integrating the data while preserving phase. The ADC block consists of Analog Devices AD9467 16-bit ADCs, which digitize the input signal at 80 MSPS. The output of the ADC is centered around (80 MHz - input frequency). The digitized data are fed to the DDC block, which down-converts the data to baseband. The DDC block has an NCO, a mixer, and two filter chains (a fifth-order cascaded integrator-comb filter, two FIR filters, two half-band filters, and programmable FIR filters) for the in-phase (I) and quadrature-phase (Q) channels. The NCO has 32 bits and is set to match the output frequency of the ADC. The DDC also down-samples (decimates) the data, reducing the data rate to 16 MSPS. The data are further decimated down to 4/2/1/0.5/0.25/0.125/0.0625 MSPS for baud lengths of 0.25/0.5/1/2/4/8/16 μs, respectively. The down-sampled data are then fed to the decoding block, which performs cross-correlation to achieve pulse compression of the binary-phase-coded data, obtaining better range resolution with the maximum possible height coverage; this step improves the signal power by a factor equal to the length of the code. The coherent integration block integrates the decoded data coherently over successive pulses, which improves the signal-to-noise ratio and reduces the data volume. The DDC, decoding, and coherent integration blocks are implemented in the Xilinx Virtex-5 FPGA. Up to this point, the function of all six channels is the same for the DBS and multi-receiver modes. Data from the Virtex-5 FPGA are transferred to the PC via the GbE-1 interface for multi-receiver modes, or to the two ADSP-TS201 DSP chips (A and B) via link ports for DBS mode.
The ADSP-TS201 chips perform normalization, DC removal, windowing, FFT computation, and spectral averaging on the data, which are transferred to the storage/display PC via the GbE-2 interface for real-time display and storage. The physical layer of the GbE interface is implemented in an external chip (Marvell 88E1111), and the MAC layer is implemented inside the Virtex-5 FPGA. The MCDRx has a total of 4 GB of DDR2 memory for data storage. The Spartan-6 FPGA generates the timing signals required for basic operation of the radar and for testing of the MCDRx.
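The per-channel chain (NCO mixing, decimation, pulse compression, coherent integration) can be sketched end to end in software. This is a hedged illustration, not the MCDRx firmware: the IF, the Barker code, and all sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import decimate

fs, f_if = 80e6, 17.5e6                        # illustrative sample rate and IF
code = np.array([1, 1, 1, -1, -1, 1, -1])      # 7-bit Barker code (example phase code)

def ddc(samples):
    """Digital down-conversion: NCO + complex mixer, then one decimation stage."""
    n = np.arange(len(samples))
    lo = np.exp(-2j * np.pi * f_if * n / fs)
    return decimate(samples * lo, 5, ftype="fir")

def pulse_compress(iq):
    """Cross-correlation with the phase code (matched-filter decoding)."""
    return np.correlate(iq, code, mode="same")

def coherent_integrate(pulses):
    """Phase-preserving average over successive pulses."""
    return np.mean(pulses, axis=0)

raw = np.random.randn(16, 4000)                # 16 raw pulses of noise (stand-in data)
out = coherent_integrate([pulse_compress(ddc(p)) for p in raw])
```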
Reconstruction of Haplotype-Blocks Selected during Experimental Evolution.
Franssen, Susanne U; Barton, Nicholas H; Schlötterer, Christian
2017-01-01
The genetic analysis of experimentally evolving populations typically relies on short reads from pooled individuals (Pool-Seq). While this method provides reliable allele frequency estimates, the underlying haplotype structure remains poorly characterized. With small population sizes and adaptive variants that start from low frequencies, the interpretation of selection signatures in most Evolve and Resequencing studies remains challenging. To facilitate the characterization of selection targets, we propose a new approach that reconstructs selected haplotypes from replicated time series, using Pool-Seq data. We identify selected haplotypes through the correlated frequencies of alleles carried by them. Computer simulations indicate that selected haplotype-blocks of several Mb can be reconstructed with high confidence and low error rates, even when allele frequencies change only by 20% across three replicates. Applying this method to real data from D. melanogaster populations adapting to a hot environment, we identify a selected haplotype-block of 6.93 Mb. We confirm the presence of this haplotype-block in evolved populations by experimental haplotyping, demonstrating the power and accuracy of our haplotype reconstruction from Pool-Seq data. We propose that the combination of allele frequency estimates with haplotype information will provide the key to understanding the dynamics of adaptive alleles. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
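The core idea, that alleles carried by one selected haplotype rise together across replicates, can be illustrated with a toy clustering of trajectory correlations. This is a hedged sketch of the principle only, not the authors' reconstruction pipeline; the data below are random placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
# freqs: (n_snps, n_timepoints * n_replicates) allele-frequency trajectories
n_snps = 100
freqs = rng.random((n_snps, 9))

corr = np.corrcoef(freqs)                       # pairwise trajectory correlation
dist = 1.0 - corr                               # correlation -> distance
condensed = dist[np.triu_indices(n_snps, 1)]    # condensed form for scipy linkage
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=0.3, criterion="distance")  # SNPs in one cluster ~ one haplotype-block
print(np.bincount(labels))
```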
Double-S Decimals, Mathematics: 5211.20.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The last of four guidebooks in the sequence, this booklet uses UICSM's "stretcher and shrinker" approach in developing place value, and four operations with decimals, conversion between fractions and decimals, and applications to measurement and rate problems. Overall goals, performance objectives for the course, teaching suggestions,…
INSPECTION MEANS FOR INDUCTION MOTORS
Williams, A.W.
1959-03-10
An apparatus is described for inspecting electric motors, and more especially an apparatus for detecting faulty end rings in squirrel-cage induction motors while the motor is running. [...] An electronic circuit for conversion of excess-3 binary coded serial decimal numbers to straight binary coded serial decimal numbers is reported. The converter of the invention in its basic form accepts coded pulse words of a type having an algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance. A switching matrix is coupled to the input circuit and is internally connected to produce serial straight binary coded pulse groups indicative of the excess-3 coded input. A stepping circuit is coupled to the switching matrix and to a synchronous counter having a plurality of x decimal digit and a plurality of y decimal digit indicator terminals. The stepping circuit steps the counter in synchronism with the serial binary pulse group output from the switching matrix to successively produce pulses at corresponding ones of the x and y decimal digit indicator terminals. The combinations of straight binary coded pulse groups and corresponding decimal digit indicator signals so produced comprise a basic output suitable for application to a variety of output apparatus.
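A hedged software analogue of the conversion being performed: excess-3 digit codes map to plain (8421) BCD digits by subtracting 3, with digit values outside 3..12 being invalid excess-3 codes. The function name and example digits are illustrative.

```python
def excess3_to_bcd(digits):
    """Convert a list of excess-3 digit codes to plain decimal digits."""
    out = []
    for d in digits:
        if not 3 <= d <= 12:                  # valid excess-3 codes are 0011..1100
            raise ValueError(f"invalid excess-3 code: {d:04b}")
        out.append(d - 3)                     # excess-3 is BCD shifted up by 3
    return out

print(excess3_to_bcd([0b0100, 0b1010, 0b0101]))  # excess-3 for 1, 7, 2 -> [1, 7, 2]
```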
Nifty Nines and Repeating Decimals
ERIC Educational Resources Information Center
Brown, Scott A.
2016-01-01
The traditional technique for converting repeating decimals to common fractions can be found in nearly every algebra textbook that has been published, as well as in many precalculus texts. However, students generally encounter repeating decimal numerals earlier than high school when they study rational numbers in prealgebra classes. Therefore, how…
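The traditional technique alluded to here, multiply by 10^r (r the repetend length) and subtract, gives x = repetend / (10^r − 1), which is also where the "nifty nines" come from. A minimal sketch for purely repeating decimals (repetends starting right after the decimal point; mixed cases need an extra step):

```python
from fractions import Fraction

def repeating_to_fraction(repetend_digits):
    """0.ddd... with repetend `repetend_digits` equals repetend / (10^r - 1)."""
    r = len(repetend_digits)
    return Fraction(int(repetend_digits), 10**r - 1)

print(repeating_to_fraction("45"))   # 0.454545... -> 5/11
print(repeating_to_fraction("9"))    # 0.999...    -> 1 (the "nifty nines")
```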
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong "staircase" artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p) provide a good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
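A heavily simplified, hedged sketch of the general scheme: transform the counts in a decimated wavelet domain, zero non-significant detail coefficients, and invert. Here pywt's 'bior1.3' stands in for the paper's normalized Bi-Haar bank, and a crude fixed-rate Gaussian threshold stands in for the exact Haar p-values; neither substitution comes from the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
intensity = 5 + 4 * np.sin(np.linspace(0, 4 * np.pi, 1024))
counts = rng.poisson(intensity).astype(float)        # Poisson count data

coeffs = pywt.wavedec(counts, "bior1.3", level=5)    # decimated biorthogonal transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # rough noise-scale estimate
for level in range(1, len(coeffs)):                  # keep only "significant" details
    c = coeffs[level]
    coeffs[level] = c * (np.abs(c) > 3.0 * sigma)    # ~fixed false-positive-rate test

denoised = pywt.waverec(coeffs, "bior1.3")
```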
Repeating Decimals: An Alternative Teaching Approach
ERIC Educational Resources Information Center
Appova, Aina K.
2017-01-01
To help middle school students make better sense of decimals and fraction, the author and an eighth-grade math teacher worked on a 90-minute lesson that focused on representing repeating decimals as fractions. They embedded experimentations and explorations using technology and calculators to help promote students' intuitive and conceptual…
ERIC Educational Resources Information Center
McIlwaine, I. C.
1997-01-01
Discusses the history and development of the Universal Decimal Classification (UDC). Topics include the relationship with Dewey Decimal Classification; revision process; structure; facet analysis; lack of standard rules for application; application in automated systems; influence of UDC on classification development; links with thesauri; and use…
ERIC Educational Resources Information Center
Rao, Shaila; Kane, Martha T.
2009-01-01
This study assessed effectiveness of simultaneous prompting procedure in teaching two middle school students with cognitive impairment decimal subtraction using regrouping. A multiple baseline, multiple probe design replicated across subjects successfully taught two students with cognitive impairment at middle school level decimal subtraction…
Inference of the sparse kinetic Ising model using the decimation method
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Zhang, Pan
2015-05-01
In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by iteratively setting the weakest couplings to zero. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be applied naturally within a maximum-likelihood optimization algorithm, as opposed to the static case, where a pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
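The decimation loop itself is simple to sketch. For the kinetic model, P(s_i(t+1) | s(t)) is sigmoidal in the couplings, so per-spin maximum likelihood is a logistic regression; the sketch below (hedged; the paper stops automatically via a likelihood criterion rather than the fixed count used here, and the data are random placeholders) fits, zeroes the weakest surviving coupling, and refits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def decimate_couplings(S, i, n_keep):
    """S: (T, N) array of +/-1 spins over time; infer row i of J, keeping n_keep couplings."""
    X, y = S[:-1], (S[1:, i] > 0).astype(int)         # logistic model of s_i(t+1) given s(t)
    active = np.ones(S.shape[1], dtype=bool)
    J = np.zeros(S.shape[1])
    while active.sum() > n_keep:
        fit = LogisticRegression(C=1e6).fit(X[:, active], y)  # ~unpenalized ML
        J[:] = 0.0
        J[active] = fit.coef_[0]
        weakest = np.argmin(np.where(active, np.abs(J), np.inf))
        active[weakest] = False                       # decimate the weakest coupling
    fit = LogisticRegression(C=1e6).fit(X[:, active], y)      # final refit on survivors
    J[:] = 0.0
    J[active] = fit.coef_[0]
    return J

rng = np.random.default_rng(0)
S = np.where(rng.random((500, 10)) < 0.5, 1, -1)      # placeholder spin history
print(decimate_couplings(S, i=0, n_keep=3))
```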
47 CFR 32.20 - Numbering convention.
Code of Federal Regulations, 2010 CFR
2010-10-01
... following to the right of the decimal point indicate, respectively, the section or account. All Part 32 Account numbers contain 4 digits to the right of the decimal point. (c) Cross references to accounts are made by citing the account numbers to the right of the decimal point; e.g., Account 2232 rather than...
Lay-Ekuakille, Aimé; Fabbiano, Laura; Vacca, Gaetano; Kitoko, Joël Kidiamboko; Kulapa, Patrice Bibala; Telesca, Vito
2018-06-04
Pipelines conveying fluids are considered strategic infrastructures to be protected and maintained. They generally serve for the transportation of important fluids such as drinking water, wastewater, oil, gas, chemicals, etc. Monitoring and continuous testing, especially on-line, are necessary to assess the condition of pipelines. The paper presents findings from a comparison between two spectral response algorithms, based on the decimated signal diagonalization (DSD) and decimated Padé approximant (DPA) techniques, that allow one to process signals delivered by pressure sensors mounted on an experimental pipeline.
The transition from managed care to consumerism: a community-level status report.
Christianson, Jon B; Ginsburg, Paul B; Draper, Debra A
2008-01-01
This paper assesses the evolving "facilitated consumerism" model of health care at the community level using data from the Community Tracking Study (CTS). We find that in a relatively short time, large employers and health plans have made notable progress in putting the building blocks in place to support their vision of consumerism. However, developments in the CTS communities suggest that the consumerism strategy evolving in local markets is more nuanced than implied by some descriptions of health care consumerism.
Go, Ramon; Huang, Yolanda Y; Weyker, Paul D; Webb, Christopher Aj
2016-10-01
As the American healthcare system continues to evolve and reimbursement becomes tied to value-based incentive programs, perioperative pain management will become increasingly important. Regional anesthetic techniques are only one component of a successful multimodal pain regimen. In recent years, the use of peripheral and paraneuraxial blocks to provide chest wall and abdominal analgesia has gained popularity. When used within a multimodal regimen, truncal blocks may provide similar analgesia when compared with other regional anesthetic techniques. While there are other reviews that cover this topic, our review will also highlight the emerging role for serratus plane blocks, pectoral nerve blocks and quadratus lumborum blocks in providing thoracic and abdominal analgesia.
How cancer shapes evolution, and how evolution shapes cancer
Casás-Selves, Matias; DeGregori, James
2013-01-01
Evolutionary theories are critical for understanding cancer development at the level of species as well as at the level of cells and tissues, and for developing effective therapies. Animals have evolved potent tumor suppressive mechanisms to prevent cancer development. These mechanisms were initially necessary for the evolution of multi-cellular organisms, and became even more important as animals evolved large bodies and long lives. Indeed, the development and architecture of our tissues were evolutionarily constrained by the need to limit cancer. Cancer development within an individual is also an evolutionary process, which in many respects mirrors species evolution. Species evolve by mutation and selection acting on individuals in a population; tumors evolve by mutation and selection acting on cells in a tissue. The processes of mutation and selection are integral to the evolution of cancer at every step of multistage carcinogenesis, from tumor genesis to metastasis. Factors associated with cancer development, such as aging and carcinogens, have been shown to promote cancer evolution by impacting both mutation and selection processes. While there are therapies that can decimate a cancer cell population, unfortunately, cancers can also evolve resistance to these therapies, leading to the resurgence of treatment-refractory disease. Understanding cancer from an evolutionary perspective can allow us to appreciate better why cancers predominantly occur in the elderly, and why other conditions, from radiation exposure to smoking, are associated with increased cancers. Importantly, the application of evolutionary theory to cancer should engender new treatment strategies that could better control this dreaded disease. PMID:23705033
Calculation of time of travel from dye-tracing studies in karstic aquifers is critical to establishing pollutant arrival times from points of inflow to resurgences, calculation of subsurface flow velocities, and determining other important transport parameters such as longitudin...
NASA's Space Launch System: An Evolving Capability for Exploration
NASA Technical Reports Server (NTRS)
Creech, Stephen D.; Robinson, Kimberly F.
2016-01-01
A foundational capability for international human deep-space exploration, NASA's Space Launch System (SLS) vehicle represents a new spaceflight infrastructure asset, creating opportunities for mission profiles and space systems that cannot currently be executed. While the primary purpose of SLS, which is making rapid progress towards initial launch readiness in two years, will be to support NASA's Journey to Mars, discussions are already well underway regarding other potential utilization of the vehicle's unique capabilities. In its initial Block 1 configuration, capable of launching 70 metric tons (t) to low Earth orbit (LEO), SLS will propel the Orion crew vehicle to cislunar space, while also delivering small CubeSat-class spacecraft to deep-space destinations. With the addition of a more powerful upper stage, the Block 1B configuration of SLS will be able to deliver 105 t to LEO and enable more ambitious human missions into the proving ground of space. This configuration offers opportunities for launching co-manifested payloads with the Orion crew vehicle, and a class of secondary payloads, larger than today's CubeSats. Further upgrades to the vehicle, including advanced boosters, will evolve its performance to 130 t in its Block 2 configuration. Both Block 1B and Block 2 also offer the capability to carry 8.4- or 10-m payload fairings, larger than any contemporary launch vehicle. With unmatched mass-lift capability, payload volume, and C3, SLS not only enables spacecraft or mission designs currently impossible with contemporary EELVs, it also offers enhancing benefits, such as reduced risk, operational costs and/or complexity, shorter transit time to destination or launching large systems either monolithically or in fewer components. This paper will discuss both the performance and capabilities of Space Launch System as it evolves, and the current state of SLS utilization planning.
ERIC Educational Resources Information Center
Lai, Mun Yee; Murray, Sara
2015-01-01
Most studies of students' understanding of decimals have been conducted within Western cultural settings. The broad aim of the present research was to gain insight into Chinese Hong Kong grade 6 students' general performance on a variety of decimals tasks. More specifically, the study aimed to explore students' mathematical reasoning for their use…
Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo
2011-08-01
Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
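The Monte Carlo procedure described can be sketched compactly: sample the initial load and the decimal reduction time from their distributions, propagate each draw through the log-linear survival model, and read the process time achieving the target risk at the desired confidence. The distribution parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
log_n0 = rng.normal(2.0, 0.5, n)              # log10 initial spores per container
d_110 = rng.normal(0.8, 0.15, n).clip(0.1)    # D-value at 110 C, minutes

target = -9.0                                 # goal: 10^-9 spores/container
# Log-linear survival: log10 N(t) = log10 N0 - t/D  =>  t = D * (log10 N0 - target)
t_required = (log_n0 - target) * d_110

print(np.mean(t_required))                    # "average inputs" answer (can mislead)
print(np.percentile(t_required, 95))          # process time at 95% confidence
```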
Kamgang-Youbi, Georges; Herry, Jean-Marie; Bellon-Fontaine, Marie-Noëlle; Brisset, Jean-Louis; Doubla, Avaly; Naïtali, Murielle
2007-01-01
This study aimed to characterize the bacterium-destroying properties of a gliding arc plasma device during electric discharges and also under temporal postdischarge conditions (i.e., when the discharge was switched off). This phenomenon was reported for the first time in the literature in the case of the plasma destruction of microorganisms. When cells of a model bacterium, Hafnia alvei, were exposed to electric discharges, followed or not followed by temporal postdischarges, the survival curves exhibited a shoulder and then log-linear decay. These destruction kinetics were modeled using GinaFiT, a freeware tool to assess microbial survival curves, and adjustment parameters were determined. The efficiency of postdischarge treatments was clearly affected by the discharge time (t*); both the shoulder length and the inactivation rate kmax were linearly modified as a function of t*. Nevertheless, all conditions tested (t* ranging from 2 to 5 min) made it possible to achieve an abatement of at least 7 decimal logarithm units. Postdischarge treatment was also efficient against bacteria not subjected to direct discharge, and the disinfecting properties of “plasma-activated water” were dependent on the treatment time for the solution. Water treated with plasma for 2 min achieved a 3.7-decimal-logarithm-unit reduction in 20 min after application to cells, and abatement greater than 7 decimal logarithm units resulted from the same contact time with water activated with plasma for 10 min. These disinfecting properties were maintained during storage of activated water for 30 min. After that, they declined as the storage time increased. PMID:17557841
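The shoulder plus log-linear shape described here is what the Geeraerd model (one of the curves GInaFiT fits) captures with two parameters, kmax and the shoulder length Sl. A hedged sketch with illustrative parameter values, not the study's fitted ones:

```python
import numpy as np

def geeraerd_log10(t, log_n0, kmax, shoulder):
    """log10 survivors under the Geeraerd shoulder + log-linear model."""
    cc = np.exp(kmax * shoulder) - 1.0        # initial value of the shoulder state variable
    return (log_n0 - kmax * t / np.log(10)
            + np.log10((1.0 + cc) / (1.0 + cc * np.exp(-kmax * t))))

t = np.linspace(0, 20, 5)                     # treatment time, min (illustrative)
print(geeraerd_log10(t, log_n0=8.0, kmax=1.2, shoulder=4.0))
# flat near t=0 (the shoulder), then log-linear decay at rate kmax/ln(10)
```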
Evolving virtual creatures and catapults.
Chaumont, Nicolas; Egli, Richard; Adami, Christoph
2007-01-01
We present a system that can evolve the morphology and the controller of virtual walking and block-throwing creatures (catapults) using a genetic algorithm. The system is based on Sims' work, implemented as a flexible platform with an off-the-shelf dynamics engine. Experiments aimed at evolving Sims-type walkers resulted in the emergence of various realistic gaits while using fairly simple objective functions. Due to the flexibility of the system, drastically different morphologies and functions evolved with only minor modifications to the system and objective function. For example, various throwing techniques evolved when selecting for catapults that propel a block as far as possible. Among the strategies and morphologies evolved, we find the drop-kick strategy, as well as the systematic invention of the principle behind the wheel, when allowing mutations to the projectile.
ERIC Educational Resources Information Center
Girit, Dilek; Akyuz, Didem
2016-01-01
Studies reveal that students as well as teachers have difficulties in understanding and learning of decimals. The purpose of this study is to investigate students' as well as pre-service teachers' solution strategies when solving a question that involves an estimation task for the value of a decimal number on the number line. We also examined the…
ERIC Educational Resources Information Center
Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti
2016-01-01
The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…
Novel Digital Signal Processing and Detection Techniques.
1980-09-01
Submitted by: Bede Liu, Department of Electrical Engineering and Computer Science, Princeton University. The report covers, among other topics, the use of recursive filters for decimation and interpolation [11, 12]. A filter structure for realizing low-pass filters is developed [6, 7]; by employing decimation and interpolation, the filter uses only the coefficients 0, +1, and -1.
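The report's specific recursive, multiplier-free structures are not recoverable from this fragment; as a generic illustration of the decimation operation itself (anti-aliasing low-pass filter followed by downsampling), here is a short SciPy sketch with made-up signal parameters.

```python
import numpy as np
from scipy import signal

fs = 8000                                   # original sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 3500 * t)

M = 4                                       # decimation factor
y = signal.decimate(x, M, ftype="fir")      # low-pass, then keep every Mth sample
print(len(x), "->", len(y))                 # 8000 -> 2000 samples
```

The low-pass step is what distinguishes decimation from plain subsampling: without it, the 3500 Hz component would alias into the reduced-rate band.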
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
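A toy illustration of the local-refinement idea: flag zones across which the solution varies rapidly and insert midpoints, so resolution concentrates where the field is "modally dense". This is an illustrative sketch only, not the mesh-potential algorithm of the report.

```python
import numpy as np

def refine_1d(edges, f, threshold):
    """Split every zone across which f jumps by more than threshold."""
    new_edges = [edges[0]]
    for a, b in zip(edges[:-1], edges[1:]):
        if abs(f(b) - f(a)) > threshold:
            new_edges.append(0.5 * (a + b))   # insert a midpoint: one level finer
        new_edges.append(b)
    return np.array(new_edges)

f = lambda x: np.tanh(5.0 * x)                # steep near x = 0, flat elsewhere
edges = np.linspace(-5.0, 5.0, 11)
for _ in range(3):                            # three refinement cycles
    edges = refine_1d(edges, f, 0.2)
print(len(edges), "edges; smallest zone:", np.diff(edges).min())
```

After three cycles the zones near the steep gradient at x = 0 are eight times finer than the zones in the flat regions, while the total element count grows only modestly.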
Possible mechanisms for four regimes associated with cold events over East Asia
NASA Astrophysics Data System (ADS)
Yang, Zifan; Huang, Wenyu; Wang, Bin; Chen, Ruyan; Wright, Jonathon S.; Ma, Wenqian
2017-09-01
Circulation patterns associated with cold events over East Asia during the winter months of 1948-2014 are classified into four regimes by applying a k-means clustering method based on the area-weighted pattern correlation. The earliest precursor signals for two regimes are anticyclonic anomalies, which evolve into Ural and central Siberian blocking-like circulation patterns. The earliest precursor signals for the other two regimes are cyclonic anomalies, both of which evolve to amplify the East Asian trough (EAT). Both the blocking-like circulation patterns and amplified EAT favor the initialization of cold events. On average, the blocking-related regimes tend to last longer. The lead time of the earliest precursor signal for the central Siberian blocking-related regime is only 4 days, while those for the other regimes range from 16 to 18 days. The North Atlantic Oscillation plays essential roles both in triggering the precursor for the Ural blocking-related regime and in amplifying the precursors for all regimes. All regimes preferentially occur during the positive phase of the Eurasian teleconnection pattern and the negative phase of the El Niño-Southern Oscillation. For three regimes, surface cooling is primarily due to reduced downward infrared radiation and enhanced cold advection. For the remaining regime, which is associated with the southernmost cooling center, sensible and latent heat release and horizontal cold advection dominate the East Asian cooling.
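The regime classification rests on a concrete similarity measure: a pattern correlation in which each grid point is weighted by the cosine of its latitude so that polar points do not dominate. Below is a minimal sketch of such a measure with a hypothetical grid and random fields; the authors' actual implementation and data are not reproduced.

```python
import numpy as np

def area_weighted_pattern_corr(a, b, lats_deg):
    """Cos-latitude-weighted Pearson correlation of two (nlat, nlon) fields."""
    w = np.cos(np.deg2rad(lats_deg))[:, None] * np.ones_like(a)
    da = a - np.average(a, weights=w)
    db = b - np.average(b, weights=w)
    cov = np.average(da * db, weights=w)
    return cov / np.sqrt(np.average(da**2, weights=w) *
                         np.average(db**2, weights=w))

lats = np.linspace(20.0, 80.0, 31)            # hypothetical regional grid
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((31, 72)), rng.standard_normal((31, 72))
print(area_weighted_pattern_corr(f1, f2, lats))
```

One minus this correlation can then serve as the distance that a k-means-style clustering minimizes when grouping event-day circulation maps into regimes.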
NASA Technical Reports Server (NTRS)
Ly, Chun; Rigby, Jane R.; Cooper, Michael; Yan, Renbin
2015-01-01
We report on the discovery of 28 z ≈ 0.8 metal-poor galaxies in DEEP2. These galaxies were selected for their detection of the weak [O III] λ4363 emission line, which provides a "direct" measure of the gas-phase metallicity. A primary goal for identifying these rare galaxies is to examine whether the fundamental metallicity relation (FMR) between stellar mass, gas metallicity, and star formation rate (SFR) extends to low stellar mass and high SFR. The FMR suggests that higher-SFR galaxies have lower metallicity (at fixed stellar mass). To test this trend, we combine spectroscopic measurements of metallicity and dust-corrected SFRs with stellar mass estimates from modeling the optical photometry. We find that these galaxies are 1.05 ± 0.61 dex above the z ≈ 1 stellar mass-SFR relation and 0.23 ± 0.23 dex below the local mass-metallicity relation. Relative to the FMR, the latter offset is reduced to 0.01 dex, but significant dispersion remains (0.29 dex, with 0.16 dex due to measurement uncertainties). This dispersion suggests that gas accretion, star formation, and chemical enrichment have not reached equilibrium in these galaxies. This is evident from their short stellar mass doubling timescale of approximately 100 (+310/−75) Myr, which suggests stochastic star formation. Combining our sample with other z ≈ 1 metal-poor galaxies, we find a weak positive SFR-metallicity dependence (at fixed stellar mass) that is significant at 97.3% confidence. We interpret this positive correlation as recent star formation that has enriched the gas but has not had time to drive the metal-enriched gas out with feedback mechanisms.
2016-08-09
[Extraction residue from a DoD Inspector General report (DODIG-2016): repeated footnotes to Tables 2 and 3 noting that slight rounding inconsistencies exist because auditor calculations included decimals. Acronyms: USAF, United States Air Force; USMC, United States Marine Corps. Tables marked FOUO.]
Code of Federal Regulations, 2014 CFR
2014-04-01
... decimal point one place to the left; that for 8 pounds from the column 800 pounds by moving the decimal point two places to the left. Example. A package of spirits at 86 proof weighed 321½ pounds net. We... to the left; that for 1 pound from the column 100 pounds by moving the decimal point two places to...
Parsley: a Command-Line Parser for Astronomical Applications
NASA Astrophysics Data System (ADS)
Deich, William
Parsley is a sophisticated keyword + value parser, packaged as a library of routines that offers an easy method for providing command-line arguments to programs. It makes it easy for the user to enter values, and it makes it easy for the programmer to collect and validate the user's entries. Parsley is tuned for astronomical applications: for example, dates entered in Julian, Modified Julian, calendar, or several other formats are all recognized without special effort by the user or by the programmer; angles can be entered using decimal degrees or dd:mm:ss; time-like intervals as decimal hours, hh:mm:ss, or a variety of other units. Vectors of data are accepted as readily as scalars.
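Parsley itself is a compiled library; as a rough Python illustration of its accept-several-formats idea, here is a parser that takes an angle either as decimal degrees or as sexagesimal dd:mm:ss. The function name and behavior are this sketch's own assumptions, not Parsley's API.

```python
def parse_angle_deg(text: str) -> float:
    """Accept decimal degrees ('12.5') or sexagesimal 'dd:mm:ss(.s)'."""
    if ":" not in text:
        return float(text)
    d, m, s = ([float(p) for p in text.split(":")] + [0.0, 0.0])[:3]
    sign = -1.0 if text.lstrip().startswith("-") else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

for value in ["12.5", "12:30:00", "-0:30:00"]:
    print(value, "->", parse_angle_deg(value))
```

Note the explicit sign handling: a naive sum would mis-handle entries such as "-0:30:00", where the degrees field alone carries no sign information.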
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion that solves simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations, and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be attainable with a pure time domain solution of the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth: for the forward simulation, the smallest time step must be finer than that required to represent the highest frequency, while the total number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation, which reduces the size of the sensitivity matrix substantially through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computation time for the inverse problem.
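A sketch of the basic cascade-decimation idea, assuming (as the abstract suggests) that the time series is repeatedly low-pass filtered and halved in rate so that low frequencies are carried by far fewer samples; the authors' actual quasi-equivalent decomposition is not reproduced here.

```python
import numpy as np
from scipy import signal

def cascade_decimate(x, n_levels):
    """Repeatedly anti-alias filter and halve the rate: one series per octave."""
    levels = [np.asarray(x, dtype=float)]
    for _ in range(n_levels):
        levels.append(signal.decimate(levels[-1], 2, ftype="fir"))
    return levels

x = np.random.default_rng(2).standard_normal(2**14)
levels = cascade_decimate(x, 6)
print([len(v) for v in levels])   # 16384, 8192, ..., 256
```

Covering the same bandwidth with a single uniform time step would require the full 16384 samples everywhere; the cascade represents each successively lower octave with half as many, which is the source of the sensitivity-matrix savings the abstract reports.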
On the Global Regularity of a Helical-Decimated Version of the 3D Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Biferale, Luca; Titi, Edriss S.
2013-06-01
We study the global regularity, for all time and all initial data in H^{1/2}, of a recently introduced decimated version of the incompressible 3D Navier-Stokes (dNS) equations. The model is based on a projection of the dynamical evolution of the Navier-Stokes (NS) equations onto the subspace where helicity (the L^2 scalar product of velocity and vorticity) is sign-definite. The presence of a second (besides energy) sign-definite inviscid conserved quadratic quantity, which is equivalent to the H^{1/2} Sobolev norm, allows us to demonstrate global existence and uniqueness of space-periodic solutions, together with continuity with respect to the initial conditions, for this decimated 3D model. This is achieved thanks to the establishment of two new estimates for this 3D model, which show that the H^{1/2} norm and the time average of the square of the H^{3/2} norm of the velocity field remain finite. Such two additional bounds are known, in the spirit of the work of H. Fujita and T. Kato (Arch. Ration. Mech. Anal. 16:269-315, 1964; Rend. Semin. Mat. Univ. Padova 32:243-260, 1962), to be sufficient for showing well-posedness for the 3D NS equations. Furthermore, they are directly linked to the helicity evolution for the dNS model, and therefore have a clear physical meaning and consequences.
Adaptation to wildfire: A fish story
John Kirkland; Rebecca Flitcroft; Gordon Reeves; Paul Hessburg
2017-01-01
In the Pacific Northwest, native salmon and trout are some of the toughest survivors on the block. Over time, these fish have evolved behavioral adaptations to natural disturbances, and they rely on these disturbances to deliver coarse sediment and wood that become complex stream habitat. Powerful disturbances such as wildfire, postfire landslides, and debris flows may...
An ERP Study of the Processing of Common and Decimal Fractions: How Different They Are
Zhang, Li; Wang, Qi; Lin, Chongde; Ding, Cody; Zhou, Xinlin
2013-01-01
This study explored event-related potential (ERP) correlates of common fractions (1/5) and decimal fractions (0.2). Thirteen subjects performed a numerical magnitude matching task under two conditions. In the common fraction condition, participants judged whether the magnitude of a nonsymbolic fraction matched the magnitude of a common fraction; in the decimal fraction condition, they judged whether a nonsymbolic fraction matched a decimal fraction. Behavioral results showed significant main effects of condition and numerical distance, but no significant interaction between condition and numerical distance. Electrophysiological data showed that when nonsymbolic fractions were compared to common fractions, they elicited larger N1 and P3 amplitudes than when they were compared to decimal fractions. This finding suggests that the visual identification of nonsymbolic fractions differed under the two conditions, which was due not to perceptual differences but to task demands. For symbolic fractions, the condition effect was observed in the N1 and P3 components, revealing stimulus-specific visual identification processing. The effect of numerical distance, as an index of numerical magnitude representation, was observed in the P2, N3 and P3 components under both conditions. However, the topography of the distance effect differed between the two conditions, suggesting stimulus-specific semantic processing of common fractions and decimal fractions. PMID:23894491
Primary teachers' subject matter knowledge: decimals
NASA Astrophysics Data System (ADS)
Ubuz, Behiye; Yayan, Betül
2010-09-01
The main objective of this study was to investigate primary teachers' subject matter knowledge in the domain of decimals and, more elaborately, to investigate their performance and difficulties in reading scales, ordering numbers, finding the nearest decimal, and doing operations such as addition and subtraction. The difficulties in these particular areas are analysed and suggestions are made regarding their causes. Further, factors that influence this knowledge were explored. The sample of the study was 63 primary teachers. A decimal concepts test including 18 tasks was administered, and the total scores for the 63 primary teachers ranged from 3 to 18 with a mean and median of 12. Fifty per cent of the teachers were above the mean score. The detailed investigation of the responses revealed that the primary teachers faced difficulties similar to those reported for students and pre-service teachers. Teachers' knowledge differed markedly by educational level attained, but not by number of years of teaching experience or experience in teaching decimals. Some suggestions have been made regarding the implications for pre- and in-service teacher training.
Data extraction for complex meta-analysis (DECiMAL) guide.
Pedder, Hugo; Sarri, Grammati; Keeney, Edna; Nunes, Vanessa; Dias, Sofia
2016-12-13
As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time, resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis (DECiMAL)) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis. Network meta-analysis (NMA), multiple outcomes analysis and analysis combining different types of data are considered in a manner that can be useful across a range of data collection programmes. The guide has been shown to be both easy to learn and useful in a small pilot study.
Hurst, Michelle A; Cordes, Sara
2018-04-01
Fraction and decimal concepts are notoriously difficult for children to learn yet are a major component of elementary and middle school math curriculum and an important prerequisite for higher order mathematics (i.e., algebra). Thus, recently there has been a push to understand how children think about rational number magnitudes in order to understand how to promote rational number understanding. However, prior work investigating these questions has focused almost exclusively on fraction notation, overlooking the open questions of how children integrate rational number magnitudes presented in distinct notations (i.e., fractions, decimals, and whole numbers) and whether understanding of these distinct notations may independently contribute to pre-algebra ability. In the current study, we investigated rational number magnitude and arithmetic performance in both fraction and decimal notation in fourth- to seventh-grade children. We then explored how these measures of rational number ability predicted pre-algebra ability. Results reveal that children do represent the magnitudes of fractions and decimals as falling within a single numerical continuum and that, despite greater experience with fraction notation, children are more accurate when processing decimal notation than when processing fraction notation. Regression analyses revealed that both magnitude and arithmetic performance predicted pre-algebra ability, but magnitude understanding may be particularly unique and depend on notation. The educational implications of differences between children in the current study and previous work with adults are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
Assessing crown dynamics and inter-tree competition in southern pines
Timothy A. Martin; Angelica Garcia; Tania Quesada; Eric J. Jokela; Salvador Gezan
2015-01-01
Genetic improvement of southern pines has been underway for 50 years and during this time, deployment of germplasm has generally evolved from more genetically diverse to less genetically diverse. Information is needed on how deployment of individual genotypes in pure blocks will affect traits such as within-stand variation in individual tree traits, as well as tree-...
de Jong, Aarieke E. I.; van Asselt, Esther D.; Zwietering, Marcel H.; Nauta, Maarten J.; de Jonge, Rob
2012-01-01
The aim of this research was to determine the decimal reduction times of bacteria present on chicken fillet in boiling water. The experiments were conducted with Campylobacter jejuni, Salmonella, and Escherichia coli. Whole chicken breast fillets were inoculated with the pathogens, stored overnight (4°C), and subsequently cooked. The surface temperature reached 70°C within 30 sec and 85°C within one minute. Extremely high decimal reduction times of 1.90, 1.97, and 2.20 min were obtained for C. jejuni, E. coli, and S. Typhimurium, respectively. Chicken meat and refrigerated storage before cooking increased the heat resistance of the foodborne pathogens. Additionally, a high challenge temperature or fast heating rate contributed to the level of heat resistance. The data were used to assess the probability of illness (campylobacteriosis) due to consumption of chicken fillet as a function of cooking time. The data revealed that cooking time may be far more critical than previously assumed. PMID:22389647
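For reference, a decimal reduction time (D-value) like those reported above is read off a log-linear survival curve as minus the reciprocal of its slope. A minimal sketch with hypothetical data (not the study's measurements):

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # cooking time, min (hypothetical)
log10_n = np.array([6.0, 5.7, 5.5, 5.2, 4.9])  # log10 CFU per fillet (hypothetical)

slope, intercept = np.polyfit(t, log10_n, 1)   # fit log-linear survival curve
d_value = -1.0 / slope                         # min per decimal reduction
print(f"D-value ≈ {d_value:.2f} min")
```

With these made-up numbers the fit gives roughly 1.8 min per log cycle, in the same range as the values the study reports for surface-attached cells.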
Neural representations of magnitude for natural and rational numbers.
DeWolf, Melissa; Chiang, Jeffrey N; Bassok, Miriam; Holyoak, Keith J; Monti, Martin M
2016-11-01
Humans have developed multiple symbolic representations for numbers, including natural numbers (positive integers) as well as rational numbers (both fractions and decimals). Despite a considerable body of behavioral and neuroimaging research, it is currently unknown whether different notations map onto a single, fully abstract, magnitude code, or whether separate representations exist for specific number types (e.g., natural versus rational) or number representations (e.g., base-10 versus fractions). We address this question by comparing brain metabolic response during a magnitude comparison task involving (on different trials) integers, decimals, and fractions. Univariate and multivariate analyses revealed that the strength and pattern of activation for fractions differed systematically, within the intraparietal sulcus, from that of both decimals and integers, while the latter two number representations appeared virtually indistinguishable. These results demonstrate that the two major notation formats for rational numbers, fractions and decimals, evoke distinct neural representations of magnitude, with decimal representations being more closely linked to those of integers than to those of magnitude-equivalent fractions. Our findings thus suggest that number representation (base-10 versus fractions) is an important organizational principle for the neural substrate underlying mathematical cognition. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 1033.240 - Demonstrating compliance with exhaust emission standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... significant figures to calculate the cycle-weighted emission rate to at least one more decimal place than the....245, then round the adjusted figure to the same number of decimal places as the emission standard...
Properties of the Tent map for decimal fractions with fixed precision
NASA Astrophysics Data System (ADS)
Chetverikov, V. M.
2018-01-01
The one-dimensional discrete Tent map is a well-known example of a map whose fixed points are all unstable on the segment [0,1]. This map leads to a positive Lyapunov exponent for the corresponding recurrent sequence, so in a situation of general position the sequence must demonstrate the properties of deterministic chaos. However, if the first term of the recurrence sequence is taken as a decimal fraction with a fixed number k of digits after the decimal point and all calculations are carried out exactly, then the situation turns out to be completely different. In this case, first, the Tent map does not lead to an increase in the number of significant digits in the terms of the sequence, and second, it demonstrates the existence of a finite number of eventually periodic orbits, which are attractors for all other decimal numbers whose number of significant digits does not exceed k.
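The finiteness argument is easy to reproduce: doubling a k-digit decimal never requires more than k digits after the point, so an exact orbit lives in a finite set and must become eventually periodic. A sketch, using exact rationals with denominator 10^k to stand in for fixed-precision decimals:

```python
from fractions import Fraction

def tent(x: Fraction) -> Fraction:
    return 2 * x if x < Fraction(1, 2) else 2 * (1 - x)

x = Fraction(123, 1000)      # 0.123: k = 3 digits after the decimal point
seen = {}
step = 0
while x not in seen:         # at most 1001 distinct values can ever occur
    seen[x] = step
    x = tent(x)
    step += 1
print(f"cycle of length {step - seen[x]} entered after {seen[x]} steps")
```

Both branches, 2x and 2(1 − x), map a multiple of 10^{-k} to another multiple of 10^{-k}, which is why the loop is guaranteed to terminate.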
DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J
2015-02-01
The standard number system includes several distinct types of notations, which differ conceptually and afford different procedures. Among notations for rational numbers, the bipartite format of fractions (a/b) enables them to represent 2-dimensional relations between sets of discrete (i.e., countable) elements (e.g., red marbles/all marbles). In contrast, the format of decimals is inherently 1-dimensional, expressing a continuous-valued magnitude (i.e., proportion) but not a 2-dimensional relation between sets of countable elements. Experiment 1 showed that college students indeed view these two notations as conceptually distinct. In a task that did not involve mathematical calculations, participants showed a strong preference to represent partitioned displays of discrete objects with fractions and partitioned displays of continuous masses with decimals. Experiment 2 provided evidence that people are better able to identify and evaluate ratio relationships using fractions than decimals, especially for discrete (or discretized) quantities. Experiments 3 and 4 found a similar pattern of performance for a more complex analogical reasoning task. When solving relational reasoning problems based on discrete or discretized quantities, fractions yielded greater accuracy than decimals; in contrast, when quantities were continuous, accuracy was lower for both symbolic notations. Whereas previous research has established that decimals are more effective than fractions in supporting magnitude comparisons, the present study reveals that fractions are relatively advantageous in supporting relational reasoning with discrete (or discretized) concepts. These findings provide an explanation for the effectiveness of natural frequency formats in supporting some types of reasoning, and have implications for teaching of rational numbers.
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Liu, Chong
2016-10-01
The common solution for a field programmable gate array (FPGA)-based time-to-digital converter (TDC) is constructing a tapped delay line (TDL) for time interpolation to yield a sub-clock time resolution. The granularity and uniformity of the delay elements of the TDL determine the TDC time resolution. In this paper, we propose a dual-sampling TDL architecture and a bin decimation method that make the delay elements as small and uniform as possible, so that the implemented TDCs can achieve a time resolution beyond the intrinsic cell delay. Two identical fully hardware-based TDCs were implemented in a Xilinx UltraScale FPGA for performance evaluation. For fixed time intervals in the range from 0 to 440 ns, the average time-interval RMS resolution measured between the two TDCs is 4.2 ps; assuming the two channels contribute equally and independently, the timestamp resolution of a single TDC is 4.2 ps/√2 ≈ 2.97 ps. The maximum hit rate of the TDC is as high as half the FPGA system clock rate, namely 250 MHz in our demo prototype. Because the conventional online bin-by-bin calibration is not needed, the implementation of the proposed TDC is straightforward and relatively resource-saving.
A deterministic compressive sensing model for bat biosonar.
Hague, David A; Buck, John R; Bilik, Igal
2012-12-01
The big brown bat (Eptesicus fuscus) uses frequency modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects in clutter and noise. They resolve glints spaced down to 2 μs in time delay, which surpasses what traditional signal processing techniques can achieve using the same echolocation call. The Matched Filter (MF) attains 10-12 μs resolution, while the Inverse Filter (IF) achieves higher resolution at the cost of significantly degraded detection performance. Recent work by Fontaine and Peremans [J. Acoust. Soc. Am. 125, 3052-3059 (2009)] demonstrated that a sparse representation of bat echolocation calls coupled with a decimating sensing method facilitates distinguishing closely spaced objects over realistic SNRs. Their work raises the intriguing question of whether sensing approaches structured more like a mammalian auditory system contain the necessary information for the hyper-resolution observed in behavioral tests. This research estimates sparse echo signatures using a gammatone filterbank decimation sensing method, which loosely models the processing of the bat's auditory system. The decimated filterbank outputs are processed with ℓ1 minimization. Simulations demonstrate that this model maintains higher resolution than the MF and significantly better detection performance than the IF for SNRs of 5-45 dB while undersampling the return signal by a factor of six.
The Trail of Tears Continues: Dispossession and Genocide of the Native American Indians.
ERIC Educational Resources Information Center
Bender, Albert M.
1981-01-01
Describes the high cultural level of native American Indian populations at the time of conquest. Illustrates how cultural breakdown and demographic decimation have resulted from systematic policies that focused on exploiting natural resources at the expense of native peoples. (GC)
Novel Designs of Quantum Reversible Counters
NASA Astrophysics Data System (ADS)
Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang
2016-11-01
Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of sequential circuits but also an essential component of quantum circuit systems. In this paper, we design two kinds of novel reversible counters. In order to construct the counters, an innovative reversible T Flip-flop Gate (TFG), T flip-flop block (T_FF) and JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), the counters above have been modeled and verified. According to the simulation results, the logic structures of our circuits are validated. Compared to existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), our designs perform better than the others. There is no doubt that they can be used as important storage components in future low-power computing systems.
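The paper's TFG and flip-flop blocks are not reproduced here; as an illustration of the underlying reversibility requirement, the sketch below builds a toy T-flip-flop step from two Feynman (CNOT) operations and verifies that it is a bijection on its 3-bit state space, which is exactly the property a reversible gate must have.

```python
from itertools import product

def t_ff_step(t, q, g):
    """Toy T flip-flop update from two Feynman (CNOT) gates: Q toggles when T=1."""
    return (t, q ^ t, g ^ q)   # third line absorbs the old Q, keeping the map 1-to-1

states = list(product([0, 1], repeat=3))
images = {t_ff_step(*s) for s in states}
print("bijective (reversible):", len(images) == len(states))   # True
```

The extra line g is the price of reversibility: dropping it would merge states that differ only in the pre-toggle Q, and the map would no longer be invertible (it would "erase" information, which reversible and quantum logic forbids).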
ERIC Educational Resources Information Center
McKinlay, John
Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…
Precision of a CAD/CAM technique for the production of zirconium dioxide copings.
Coli, Pierluigi; Karlsson, Stig
2004-01-01
The precision of a computer-aided design/manufacturing (CAD/CAM) system to manufacture zirconium dioxide copings with a predetermined internal space was investigated. Two master models were produced in acrylic resin. One was directly scanned by the Decim Reader. The Decim Producer then manufactured 10 copings from prefabricated zirconium dioxide blocks. Five copings were prepared, aiming for an internal space to the master of 45 µm. The other five copings were prepared for an internal space of 90 µm. The second test model was used to try in the copings produced. The obtained internal space of the ceramic copings was evaluated by separate measurements of the master models and the inner surfaces of the copings. The master models were measured at predetermined points with an optical instrument. The zirconium dioxide copings were measured with a contact instrument at the sites corresponding to those measured on the masters. The first group of copings had a mean internal space of 41 µm to the scanned master and of 53 µm to the try-in master. In general, the internal space along the axial walls of the masters was smaller than that along the occlusal walls. The second group had a mean internal space of 82 µm to the scanned master and of 90 µm to the try-in master. The aimed-for internal space of the copings was achieved by the manufacturer. The CAD/CAM technique tested provided high precision in the manufacture of zirconium dioxide copings.
The Origin and Diversification of Birds.
Brusatte, Stephen L; O'Connor, Jingmai K; Jarvis, Erich D
2015-10-05
Birds are one of the most recognizable and diverse groups of modern vertebrates. Over the past two decades, a wealth of new fossil discoveries and phylogenetic and macroevolutionary studies has transformed our understanding of how birds originated and became so successful. Birds evolved from theropod dinosaurs during the Jurassic (around 165-150 million years ago) and their classic small, lightweight, feathered, and winged body plan was pieced together gradually over tens of millions of years of evolution rather than in one burst of innovation. Early birds diversified throughout the Jurassic and Cretaceous, becoming capable fliers with supercharged growth rates, but were decimated at the end-Cretaceous extinction alongside their close dinosaurian relatives. After the mass extinction, modern birds (members of the avian crown group) explosively diversified, culminating in more than 10,000 species distributed worldwide today. Copyright © 2015 Elsevier Ltd. All rights reserved.
Elementary Metric Curriculum - Project T.I.M.E. (Timely Implementation of Metric Education). Part I.
ERIC Educational Resources Information Center
Community School District 18, Brooklyn, NY.
This is a teacher's manual for an ISS-based elementary school course in the metric system. Behavioral objectives and student activities are included. The topics covered include: (1) linear measurement; (2) metric-decimal relationships; (3) metric conversions; (4) geometry; (5) scale drawings; and (6) capacity. This is the first of a two-part…
Determinant Factors of Long-Term Performance Development in Young Swimmers.
Morais, Jorge E; Silva, António J; Marinho, Daniel A; Lopes, Vítor P; Barbosa, Tiago M
2017-02-01
To develop a performance predictor model based on swimmers' biomechanical profile, relate the partial contribution of the main predictors with the training program, and analyze the time effect, sex effect, and time × sex interaction. 91 swimmers (44 boys, 12.04 ± 0.81 y; 47 girls, 11.22 ± 0.98 y) evaluated during a 3-y period. The decimal age and anthropometric, kinematic, and efficiency features were collected 10 different times over 3 seasons (ie, longitudinal research). Hierarchical linear modeling was the procedure used to estimate the performance predictors. Performance improved between season 1 early and season 3 late for both sexes (boys 26.9% [20.88;32.96], girls 16.1% [10.34;22.54]). Decimal age (estimate [EST] -2.05, P < .001), arm span (EST -0.59, P < .001), stroke length (EST 3.82; P = .002), and propelling efficiency (EST -0.17, P = .001) were entered in the final model. Over 3 consecutive seasons young swimmers' performance improved. Performance is a multifactorial phenomenon where anthropometrics, kinematics, and efficiency were the main determinants. The change of these factors over time was coupled with the training plans of this talent identification and development program.
Studying relaxation phenomena via effective master equations
NASA Astrophysics Data System (ADS)
Chan, David; Wan, Jones T. K.; Chu, L. L.; Yu, K. W.
2000-04-01
The real-time dynamics of various relaxation phenomena can be conveniently formulated by a master equation with the enumeration of transition rates between given classes of conformations. To study the relaxation time towards equilibrium, it suffices to solve for the second largest eigenvalue of the resulting eigenvalue equation. Generally speaking, there is no analytic solution for the dynamic equation. Mean-field approaches generally yield misleading results while the presumably exact Monte-Carlo methods require prohibitive time steps in most real systems. In this work, we propose an exact decimation procedure for reducing the number of conformations significantly, while there is no loss of information, i.e., the reduced (or effective) equation is an exact transformed version of the original one. However, we have to pay the price: the initial Markovianity of the evolution equation is lost and the reduced equation contains memory terms in the transition rates. Since the transformed equation has significantly reduced number of degrees of freedom, the systems can readily be diagonalized by iterative means, to obtain the exact second largest eigenvalue and hence the relaxation time. The decimation method has been applied to various relaxation equations with generally desirable results. The advantages and limitations of the method will be discussed.
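For comparison with the decimation route, the target quantity can be computed directly on a small chain: build the transition matrix, take its second-largest eigenvalue magnitude, and convert it to a relaxation time. The chain below is an illustrative nearest-neighbour random walk, not one of the paper's conformation models, and no decimation is performed.

```python
import numpy as np

n = 50
P = np.zeros((n, n))                 # nearest-neighbour random walk, reflecting ends
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5

lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
tau = -1.0 / np.log(lam[1])          # relaxation time from the 2nd largest eigenvalue
print(f"lambda_2 = {lam[1]:.6f}, relaxation time ~ {tau:.1f} steps")
```

Direct diagonalization is feasible here only because the state space is tiny; the point of the decimation procedure is to reach the same second eigenvalue when the conformation space is far too large for this brute-force approach.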
The computation of pi to 29,360,000 decimal digits using Borweins' quartically convergent algorithm
NASA Technical Reports Server (NTRS)
Bailey, David H.
1988-01-01
The quartically convergent numerical algorithm developed by Borwein and Borwein (1987) for 1/pi is implemented via a prime-modulus-transform multiprecision technique on the NASA Ames Cray-2 supercomputer to compute the first 2.936 × 10^7 digits of the decimal expansion of pi. The history of pi computations is briefly recalled; the most recent algorithms are characterized; the implementation procedures are described; and samples of the output listing are presented. Statistical analyses show that the present decimal expansion is completely random, with only acceptable numbers of long repeating strings and single-digit runs.
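The Borwein quartic iteration for 1/pi, in the form commonly cited in the literature (each pass roughly quadruples the number of correct digits), can be sketched with an off-the-shelf multiprecision library standing in for the paper's prime-modulus-transform arithmetic:

```python
from mpmath import mp, mpf, sqrt

mp.dps = 60                      # working precision, decimal digits
y = sqrt(mpf(2)) - 1             # y_0 = sqrt(2) - 1
a = 6 - 4 * sqrt(mpf(2))         # a_0 = 6 - 4*sqrt(2)
for k in range(4):               # digits quadruple per pass: 4 passes >> 60 digits
    r = (1 - y**4) ** mpf("0.25")
    y = (1 - r) / (1 + r)
    a = a * (1 + y)**4 - 2**(2 * k + 3) * y * (1 + y + y**2)
print(1 / a)                     # matches pi to the full working precision
```

At the paper's scale the bottleneck is not the handful of iterations but the multiplication of tens-of-millions-digit numbers, which is why a transform-based multiprecision multiply was the heart of the implementation.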
Ran, Shi-Ju
2016-05-01
In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in tensor network (TN) form. (2) In the sense of TN, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of several well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, and density matrix embedding theory, providing a unified perspective that was previously missing in this field. (4) AOP and TRD carry new implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. Benchmarks are given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of AOP.
NASA's Space Launch System: An Evolving Capability for Exploration
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Creech, Stephen D.; Robinson,Kimberly F.
2016-01-01
Designed to meet the stringent requirements of human exploration missions into deep space and to Mars, NASA's Space Launch System (SLS) vehicle represents a unique new launch capability opening new opportunities for mission design. While SLS's super-heavy launch vehicle predecessor, the Saturn V, was used for only two types of missions - launching Apollo spacecraft to the moon and lofting the Skylab space station into Earth orbit - NASA is working to identify new ways to use SLS to enable new missions or mission profiles. In its initial Block 1 configuration, capable of launching 70 metric tons (t) to low Earth orbit (LEO), SLS is capable of not only propelling the Orion crew vehicle into cislunar space, but also delivering small satellites to deep space destinations. With a 5-meter (m) fairing consistent with contemporary Evolved Expendable Launch Vehicles (EELVs), the Block 1 configuration can also deliver science payloads to high-characteristic-energy (C3) trajectories to the outer solar system. With the addition of an upper stage, the Block 1B configuration of SLS will be able to deliver 105 t to LEO and enable more ambitious human missions into the proving ground of space. This configuration offers opportunities for launching co-manifested payloads with the Orion crew vehicle, and a new class of secondary payloads, larger than today's cubesats. The evolved configurations of SLS, including both Block 1B and the 130 t Block 2, also offer the capability to carry 8.4- or 10-m payload fairings, larger than any contemporary launch vehicle. With unmatched mass-lift capability, payload volume, and C3, SLS not only enables spacecraft or mission designs currently impossible with contemporary EELVs, it also offers enhancing benefits, such as reduced risk and operational costs associated with shorter transit time to destination and reduced risk and complexity associated with launching large systems either monolithically or in fewer components. As this paper will demonstrate, SLS represents a unique new capability for spaceflight, and an opportunity to reinvent space by developing out-of-the-box missions and mission designs unlike any flown before.
Tracking Decimal Misconceptions: Strategic Instructional Choices
ERIC Educational Resources Information Center
Griffin, Linda B.
2016-01-01
Understanding the decimal system is challenging, requiring coordination of place-value concepts with features of whole-number and fraction knowledge (Moloney and Stacey 1997). Moreover, the learner must discern if and how previously learned concepts and procedures apply. The process is complex, and misconceptions will naturally arise. In a…
Color Your Classroom V: A Math Guide on the Secondary Level.
ERIC Educational Resources Information Center
Mississippi Materials & Resource Center, Gulfport.
This curriculum guide, designed for use with secondary migrant students, presents mathematics activities in the areas of whole numbers, fractions, decimals, percent, measurement, geometry, probability and statistics, and sets. Within the categories of whole numbers, fractions, and decimals are activities using addition, subtraction,…
Dewey Decimal Classification: A Quagmire.
ERIC Educational Resources Information Center
Gamaluddin, Ahmad Fouad
1980-01-01
A survey of 660 Pennsylvania school librarians indicates that, though there is limited professional interest in the Library of Congress Classification system, Dewey Decimal Classification (DDC) appears to be firmly entrenched. This article also discusses the relative merits of DDC, the need for a uniform system, librarianship preparation, and…
MARC Coding of DDC for Subject Retrieval.
ERIC Educational Resources Information Center
Wajenberg, Arnold S.
1983-01-01
Recommends an expansion of MARC codes for decimal class numbers to enhance automated subject retrieval. Five values for a second indicator and two new subfields are suggested for encoding hierarchical relationships among decimal class numbers. Additional subfields are suggested to enhance retrieval through analysis of synthesized numbers in…
Psychology and Didactics of Mathematics in France--An Overview.
ERIC Educational Resources Information Center
Vergnaud, Gerard
1983-01-01
Examples are given of the variety of mathematical concepts and problems being studied by psychologically oriented researchers in France. Work on decimals, circles, natural numbers, decimal and real numbers, and didactic transposition are included. Comments on designing research on mathematics concept formation conclude the article. (MNS)
40 CFR 86.609-98 - Calculation and reporting of test results.
Code of Federal Regulations, 2011 CFR
2011-07-01
... decimal places contained in the applicable standard expressed to one additional significant figure... decimal places contained in the applicable emission standard expressed to one additional significant figure. Rounding is done in accordance with ASTM E 29-67, (reapproved 1980) (as referenced in § 86.094-28...
1973-01-01
This EREP photograph of the Uncompahgre Plateau area of Colorado illustrates the land use classification using the hierarchical numbering system to depict land forms and vegetative patterns. The numerator is a three-digit number with decimal components identifying the vegetation analog or land use conditions. The denominator uses a three-component decimal system for landscape characterization.
Decimal Fraction Arithmetic: Logical Error Analysis and Its Validation.
ERIC Educational Resources Information Center
Standiford, Sally N.; And Others
This report illustrates procedures of item construction for addition and subtraction examples involving decimal fractions. Using a procedural network of skills required to solve such examples, an item characteristic matrix of skills analysis was developed to describe the characteristics of the content domain by projected student difficulties. Then…
Which Type of Rational Numbers Should Students Learn First?
ERIC Educational Resources Information Center
Tian, Jing; Siegler, Robert S.
2017-01-01
Many children and adults have difficulty gaining a comprehensive understanding of rational numbers. Although fractions are taught before decimals and percentages in many countries, including the USA, a number of researchers have argued that decimals are easier to learn than fractions and therefore teaching them first might mitigate children's…
ERIC Educational Resources Information Center
Harris, Christopher
2013-01-01
In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…
ERIC Educational Resources Information Center
Beaman, Belinda
2013-01-01
As teachers we are encouraged to contextualize the mathematics that we teach. In this article, Belinda Beaman explains how she used the weather as a context for developing decimal understanding. We particularly enjoyed reading how the students were involved in estimating.
ERIC Educational Resources Information Center
de Mestre, Neville
2010-01-01
All common fractions can be written in decimal form. In this Discovery article, the author suggests that teachers ask their students to calculate the decimals by actually doing the divisions themselves, and later on they can use a calculator to check their answers. This article presents a lesson based on the research of Bolt (1982).
Comparing Instructional Strategies for Integrating Conceptual and Procedural Knowledge.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Koedinger, Kenneth R.
We compared alternative instructional strategies for integrating knowledge of decimal place value and regrouping concepts with procedures for adding and subtracting decimals. The first condition was based on recent research suggesting that conceptual and procedural knowledge develop in an iterative, hand over hand fashion. In this iterative…
Conceptual Knowledge of Decimal Arithmetic
ERIC Educational Resources Information Center
Lortie-Forgues, Hugues; Siegler, Robert S.
2016-01-01
In two studies (N's = 55 and 54), we examined a basic form of conceptual understanding of rational number arithmetic, the direction of effect of decimal arithmetic operations, at a level of detail useful for informing instruction. Middle school students were presented tasks examining knowledge of the direction of effects (e.g., "True or…
Dewey: How to Make It Work for You
ERIC Educational Resources Information Center
Panzer, Michael
2013-01-01
As knowledge brokers, librarians are living in interesting times for themselves and libraries. It causes them to wonder sometimes if the traditional tools like the Dewey Decimal Classification (DDC) system can cope with the onslaught of information. The categories provided do not always seem adequate for the knowledge-discovery habits of…
Space Launch Systems Block 1B Preliminary Navigation System Design
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Park, Thomas; Anzalone, Evan; Smith, Austin; Strickland, Dennis; Patrick, Sean
2018-01-01
NASA is currently building the Space Launch System (SLS) Block 1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. In parallel, NASA is also designing the Block 1B launch vehicle. The Block 1B vehicle is an evolution of the Block 1 vehicle and extends the capability of the NASA launch vehicle. This evolution replaces the Interim Cryogenic Propulsive Stage (ICPS) with the Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability, increased robustness for manned missions, and the capability to execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. This paper describes the preliminary navigation system design for the SLS Block 1B vehicle. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. Additionally, the Block 1B vehicle is designed to support human exploration and thus is designed to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and robust algorithm design, including Fault Detection, Isolation, and Recovery (FDIR) logic.
Ditommaso, Savina; Giacomuzzi, Monica; Ricciardi, Elisa; Zotti, Carla M
2016-07-22
This study was designed to examine the in vitro bactericidal activity of hydrogen peroxide against Legionella. We tested hydrogen peroxide (Peroxy Ag⁺) at 600 ppm to evaluate Legionella survival in an artificially contaminated, simulated dental treatment water system equipped with a Water Hygienization Equipment (W.H.E.) device. When Legionella pneumophila serogroup (sg) 1 was exposed to Peroxy Ag⁺ for 60 min, we obtained a two-decimal-log reduction. High antimicrobial efficacy was obtained with extended periods of exposure: a four-decimal-log reduction at 75 min and a five-decimal-log reduction at 15 h of exposure. Using a simulation device in which Peroxy Ag⁺ is flushed through simulated dental unit waterlines (DUWL), we obtained an average reduction of 85% of the Legionella load. The product is effective in reducing the number of Legionella cells after 75 min of contact time (99.997%) in the simulator device under test conditions. The Peroxy Ag⁺ treatment is safe for continuous use in the dental water supply system (i.e., it is safe for patient contact), so it could be used as a preventive option, and it may be useful in long-term treatments, alone or coupled with a daily or periodic shock treatment.
ERIC Educational Resources Information Center
Bonotto, C.
1995-01-01
Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)
40 CFR 86.1823-01 - Durability demonstration procedures for exhaust emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (including both hardware and software) must be installed and operating for the entire mileage accumulation... decimal places) from the regression analysis; the result shall be rounded to three-decimal places of... less than one shall be changed to one for the purposes of this paragraph. (2) An additive DF will be...
Why Is Learning Fraction and Decimal Arithmetic so Difficult?
ERIC Educational Resources Information Center
Lortie-Forgues, Hugues; Tian, Jing; Siegler, Robert S.
2015-01-01
Fraction and decimal arithmetic are crucial for later mathematics achievement and for ability to succeed in many professions. Unfortunately, these capabilities pose large difficulties for many children and adults, and students' proficiency in them has shown little sign of improvement over the past three decades. To summarize what is known about…
Ambiguity in Units and the Referents: Two Cases in Rational Number Operations
ERIC Educational Resources Information Center
Rathouz, Margaret
2010-01-01
I explore the impact of ambiguous referral to the unit on understanding of decimal and fraction operations during episodes in two different mathematics courses for pre-service teachers (PSTs). In one classroom, the instructor introduces a rectangular area diagram to help the PSTs visualize decimal multiplication. A transcript from this classroom…
Dewey Decimal Classification for U. S. Conn: An Advantage?
ERIC Educational Resources Information Center
Marek, Kate
This paper examines the use of the Dewey Decimal Classification (DDC) system at the U. S. Conn Library at Wayne State College (WSC) in Nebraska. Several developments in the last 20 years which have eliminated the trend toward reclassification of academic library collections from DDC to the Library of Congress (LC) classification scheme are…
Modeling discrete and continuous entities with fractions and decimals.
Rapp, Monica; Bassok, Miriam; DeWolf, Melissa; Holyoak, Keith J
2015-03-01
When people use mathematics to model real-life situations, their use of mathematical expressions is often mediated by semantic alignment (Bassok, Chase, & Martin, 1998): The entities in a problem situation evoke semantic relations (e.g., tulips and vases evoke the functionally asymmetric "contain" relation), which people align with analogous mathematical relations (e.g., the noncommutative division operation, tulips/vases). Here we investigate the possibility that semantic alignment is also involved in the comprehension and use of rational numbers (fractions and decimals). A textbook analysis and results from two experiments revealed that both mathematics educators and college students tend to align the discreteness versus continuity of the entities in word problems (e.g., marbles vs. distance) with distinct symbolic representations of rational numbers--fractions versus decimals, respectively. In addition, fractions and decimals tend to be used with nonmetric units and metric units, respectively. We discuss the importance of the ontological distinction between continuous and discrete entities to mathematical cognition, the role of symbolic notations, and possible implications of our findings for the teaching of rational numbers.
Ricci time in the Lemaître-Tolman model and the block universe
NASA Astrophysics Data System (ADS)
Elmahalawy, Yasser; Hellaby, Charles; Ellis, George F. R.
2015-10-01
It is common to think of our universe according to the "block universe" concept, which says that spacetime consists of many "stacked" three-surfaces, labelled by some kind of proper time, S. Standard ideas do not distinguish past and future, but Ellis' "evolving block universe" tries to make a fundamental distinction. One proposal for this proper time is the proper time measured along the timelike Ricci eigenlines, starting from the big bang. This work investigates the shape of the "Ricci time" surfaces relative to the null surfaces. We use the Lemaître-Tolman metric as our inhomogeneous spacetime model, and we find the necessary and sufficient conditions for these constant S surfaces to be spacelike or timelike. Furthermore, we look at the effect of strong gravity domains by determining the location of timelike S regions relative to apparent horizons. We find that constant Ricci time surfaces are always spacelike near the big bang, while at late times (near the crunch or the extreme far future), they are only timelike under special circumstances. At intermediate times, timelike S regions are common unless the variation of the bang time is restricted. The regions where these surfaces become timelike are often adjacent to apparent horizons, but always outside them, and in particular timelike S regions do not occur inside the horizons of black-hole-like models.
Disparity Changes in 370 Ma Devonian Fossils: The Signature of Ecological Dynamics?
Girard, Catherine; Renaud, Sabrina
2012-01-01
Early periods in Earth's history have seen a progressive increase in the complexity of ecosystems, but also dramatic crises decimating the biosphere. Such patterns are usually considered as large-scale changes among supra-specific groups, including morphological novelties, radiation, and extinctions. Nevertheless, at the same time, each species evolved by way of micro-evolutionary processes, extended over millions of years into the evolution of lineages. How these two evolutionary scales interacted is a challenging issue because it requires bridging a gap between scales of observation and processes. The present study aims at transferring a typical macro-evolutionary approach, namely disparity analysis, to the study of fine-scale evolutionary variations in order to decipher what processes actually drove the dynamics of diversity at a micro-evolutionary level. The Late Frasnian to Late Famennian period was selected because it is punctuated by two major macro-evolutionary crises, as well as a progressive diversification of the marine ecosystem. Disparity was estimated through this period on conodonts, tooth-like fossil remains of small eel-like predators that were part of the nektonic fauna. The study focused on the emblematic genus of the period, Palmatolepis. Strikingly, both crises affected an already impoverished Palmatolepis disparity, increasing risks of random extinction. The major disparity signal rather emerged as a cycle of increase and decrease in disparity during the inter-crises period. The diversification shortly followed the first crisis and might correspond to an opportunistic occupation of empty ecological niches. The subsequent oriented shrinking in morphospace occupation suggests that the ecological space available to Palmatolepis decreased through time, due to a combination of factors: deteriorating climate and the expansion of competitors and predators. Disparity changes of Palmatolepis thus reflect changes in the structure of the ecological space itself, which was prone to evolve during this ancient period when modern ecosystems were progressively shaped. PMID:22558396
Decimals Are Not Processed Automatically, Not Even as Being Smaller than One
ERIC Educational Resources Information Center
Kallai, Arava Y.; Tzelgov, Joseph
2014-01-01
Common fractions have been found to be processed intentionally but not automatically, which led to the conclusion that they are not represented holistically in long-term memory. However, decimals are more similar to natural numbers in their form and thus might be better candidates to be holistically represented by educated adults. To test this…
20 CFR 345.302 - Definition of terms and phrases used in experience-rating.
Code of Federal Regulations, 2010 CFR
2010-04-01
... for the current calendar year. This ratio is computed to four decimal places. (k) Pooled credit ratio... employer for the calendar year involved in the computation. This ratio is computed to four decimal places... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Definition of terms and phrases used in...
ERIC Educational Resources Information Center
Smith, Scott
2017-01-01
Understanding the relationship between fractions and decimals is an important step in developing an overall understanding of rational numbers. Research has demonstrated the feasibility of technology in the form of virtual manipulatives for facilitating students' meaningful understanding of rational number concepts. This exploratory dissertation…
ERIC Educational Resources Information Center
Durkin, Kelley; Shafto, Patrick
2016-01-01
The epistemic trust literature emphasizes that children's evaluations of informants' trustworthiness affect learning, but there is no evidence that epistemic trust affects learning in academic domains. The current study investigated how reliability affects decimal learning. Fourth and fifth graders (N = 122; M[subscript age] = 10.1 years)…
ERIC Educational Resources Information Center
Schneider, John H.
This hierarchical decimal classification of information related to cancer therapy in humans and animals (preceded by a few general categories) is a working draft of categories taken from an extensive classification of biomedical information. Because the classification identifies very small areas of cancer information, it can be used for precise…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, H.P.; Potter, Elinor
1971-03-01
This collection of mathematical data consists of two tables of decimal constants arranged according to size rather than function, a third table of integers from 1 to 1000, giving some of their properties, and a fourth table listing some infinite series arranged according to increasing size of the coefficients of the terms. The decimal values of Tables I and II are given to 20 decimal places.
Individualized Math Problems in Decimals. Oregon Vo-Tech Mathematics Problem Sets.
ERIC Educational Resources Information Center
Cosler, Norma, Ed.
This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume concern use of decimals and are related to the…
Fate of pathogens present in livestock wastes spread onto fescue plots.
Hutchison, Mike L; Walters, Lisa D; Moore, Tony; Thomas, D John I; Avery, Sheryl M
2005-02-01
Fecal wastes from a variety of farmed livestock were inoculated with livestock isolates of Escherichia coli O157, Listeria monocytogenes, Salmonella, Campylobacter jejuni, and Cryptosporidium parvum oocysts at levels representative of the levels found in naturally contaminated wastes. The wastes were subsequently spread onto a grass pasture, and the decline of each of the zoonotic agents was monitored over time. There were no significant differences among the decimal reduction times for the bacterial pathogens. The mean bacterial decimal reduction time was 1.94 days. A range of times between 8 and 31 days for a 1-log reduction in C. parvum levels was obtained, demonstrating that the protozoans were significantly more hardy than the bacteria. Oocyst recovery was more efficient from wastes with lower dry matter contents. The levels of most of the zoonotic agents had declined to below detectable levels by 64 days. However, for some waste types, 128 days was required for the complete decline of L. monocytogenes levels. We were unable to find significant differences between the rates of pathogen decline in liquid (slurry) and solid (farmyard manure) wastes, although concerns have been raised that increased slurry generation as a consequence of more intensive farming practices could lead to increased survival of zoonotic agents in the environment.
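The decimal reduction time (D-value) used above is the time for a one-log (90%) decline under an assumed log-linear die-off; a short sketch of how it is computed, with made-up counts:

```python
import math

def d_value(t_days: float, n0: float, nt: float) -> float:
    """Decimal reduction time: days per one log10 decline, assuming log-linear die-off."""
    return t_days / math.log10(n0 / nt)

# Hypothetical: 10^6 CFU/g falling to 10^3 CFU/g over 6 days
print(d_value(6.0, 1e6, 1e3))  # 2.0 days, comparable to the 1.94-day mean above
```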
NASA's Space Launch System: Moving Toward the Launch Pad
NASA Technical Reports Server (NTRS)
Creech, Stephen D.; May, Todd A.
2013-01-01
The National Aeronautics and Space Administration's (NASA's) Space Launch System (SLS) Program, managed at the Marshall Space Flight Center (MSFC), is making progress toward delivering a new capability for human space flight and scientific missions beyond Earth orbit. Designed with the goals of safety, affordability, and sustainability in mind, the SLS rocket will launch the Orion Multi-Purpose Crew Vehicle (MPCV), equipment, supplies, and major science missions for exploration and discovery. Supporting Orion's first autonomous flight to lunar orbit and back in 2017 and its first crewed flight in 2021, the SLS will evolve into the most powerful launch vehicle ever flown via an upgrade approach that will provide building blocks for future space exploration. NASA is working to deliver this new capability in an austere economic climate, a fact that has inspired the SLS team to find innovative solutions to the challenges of designing, developing, fielding, and operating the largest rocket in history. This paper will summarize the planned capabilities of the vehicle, the progress the SLS Program has made in the 2 years since the Agency formally announced its architecture in September 2011, the path it is following to reach the launch pad in 2017 and then to evolve the 70 metric ton (t) initial lift capability to 130-t lift capability after 2021. The paper will explain how, to meet the challenge of a flat funding curve, an architecture was chosen that combines the use and enhancement of legacy systems and technology with strategic new developments that will evolve the launch vehicle's capabilities. This approach reduces the time and cost of delivering the initial 70 t Block 1 vehicle, and reduces the number of parallel development investments required to deliver the evolved 130 t Block 2 vehicle. The paper will outline the milestones the program has already reached, from developmental milestones such as the manufacture of the first flight hardware, to life-cycle milestones such as the vehicle's Preliminary Design Review (PDR). The paper will also discuss the remaining challenges both in delivering the 70-t vehicle and in evolving its capabilities to the 130-t vehicle, and how NASA plans to accomplish these goals. As this paper will explain, SLS is making measurable progress toward becoming a global infrastructure asset for robotic and human scouts of all nations by harnessing business and technological innovations to deliver sustainable solutions for space exploration.
Urciuolo, F; Garziano, A; Imparato, G; Panzetta, V; Fusco, S; Casale, C; Netti, P A
2016-01-29
The fabrication of functional tissue units is one of the major challenges in tissue engineering due to their in vitro use in tissue-on-chip systems, as well as in modular tissue engineering for the construction of macrotissue analogs. In this work, we aim to engineer dermal tissue micromodules obtained by culturing human dermal fibroblasts in porous gelatine microscaffolds. We proved that such stromal cells coupled with gelatine microscaffolds are able to synthesize and assemble an endogenous extracellular matrix (ECM), resulting in tissue micromodules which evolve their biophysical features over time. In particular, we found a time-dependent variation of oxygen consumption kinetic parameters, of newly formed ECM stiffness and of micromodule self-aggregation properties. As a consequence, when used as building blocks to fabricate larger tissues, the initial state of the tissue micromodules strongly affects the ECM organization and maturation in the final macrotissue. Such results highlight the role of the micromodule properties in controlling the formation of three-dimensional macrotissues in vitro, defining an innovative design criterion for selecting tissue-building blocks for modular tissue engineering.
ERIC Educational Resources Information Center
Schneider, John H.
This is a hierarchical decimal classification of information related to cancer biochemistry, to host-tumor interactions (including cancer immunology), and to occurrence of cancer in special types of animals and plants. It is a working draft of categories taken from an extensive classification of many fields of biomedical information. Because the…
Assessment of the Knowledge of the Decimal Number System Exhibited by Students with Down Syndrome
ERIC Educational Resources Information Center
Noda, Aurelia; Bruno, Alicia
2017-01-01
This paper presents an assessment of the understanding of the decimal numeral system in students with Down Syndrome (DS). We followed a methodology based on a descriptive case study involving six students with DS. We used a framework of four constructs (counting, grouping, partitioning and numerical relationships) and five levels of thinking for…
ERIC Educational Resources Information Center
Bonotto, C.
1993-01-01
Examined fifth-grade students' survey responses to investigate incorrect rules that derive from children's efforts to interpret decimals as integers or as fractions. Regarding fractions, difficulties arise because only the whole-part approach to fractions is presented in elementary school. (Author/MDH)
Identify Fractions and Decimals on a Number Line
ERIC Educational Resources Information Center
Shaughnessy, Meghan M.
2011-01-01
Tasks that ask students to label rational number points on a number line are common not only in curricula in the upper elementary school grades but also on state assessments. Such tasks target foundational rational number concepts: A fraction (or a decimal) is more than a shaded part of an area, a part of a pizza, or a representation using…
ERIC Educational Resources Information Center
Brousseau, Guy; Brousseau, Nadine; Warfield, Virginia
2008-01-01
In the late seventies, Guy Brousseau set himself the goal of verifying experimentally a theory he had been building up for a number of years. The theory, consistent with what was later named (non-radical) constructivism, was that children, in suitable carefully arranged circumstances, can build their own knowledge of mathematics. The experiment,…
Rationals and Decimals as Required in the School Curriculum Part 2: From Rationals to Decimals
ERIC Educational Resources Information Center
Brousseau, Guy; Brousseau, Nadine; Warfield, Virginia
2007-01-01
In the late seventies, Guy Brousseau set himself the goal of verifying experimentally a theory he had been building up for a number of years. The theory, consistent with what was later named (non-radical) constructivism, was that children, in suitable carefully arranged circumstances, can build their own knowledge of mathematics. The experiment,…
49 CFR 565.15 - Content requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... = 8 J = 1 K = 2 L = 3 M = 4 N = 5 P = 7 R = 9 S = 2 T = 3 U = 4 V = 5 W = 6 X = 7 Y = 8 Z = 9 (2... Decimal Equivalent Remainder as reflected in Table V. All Decimal Equivalent Remainders in Table V are... in VIN position nine (9). Table V—Ninth Position Check Digit Values [Rounded to the nearest...
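The excerpt above is the transliteration table used by the standard VIN ninth-position check-digit computation; a sketch of that scheme (the weights and mod-11 rule are the standard ones, supplied here from general knowledge rather than recovered from the truncated excerpt):

```python
# Transliteration of VIN characters to numeric values, matching the table above
TRANSLIT = {**{str(d): d for d in range(10)},
            **dict(zip("ABCDEFGH", range(1, 9))),
            **dict(zip("JKLMNPR", [1, 2, 3, 4, 5, 7, 9])),
            **dict(zip("STUVWXYZ", range(2, 10)))}
# Position weights; position nine (the check digit itself) carries weight 0
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def check_digit(vin: str) -> str:
    """Weighted sum of transliterated characters, mod 11; a remainder of 10 is 'X'."""
    total = sum(TRANSLIT[c] * w for c, w in zip(vin.upper(), WEIGHTS))
    r = total % 11
    return "X" if r == 10 else str(r)

print(check_digit("1M8GDM9AXKP042788"))  # 'X' -- matches position nine of this known-valid VIN
```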
ERIC Educational Resources Information Center
Varma, Sashank; Karl, Stacy R.
2013-01-01
Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions--rational numbers between 0 and 1 expressed in the place-value symbol system. The results…
ERIC Educational Resources Information Center
Rimbey, Kimberly
2007-01-01
Created by teachers for teachers, the Math Academy tools and activities included in this booklet were designed to create hands-on activities and a fun learning environment for the teaching of mathematics to the students. This booklet contains the "Math Academy--Dining Out! Explorations in Fractions, Decimals, and Percents," which teachers can use…
NASA Astrophysics Data System (ADS)
Branellec, Matthieu; Nivière, Bertrand; Callot, Jean-Paul; Ringenbach, Jean-Claude
2015-04-01
The Malargüe fold and thrust belt (MFTB) and the San Rafael Block (SRB) are located at the northern termination of the Neuquén basin in Argentina. This basin is a wide inverted intracratonic sag basin with a polyphased evolution controlled at large scale by the dynamics of the Pacific subduction. By late Triassic times, narrow rift basins developed and evolved toward a sag basin from the middle Jurassic to the late Cretaceous. From that time on, compression at the trench resulted in various shortening pulses in the back-arc area. Here we aim to analyze the Andean system at 35°S by comparing the Miocene structuration in the MFTB with the current deformation along the eastern border of the San Rafael Block. The main structuration stage in the MFTB occurred in Miocene times (15 to 10 Ma), producing the principal uplift of the Andean Cordillera. As shown by new structural cross sections, Triassic-early Jurassic rift border faults localized the Miocene compressive tectonics. Deformation is compartmentalized and does not exhibit the classical propagation of a homogeneous deformation sequence expected from critical taper theory. Several intramontane basins in the hangingwall of the main thrusts progressively disconnected from the foreland. In addition, active tectonics has been described at the front of the MFTB, attesting to ongoing compression in this area. 100 km farther to the east, the San Rafael Block is separated from the MFTB by the Rio Grande basin. The SRB is mostly composed of Paleozoic terranes and Triassic rift-related rocks, overlain by late Miocene synorogenic deposits. The SRB is currently being uplifted along its eastern border along several active faults. These faults have clear morphologic signatures in Quaternary alluvial terraces and folded Pleistocene lavas. As in the MFTB, the active deformation remains localized by structural inheritance. The Andean system is thus evolving as an atypical orogenic wedge, partly by frontal accretion at the front of the belt and by migration and localization of strain far from the front, leading to crustal block reactivation.
10 CFR 71.59 - Standards for arrays of fissile material packages.
Code of Federal Regulations, 2012 CFR
2012-01-01
... the stack by water: (1) Five times “N” undamaged packages with nothing between the packages would be.... The value of the CSI may be zero provided that an unlimited number of packages are subcritical, such...) of this section. Any CSI greater than zero must be rounded up to the first decimal place. (c) For a...
10 CFR 71.59 - Standards for arrays of fissile material packages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... the stack by water: (1) Five times “N” undamaged packages with nothing between the packages would be.... The value of the CSI may be zero provided that an unlimited number of packages are subcritical, such...) of this section. Any CSI greater than zero must be rounded up to the first decimal place. (c) For a...
10 CFR 71.59 - Standards for arrays of fissile material packages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... the stack by water: (1) Five times “N” undamaged packages with nothing between the packages would be.... The value of the CSI may be zero provided that an unlimited number of packages are subcritical, such...) of this section. Any CSI greater than zero must be rounded up to the first decimal place. (c) For a...
Independent Research and Independent Exploratory Development Programs: FY92 Annual Report
1993-04-01
transform of an ERP provides a record of ERP energy at different times and scales. It does this by producing a set of filtered time series at different...that the coefficients at any level are a series that measures energy within the bandwidth of that level as a function of time. For this reason it is...1 to 25 Hz, and decimated to a final sampling rate of 50 Hz. The prestimulus baseline (200 ms) was adjusted to zero to remove any DC offset
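Band-limiting followed by decimation, as described above, is a routine signal-processing step; a minimal scipy sketch on synthetic data, assuming a hypothetical 250 Hz original sampling rate:

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

fs = 250.0                        # assumed original sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # synthetic "ERP"

# Band-limit to 1-25 Hz, as in the report
b, a = butter(4, [1.0 / (fs / 2), 25.0 / (fs / 2)], btype="band")
x_filt = filtfilt(b, a, x)

# Decimate 250 Hz -> 50 Hz (factor 5); decimate() applies its own anti-alias filter
x_dec = decimate(x_filt, 5)
print(x.size, "->", x_dec.size)   # 500 -> 100 samples
```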
NASA's Space Launch System: SmallSat Deployment to Deep Space
NASA Technical Reports Server (NTRS)
Robinson, Kimberly F.; Creech, Stephen D.
2017-01-01
Leveraging the significant capability it offers for human exploration and flagship science missions, NASA's Space Launch System (SLS) also provides a unique opportunity for lower-cost deep-space science in the form of small-satellite secondary payloads. Current plans call for such opportunities to begin with the rocket's first flight; a launch of the vehicle's Block 1 configuration, capable of delivering 70 metric tons (t) to Low Earth Orbit (LEO), which will send the Orion crew vehicle around the moon and return it to Earth. On that flight, SLS will also deploy 13 CubeSat-class payloads to deep-space destinations. These secondary payloads will include not only NASA research, but also spacecraft from industry and international partners and academia. The payloads also represent a variety of disciplines including, but not limited to, studies of the moon, Earth, sun, and asteroids. While the SLS Program is making significant progress toward that first launch, preparations are already under way for the second, which will see the booster evolve to its more-capable Block 1B configuration, able to deliver 105t to LEO. That configuration will have the capability to carry large payloads co-manifested with the Orion spacecraft, or to utilize an 8.4-meter (m) fairing to carry payloads several times larger than are currently possible. The Block 1B vehicle will be the workhorse of the Proving Ground phase of NASA's deep-space exploration plans, developing and testing the systems and capabilities necessary for human missions into deep space and ultimately to Mars. Ultimately, the vehicle will evolve to its full Block 2 configuration, with a LEO capability of 130 metric tons. Both the Block 1B and Block 2 versions of the vehicle will be able to carry larger secondary payloads than the Block 1 configuration, creating even more opportunities for affordable scientific exploration of deep space. This paper will outline the progress being made toward flying smallsats on the first flight of SLS, and discuss future opportunities for smallsats on subsequent flights.
ERIC Educational Resources Information Center
Perreault, Jean M., Ed.
Several factors are involved in the decision to reclassify library collections and several problems and choices must be faced. The discussion of four classification schemes (Dewey Decimal, Library of Congress, Library of Congress subject-headings and Universal Decimal Classification) involved in the choices concerns their structure, currency,…
ERIC Educational Resources Information Center
Atherton, Pauline; And Others
A single issue of Nuclear Science Abstracts, containing about 2,300 abstracts, was indexed by Universal Decimal Classification (UDC) using the Special Subject Edition of UDC for Nuclear Science and Technology. The descriptive cataloging and UDC-indexing records formed a computer-stored data base. A systematic random sample of 500 additional…
ERIC Educational Resources Information Center
McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.
2015-01-01
Erroneous examples--step-by-step problem solutions with one or more errors for students to find and fix--hold great potential to help students learn. In this study, which is a replication of a prior study (Adams et al. 2014), but with a much larger population (390 vs. 208), middle school students learned about decimals either by working with…
ERIC Educational Resources Information Center
Tempier, Frédérick
2016-01-01
Many studies have shown the difficulties of learning and teaching the decimal number system for whole numbers. In the case of numbers bigger than one hundred, complexity is partly due to the multitude of possible relationships between units. This study aimed to develop the conditions for a resource that can help teachers to enhance their teaching…
ERIC Educational Resources Information Center
Markey, Karen; Demeyer, Anh N.
This research project focuses on the implementation and testing of the Dewey Decimal Classification (DDC) system as an online searcher's tool for subject access, browsing, and display in an online catalog. The research project comprises 12 activities. The three interim reports in this document cover the first seven of these activities: (1) obtain…
A New Climate: The Time is Now for Colleges to Rally around the Sustainability Movement
ERIC Educational Resources Information Center
Stephens, Rusty
2010-01-01
Environmentalist Lester Brown famously likened the declining natural environment to a global Ponzi scheme, where the decimation of the planet's natural asset base has yielded high, yet unsustainable economic returns. Brown cited a range of examples, from population growth to food security to air pollution. His claims took on new urgency this year…
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
Stochastic processes on multiple scales: averaging, decimation and beyond
NASA Astrophysics Data System (ADS)
Bo, Stefano; Celani, Antonio
The recent advances in handling microscopic systems are increasingly motivating stochastic modeling in a large number of physical, chemical and biological phenomena. Relevant processes often take place on widely separated time scales. In order to simplify the description, one usually focuses on the slower degrees of freedom and only the average effect of the fast ones is retained. It is then fundamental to eliminate such fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. We shall present how this can be done by either decimating or coarse-graining the fast processes and discuss applications to physical, biological and chemical examples. With the same tools we will address the fate of functionals of the stochastic trajectories (such as residence times, counting statistics, fluxes, entropy production, etc.) upon elimination of the fast variables. In general, for functionals, such elimination can present additional difficulties. In some cases, it is not possible to express them in terms of the effective trajectories on the slow degrees of freedom but additional details of the fast processes must be retained. We will focus on such cases and show how naive procedures can lead to inconsistent results.
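As a toy illustration of the averaging idea (our own construction, far simpler than the cases discussed in the abstract), one can simulate a slow variable driven by a fast relaxing one and compare it with the effective dynamics obtained by replacing the fast variable with its equilibrium mean:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, eps = 1e-3, 10.0, 0.01   # eps sets the time-scale separation
n = int(T / dt)

# Full two-scale system: slow x is driven by fast y, which relaxes toward -x
# on a time scale eps and fluctuates with noise strength D = 0.01.
x, y = 1.0, 0.0
for _ in range(n):
    x += y * dt
    y += -(y + x) * dt / eps + np.sqrt(2 * 0.01 * dt / eps) * rng.normal()

# Effective dynamics after eliminating y: replace it by its fast-equilibrium
# mean, giving dx = -x dt.
x_eff = 1.0
for _ in range(n):
    x_eff += -x_eff * dt

print(x, x_eff)  # the slow trajectories agree up to small residual fluctuations
```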
ERIC Educational Resources Information Center
Markey, Karen; Demeyer, Anh N.
In this research project, subject terms from the Dewey Decimal Classification (DDC) Schedules and Relative Index were incorporated into an online catalog as searcher's tools for subject access, browsing, and display. Four features of the DDC were employed to help searchers browse for and match their own subject terms with the online catalog's…
ERIC Educational Resources Information Center
Pateman, Neil A., Ed; Dougherty, Barbara J., Ed.; Zilliox, Joseph T., Ed.
2003-01-01
This volume of the 27th International Group for the Psychology of Mathematics Education Conference includes the following research reports: (1) Improving Decimal Number Conception by Transfer from Fractions to Decimals (Irita Peled and Juhaina Awawdy Shahbari); (2) The Development of Student Teachers' Efficacy Beliefs in Mathematics during…
Evolution of sequence-defined highly functionalized nucleic acid polymers
NASA Astrophysics Data System (ADS)
Chen, Zhen; Lichtor, Phillip A.; Berliner, Adrian P.; Chen, Jonathan C.; Liu, David R.
2018-03-01
The evolution of sequence-defined synthetic polymers made of building blocks beyond those compatible with polymerase enzymes or the ribosome has the potential to generate new classes of receptors, catalysts and materials. Here we describe a ligase-mediated DNA-templated polymerization and in vitro selection system to evolve highly functionalized nucleic acid polymers (HFNAPs) made from 32 building blocks that contain eight chemically diverse side chains on a DNA backbone. Through iterated cycles of polymer translation, selection and reverse translation, we discovered HFNAPs that bind proprotein convertase subtilisin/kexin type 9 (PCSK9) and interleukin-6, two protein targets implicated in human diseases. Mutation and reselection of an active PCSK9-binding polymer yielded evolved polymers with high affinity (KD = 3 nM). This evolved polymer potently inhibited the binding between PCSK9 and the low-density lipoprotein receptor. Structure-activity relationship studies revealed that specific side chains at defined positions in the polymers are required for binding to their respective targets. Our findings expand the chemical space of evolvable polymers to include densely functionalized nucleic acids with diverse, researcher-defined chemical repertoires.
NASA Astrophysics Data System (ADS)
Bhattachryya, Arunava; Kumar Gayen, Dilip; Chattopadhyay, Tanay
2013-04-01
An all-optical 4-bit binary to binary-coded-decimal (BCD) converter is proposed and described in this manuscript, based on semiconductor optical amplifier (SOA)-assisted Sagnac interferometric switches. The paper describes an all-optical conversion scheme using a set of all-optical switches. BCD is common in computer systems that display numeric values, especially those consisting solely of digital logic with no microprocessor. In many personal computers, the basic input/output system (BIOS) keeps the date and time in BCD format. The operation of the circuit is studied theoretically and analyzed through numerical simulations. The model accounts for the SOA small-signal gain, linewidth enhancement factor and carrier lifetime, the switching pulse energy and width, and the Sagnac loop asymmetry. By undertaking a detailed numerical simulation, the influence of these key parameters on the metrics that determine the quality of switching is thoroughly investigated.
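In software, the same 4-bit binary to BCD conversion is conventionally done with the shift-and-add-3 ("double dabble") algorithm; a reference sketch (a generic software analogue, not the optical scheme of the paper):

```python
def binary_to_bcd(value: int, bits: int = 4) -> int:
    """Shift-and-add-3 ("double dabble"): returns packed BCD, one digit per nibble."""
    bcd = 0
    for i in range(bits - 1, -1, -1):
        # Add 3 to every BCD nibble that is 5 or more before shifting
        nibble = 0
        while (bcd >> nibble) != 0:
            if (bcd >> nibble) & 0xF >= 5:
                bcd += 3 << nibble
            nibble += 4
        # Shift left, bringing in the next input bit; carries propagate between nibbles
        bcd = (bcd << 1) | ((value >> i) & 1)
    return bcd

for n in range(16):
    print(n, format(binary_to_bcd(n), "02x"))  # e.g. 13 -> '13' (BCD nibbles 1 and 3)
```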
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because it uses decimal arithmetic operations instead of bit operations, Manhattan hashing is more time-consuming, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. Experiments show that, with a reasonable growth in memory use, FBC achieves an average speedup of more than 80% without any loss of search accuracy compared to state-of-the-art Manhattan hashing methods.
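One concrete way a flat binary code can make XOR reproduce Manhattan distance is the unary (thermometer) encoding of quantization levels; a toy sketch of that idea (our own illustration, not necessarily the paper's exact encoding):

```python
def thermometer(level: int) -> int:
    """Unary/thermometer code: 'level' ones, e.g. 0 -> 000, 1 -> 001, 2 -> 011, 3 -> 111."""
    return (1 << level) - 1

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")   # XOR + popcount: the fast bit operation

q = 4  # number of quantization levels per dimension
for k1 in range(q):
    for k2 in range(q):
        assert hamming(thermometer(k1), thermometer(k2)) == abs(k1 - k2)
print("XOR/popcount on thermometer codes reproduces Manhattan distance between levels")
```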
Tensor network method for reversible classical computation
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.
2018-03-01
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
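The counting-by-contraction encoding is easy to see on a single gate: write its truth table as a 0/1 tensor and contract to count assignments compatible with fixed boundary values. A toy numpy sketch (just the encoding, not the ICD algorithm itself):

```python
import numpy as np

# Truth table of an AND constraint as a 0/1 tensor: T[a, b, c] = 1 iff c == a AND b.
# Contracting over free legs counts the assignments compatible with fixed boundaries.
T = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        T[a, b, a & b] = 1.0

# Number of (a, b) inputs consistent with the output fixed to 0:
print(np.einsum("abc,c->", T, np.array([1.0, 0.0])))  # 3.0  (00, 01, 10)

# Full contraction with all legs free counts all valid rows of the truth table:
print(np.einsum("abc->", T))  # 4.0
```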
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
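The offset-based reading pattern the slides describe looks roughly like the following Python sketch; the byte layout shown here is hypothetical, not the actual Keyence vk4 format:

```python
import struct

def read_u32(f):
    """Read one little-endian unsigned 32-bit integer at the current position."""
    return struct.unpack("<I", f.read(4))[0]

# Hypothetical layout: a header offset table that points at the height image block.
with open("scan.vk4", "rb") as f:          # file name is illustrative
    f.seek(16)                             # assumed start of an offset table
    height_offset = read_u32(f)            # assumed offset of the height image block
    f.seek(height_offset)
    width, height = read_u32(f), read_u32(f)
    pixels = struct.unpack(f"<{width * height}I", f.read(4 * width * height))
```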
Rapid short-term cooling following the Chicxulub impact at the Cretaceous–Paleogene boundary
Vellekoop, Johan; Sluijs, Appy; Smit, Jan; Schouten, Stefan; Weijers, Johan W. H.; Sinninghe Damsté, Jaap S.; Brinkhuis, Henk
2014-01-01
The mass extinction at the Cretaceous–Paleogene boundary, ∼66 Ma, is thought to be caused by the impact of an asteroid at Chicxulub, present-day Mexico. Although the precise mechanisms that led to this mass extinction remain enigmatic, most postulated scenarios involve a short-lived global cooling, a so-called “impact winter” phase. Here we document a major decline in sea surface temperature during the first months to decades following the impact event, using TEX86 paleothermometry of sediments from the Brazos River section, Texas. We interpret this cold spell to reflect, to our knowledge, the first direct evidence for the effects of the formation of dust and aerosols by the impact and their injection in the stratosphere, blocking incoming solar radiation. This impact winter was likely a major driver of mass extinction because of the resulting global decimation of marine and continental photosynthesis. PMID:24821785
Entropy of finite random binary sequences with weak long-range correlations.
Melnik, S S; Usatenko, O V
2014-11-01
We study the N-step binary stationary ergodic Markov chain and analyze its differential entropy. Supposing that the correlations are weak we express the conditional probability function of the chain through the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses the two-point correlators instead of the block probability, it makes it possible to calculate the entropy of strings at much longer distances than using standard methods. A fluctuation contribution to the entropy due to finiteness of random chains is examined. This contribution can be of the same order as its regular part even at the relatively short lengths of subsequences. A self-similar structure of entropy with respect to the decimation transformations is revealed for some specific forms of the pair correlation function. Application of the theory to the DNA sequence of the R3 chromosome of Drosophila melanogaster is presented.
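For the simplest first-order binary Markov chain, both the entropy rate and the behaviour under decimation can be computed in a few lines, which gives a feel for the self-similarity mentioned above; a sketch (first-order only, much simpler than the paper's N-step setting):

```python
import numpy as np

def entropy_rate(P: np.ndarray) -> float:
    """Entropy rate (bits/symbol) of a 2-state Markov chain with positive transition matrix P."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    return float(-(pi[:, None] * P * np.log2(P)).sum())

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Decimation: keeping every 2nd symbol of a first-order chain yields a Markov
# chain with transition matrix P @ P, so its entropy rate is computable the same way.
print(entropy_rate(P), entropy_rate(P @ P))
```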
Toward a Global Bundle Adjustment of SPOT 5 - HRS Images
NASA Astrophysics Data System (ADS)
Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.
2012-07-01
The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with two forward- and backward-looking telescopes observing the Earth at an angle of 20° ahead of and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacity for bundle adjustment of SPOT 5 - HRS spatial images has largely improved. Today a global single block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: the first computed blocks were composed of only 40 images, then bigger blocks were computed. Finally only one global block is now computed. At the same time calculation tools have improved: for example the adjustment of 2,000 images of North Africa takes about 2 minutes whereas 8 hours were needed two years ago. To reach such a result a new independent software package was developed to compute fast and efficient bundle adjustments. At the same time equipment - GCPs (Ground Control Points) and tie points - and techniques have also evolved over the last 10 years. Studies were made to derive recommendations about the equipment needed to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to a few meters, whereas non-adjusted images are only accurate to about 15 m. This paper will describe the methods used in IGN Espace to compute a global single block composed of almost 20,000 HRS images, 500 GCPs and several million tie points in reasonable calculation time. Such a block has many advantages. Because the global block is unique, it becomes easier to manage the history and the different evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages historically calculated from different blocks. No extrapolation far from GCPs at the limits of images is done anymore. Using the global block as a reference will allow new images from other sources to be easily located on this reference.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the specific security being auctioned; (ii) For which the security being auctioned is one of several... increments. The third decimal must be either a zero or a five, for example, 5.320 or 5.325. We will treat any missing decimals as zero, for example, a bid of 5.32 will be treated as 5.320. The rate bid may be a...
ERIC Educational Resources Information Center
Mandel, Carol A.
This paper presents a synthesis of the ideas and issues developed at a conference convened to review the results of the Dewey Decimal Classification Online Project and explore the potential for future use of the Dewey Decimal Classification (DDC) and Library of Congress Classification (LCC) schedules in online library catalogs. Conference…
ERIC Educational Resources Information Center
Dubinsky, Ed; Arnon, Ilana; Weller, Kirk
2013-01-01
In this article, we obtain a genetic decomposition of students' progress in developing an understanding of the decimal 0.9 and its relation to 1. The genetic decomposition appears to be valid for a high percentage of the study participants and suggests the possibility of a new stage in APOS Theory that would be the first substantial change in…
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
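A minimal sketch of the input-decimation idea with scikit-learn; the selection rule here (correlation of each feature with a one-vs-rest class indicator) is a simplified stand-in, not necessarily the authors' exact criterion:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

classifiers, subsets = [], []
for c in np.unique(y):
    # Rank features by |correlation| with a one-vs-rest indicator for class c
    target = (ytr == c).astype(float)
    corr = [abs(np.corrcoef(Xtr[:, j], target)[0, 1]) for j in range(Xtr.shape[1])]
    keep = np.argsort(corr)[-2:]           # keep the 2 most discriminative features
    subsets.append(keep)
    classifiers.append(LogisticRegression(max_iter=1000).fit(Xtr[:, keep], ytr))

# Ensemble prediction: average the class probabilities of the decimated base classifiers
probs = np.mean([clf.predict_proba(Xte[:, s]) for clf, s in zip(classifiers, subsets)], axis=0)
print("accuracy:", (probs.argmax(axis=1) == yte).mean())
```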
Heddle, William; Roberton, Gayle; Mahoney, Sarah; Walters, Lucie; Strasser, Sarah; Worley, Paul
2014-01-01
Longitudinal integrated clerkships (LIC) in the first major clinical year in medical student training have been demonstrated to be at least equivalent to and in some areas superior to the "traditional block rotation" (TBR). Flinders University School of Medicine is starting a pilot changing the traditional teaching at the major Academic Medical Centre from TBR to LIC (50% of students in other locations in the medical school already have a partial or full LIC programme). This paper summarises the expected challenges presented at the "Rendez-Vous" Conference in October 2012: (a) creating urgency, (b) training to be a clinician rather than imparting knowledge, (c) resistance to change. We discuss the unexpected challenges that have evolved since then: (a) difficulty finalising the precise schedule, (b) underestimating time requirements, (c) managing the change process inclusively. Transformation of a "block rotation" to "LIC" medical student education in a tertiary academic teaching hospital has many challenges, many of which can be anticipated, but some are unexpected.
Ophthalmic regional blocks: management, challenges, and solutions
Palte, Howard D
2015-01-01
In the past decade ophthalmic anesthesia has witnessed a major transformation. The sun has set on the landscape of ophthalmic procedures performed under general anesthesia at in-hospital settings. In its place a new dawn has ushered in the panorama of eye surgeries conducted under regional and topical anesthesia at specialty eye care centers. The impact of the burgeoning geriatric population is that an increasing number of elderly patients will present for eye surgery. In order to accommodate increased patient volumes and simultaneously satisfy administrative initiatives directed at economic frugality, administrators will seek assistance from anesthesia providers in adopting measures that enhance operating room efficiency. The performance of eye blocks in a holding suite meets many of these objectives. Unfortunately, most practicing anesthesiologists resist performing ophthalmic regional blocks because they lack formal training. In future, anesthesiologists will need to block eyes and manage common medical conditions because economic pressures will eliminate routine preoperative testing. This review addresses a variety of topical issues in ophthalmic anesthesia with special emphasis on cannula and needle-based blocks and the new-generation antithrombotic agents. In a constantly evolving arena, the sub-Tenon’s block has gained popularity while the deep angulated intraconal (retrobulbar) block has been largely superseded by the shallower extraconal (peribulbar) approach. Improvements in surgical technique have also impacted anesthetic practice. For example, phacoemulsification techniques facilitate the conduct of cataract surgery under topical anesthesia, and suture-free vitrectomy ports may cause venous air embolism during air/fluid exchange. Hyaluronidase is a useful adjuvant because it promotes local anesthetic diffusion and hastens block onset time but it is allergenic. Ultrasound-guided eye blocks afford real-time visualization of needle position and local anesthetic spread. An advantage of sonic guidance is that it may eliminate the hazard of globe perforation by identifying abnormal anatomy, such as staphyloma. PMID:26316814
ERIC Educational Resources Information Center
Cohen, Dale J.
2010-01-01
Participants' reaction times (RTs) in numerical judgment tasks in which one must determine which of 2 numbers is greater generally follow a monotonically decreasing function of the numerical distance between the two presented numbers. Here, I present 3 experiments in which the relative influences of numerical distance and physical similarity are…
Analog/digital pH meter system I.C.
NASA Technical Reports Server (NTRS)
Vincent, Paul; Park, Jea
1992-01-01
The project utilizes design automation software tools to design, simulate, and fabricate a pH meter integrated circuit (IC) system, including a successive-approximation-type seven-bit analog-to-digital converter circuit, using a 1.25 micron N-Well CMOS MOSIS process. The input voltage ranges from 0.5 to 1.0 V, derived from a special type of pH sensor, and the output is a three-digit decimal display of pH with one decimal point.
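A successive-approximation converter decides one bit per comparison against a DAC level built from the bits already decided; a behavioral sketch of a seven-bit converter over the stated 0.5-1.0 V range (reference levels assumed from the abstract):

```python
def sar_adc(vin: float, vmin: float = 0.5, vmax: float = 1.0, bits: int = 7) -> int:
    """Behavioral model of a successive-approximation ADC: one bit per comparison."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)                          # tentatively set this bit
        vdac = vmin + (vmax - vmin) * trial / (2 ** bits)  # DAC output for trial code
        if vin >= vdac:                                  # keep the bit if input is above
            code = trial
    return code

print(sar_adc(0.75))   # mid-scale input -> code 64 (1000000 binary)
```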
Varma, Sashank; Karl, Stacy R
2013-05-01
Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions--rational numbers between 0 and 1 expressed in the place-value symbol system. The results demonstrate that proportions are represented as discrete structures and processed in parallel. There was a semantic interference effect: When understanding a proportion expression (e.g., "0.29"), both the correct proportion referent (e.g., 0.29) and the incorrect natural number referent (e.g., 29) corresponding to the visually similar natural number expression (e.g., "29") are accessed in parallel, and when these referents lead to conflicting judgments, performance slows. There was also a syntactic interference effect, generalizing the unit-decade compatibility effect for natural numbers: When comparing two proportions, their tenths and hundredths components are processed in parallel, and when the different components lead to conflicting judgments, performance slows. The results also reveal that zero decimals--proportions ending in zero--serve multiple cognitive functions, including eliminating semantic interference and speeding processing. The current research also extends the distance, semantic congruence, and SNARC effects from natural numbers to decimal proportions. These findings inform how people understand the place-value symbol system, and the mental implementation of mathematical symbol systems more generally.
DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J
2015-05-01
To understand the development of mathematical cognition and to improve instructional practices, it is critical to identify early predictors of difficulty in learning complex mathematical topics such as algebra. Recent work has shown that performance with fractions on a number line estimation task predicts algebra performance, whereas performance with whole numbers on similar estimation tasks does not. We sought to distinguish more specific precursors to algebra by measuring multiple aspects of knowledge about rational numbers. Because fractions are the first numbers that are relational expressions to which students are exposed, we investigated how understanding the relational bipartite format (a/b) of fractions might connect to later algebra performance. We presented middle school students with a battery of tests designed to measure relational understanding of fractions, procedural knowledge of fractions, and placement of fractions, decimals, and whole numbers onto number lines as well as algebra performance. Multiple regression analyses revealed that the best predictors of algebra performance were measures of relational fraction knowledge and ability to place decimals (not fractions or whole numbers) onto number lines. These findings suggest that at least two specific components of knowledge about rational numbers--relational understanding (best captured by fractions) and grasp of unidimensional magnitude (best captured by decimals)--can be linked to early success with algebraic expressions.
Spinelli, Ana Cláudia N F; Sant'Ana, Anderson S; Pacheco-Sanchez, Cristiana P; Massaguer, Pilar R
2010-02-28
In this study, the influence of the hot-fill water-spray-cooling process after continuous pasteurization on the number of decimal reductions (gamma) and growth parameters (lag time, lambda; ratio N_f/N_0, kappa; maximum growth rate, mu) of Alicyclobacillus acidoterrestris CRA 7152 in orange juice stored at 35 degrees C was investigated. Two different inoculum levels of A. acidoterrestris CRA 7152 (10^2 and 10^3 spores/mL) in orange juice (11 degrees Brix, pH 3.7) and a Microthermics UHT-HTST pilot plant were used to simulate industrial conditions. Results have shown that regardless of the inoculum level (10^2 or 10^3 spores/mL), the pasteurization processes were unable to cause even 1 gamma. Predictive modeling using the Baranyi model showed that only kappa and the time to reach 10^4 spores/mL (t_10^4, the time to juice spoilage) were affected by the spore inoculum used (p<0.05). It has been concluded that A. acidoterrestris was able to survive the hot-fill process and to grow and spoil orange juice in 5-6 days when the final storage temperature was 35 degrees C.
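Given a lag time and a maximum growth rate, the time for survivors to reach the 10^4 spores/mL spoilage threshold follows from simple exponential-growth arithmetic; a sketch with hypothetical parameter values (not the fitted values of the study):

```python
import math

def time_to_threshold(n0: float, n_threshold: float, lag_h: float, mu_per_h: float) -> float:
    """Hours to reach a spoilage threshold: lag phase plus exponential growth."""
    return lag_h + math.log(n_threshold / n0) / mu_per_h

# Hypothetical: 10 surviving spores/mL, 24 h lag, mu = 0.05 per hour
t = time_to_threshold(10, 1e4, 24.0, 0.05)
print(t / 24)  # ~6.8 days, the same order as the 5-6 day spoilage times reported
```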
NASA Astrophysics Data System (ADS)
Clarke, A. P.; Vannucchi, P.; Ougier-Simonin, A.; Morgan, J. P.
2017-12-01
Subduction zone interface layers are often conceived to be heterogeneous, polyrheological zones analogous to exhumed mélanges. Mélanges typically contain mechanically strong blocks within a weaker matrix. However, our geomechanical study of the Osa Mélange, SW Costa Rica shows that this mélange contains blocks of altered basalt which are now weaker in friction than their surrounding indurated volcanoclastic matrix. Triaxial deformation experiments were conducted on samples of both the altered basalt blocks and the indurated volcanoclastic matrix at confining pressures of 60 and 120 MPa. These revealed that the volcanoclastic matrix has a strength 7.5 times that of the altered basalt at 60 MPa and 4 times at 120 MPa, with the altered basalt experiencing multi-stage failure. The inverted strength relationship between weaker blocks and stronger matrix evolved during subduction and diagenesis of the mélange unit by dewatering, compaction and diagenesis of the matrix and cataclastic brecciation and hydrothermal alteration of the basalt blocks. During the evolution of this material, the matrix progressively indurated until its plastic yield stress became greater than the brittle yield stress of the blocks. At this point, the typical rheological relationship found within mélanges inverts, and mélange blocks can fail seismically as the weakest links along the subduction plate interface. The Osa Mélange is currently in the forearc of the erosive Middle America Trench and is being incorporated into the subduction zone interface at the updip limit of seismogenesis. The presence of altered basalt blocks acting as weak inclusions within this rock unit weakens the mélange as a whole rock mass. Seismic fractures can nucleate at or within these weak inclusions and the size of the block may limit the size of initial microseismic rock failure. However, when fractures are able to bridge across the matrix between blocks, significantly larger rupture areas may be possible. While this mechanism is a promising candidate for the updip limit of the unusually shallow seismogenic zone beneath Osa, it remains to be seen whether analogous evolutionary strength-inversions control the updip limit of other subduction seismogenic zones.
ERIC Educational Resources Information Center
Hu, Yi; Ericsson, K. Anders; Yang, Dan; Lu, Chao
2009-01-01
Over the last century many individuals with exceptional memory have been studied and tested in the laboratory. This article studies Chao Lu, who set a Guinness World Record by memorizing 67,890 decimals of pi. Chao Lu's superior self-paced memorization of digits is shown through analyses of study times and verbal reports to be mediated by mnemonic…
G.A.M.E.: GPU-accelerated mixture elucidator.
Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J
2017-09-15
GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.
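To make the decimal-digit scaling concrete, here is a minimal Python sketch of a pseudo-polynomial subset-sum DP of the kind described (an illustration only, not G.A.M.E.'s actual implementation): masses are scaled to integers by 10^d for d retained decimal digits, so the DP table, and hence the running time, grows roughly tenfold per extra digit. The masses below are illustrative values loosely based on common substituent masses.

def reachable(sidechain_masses, target_mass, decimals):
    """Subset-sum DP: can some subset of sidechains reach target_mass?"""
    scale = 10 ** decimals                  # table size scales as 10**decimals
    target = round(target_mass * scale)
    sidechains = [round(m * scale) for m in sidechain_masses]
    hit = [False] * (target + 1)            # hit[s]: some subset sums to s
    hit[0] = True
    for m in sidechains:
        for s in range(target, m - 1, -1):  # reverse scan = 0/1 use per sidechain
            if hit[s - m]:
                hit[s] = True
    return hit[target]

# e.g. two of these three (hypothetical) sidechain masses sum to the residual mass
print(reachable([15.0235, 17.0027, 31.0184], 32.0262, 4))   # True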
Olsen, David A.; Amundson, Adam W.
2017-01-01
Background: Ipsilateral phrenic nerve blockade is a common adverse event after an interscalene brachial plexus block, which can result in respiratory deterioration in patients with preexisting pulmonary conditions. Diaphragm-sparing nerve block techniques are continuing to evolve, with the intention of providing satisfactory postoperative analgesia while minimizing hemidiaphragmatic paralysis after shoulder surgery. Case Report: We report the successful application of a combined ultrasound-guided infraclavicular brachial plexus block and suprascapular nerve block in a patient with a complicated pulmonary history undergoing a total shoulder replacement. Conclusion: This case report briefly reviews the important innervations to the shoulder joint and examines the utility of the infraclavicular brachial plexus block for postoperative pain management. PMID:29410922
NASA's Space Launch System: An Evolving Capability for Exploration
NASA Technical Reports Server (NTRS)
Creech, Stephen D.; Robinson, Kimberly F.
2016-01-01
Designed to meet the stringent requirements of human exploration missions into deep space and to Mars, NASA's Space Launch System (SLS) vehicle represents a unique new launch capability opening new opportunities for mission design. NASA is working to identify new ways to use SLS to enable new missions or mission profiles. In its initial Block 1 configuration, capable of launching 70 metric tons (t) to low Earth orbit (LEO), SLS is capable of not only propelling the Orion crew vehicle into cislunar space, but also delivering small satellites to deep space destinations. The evolved configurations of SLS, including both the 105 t Block 1B and the 130 t Block 2, offer opportunities for launching co-manifested payloads and a new class of secondary payloads with the Orion crew vehicle, and also offer the capability to carry 8.4- or 10-m payload fairings, larger than any contemporary launch vehicle, delivering unmatched mass-lift capability, payload volume, and C3.
Datta, Asit K; Munshi, Soumika
2002-03-10
Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented, by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of the operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks has simplified the architecture as well as reduced the cost and complexity.
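For readers unfamiliar with base -2, the sketch below shows the underlying number representation in plain Python (the paper, of course, performs these operations optically, not in software):

def to_negabinary(n):
    """Convert a (possibly negative) integer to its base -2 digit string."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:              # force each digit into {0, 1}
            n += 1
            r += 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s):
    """Evaluate a base -2 digit string back to a decimal integer."""
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

print(to_negabinary(19), to_negabinary(-7))        # 10111 1001
print(from_negabinary(to_negabinary(19 + (-7))))   # 12: round-trip consistency

Note that negative numbers need no separate sign bit, which is what makes single-architecture addition and subtraction attractive in this representation.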
[Resistance of Listeria monocytogenes to physical exposure].
Augustin, J C
1996-11-01
The resistance of Listeria monocytogenes to physical processing, particularly heat resistance and radioresistance, depends strongly on the method involved, the physiological state of the strain used, and, obviously, the substrate in which the organism resides. HTST pasteurization of milk would allow at least 11 decimal reductions of any L. monocytogenes population potentially present, and thus greatly minimizes the risk that the organism survives. On the other hand, high and low pasteurizations of egg products may achieve only 4 to 5 decimal reductions and therefore appear not very reliable against Listeria. Similarly, the cooking of meat products can, under some conditions, be inadequate to ensure total inactivation of contaminating L. monocytogenes. A 3 kGy irradiation of meat products should allow, on average, 6 decimal reductions. These results should prompt manufacturers to take into account the factors in their products that allow L. monocytogenes to resist better, and to adapt processing to these conditions of increased resistance.
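As a quick illustration of the decimal-reduction arithmetic behind figures like these, here is a small sketch (the generic first-order inactivation formula; the numbers below are made up for illustration, not taken from the paper):

def decimal_reductions(t, D):
    """Number of log10 reductions from a treatment of duration t with D-value D."""
    return t / D

def survivors(N0, t, D):
    """Expected survivors: N(t) = N0 * 10**(-t/D)."""
    return N0 * 10 ** (-t / D)

# e.g. a hypothetical 30 s process against an organism with D = 2.7 s
print(decimal_reductions(30, 2.7))   # ~11.1 decimal reductions
print(survivors(1e6, 30, 2.7))       # ~8e-6 expected survivors, i.e. negligible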
Nagarajan, Ramanathan
2017-06-01
Low molecular weight surfactants and high molecular weight block copolymers display analogous self-assembly behavior in solutions and at interfaces, generating nanoscale structures of different shapes. Understanding the link between the molecular structure of these amphiphiles and their self-assembly behavior has been the goal of theoretical studies. Despite the analogies between surfactants and block copolymers, models predicting their self-assembly behavior have evolved independent of one another, each overlooking the molecular feature considered critical to the other. In this review, we focus on the interplay of ideas pertaining to surfactants and block copolymers in three areas of self-assembly. First, we show how improved free energy models have evolved by applying ideas from surfactants to block copolymers and vice versa, giving rise to a unitary theoretical framework and better predictive capabilities for both classes of amphiphiles. Second, we show that even though molecular packing arguments are often used to explain aggregate shape transitions resulting from self-assembly, the molecular packing considerations are more relevant in the case of surfactants whereas free energy criteria are relevant for block copolymers. Third, we show that even though the surfactant and block copolymer aggregates are small nanostructures, the size differences between them are significant enough to make the interfacial effects control the solubilization of molecules in surfactant micelles while the bulk interactions control the solubilization in block copolymer micelles. Finally, we conclude by identifying recent theoretical progress in adapting the micelle model to a wide variety of self-assembly phenomena and the challenges to modeling posed by emerging novel classes of amphiphiles with complex biological, inorganic or nanoparticle moieties.
Independent divergence of 13- and 17-y life cycles among three periodical cicada lineages.
Sota, Teiji; Yamamoto, Satoshi; Cooley, John R; Hill, Kathy B R; Simon, Chris; Yoshimura, Jin
2013-04-23
The evolution of 13- and 17-y periodical cicadas (Magicicada) is enigmatic because at any given location, up to three distinct species groups (Decim, Cassini, Decula) with synchronized life cycles are involved. Each species group is divided into one 13- and one 17-y species, with the exception of the Decim group, which contains two 13-y species. The 13-y species are Magicicada tredecim, Magicicada neotredecim, Magicicada tredecassini, and Magicicada tredecula; the 17-y species are Magicicada septendecim, Magicicada cassini, and Magicicada septendecula. Here we show that the divergence leading to the present 13- and 17-y populations differs considerably among the species groups despite the fact that each group exhibits strikingly similar phylogeographic patterning. The earliest divergence of extant lineages occurred ∼4 Mya with one branch forming the Decim species group and the other subsequently splitting 2.5 Mya to form the Cassini and Decula species groups. The earliest split of extant lineages into 13- and 17-y life cycles occurred in the Decim lineage 0.5 Mya. All three species groups experienced at least one episode of life cycle divergence since the last glacial maximum. We hypothesize that despite independent origins, the three species groups achieved their current overlapping distributions because life-cycle synchronization of invading congeners to a dominant resident population enabled escape from predation and population persistence. The repeated life-cycle divergences supported by our data suggest the presence of a common genetic basis for the two life cycles in the three species groups.
Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria
2013-08-01
In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinct phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures in which the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
Reanalyzing Head et al. (2015): investigating the robustness of widespread p-hacking.
Hartgerink, Chris H J
2017-01-01
Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more reported p-values for .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins and systematically more p-values are reported to two decimal places than to three decimal places, I did not exclude p = .045 and p = .05. I conducted Fisher's method on the range .04 < p < .05 and reanalyzed the data by adjusting the bin selection to .03875 < p ≤ .04 versus .04875 < p ≤ .05. Results of the reanalysis indicate that no evidence for left-skew p-hacking remains when we look at the entire range .04 < p < .05 or when we inspect the second decimal. Taking into account reporting tendencies when selecting the bins to compare is especially important because this dataset does not allow for the recalculation of the p-values. Moreover, inspecting the bins that include two-decimal reported p-values potentially increases sensitivity if strategic rounding down of p-values as a form of p-hacking is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences (Head et al., 2015), it is important that these findings are robust to data analysis choices if the conclusion is to be considered unequivocal. Although no evidence of widespread left-skew p-hacking is found in this reanalysis, this does not mean that there is no p-hacking at all. These results nuance the conclusion of Head et al. (2015), indicating that the results are not robust and that the evidence for widespread left-skew p-hacking is ambiguous at best.
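The bin comparison at the heart of the reanalysis is easy to express in code. A minimal sketch, using synthetic p-values and scipy's binomial test rather than the actual dataset and scripts:

import random
from scipy.stats import binomtest

random.seed(0)
# hypothetical stand-in for reported p-values (mixed 2- and 3-decimal rounding)
p_values = [round(random.uniform(0.030, 0.050), random.choice((2, 3)))
            for _ in range(500)]

lo_bin = sum(1 for p in p_values if 0.03875 < p <= 0.04)   # bin around .04
hi_bin = sum(1 for p in p_values if 0.04875 < p <= 0.05)   # bin around .05
# Left-skew p-hacking would inflate hi_bin relative to lo_bin.
result = binomtest(hi_bin, n=lo_bin + hi_bin, p=0.5, alternative="greater")
print(lo_bin, hi_bin, result.pvalue)

The key design point mirrored here is that each bin is widened to absorb the two-decimal rounding spike at its upper edge, rather than excluding those values.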
NASA'S Space Launch System: Opening Opportunities for Mission Design
NASA Technical Reports Server (NTRS)
Robinson, Kimberly F.; Hefner, Keith; Hitt, David
2015-01-01
Designed to meet the stringent requirements of human exploration missions into deep space and to Mars, NASA's Space Launch System (SLS) vehicle represents a unique new launch capability opening new opportunities for mission design. While SLS's super-heavy launch vehicle predecessor, the Saturn V, was used for only two types of missions - launching Apollo spacecraft to the moon and lofting the Skylab space station into Earth orbit - NASA is working to identify new ways to use SLS to enable new missions or mission profiles. In its initial Block 1 configuration, capable of launching 70 metric tons (t) to low Earth orbit (LEO), SLS is capable of not only propelling the Orion crew vehicle into cislunar space, but also delivering small satellites to deep space destinations. With a 5-meter (m) fairing consistent with contemporary Evolved Expendable Launch Vehicles (EELVs), the Block 1 configuration can also deliver science payloads to high-characteristic-energy (C3) trajectories to the outer solar system. With the addition of an upper stage, the Block 1B configuration of SLS will be able to deliver 105 t to LEO and enable more ambitious human missions into the proving ground of space. This configuration offers opportunities for launching co-manifested payloads with the Orion crew vehicle, and a new class of secondary payloads, larger than today's cubesats. The evolved configurations of SLS, including both Block 1B and the 130 t Block 2, also offer the capability to carry 8.4- or 10-m payload fairings, larger than any contemporary launch vehicle. With unmatched mass-lift capability, payload volume, and C3, SLS not only enables spacecraft or mission designs currently impossible with contemporary EELVs, it also offers enhancing benefits, such as reduced risk and operational costs associated with shorter transit time to destination and reduced risk and complexity associated with launching large systems either monolithically or in fewer components. As this paper will demonstrate, SLS is making strong progress toward first launch, and represents a unique new capability for spaceflight, and an opportunity to reinvent space by developing out-of-the-box missions and mission designs unlike any flown before.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.
2011-01-01
Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the output is viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, new threads are spawned at exactly the rate of N/M, where N is the total number of taps and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, a table-like diagram whose rows represent computation threads and whose columns represent time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagram transforms the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
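The efficiency argument is easy to see in code. Below is a small numpy sketch (illustrative of polyphase decimation in general, not of TD-MRFIR's thread scheduling) contrasting the naive decimating FIR with a polyphase form that computes only the retained outputs:

import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 16                          # decimation factor, number of taps
x = rng.standard_normal(256)
h = rng.standard_normal(N)

# Naive: full-rate FIR, then keep every M-th output (M-1 of M discarded).
y_naive = np.convolve(x, h)[::M]

# Polyphase: y[n] = sum_k sum_j h[k+M*j] * x[(n-j)*M - k], each sub-filter
# h[k::M] running at the low rate, so only the kept outputs are computed.
out_len = len(y_naive)
y_poly = np.zeros(out_len)
for k in range(M):
    hk = h[k::M]                                                   # sub-filter k
    xk = np.concatenate([np.zeros(1), x[M - k :: M]]) if k else x[::M]
    yk = np.convolve(xk, hk)
    y_poly += np.pad(yk, (0, max(0, out_len - len(yk))))[:out_len]

print(np.allclose(y_naive, y_poly))   # True: same outputs, ~1/M of the MACs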
Emergence of a Norovirus GII.4 Strain Correlates with Changes in Evolving Blockade Epitopes
Lindesmith, Lisa C.; Costantini, Verónica; Swanstrom, Jesica; Debbink, Kari; Donaldson, Eric F.; Vinjé, Jan
2013-01-01
The major capsid protein of norovirus GII.4 strains is evolving rapidly, resulting in epidemic strains with altered antigenicity. GII.4.2006 Minerva strains circulated at pandemic levels in 2006 and persisted at lower levels until 2009. In 2009, a new GII.4 variant, GII.4.2009 New Orleans, emerged and since then has become the predominant strain circulating in human populations. To determine whether changes in evolving blockade epitopes correlate with the emergence of the GII.4.2009 New Orleans strains, we compared the antibody reactivity of a panel of mouse monoclonal antibodies (MAbs) against GII.4.2006 and GII.4.2009 virus-like particles (VLPs). Both anti-GII.4.2006 and GII.4.2009 MAbs effectively differentiated the two strains by VLP-carbohydrate ligand blockade assay. Most of the GII.4.2006 MAbs preferentially blocked GII.4.2006, while all of the GII.4.2009 MAbs preferentially blocked GII.4.2009, although 8 of 12 tested blockade MAbs blocked both VLPs. Using mutant VLPs designed to alter predicted antigenic epitopes, binding of seven of the blockade MAbs was impacted by alterations in epitope A, identifying residues 294, 296, 297, 298, 368, and 372 as important antigenic sites in these strains. Convalescent-phase serum collected from a GII.4.2009 outbreak confirmed the immunodominance of epitope A, since alterations of epitope A affected serum reactivity by 40%. These data indicate that the GII.4.2009 New Orleans variant has evolved a key blockade epitope, possibly allowing for at least partial escape from protective herd immunity and provide epidemiological support for the utility of monitoring changes in epitope A in emergent strain surveillance. PMID:23269783
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
How the continents deform: The evidence from tectonic geodesy
Thatcher, Wayne R.
2009-01-01
Space geodesy now provides quantitative maps of the surface velocity field within tectonically active regions, supplying constraints on the spatial distribution of deformation, the forces that drive it, and the brittle and ductile properties of continental lithosphere. Deformation is usefully described as relative motions among elastic blocks and is block-like because major faults are weaker than adjacent intact crust. Despite similarities, continental block kinematics differs from global plate tectonics: blocks are much smaller, typically ∼100–1000 km in size; departures from block rigidity are sometimes measurable; and blocks evolve over ∼1–10 Ma timescales, particularly near their often geometrically irregular boundaries. Quantitatively relating deformation to the forces that drive it requires simplifying assumptions about the strength distribution in the lithosphere. If brittle/elastic crust is strongest, interactions among blocks control the deformation. If ductile lithosphere is the stronger, its flow properties determine the surface deformation, and a continuum approach is preferable.
N-body simulations of star clusters
NASA Astrophysics Data System (ADS)
Engle, Kimberly Anne
1999-10-01
We investigate the structure and evolution of underfilling (i.e. non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does initially fill its Roche lobe. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package "Starlab." The GRAPE-4 system is a massively-parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator "kira" employs a 4th-order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.
Noise analysis of antibiotic permeation through bacterial channels
NASA Astrophysics Data System (ADS)
Nestorovich, Ekaterina M.; Danelon, Christophe; Winterhalter, Mathias; Bezrukov, Sergey M.
2003-05-01
Statistical analysis of high-resolution current recordings from a single ion channel reconstituted into a planar lipid membrane allows us to study transport of antibiotics at the molecular detail. Working with the general bacterial porin, OmpF, we demonstrate that addition of zwitterionic β-lactam antibiotics to the membrane-bathing solution introduces transient interruptions in the small-ion current through the channel. Time-resolved measurements reveal that one antibiotic molecule blocks one of the monomers in the OmpF trimer for characteristic times from microseconds to hundreds of microseconds. Spectral noise analysis enables us to perform measurements over a wide range of changing parameters. In all cases studied, the residence time of an antibiotic molecule in the channel exceeds the estimated time for free diffusion by orders of magnitude. This demonstrates that, in analogy to substrate-specific channels that evolved to bind specific metabolite molecules, antibiotics have 'evolved' to be channel-specific. The charge distribution of an efficient antibiotic complements the charge distribution at the narrowest part of the bacterial porin. Interaction of these charges creates a zone of attraction inside the channel and compensates the penetrating molecule's entropy loss and desolvation energy. This facilitates antibiotic translocation through the narrowest part of the channel and accounts for higher antibiotic permeability rates.
Wireless sensor platform for harsh environments
NASA Technical Reports Server (NTRS)
Garverick, Steven L. (Inventor); Yu, Xinyu (Inventor); Toygur, Lemi (Inventor); He, Yunli (Inventor)
2009-01-01
Reliable and efficient sensing becomes increasingly difficult in harsher environments. A sensing module for high-temperature conditions utilizes a digital, rather than analog, implementation on a wireless platform to achieve good quality data transmission. The module comprises a sensor, integrated circuit, and antenna. The integrated circuit includes an amplifier, A/D converter, decimation filter, and digital transmitter. To operate, an analog signal is received by the sensor, amplified by the amplifier, converted into a digital signal by the A/D converter, filtered by the decimation filter to address the quantization error, and output in digital format by the digital transmitter and antenna.
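The patent text does not name a filter topology, but a cascaded integrator-comb (CIC) filter is a common way to realize the decimation filter that follows a delta-sigma-style converter; a minimal Python sketch under that assumption:

import numpy as np

def cic_decimate(bits, R=16, stages=2):
    """Decimate a +/-1 bitstream by R with an N-stage CIC filter."""
    x = bits.astype(np.int64)
    for _ in range(stages):          # integrators at the high input rate
        x = np.cumsum(x)
    x = x[R - 1 :: R]                # rate reduction by R
    for _ in range(stages):          # combs (first differences) at the low rate
        x = np.diff(x, prepend=0)
    return x / R ** stages           # normalize the DC gain R**stages

# e.g. a crude 1-bit encoding of a slow sine, decimated 16x
rng = np.random.default_rng(5)
t = np.arange(4096)
bits = np.sign(np.sin(2 * np.pi * t / 1024) + rng.uniform(-1, 1, t.size))
print(cic_decimate(bits)[:8])        # low-rate samples tracking the sine

A multiplier-free structure like this is attractive in exactly the setting the patent describes, since adders and delays tolerate harsh-environment process constraints better than multipliers.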
Tariq, V N; Scott, E M; McCain, N E
1995-01-01
Interactions between six compounds (econazole, miconazole, amphotericin B, nystatin, nikkomycin Z, and ibuprofen) were investigated for their antifungal activities against Candida albicans by using pair combinations in an in vitro decimal assay for additivity based on disk diffusion. Additive interactions were observed between miconazole and econazole, amphotericin B and nystatin, and amphotericin B and ibuprofen, while an antagonistic interaction was observed between econazole and amphotericin B. Synergistic interactions were recorded for the combinations of econazole and ibuprofen, econazole and nikkomycin Z, and ibuprofen and nikkomycin Z. PMID:8592989
Counting spanning trees on fractal graphs and their asymptotic complexity
NASA Astrophysics Data System (ADS)
Anema, Jason A.; Tsougkas, Konstantinos
2016-09-01
Using the method of spectral decimation and a modified version of Kirchhoff's matrix-tree theorem, a closed form solution to the number of spanning trees on approximating graphs to a fully symmetric self-similar structure on a finitely ramified fractal is given in theorem 3.4. We show how spectral decimation implies the existence of the asymptotic complexity constant and obtain some bounds for it. Examples calculated include the Sierpiński gasket, a non-post critically finite analog of the Sierpiński gasket, the Diamond fractal, and the hexagasket. For each example, the asymptotic complexity constant is found.
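For reference, the asymptotic complexity constant referred to above is standardly defined (notation assumed here, not quoted from the paper) via the spanning-tree counts τ(G_n) of the approximating graphs G_n:

$$c = \lim_{n \to \infty} \frac{\ln \tau(G_n)}{|V(G_n)|}$$

so what spectral decimation supplies is precisely a closed form for τ(G_n) from which the existence of this limit, and the bounds mentioned, follow.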
Thinking Fast Increases Framing Effects in Risky Decision Making.
Guo, Lisa; Trueblood, Jennifer S; Diederich, Adele
2017-04-01
Every day, people face snap decisions when time is a limiting factor. In addition, the way a problem is presented can influence people's choices, which creates what are known as framing effects. In this research, we explored how time pressure interacts with framing effects in risky decision making. Specifically, does time pressure strengthen or weaken framing effects? On one hand, research has suggested that framing effects evolve through the deliberation process, growing larger with time. On the other hand, dual-process theory attributes framing effects to an intuitive, emotional system that responds automatically to stimuli. In our experiments, participants made decisions about gambles framed in terms of either gains or losses, and time pressure was manipulated across blocks. Results showed increased framing effects under time pressure in both hypothetical and incentivized choices, which supports the dual-process hypothesis that these effects arise from a fast, intuitive system.
An observation about the relative hardiness of bacterial spores and planetary quarantine
NASA Technical Reports Server (NTRS)
Trauth, C. A., Jr.
1973-01-01
Planetary quarantine objectives are shown to be critically dependent on the deviation in the actual decimal-reduction-time or D values (i.e., the time necessary to reduce the population to one-tenth of its original value) of organisms on spacecraft from the values chosen for spacecraft sterilization that have been selected conservatively relative to defined experimental procedures and bacterial spore stocks. New data indicate that these D values are not conservative when compared with those of naturally occurring organisms. The possible implications of these new data for planetary quarantine are analyzed.
[Abbadia: an astonishing castle-observatory]
NASA Astrophysics Data System (ADS)
Briot, Danielle
2016-09-01
The observatory-castle Abbadia stands in Hendaye, in the French Basque Country, in the south-west of France. The man who planned this building was Antoine d'Abbadie, both an explorer and a scientist. After spending several years in Abyssinia, now Ethiopia, he returned home and set out to build a large house, actually a castle, which was also a scientific establishment containing an observatory. The famous architects of the castle were Eugène-Emmanuel Viollet-le-Duc and his assistant Edmond Duthoit. The style is neo-Gothic, typical of the nineteenth century. The main instrument of the observatory is a meridian refracting telescope made in 1879 by Eichens. The meridian circle is graduated in grades (gradians), the decimal angle unit established during the French Revolution; similarly, the observatory clocks keep decimal time units. Meridian observations of stars were carried on for three-quarters of a century after the death of Antoine d'Abbadie in 1901, and observations of more than 50,000 stars were published. Nowadays the castle is open to the public and to school groups, and the scientific instruments have been put into use again.
NASA Astrophysics Data System (ADS)
Park, Sungkyung; Park, Chester Sungchung
2018-03-01
A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator, and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio-access-technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. An FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
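To illustrate the cubic-interpolation step, here is a floating-point Python sketch of a Lagrange-based FSRC (a generic illustration; the paper's fixed-point architecture and exact coefficients are not reproduced):

import numpy as np

def lagrange_cubic_resample(x, ratio):
    """Resample x: each output advances the input-time pointer by `ratio`."""
    out = []
    t = 1.0                                # start where a 4-point stencil fits
    while t < len(x) - 2:
        n = int(t)                         # integer sample index
        mu = t - n                         # fractional offset in [0, 1)
        s = x[n - 1 : n + 3]               # 4-point stencil around t
        # cubic Lagrange basis at mu, for nodes at -1, 0, 1, 2
        c = [
            -mu * (mu - 1) * (mu - 2) / 6,
            (mu + 1) * (mu - 1) * (mu - 2) / 2,
            -(mu + 1) * mu * (mu - 2) / 2,
            (mu + 1) * mu * (mu - 1) / 6,
        ]
        out.append(np.dot(c, s))
        t += ratio
    return np.array(out)

# e.g. resample to 48/44.1 times the input rate (44.1 -> 48 kHz style conversion)
y = lagrange_cubic_resample(np.sin(0.05 * np.arange(200)), 44.1 / 48)
print(len(y))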
Wideband, mobile networking technologies
NASA Astrophysics Data System (ADS)
Hyer, Kevin L.; Bowen, Douglas G.; Pulsipher, Dennis C.
2005-05-01
Ubiquitous communications will be the next era in the evolving communications revolution. From the human perspective, access to information will be instantaneous and provide a revolution in services available to both the consumer and the warfighter. Services will be from the mundane - anytime, anywhere access to any movie ever made - to the vital - reliable and immediate access to the analyzed real-time video from the multi-spectral sensors scanning for snipers in the next block. In the former example, the services rely on a fixed infrastructure of networking devices housed in controlled environments and coupled to fixed terrestrial fiber backbones - in the latter, the services are derived from an agile and highly mobile ad-hoc backbone established in a matter of minutes by size, weight, and power-constrained platforms. This network must mitigate significant changes in the transmission media caused by millisecond-scale atmospheric temperature variations, the deployment of smoke, or the drifting of a cloud. It must mitigate against structural obscurations, jet wash, or incapacitation of a node. To maintain vital connectivity, the mobile backbone must be predictive and self-healing on both near-real-time and real-time time scales. The nodes of this network must be reconfigurable to mitigate intentional and environmental jammers, block attackers, and alleviate interoperability concerns caused by changing standards. The nodes must support multi-access of disparate waveform and protocols.
2018-02-21
change. Approved for public release; distribution is unlimited. [Extraction-garbled appendix listings: A.1 ThermalData.py, which imports numpy, math, os, time, decimal, and submit_maker, is noted as being for the given material and orientation, and builds submit-file paths of the form '/'+Orientation+'/Submitfiles/submit_excal_'+repr(temp)+'K_'+repr(u)+'eV.bash'; A.3 submit_maker.py, which imports math and os.]
2016-02-19
power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models...custom control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions...for the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux
A re-evaluation of the Kumta Suture in western peninsular India and its extension into Madagascar
NASA Astrophysics Data System (ADS)
Armistead, Sheree E.; Collins, Alan S.; Payne, Justin L.; Foden, John D.; De Waele, Bert; Shaji, E.; Santosh, M.
2018-05-01
It has long been recognised that Madagascar was contiguous with India until the Late Cretaceous. However, the timing and nature of the amalgamation of these two regions remain highly contentious, as does the location of Madagascar against India in Gondwana. Here we address these issues with new U-Pb and Lu-Hf zircon data from five metasedimentary samples from the Karwar Block of India and new Lu-Hf data from eight previously dated igneous rocks from central Madagascar and the Antongil-Masora domains of eastern Madagascar. New U-Pb data from Karwar-region detrital zircon grains yield two dominant age peaks at c. 3100 Ma and c. 2500 Ma. The c. 3100 Ma population has relatively juvenile εHf(t) values that trend toward an evolved signature at c. 2500 Ma. The c. 2500 Ma population shows a wide range of εHf(t) values reflecting mixing of an evolved source with a juvenile source at that time. These data, and the new Lu-Hf data from Madagascar, are compared with our new compilation of over 7000 U-Pb and 1000 Lu-Hf analyses from Madagascar and India. We have used multidimensional scaling to assess similarities in these data in a statistically robust way. We propose that the Karwar Block of western peninsular India is an extension of the western Dharwar Craton and not part of the Antananarivo Domain of Madagascar as has been suggested in some models. Based on εHf(t) signatures we also suggest that India (and the Antongil-Masora domains of Madagascar) were palaeogeographically isolated from central Madagascar (the Antananarivo Domain) during the Palaeoproterozoic. This supports a model where central Madagascar and India amalgamated during the Neoproterozoic along the Betsimisaraka Suture.
Application of CRAFT in two-dimensional NMR data processing.
Krishnamurthy, Krish; Sefler, Andrea M; Russell, David J
2017-03-01
Two-dimensional (2D) data are typically truncated in both dimensions, but invariably and severely so in the indirect dimension. These truncated FIDs and/or interferograms are extensively zero filled, and Fourier transformation of such zero-filled data is always preceded by a rapidly decaying apodization function. Hence, the frequency line width in the spectrum (at least parallel to the evolution dimension) is almost always dominated by the apodization function. Such apodization-driven line broadening in the indirect (t1) dimension leads to the lack of clear resolution of cross peaks in the 2D spectrum. Time-domain analysis (i.e. extraction of frequency, amplitude, line width, and phase parameters directly from the FID, in this case via Bayesian modeling into a tabular format) of NMR data is another approach for spectral resonance characterization and quantification. The recently published complete reduction to amplitude frequency table (CRAFT) technique converts the raw FID data (i.e. time-domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. CRAFT analyses of time-domain data require minimal or no apodization prior to extraction of the four parameters. We used the CRAFT processing approach for the decimation of the interferograms and compared the results from a variety of 2D spectra against conventional processing with and without linear prediction. The results show that use of the CRAFT technique to decimate the t1 interferograms yields much narrower spectral line width of the resonances, circumventing the loss of resolution due to apodization. Copyright © 2016 John Wiley & Sons, Ltd.
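The time-domain idea, fitting amplitude, frequency, decay rate, and phase directly to the raw signal instead of Fourier-transforming apodized data, can be sketched for a single synthetic decaying sinusoid (an illustration only; CRAFT itself uses Bayesian modeling, not this least-squares toy):

import numpy as np
from scipy.optimize import curve_fit

def fid(t, a, f, r, phi):
    """One damped sinusoid: the four parameters CRAFT tabulates per resonance."""
    return a * np.exp(-r * t) * np.cos(2 * np.pi * f * t + phi)

t = np.linspace(0, 1.0, 512)
truth = (1.0, 25.0, 3.0, 0.4)                  # a, f (Hz), r (1/s), phi (rad)
rng = np.random.default_rng(1)
data = fid(t, *truth) + 0.02 * rng.standard_normal(t.size)

params, _ = curve_fit(fid, t, data, p0=(0.9, 24.8, 2.0, 0.3))
print(params)   # recovers the parameters with no apodization line broadening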
Immunogenetic Mechanisms Driving Norovirus GII.4 Antigenic Variation
Donaldson, Eric F.; Corti, Davide; Swanstrom, Jesica; Debbink, Kari; Lanzavecchia, Antonio; Baric, Ralph S.
2012-01-01
Noroviruses are the principal cause of epidemic gastroenteritis worldwide with GII.4 strains accounting for 80% of infections. The major capsid protein of GII.4 strains is evolving rapidly, resulting in new epidemic strains with altered antigenic potentials. To test if antigenic drift may contribute to GII.4 persistence, human memory B cells were immortalized and the resulting human monoclonal antibodies (mAbs) characterized for reactivity to a panel of time-ordered GII.4 virus-like particles (VLPs). Reflecting the complex exposure history of the volunteer, human anti-GII.4 mAbs grouped into three VLP reactivity patterns: ancestral (1987–1997), contemporary (2004–2009), and broad (1987–2009). NVB 114 reacted exclusively to the earliest GII.4 VLPs by EIA and blockade. NVB 97 specifically bound and blocked only contemporary GII.4 VLPs, while NVB 111 and 43.9 exclusively reacted with and blocked variants of the GII.4.2006 Minerva strain. Three mAbs had broad GII.4 reactivity. Two, NVB 37.10 and 61.3, also detected other genogroup II VLPs by EIA but did not block any VLP interactions with carbohydrate ligands. NVB 71.4 cross-neutralized the panel of time-ordered GII.4 VLPs, as measured by VLP-carbohydrate blockade assays. Using mutant VLPs designed to alter predicted antigenic epitopes, two evolving, GII.4-specific, blockade epitopes were mapped. Amino acids 294–298 and 368–372 were required for binding NVB 114, 111 and 43.9 mAbs. Amino acids 393–395 were essential for binding NVB 97, supporting earlier correlations between antibody blockade escape and carbohydrate binding variation. These data inform VLP vaccine design, provide a strategy for expanding the cross-blockade potential of chimeric VLP vaccines, and identify an antibody with broadly neutralizing therapeutic potential for the treatment of human disease. Moreover, these data support the hypothesis that GII.4 norovirus evolution is heavily influenced by antigenic variation of neutralizing epitopes and consequently, antibody-driven receptor switching; thus, protective herd immunity is a driving force in norovirus molecular evolution. PMID:22615565
Clynne, Michael A.; Muffler, L.J.P.; Siems, D.F.; Taggart, J.E.; Bruggman, Peggy
2008-01-01
This open-file report presents WDXRF major-element chemical data for late Pliocene to Holocene volcanic rocks collected from Lassen Volcanic National Park and vicinity, California. Data for Rb, Sr, Ba, Y, Zr, Nb, Ni, Cr, Zn and Cu obtained by EDXRF are included for many samples. Data are presented in an EXCEL spreadsheet and are keyed to rock units as displayed on the Geologic Map of Lassen Volcanic National Park and vicinity (Clynne and Muffler, in press). Location of the samples is given in latitude and longitude in degrees and decimal minutes and in decimal degrees.
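The two coordinate formats mentioned are related by a one-line conversion; a generic helper (not code from the report):

def dm_to_decimal_degrees(degrees, decimal_minutes, west_or_south=False):
    """Convert degrees and decimal minutes to signed decimal degrees."""
    dd = abs(degrees) + decimal_minutes / 60.0
    return -dd if west_or_south else dd

# e.g. 121 deg 30.25' W  ->  -121.5042 deg
print(round(dm_to_decimal_degrees(121, 30.25, west_or_south=True), 4))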
The USAF Stability and Control Digital DATCOM. Volume II. Implementation of Datcom Methods
1979-04-01
[OCR-garbled report documentation page and security-classification footer omitted. Recoverable fragments: "...program capabilities, input and output..."; "...XX is the primary overlay number in decimal, and YY is the secondary overlay number in decimal. Hence, each overlay is written to a disk..."]
Ando, S; Sekine, S; Mita, M; Katsuo, S
1989-12-15
An architecture and the algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed based on residue arithmetic. The proposed system is capable of processing all elements of matrices in parallel utilizing the information retrieving ability of optical Fourier processors. The employment of OFFs enables bidirectional data flow, leading to a simpler architecture, and the burden that residue-to-decimal (or residue-to-binary) conversion adds to operation time can be largely reduced by processing all elements in parallel. The calculated characteristics of operation time suggest a promising use of the system for real-time 2-D linear transforms.
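A software sketch of the residue-arithmetic scheme may help (the paper's implementation is optical; this is only the number-theoretic skeleton): each element is carried as residues modulo pairwise-coprime moduli, element multiplications and additions act independently per modulus, and a single residue-to-decimal conversion via the Chinese remainder theorem is done at the end.

from math import prod

MODULI = (5, 7, 9, 11, 13, 16)          # pairwise coprime; range = 720720

def crt_to_decimal(res):
    """Chinese remainder theorem: residues -> decimal (Python 3.8+ for pow)."""
    M = prod(MODULI)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(res, MODULI)) % M

def matmul_residue(A, B):
    """Matrix product with all arithmetic done per modulus, independently."""
    n, k, p = len(A), len(B), len(B[0])
    C = [[None] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = [0] * len(MODULI)
            for t in range(k):
                for s, m in enumerate(MODULI):
                    acc[s] = (acc[s] + (A[i][t] % m) * (B[t][j] % m)) % m
            C[i][j] = tuple(acc)
    return C

A = [[3, 1], [2, 4]]
B = [[5, 2], [7, 6]]
C = matmul_residue(A, B)
print([[crt_to_decimal(c) for c in row] for row in C])   # [[22, 12], [38, 28]]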
NASA Astrophysics Data System (ADS)
Kendrick, K. J.; Matti, J. C.
2017-12-01
The San Gorgonio Pass (SGP) region of southern California represents an extraordinarily complex section of the San Andreas Fault (SAF) zone, often referred to as a structural knot. Complexity is expressed both structurally and geomorphically, and arises because multiple strands of the SAF have evolved here in Quaternary time. Our integration of geologic and geomorphic analyses led to the recognition of multiple fault-bounded blocks characterized by crystalline rocks that have similar physical properties. Hence, any morphometric differences in hypsometric analysis, slope, slope distribution, texture, and stream-power measurements and discontinuities reflect landscape response to tectonic processes rather than differences in lithology. We propose that the differing morphometry of the two blocks on either side of the San Bernardino strand (SBS) of the SAF, the high-standing Kitching Peak block to the east and the lower, more subdued Pisgah Peak block to the west, strongly suggests that the blocks experienced different uplift histories. This difference in uplift histories, in turn, suggests that dextral slip occurred over a long time interval on the SBS, despite long-lived controversy raised by the fact that, at the surface, a throughgoing trace of the SBS is not present at this location. A different tectonic history between the two blocks is consistent with the gravity data, which indicate that low-density rocks underthrusting the Kitching Peak block are absent below the Pisgah Peak block (Langenheim et al., 2015). Throughgoing slip on the SBS implied by geomorphic differences between the two blocks is also consistent with displaced geologic and geomorphic features. We find compelling evidence for discrete offsets of between 0.6 and 6 km of dextral slip on the SBS, including offset of fluvial and landslide deposits, and beheaded drainages. Although we lack numerical age control for the offset features, the degree of soil development associated with displaced landforms suggests that the SBS has had a longer geologic history than previously proposed, and that this fault strand may have experienced episodic activity. Landscape evolution and geologic evidence together require that dextral slip on the SAF must have continued through the SGP structural knot during an extended interval in the past.
A Genetic Representation for Evolutionary Fault Recovery in Virtex FPGAs
NASA Technical Reports Server (NTRS)
Lohn, Jason; Larchev, Greg; DeMara, Ronald; Korsmeyer, David (Technical Monitor)
2003-01-01
Most evolutionary approaches to fault recovery in FPGAs focus on evolving alternative logic configurations as opposed to evolving the intra-cell routing. Since the majority of transistors in a typical FPGA are dedicated to interconnect, nearly 80% according to one estimate, evolutionary fault-recovery systems should benefit by accommodating routing. In this paper, we propose an evolutionary fault-recovery system employing a genetic representation that takes into account both logic and routing configurations. Experiments were run using a software model of the Xilinx Virtex FPGA. We report that using four Virtex combinational logic blocks, we were able to evolve a 100% accurate quadrature decoder finite state machine in the presence of a stuck-at-zero fault.
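The flavor of such a combined representation can be conveyed with a deliberately tiny toy (in no way the paper's system): a genome whose first segment encodes LUT truth-table bits ("logic") and whose second segment encodes input-wire selections ("routing"), evolved jointly by a basic GA.

import random

random.seed(3)
N_INPUTS = 4

def target(a, b, c, d):
    return (a ^ b) & (c | d)           # hypothetical function to recover

def decode(genome):
    lut = genome[:16]                  # logic segment: 16-entry truth table
    route = genome[16:]                # routing segment: input wire per LUT pin
    def circuit(*bits):
        pins = [bits[r % N_INPUTS] for r in route]
        return lut[pins[0] | pins[1] << 1 | pins[2] << 2 | pins[3] << 3]
    return circuit

def fitness(genome):
    f = decode(genome)
    cases = [(a, b, c, d) for a in (0, 1) for b in (0, 1)
             for c in (0, 1) for d in (0, 1)]
    return sum(f(*x) == target(*x) for x in cases)      # max 16

pop = [[random.randint(0, 3) if i >= 16 else random.randint(0, 1)
        for i in range(20)] for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 16:
        break
    parents = pop[:20]                 # elitist selection, then mutation
    pop = parents + [
        [g if random.random() > 0.05 else
         (random.randint(0, 1) if i < 16 else random.randint(0, 3))
         for i, g in enumerate(random.choice(parents))]
        for _ in range(40)
    ]
print(gen, fitness(pop[0]))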
Yuliana, Tri; Ebihara, Kyota; Suzuki, Mio; Shimonaka, Chie; Amachi, Seigo
2015-12-01
Alphaproteobacterium strain Q-1 produces an extracellular multicopper oxidase (IOX), which catalyzes iodide (I^-) oxidation to form molecular iodine (I2). In this study, the antimicrobial activity of the IOX/iodide system was determined. Both Gram-positive and Gram-negative bacteria tested were killed completely within 5 min by 50 mU mL^-1 of IOX and 10 mM iodide. The sporicidal activity of the system was also tested and compared with a common iodophor, povidone-iodine (PVP-I). IOX (300 mU mL^-1) killed Bacillus cereus, B. subtilis, and Geobacillus stearothermophilus spores with decimal reduction times of 2.58, 7.62, and 40.9 min, respectively. However, 0.1% PVP-I killed these spores with much longer decimal reduction times of 5.46, 38.0, and 260 min, respectively. To account for the superior sporicidal activity of the IOX system over PVP-I, the amount of free iodine (non-complexed I2) was determined by an equilibrium dialysis technique. The IOX system included more than 40 mg L^-1 of free iodine, while PVP-I included at most 25 mg L^-1 free iodine. Our results suggest that the new enzyme-based antimicrobial system is effective against a wide variety of microorganisms and bacterial spores, and that its strong biocidal activity is due to its high free iodine content, which is probably maintained by re-oxidation of iodide released after oxidation of cell components by I2.
Multiple-scale stochastic processes: Decimation, averaging and beyond
NASA Astrophysics Data System (ADS)
Bo, Stefano; Celani, Antonio
2017-02-01
Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For an efficient modeling one usually focuses on the slower degrees of freedom and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires to perform two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories such as residence times, counting statistics, fluxes, entropy production, etc. which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we will pedagogically present them here, as natural extensions of the ones employed for the trajectories. We will also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.
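A minimal numerical example of the fast-variable-elimination step, a generic two-timescale toy rather than anything specific to the review: a slow variable driven by a fast Ornstein-Uhlenbeck process is compared against the effective dynamics obtained by replacing the fast variable with its stationary statistics.

import numpy as np

rng = np.random.default_rng(7)
eps, dt, T = 0.01, 1e-4, 5.0      # timescale separation eps << 1
n = int(T / dt)

x, y = 1.0, 0.0
for _ in range(n):
    # slow: dx = -x y^2 dt ; fast OU: dy = -(y/eps) dt + sqrt(2/eps) dW
    x += -x * y * y * dt
    y += -y / eps * dt + np.sqrt(2 / eps * dt) * rng.standard_normal()

# Averaging: y^2 -> <y^2>_stationary = 1, so effectively dx/dt = -x.
print(x, np.exp(-T))              # the two agree increasingly well as eps -> 0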
Wilder, Aryn P; Eisen, Rebecca J; Bearden, Scott W; Montenieri, John A; Gage, Kenneth L; Antolin, Michael F
2008-06-01
Plague, caused by the bacterium Yersinia pestis, often leads to rapid decimation of black-tailed prairie dog colonies. Flea-borne transmission of Y. pestis has been thought to occur primarily via blocked fleas, and therefore studies of vector efficiency have focused on the period when blockage is expected to occur (> or =5 days post-infection [p.i.]). Oropsylla hirsuta, a prairie dog flea, rarely blocks and transmission is inefficient > or =5 days p.i.; thus, this flea has been considered incapable of explaining rapid dissemination of Y. pestis among prairie dogs. By infecting wild-caught fleas with Y. pestis and exposing naïve mice to groups of fleas at 24, 48, 72, and 96 h p.i., we examined the early-phase (1-4 days p.i.) efficiency of O. hirsuta to transmit Y. pestis to hosts and showed that O. hirsuta is a considerably more efficient vector at this largely overlooked stage (5.19% of fleas transmit Y. pestis at 24 h p.i.) than at later stages. Using a model of vectorial capacity, we suggest that this level of transmission can support plague at an enzootic level in a population when flea loads are within the average observed for black-tailed prairie dogs in nature. Shared burrows and sociality of prairie dogs could lead to accumulation of fleas when host population is reduced as a result of the disease, enabling epizootic spread of plague among prairie dogs.
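For context, vectorial capacity is conventionally written in a Macdonald-style form (a textbook expression, assumed here; the authors' exact model may differ):

$$C = \frac{m a^2 b p^n}{-\ln p}$$

where m is the number of fleas per host, a the host-biting rate, b the per-bite transmission efficiency, p the daily flea survival probability, and n the extrinsic incubation period in days. Early-phase transmission effectively shrinks n, which is what lets even an inefficient vector sustain enzootic plague at realistic flea loads.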
Li, Qing; Chen, Yu; Rowlett, Jarrett R; McGrath, James E; Mack, Nathan H; Kim, Yu Seung
2014-04-23
Structure-property-performance relationships of disulfonated poly(arylene ether sulfone) multiblock copolymer membranes were investigated for use in direct methanol fuel cell (DMFC) applications. Multiple series of reactive polysulfone, polyketone, and polynitrile hydrophobic block segments having different block lengths and molecular compositions were synthesized and reacted with a disulfonated poly(arylene ether sulfone) hydrophilic block segment by a coupling reaction. Large-scale morphological order of the multiblock copolymers evolved with increasing block size, notably influencing mechanical toughness, water uptake, and proton/methanol transport. Chemical structural changes of the hydrophobic blocks through polar group, fluorination, and bisphenol type allowed further control of the specific properties. DMFC performance was analyzed to elucidate the impact of structural variations of the multiblock copolymers. Finally, DMFC performances of selected multiblock copolymers were compared against that of the industry-standard Nafion in the DMFC system.
IMS/Satellite Situation Center report: Predicted orbit plots for IMP-J-1976
NASA Technical Reports Server (NTRS)
1975-01-01
Predicted orbit plots for the IMP-J satellite were given for the time period January-December 1976. These plots are shown in three projections. The time period covered by each set of projections is 12 days and 6 hours, corresponding approximately to the period of IMP-J. The three coordinate systems used are the Geocentric Solar Ecliptic system (GSE), the Geocentric Solar Magnetospheric system (GSM), and the Solar Magnetic system (SM). For each of the three projections, time ticks and codes are given on the satellite trajectories. The codes are interpreted in the table at the base of each plot. Time is given in the table as year/day/decimal hour, and the total time covered by each plot is shown at the bottom of each table.
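The "year/day/decimal hour" convention used in these tables converts straightforwardly to a calendar date; a small Python sketch (the tick values shown are hypothetical):

```python
from datetime import datetime, timedelta

def tick_to_datetime(year, day_of_year, decimal_hour):
    """Convert a 'year/day/decimal hour' time tick to a calendar datetime."""
    return datetime(year, 1, 1) + timedelta(days=day_of_year - 1,
                                            hours=decimal_hour)

# hypothetical tick: 1976, day 12, 13.5 h -> 1976-01-12 13:30:00
print(tick_to_datetime(1976, 12, 13.5))
```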
Reducing number entry errors: solving a widespread, serious problem.
Thimbleby, Harold; Cairns, Paul
2010-10-06
Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. 'To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact' (Charles Darwin 1879 [2008], p. 229).
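A defensive number-entry routine of the kind the paper argues for can be sketched as follows. This is an illustrative stand-in, not the authors' demonstration interface; the dose limits and the syntax pattern are assumptions.

```python
import re

NUM = re.compile(r'^\d{1,4}(\.\d{1,3})?$')   # strict syntax: one decimal point, bounded length

def check_dose(text, lo, hi):
    """Return (value, message); blocks malformed input and flags likely out-by-10 slips."""
    if not NUM.match(text):
        return None, "rejected: malformed number"
    v = float(text)
    if lo <= v <= hi:
        return v, "accepted"
    for f in (10.0, 0.1):                    # would the value be plausible if out by 10?
        if lo <= v * f <= hi:
            return None, f"rejected: {v} outside [{lo}, {hi}]; possible out-by-10 error"
    return None, f"rejected: {v} outside [{lo}, {hi}]"

print(check_dose("50", 0.5, 10.0))   # flags a likely miskeyed 5.0
```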
Piracha, Mohammad M; Thorp, Stephen L; Puttanniah, Vinay; Gulati, Amitabh
Postmastectomy pain syndrome (PMPS) is a significant burden for breast cancer survivors. Although multiple therapies have been described, an evolving family of serratus anterior plane blocks has been described in this population. We describe the addition of the deep serratus anterior plane block (DSPB) for PMPS. Four patients with a history of PMPS underwent DSPB for anterior chest wall pain. A retrospective review of these patients' outcomes was obtained through postprocedure interviews. Three of the patients had previously had a superficial serratus anterior plane block, which was not as efficacious as the DSPB. The fourth patient had a superficial serratus anterior plane that was difficult to separate with hydrodissection but had improved pain control with a DSPB. We describe 4 patients who benefited from a DSPB and indications suggesting that this block may be more efficacious than a superficial serratus plane block. Further study is recommended to understand the intercostal nerve branches within the lateral and anterior muscular chest wall planes.
Understanding multi-scale structural evolution in granular systems through gMEMS
NASA Astrophysics Data System (ADS)
Walker, David M.; Tordesillas, Antoinette
2013-06-01
We show how the rheological response of a material to applied loads can be systematically coded, analyzed and succinctly summarized according to an individual grain's properties (e.g. kinematics). Individual grains are treated as their own smart sensors, akin to microelectromechanical systems (e.g. gyroscopes, accelerometers), each capable of recognizing its evolving role within self-organizing building-block structures (e.g. contact cycles and force chains). A symbolic time series is used to represent each grain's participation in such self-assembled building blocks, and a complex network summarizing its interrelationships with other grains is constructed. In particular, relationships between grain time series are determined according to the information-theoretic Hamming distance or the metric Euclidean distance. We then use topological distance to find network communities, enabling groups of grains at remote physical distances in the material to share a classification. In essence, grains with similar structural and functional roles at different scales are identified together. This taxonomy distills the dissipative structural rearrangements of grains down to their essential features and thus provides pointers for objective physics-based internal-variable formalisms used in the construction of robust predictive continuum models.
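A toy sketch of the pipeline just described (symbolic series, pairwise Hamming distances, thresholded network, grain communities). The symbol alphabet, threshold, and planted group are assumptions, and connected components stand in for the paper's community-detection step.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical symbolic series: one row per grain, symbols coding its building-block role
S = rng.integers(0, 3, size=(30, 200))
S[:10] = S[0]                        # plant a group of grains with identical histories

D = (S[:, None, :] != S[None, :, :]).mean(axis=2)   # pairwise Hamming distances
A = (D < 0.2) & ~np.eye(len(S), dtype=bool)         # threshold into a network

seen, communities = set(), []        # connected components as a stand-in for communities
for i in range(len(S)):
    if i in seen:
        continue
    stack, comp = [i], set()
    while stack:
        u = stack.pop()
        if u not in comp:
            comp.add(u)
            stack.extend(np.flatnonzero(A[u]))
    seen |= comp
    communities.append(sorted(comp))
print(communities[0])                # the planted group {0..9} is recovered
```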
Contributive Research and Development
1991-09-25
cyclopentadiene cracks down (evolves) at about 230 C by retro-Diels-Alder reaction under ambient pressure, high pressure, or vacuum environments and the... block coagulation 3) lamination of extruded film 4) microwave drawing of extruded fiber. During processing of molecular composite solutions via wet...
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1989-01-01
The early history and the uses of the mathematical notation pi are presented through both film footage and computer animation in this 'Project Mathematics' series video. Pi comes from the first letter in the Greek word for perimeter. Archimedes, an early Greek mathematician, formulated the equations for the computation of a circle's area using pi and was the first person to seriously approximate pi numerically, although only to a few decimal places. By the late 1980s, pi had been approximated to over one billion decimal places and was found to have no repeating pattern. One use of pi is as a benchmark: the calculation of its digits serves as an analytical tool for determining the accuracy of supercomputers and software designs.
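Archimedes' polygon-doubling bounds can be sketched in a few lines using the standard harmonic/geometric-mean recurrence for the perimeters of circumscribed and inscribed polygons (an illustration, not material from the video itself):

```python
import math

a = 2 * math.sqrt(3.0)   # circumscribed hexagon: perimeter / diameter (upper bound)
b = 3.0                  # inscribed hexagon: perimeter / diameter (lower bound)
sides = 6
for _ in range(10):      # double the number of sides: 6 -> 6144
    a = 2 * a * b / (a + b)   # harmonic mean gives the new circumscribed bound
    b = math.sqrt(a * b)      # geometric mean gives the new inscribed bound
    sides *= 2
print(sides, b, a)       # the 6144-gon brackets pi to about seven digits
```

Stopping after four doublings reproduces Archimedes' own 96-sided bounds, roughly 3.1408 < pi < 3.1429.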
Trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
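The paper's 5-combination coding table is not reproduced in this abstract. As a hedged stand-in, the sketch below encodes integers into signed digits {-1, 0, 1} via balanced ternary, the representation underlying TSD arithmetic; it shows only the encoding, not the one-step parallel addition.

```python
def to_tsd(n):
    """Encode an integer in trinary signed-digit (balanced ternary) form, digits in {-1, 0, 1}."""
    if n == 0:
        return [0]
    digits = []
    while n:
        r = n % 3
        d = r if r < 2 else -1      # remainder 2 becomes digit -1 with a carry
        digits.append(d)
        n = (n - d) // 3
    return digits[::-1]             # most significant digit first

def from_tsd(digits):
    v = 0
    for d in digits:
        v = 3 * v + d
    return v

print(to_tsd(1976), from_tsd(to_tsd(1976)) == 1976)
```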
One-step trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
Methodology Investigation. Chamber Versus Environmental Deterioration Tests
1979-01-01
[OCR fragment of a report documentation page; most of the text is unrecoverable. Legible fragments mention a digital data-collection system recording in binary-coded decimal, and state that New Mexico State University and/or the University of Texas at El Paso would be contracted to assist in this investigation by performing detailed, time...]
A systematic investigation of the link between rational number processing and algebra ability.
Hurst, Michelle; Cordes, Sara
2018-02-01
Recent research suggests that fraction understanding is predictive of algebra ability; however, the relative contributions of various aspects of rational number knowledge are unclear. Furthermore, whether this relationship is notation-dependent or rather relies upon a general understanding of rational numbers (independent of notation) is an open question. In this study, college students completed a rational number magnitude task, procedural arithmetic tasks in fraction and decimal notation, and an algebra assessment. Using these tasks, we measured three different aspects of rational number ability in both fraction and decimal notation: (1) acuity of underlying magnitude representations, (2) fluency with which symbols are mapped to the underlying magnitudes, and (3) fluency with arithmetic procedures. Analyses reveal that when looking at the measures of magnitude understanding, the relationship between adults' rational number magnitude performance and algebra ability is dependent upon notation. However, once performance on arithmetic measures is included in the relationship, individual measures of magnitude understanding are no longer unique predictors of algebra performance. Furthermore, when including all measures simultaneously, results revealed that arithmetic fluency in both fraction and decimal notation each uniquely predicted algebra ability. Findings are the first to demonstrate a relationship between rational number understanding and algebra ability in adults while providing a clearer picture of the nature of this relationship. © 2017 The British Psychological Society.
Evolving institutional and policy frameworks to support adaptation strategies
Dave Cleaves
2014-01-01
Given the consequences and opportunities of the Anthropocene, what is our underlying theory or vision of successful adaptation? This essay discusses the building blocks of this theory and how we will translate it into guiding principles for management and policy.
NASA's Space Launch System: Moving Toward the Launch Pad
NASA Technical Reports Server (NTRS)
Creech, Stephen D.; May, Todd
2013-01-01
The National Aeronautics and Space Administration's (NASA's) Space Launch System (SLS) Program, managed at the Marshall Space Flight Center, is making progress toward delivering a new capability for human space flight and scientific missions beyond Earth orbit. Developed with the goals of safety, affordability, and sustainability in mind, the SLS rocket will launch the Orion Multi-Purpose Crew Vehicle (MPCV), equipment, supplies, and major science missions for exploration and discovery. Supporting Orion's first autonomous flight to lunar orbit and back in 2017 and its first crewed flight in 2021, the SLS will evolve into the most powerful launch vehicle ever flown, via an upgrade approach that will provide building blocks for future space exploration and development. NASA is working to develop this new capability in an austere economic climate, a fact which has inspired the SLS team to find innovative solutions to the challenges of designing, developing, fielding, and operating the largest rocket in history. This paper summarizes the planned capabilities of the vehicle, the progress the SLS program has made in the 2 years since the Agency formally announced its architecture in September 2011, and the path the program is following to reach the launch pad in 2017 and then to evolve the 70 metric ton (t) initial lift capability to a 130-t lift capability. The paper explains how, to meet the challenge of a flat funding curve, an architecture was chosen which combines the use and enhancement of legacy systems and technology with strategic new development projects that will evolve the capabilities of the launch vehicle. This approach reduces the time and cost of delivering the initial 70-t Block 1 vehicle and reduces the number of parallel development investments required to deliver the evolved version of the vehicle. The paper outlines the milestones the program has already reached, from developmental milestones such as the manufacture of the first flight hardware and the record-breaking testing of the J-2X engine, to life-cycle milestones such as the vehicle's Preliminary Design Review. The paper also discusses the remaining challenges in both delivering the 70-t vehicle and evolving its capabilities to the 130-t vehicle, and how the program plans to accomplish these goals. As this paper explains, SLS is making measurable progress toward becoming a global infrastructure asset for robotic and human scouts of all nations by harnessing business and technological innovations to deliver sustainable solutions for space exploration.
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
An Approach for On-Board Software Building Blocks Cooperation and Interfaces Definition
NASA Astrophysics Data System (ADS)
Pascucci, Dario; Campolo, Giovanni; Candia, Sante; Lisio, Giovanni
2010-08-01
This paper provides insight into the avionics SW architecture developed by Thales Alenia Space Italy (TAS-I) to structure the OBSW as a set of self-standing and re-usable building blocks. We first describe the underlying framework for building-block cooperation, which is based on ECSS-E-70 packet forwarding (for service requests to a building block) and standard parameter exchange for data communication. We then discuss the high flexibility and scalability of the resulting architecture, reporting as an example an implementation of the Failure Detection, Isolation and Recovery (FDIR) function which exploits the proposed architecture. The presented approach evolves from the avionics SW architecture developed in the scope of the PRIMA project (Multi-Purpose Italian Re-configurable Platform) and has been adopted for the Sentinel-1 Avionic Software (ASW).
Evolving regulatory paradigm for proarrhythmic risk assessment for new drugs.
Vicente, Jose; Stockbridge, Norman; Strauss, David G
Fourteen drugs were removed from the market worldwide because of their potential to cause torsade de pointes (torsade), a potentially fatal ventricular arrhythmia. The observation that most drugs that cause torsade block the potassium channel encoded by the human ether-à-go-go related gene (hERG) and prolong the heart rate corrected QT interval (QTc) on the ECG led to a focus on screening new drugs for their potential to block the hERG potassium channel and prolong QTc. This has been a successful strategy for keeping torsadogenic drugs off the market, but it has resulted in drugs being dropped from development, sometimes inappropriately. This is because not all drugs that block the hERG potassium channel and prolong QTc cause torsade, sometimes because they also block other channels. The regulatory paradigm is evolving to improve proarrhythmic risk prediction. ECG studies can now use exposure-response modeling for assessing the effect of a drug on QTc in small-sample-size first-in-human studies. Furthermore, the Comprehensive in vitro Proarrhythmia Assay (CiPA) initiative is developing and validating a new in vitro paradigm for cardiac safety evaluation of new drugs that provides a more accurate and comprehensive mechanism-based assessment of proarrhythmic potential. Under CiPA, the prediction of proarrhythmic potential will come from in vitro ion channel assessments coupled with an in silico model of the human ventricular myocyte. The preclinical assessment will be checked against human phase 1 ECG data to determine whether there are unexpected ion channel effects in humans compared to preclinical ion channel data. While validation work is ongoing, the heart rate corrected J-Tpeak interval is likely to be assessed under CiPA to detect inward-current block in the presence of hERG potassium channel block. Copyright © 2016 Elsevier Inc. All rights reserved.
1988-09-01
[OCR fragment of a database dictionary listing field name, type, width, decimals, and comments. Recoverable fields include a 15-character city name, a 2-character two-letter state abbreviation, a 10-character five- or nine-digit ZIP code, a 15-character phone number, and PMSCODE (2 characters, the third and fourth digits of the PAS code); the remainder is report-library and print-options boilerplate.]
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computation time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
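The convolution theorem underlying the method can be verified in a few lines. This is a CPU-only NumPy/SciPy sketch standing in for the paper's CUDA implementation; the array sizes are arbitrary.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
a, b = rng.random((8, 8, 8)), rng.random((5, 5, 5))   # tiny stand-ins for large images

# full linear convolution via the convolution theorem: pad, multiply spectra, invert
s = [m + n - 1 for m, n in zip(a.shape, b.shape)]
c_fft = np.fft.irfftn(np.fft.rfftn(a, s) * np.fft.rfftn(b, s), s)

c_ref = signal.convolve(a, b, mode='full', method='direct')
print(np.max(np.abs(c_fft - c_ref)))   # agreement to roundoff, ~1e-13
```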
Scientific life should be measured in seven year units.
Charlton, Bruce G
2006-01-01
Traditional wisdom and empirical observation unite in recommending a 7 year unit for measuring human life - including individual and institutional science. But, because of astronomy and the decimal system, things tend to be measured in years, five-year periods or decades. A year is too short, while a decade is too long, to measure the trends and transitions of individual or institutional life; and the half decade, such as the 'five year plan' beloved by politicians and bureaucrats, seems too short. Therefore, seven years should become the standard unit for tracking trends and measuring attainment. Precedents for using a seven year unit include the notorious Jesuit saying: 'Give me the child until he is seven, and I will show you the man'; and the 'ninth commandment' of Leo Szilard: 'Do your work for six years; but in the seventh, go into solitude or among strangers, so that the memory of your friends does not prevent you from being what you have become'. In a scientific career, seven years is approximately the time spent at high school, the time taken for a traditional basic scientific training of first degree and doctorate, and the period after the doctorate spent building the knowledge to become an expert specialist. There seems to be enough anecdotal evidence to support the idea that we should reconsider the universal but unreflective use of decimal units in planning and evaluation. For instance, seven year fellowships and program grants might replace the current five year versions. A new - and previously unconsidered - field of research beckons.
Controlling Contagion Processes in Activity Driven Networks
NASA Astrophysics Data System (ADS)
Liu, Suyu; Perra, Nicola; Karsai, Márton; Vespignani, Alessandro
2014-03-01
The vast majority of strategies aimed at controlling contagion processes on networks consider the connectivity pattern of the system as either quenched or annealed. However, in the real world, many networks are highly dynamical and evolve in time concurrently with the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for a class of time-varying networks, namely activity-driven networks. We develop a block-variable mean-field approach that allows the derivation of the equations describing the coevolution of the contagion process and the network dynamics. We derive the critical immunization threshold and assess the effectiveness of three different control strategies. Finally, we validate the theoretical picture by numerically simulating the spreading process and control strategies in both synthetic networks and a large-scale, real-world mobile telephone call data set.
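A minimal simulation of a contagion process coevolving with an activity-driven network, in the spirit of the model described (not the authors' code; the activity distribution, transmissibility, and recovery rate are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
Nn, m, steps = 2000, 3, 400
lam, mu = 0.20, 0.01                 # per-contact transmission, per-step recovery

# per-step activation probabilities from a truncated power law (exponent 2.1)
gamma, eps = 2.1, 1e-3
u = rng.random(Nn)
a = (eps**(1 - gamma) + u * (1 - eps**(1 - gamma)))**(1 / (1 - gamma))

infected = rng.random(Nn) < 0.01     # seed 1% of nodes (SIS dynamics)
for _ in range(steps):
    infected &= rng.random(Nn) >= mu                 # recoveries
    for i in np.flatnonzero(rng.random(Nn) < a):     # active nodes fire m contacts
        for j in rng.integers(0, Nn, m):
            if infected[i] != infected[j] and rng.random() < lam:
                infected[i] = infected[j] = True
    # the instantaneous network is discarded and redrawn each step
print("stationary prevalence:", infected.mean())
```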
BioBlocks: Programming Protocols in Biology Made Easier.
Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso
2017-07-21
The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.
A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series
NASA Astrophysics Data System (ADS)
Ghanbarzadeh, Mitra; Aminghafari, Mina
2016-02-01
Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters that are difficult to determine and to whose values the results are very sensitive. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level and correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and its accounting for the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.
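Basic SSA, the building block of the proposed hybrid, can be sketched as follows. This shows only the embed-SVD-reconstruct step, not the paper's NDW/wavelet-regression extensions; the window length and test series are assumptions.

```python
import numpy as np

def ssa_components(x, L):
    """Elementary reconstructed components of basic SSA (embed, SVD, diagonal-average)."""
    n, K = len(x), len(x) - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        rec, cnt = np.zeros(n), np.zeros(n)
        for i in range(L):                # anti-diagonal averaging back to a series
            rec[i:i + K] += Xk[i]
            cnt[i:i + K] += 1
        comps.append(rec / cnt)
    return comps

t = np.linspace(0, 8 * np.pi, 200)
x = np.sin(t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
comps = ssa_components(x, L=30)           # leading components carry the oscillation
print(np.allclose(np.sum(comps, axis=0), x))   # components sum back to the series
```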
Dry-heat Resistance of Bacillus Subtilis Var. Niger Spores on Mated Surfaces
NASA Technical Reports Server (NTRS)
Simko, G. J.; Devlin, J. D.; Wardle, M. D.
1971-01-01
Bacillus subtilis var. niger spores were placed on the surfaces of test coupons manufactured from typical spacecraft materials including stainless steel, magnesium, titanium, and aluminum. These coupons were then juxtaposed at the inoculated surfaces and subjected to test pressures of 0, 1000, 5000, and 10,000 psi. Tests were conducted in ambient, nitrogen, and helium atmospheres. While under the test pressure condition, the spores were exposed to 125 C for intervals of 5, 10, 20, 50, or 80 min. Survivor data were subjected to a linear regression analysis that calculated decimal reduction times.
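The regression used to obtain decimal reduction times is the slope of log10 survivors versus exposure time; a sketch with hypothetical survivor counts (not the study's data):

```python
import numpy as np

# hypothetical survivor counts N(t) at a fixed exposure temperature
t = np.array([0., 5., 10., 20., 50., 80.])        # minutes
N = np.array([1e6, 6.3e5, 4.0e5, 1.6e5, 1.0e4, 6.3e2])

slope, _ = np.polyfit(t, np.log10(N), 1)           # log-linear survivor curve
D = -1.0 / slope                                   # decimal reduction time (min per log10)
print(round(D, 1))                                 # 25.0 min for these made-up data
```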
1988-12-02
[OCR fragment of a report documentation page and reference list. Recoverable items: a reference to 'An Emulator for Performance Evaluation', Communications of the ACM 23, 2 (Feb. 1980), 71-80; N. Wirth, 'Microprocessor Architectures: A Comparison Based on Code Generation'; a machine description (byte-addressed, 16-bit word); and examples of number-literal notation for decimal, octal (trailing 'B'), and hexadecimal (trailing 'H') values.]
Early chemo-dynamical evolution of dwarf galaxies deduced from enrichment of r-process elements
NASA Astrophysics Data System (ADS)
Hirai, Yutaka; Ishimaru, Yuhri; Saitoh, Takayuki R.; Fujii, Michiko S.; Hidaka, Jun; Kajino, Toshitaka
2017-04-01
The abundance of elements synthesized by the rapid neutron-capture process (r-process elements) in extremely metal-poor (EMP) stars in the Local Group galaxies gives us clues to clarify the early evolutionary history of the Milky Way halo. The Local Group dwarf galaxies are expected to have evolved similarly to the building blocks of the Milky Way halo. However, how the chemo-dynamical evolution of the building blocks affects the abundance of r-process elements is not yet clear. In this paper, we perform a series of simulations using dwarf galaxy models with various dynamical times and total masses, which determine star formation histories. We find that galaxies with dynamical times longer than 100 Myr have star formation rates less than 10^-3 M⊙ yr^-1 and slowly enrich metals in their early phase. These galaxies can explain the observed large scatter of r-process abundances in EMP stars in the Milky Way halo regardless of their total mass. On the other hand, the first neutron star merger appears at a higher metallicity in galaxies with a dynamical time shorter than typical neutron star merger times. The scatter in r-process elements mainly comes from the inhomogeneity of the metals in the interstellar medium, whereas the scatter in α-elements is mostly due to differences in the yield of each supernova. Our results demonstrate that future observations of r-process elements in EMP stars will be able to constrain the early chemo-dynamical evolution of the Local Group galaxies.
ERIC Educational Resources Information Center
Lyon, Betty Clayton
1990-01-01
One method of making magic squares using a prolongated square is illustrated. Discussed are third-order magic squares, fractional magic squares, fifth-order magic squares, decimal magic squares, and even magic squares. (CW)
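The article's prolongated-square method is not reproduced in the abstract; as a generic illustration of magic-square construction, here is the classic Siamese method for odd orders (a standard technique, not necessarily the one discussed):

```python
def magic_square(n):
    """Classic Siamese construction for odd n."""
    assert n % 2 == 1
    sq = [[0] * n for _ in range(n)]
    i, j = 0, n // 2                        # start in the middle of the top row
    for k in range(1, n * n + 1):
        sq[i][j] = k
        ni, nj = (i - 1) % n, (j + 1) % n   # move up and to the right, wrapping
        if sq[ni][nj]:                      # if occupied, drop down one row instead
            ni, nj = (i + 1) % n, j
        i, j = ni, nj
    return sq

m = magic_square(5)
print([sum(row) for row in m])              # every row sums to the magic constant 65
```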
Usefulness of Pulse Oximeter That Can Measure SpO2 to One Digit After Decimal Point.
Yamamoto, Akihiro; Burioka, Naoto; Eto, Aritoshi; Amisaki, Takashi; Shimizu, Eiji
2017-06-01
Pulse oximeters are used to noninvasively measure oxygen saturation in arterial blood (SaO2). Although arterial oxygen saturation measured by pulse oximeter (SpO2) is usually indicated in 1% increments, the value of SaO2 from arterial blood gas analysis is not an integer. We have developed a new pulse oximeter that can measure SpO2 to one digit after the decimal point. The values of SpO2 from the newly developed pulse oximeter are highly correlated with the values of SaO2 from arterial blood gas analysis (SpO2 = 0.899 × SaO2 + 9.944, r = 0.887, P < 0.0001). This device may help improve the evaluation of pathological conditions in patients.
NASA Astrophysics Data System (ADS)
Ioannidis, Andronique; Facci, John S.; Abkowitz, Martin A.
1998-08-01
Injection efficiency from evaporated Au contacts on a molecularly doped polymer (MDP) system has previously been observed to evolve from blocking to ohmic over time. In the present article this contact-forming phenomenon is analyzed in detail. The initially blocking nature of the Au contact contrasts with expectations from the relative work functions of Au and of the polymer, which suggest Au should inject holes efficiently. It is also in apparent contrast to a differently prepared interface of the same materials. The phenomenon is not unique to this interface, having been confirmed also for evaporated Ag and mechanically made liquid Hg contacts on the same MDP. The MDP is a disordered solid-state solution of electroactive triarylamine hole-transporting TPD molecules in a polycarbonate matrix. The trap-free hole-transport MDP provides a model system for the study of metal/polymer interfaces by enabling the use of a recently developed technique that gives a quantitative measure of contact injection efficiency. The technique combines field-dependent steady-state injection current measurements at a contact under test with time-of-flight (TOF) mobility measurements made on the same sample. In the present case, MDP films were prepared with two top vapor-deposited contacts, one of Au (test contact) and one of Al (for TOF), and a bottom carbon-loaded polymer electrode which is known to be ohmic for hole injection. The samples were aged at various temperatures below the glass transition of the MDP (85 °C), and the evolution of the current-versus-field and capacitance-versus-frequency behavior is followed in detail over time and analyzed. Control measurements ensure that the evolution of the electrical properties is due to the Au/polymer interface and not the bulk. All evaporated Au contacts eventually achieved ohmic injection. The evaporated Au/MDP interface was also investigated by transmission electron microscopy as a function of time and showed no evidence of Au interdiffusion in the MDP layer, remaining abrupt to within ~10 Å over the course of the evolution in injection efficiency. Mechanisms related to Au penetration into the MDP are therefore unlikely. Rapid-sequence data acquisition enabled the detection of two main processes in the injection evolution: the evolving injection efficiency is very well fit by two exponentials, enabling the characterization of the time and temperature dependence of the evolution processes.
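The two-exponential fit mentioned at the end can be sketched with scipy.optimize.curve_fit. The functional form and all numbers below are assumptions for a synthetic demonstration, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, y_inf, A1, tau1, A2, tau2):
    """Injection efficiency rising toward its ohmic limit via two relaxation processes."""
    return y_inf - A1 * np.exp(-t / tau1) - A2 * np.exp(-t / tau2)

t = np.linspace(0, 300, 60)                             # hypothetical aging time (h)
y = two_exp(t, 1.0, 0.55, 12.0, 0.40, 120.0)
y += np.random.default_rng(3).normal(0, 0.01, t.size)   # synthetic measurement noise

p, _ = curve_fit(two_exp, t, y, p0=(1, 0.5, 10, 0.5, 100))
print(p)   # recovered (y_inf, A1, tau1, A2, tau2)
```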
IMS/Satellite Situation Center report. Predicted orbit plots for Hawkeye 1, 1976
NASA Technical Reports Server (NTRS)
1975-01-01
The predicted orbit plots are shown in three projections. The time period covered by each set of projections is 2 days 1 hour, corresponding approximately to the period of Hawkeye 1. The three coordinate systems used are the Geocentric Solar Ecliptic system (GSE), the Geocentric Solar Magnetospheric system (GSM), and the Solar Magnetic system (SM). For each of the three projections, time ticks and codes are given on the satellite trajectories. The codes are interpreted in the table at the base of each plot. Time is given in the table as year/day/decimal hour. The total time covered by each plot is shown at the bottom of each table, and an additional variable is given in the table for each time tick. For the GSM and SM projection this variable is the geocentric distance to the satellite in earth radii, and for the GSE projection the variable is satellite ecliptic latitude in degrees.
NASA Astrophysics Data System (ADS)
Coquillat, Sylvain; Defer, Eric; Lambert, Dominique; Martin, Jean-Michel; Pinty, Jean-Pierre; Pont, Véronique; Prieur, Serge
2015-04-01
Located in the western Mediterranean basin, Corsica is strategically positioned for the atmospheric studies addressed by the MISTRALS/HyMeX and MISTRALS/ChArMEx programs. The implementation of the CORSiCA atmospheric observatory project (supported by the Collectivité Territoriale de Corse via CPER/FEDER funds) was an opportunity to strengthen the observation of convective events causing heavy rainfall and flash floods by acquiring a total lightning detection system suited to storm tracking at the regional scale. This detection system, called SAETTA (Suivi de l'Activité Electrique Tridimensionnelle Totale de l'Atmosphère), is a network of 12 LMA (Lightning Mapping Array) stations. Developed by New Mexico Tech (USA), the instrument observes lightning flashes in 3D and in real time, at high temporal and spatial resolution. It detects the radiation emitted by cloud discharges in the 60-66 MHz band, in a radius of about 300 km from the centre of the network, in passive and standalone mode (solar panel and battery). Each LMA station samples the signal at a high rate (80 microseconds), records data on an internal hard disk, and transmits a decimated signal in real time via the 3G phone network. The decimated data are received on a server that calculates the position of the detected sources by the time-of-arrival method and manages a quasi-real-time display on a website. The non-decimated data intended for research applications are recovered later in the field. Deployed in May and June 2014, SAETTA operated nominally from July 13 to October 20, 2014. It is to be definitively re-installed in spring 2015 after a hardware update. The operation of SAETTA is contractually scheduled until the end of 2019, but it is planned to continue well beyond that to obtain longer-term observations addressing issues related to climatic trends. SAETTA has great scientific potential across a broad range of topics: physics of the discharge; monitoring and simulation of storm systems; climatology of convection in the western Mediterranean; production of nitrogen oxides by lightning; influence of pollution and aerosols on electrical activity; and synergy with operational lightning networks (EUCLID, ATDnet, LINET, ZEUS) and radar observations (ARAMIS). SAETTA should also become a validation tool for space-based observation of lightning (e.g. the TARANIS mission and the optical flash sensor on Meteosat Third Generation) and for field campaigns. Acknowledgements are addressed to the main CORSiCA-SAETTA sponsors (Collectivité Territoriale de Corse through the Fonds Européen de Développement Régional of the European Operational Program 2007-2013 and the Contrat de Plan Etat Région; HyMeX/MISTRALS; Observatoire Midi-Pyrénées; Laboratoire d'Aérologie) and to the many individuals and regional institutions in Corsica that host the 12 stations of the network or helped us find sites.
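The time-of-arrival method used to locate VHF sources can be posed as a small nonlinear least-squares problem. Below is a toy Python example with hypothetical station coordinates and a noise-free synthetic source; it is a sketch of the principle, not SAETTA's actual processing chain.

```python
import numpy as np
from scipy.optimize import least_squares

c = 2.998e8                                    # propagation speed (m/s)
stations = np.array([[0., 0., 0.], [40e3, 0., 0.], [0., 40e3, 0.],
                     [20e3, 20e3, 1e3], [40e3, 40e3, 0.5e3]])   # hypothetical sites (m)
src, t0 = np.array([15e3, 25e3, 8e3]), 1.2e-3  # "true" VHF source and emission time
t_arr = t0 + np.linalg.norm(stations - src, axis=1) / c

def residuals(p):
    # unknowns: source position p[:3] and emission time p[3]
    return t_arr - (p[3] + np.linalg.norm(stations - p[:3], axis=1) / c)

fit = least_squares(residuals, x0=[1e3, 1e3, 5e3, 0.0])
print(fit.x[:3])   # recovers the source to numerical precision in this noise-free demo
```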
Prescribing family criticism as a paradoxical intervention.
Bergman, J S
1983-12-01
Two case studies are presented in which parental criticism of children was prescribed in two fused families, which were fused in part because of the mutual and intense criticism between the parents and children. In both cases, prescribing the criticism resulted in blocking the parental criticism, thus forcing the family members to interact in new and different ways. Blocking the criticism permitted the individuals and the family to evolve to the next developmental stages in the family life cycle. Discussion details why the prescription of family criticism led to shifts in the family system.
Evolution of Self-Organized Task Specialization in Robot Swarms
Ferrante, Eliseo; Turgut, Ali Emre; Duéñez-Guzmán, Edgar; Dorigo, Marco; Wenseleers, Tom
2015-01-01
Division of labor is ubiquitous in biological systems, as evidenced by various forms of complex task specialization observed in both animal societies and multicellular organisms. Although clearly adaptive, the way in which division of labor first evolved remains enigmatic, as it requires the simultaneous co-occurrence of several complex traits to achieve the required degree of coordination. Recently, evolutionary swarm robotics has emerged as an excellent test bed to study the evolution of coordinated group-level behavior. Here we use this framework for the first time to study the evolutionary origin of behavioral task specialization among groups of identical robots. The scenario we study involves an advanced form of division of labor, common in insect societies and known as “task partitioning”, whereby two sets of tasks have to be carried out in sequence by different individuals. Our results show that task partitioning is favored whenever the environment has features that, when exploited, reduce switching costs and increase the net efficiency of the group, and that an optimal mix of task specialists is achieved most readily when the behavioral repertoires aimed at carrying out the different subtasks are available as pre-adapted building blocks. Nevertheless, we also show for the first time that self-organized task specialization could be evolved entirely from scratch, starting only from basic, low-level behavioral primitives, using a nature-inspired evolutionary method known as Grammatical Evolution. Remarkably, division of labor was achieved merely by selecting on overall group performance, and without providing any prior information on how the global object retrieval task was best divided into smaller subtasks. We discuss the potential of our method for engineering adaptively behaving robot swarms and interpret our results in relation to the likely path that nature took to evolve complex sociality and task specialization. PMID:26247819
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it requires neither the computation of model gradients nor "good" initial models. The multi-point search of a genetic algorithm makes it easier to find a globally optimal solution while avoiding entrapment in local extrema. As with other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search process during evolution. Improved operators and a proper population size clearly promote convergence. Nevertheless, not all genetic operations perform perfectly under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with decimal encoding; on the other hand, the mutation scheme in a decimal encoding system creates new genes larger in scope than those in binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. The method is based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with higher probability through the mutation operator and improves genetic algorithms for resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, owing to the randomness of the genetic algorithm procedure. These advantages are demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
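A compact sketch of the hybrid-encoding idea (mutation on the decimal-coded parameters, multi-point crossover on their binary coding), applied to a toy minimization rather than potential-field inversion. Population size, bit width, and rates are assumptions, and the fixed population stands in for the paper's dynamic sizing.

```python
import random

BITS, NPAR, LO, HI = 16, 3, -5.0, 5.0
random.seed(1)

def fitness(p):                       # toy objective standing in for an inversion misfit
    return sum(v * v for v in p)

def encode(p):                        # decimal parameters -> concatenated binary code
    s = (2**BITS - 1) / (HI - LO)
    return ''.join(format(int(round((v - LO) * s)), f'0{BITS}b') for v in p)

def decode(bits):
    s = (HI - LO) / (2**BITS - 1)
    return [LO + int(bits[i*BITS:(i+1)*BITS], 2) * s for i in range(NPAR)]

def crossover(b1, b2):                # multi-point crossover in the binary code
    i, j = sorted(random.sample(range(1, len(b1)), 2))
    return b1[:i] + b2[i:j] + b1[j:], b2[:i] + b1[i:j] + b2[j:]

def mutate(p, sigma=0.1):             # mutation in the decimal code
    return [min(HI, max(LO, v + random.gauss(0, sigma))) for v in p]

pop = [[random.uniform(LO, HI) for _ in range(NPAR)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness)
    nxt = pop[:6]                     # elitism
    while len(nxt) < 40:
        c1, c2 = crossover(*map(encode, random.sample(pop[:20], 2)))
        nxt += [mutate(decode(c1)), mutate(decode(c2))]
    pop = nxt[:40]
print(min(map(fitness, pop)))         # approaches 0
```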
Anaesthetic considerations for pectus repair surgery
Patvardhan, Chinmay
2016-01-01
Pectus is one of the most common congenital abnormalities for which patients present for thoracic surgery. In recent years, innovative minimally invasive techniques involving video-assisted thoracoscopy for pectus repair have become the norm. Similarly, anaesthetic techniques have evolved to include the principles of enhanced recovery, multimodal analgesia and innovative ultrasound-guided neuraxial and nerve blocks. An adequate anaesthetic set-up and monitoring, including real-time intraoperative monitoring with transesophageal echocardiography (TOE), has enabled the anaesthetist to enhance patient safety by providing instantaneous imaging of cardiac compression and complications during surgery. In this review article we aim to provide a non-systematic review and the institutional experience of our anaesthetic strategy for effective peri-operative care in this patient group. PMID:29078504
NASA Astrophysics Data System (ADS)
Annila, Arto
2016-02-01
The principle of increasing entropy is derived from the statistical physics of open systems, assuming that quanta of action, as indivisible basic building blocks, embody everything. According to this tenet, all systems evolve from one state to another either by acquiring quanta from their surroundings or by discarding quanta to the surroundings in order to attain energetic balance in least time. These natural processes result in ubiquitous scale-free patterns: skewed distributions that accumulate in a sigmoid manner and hence span log-log scales mostly as straight lines. Moreover, the equation for least-time motions reveals that evolution is by nature a non-deterministic process. Although the insight into thermodynamics obtained from the notion of quanta in motion yields nothing new, it accentuates that contemporary comprehension is impaired when modeling evolution as a computable process by imposing conservation of energy and thereby ignoring that quanta of action are the carriers of energy from the system to its surroundings.
Manufacture of Fior di Latte cheese by incorporation of probiotic lactobacilli.
Minervini, F; Siragusa, S; Faccia, M; Dal Bello, F; Gobbetti, M; De Angelis, M
2012-02-01
This work aimed to select heat-resistant probiotic lactobacilli to be added to Fior di Latte (high-moisture cow milk Mozzarella) cheese. First, 18 probiotic strains belonging to Lactobacillus casei, Lactobacillus delbrueckii ssp. bulgaricus, Lactobacillus paracasei, Lactobacillus plantarum, Lactobacillus rhamnosus, and Lactobacillus reuteri were screened. Resistance to heating (65 or 55°C for 10 min) varied markedly between strains. Adaptation at 42°C for 10 min increased the heat resistance at 55°C for 10 min of all probiotic lactobacilli. Heat-adapted L. delbrueckii ssp. bulgaricus SP5 (decimal reduction time at 55°C of 227.4 min) and L. paracasei BGP1 (decimal reduction time at 55°C of 40.8 min) showed the highest survival under heat conditions that mimicked the stretching of the curd and were used for the manufacture of Fior di Latte cheese. Two technology options were chosen: chemical (addition of lactic acid to milk) or biological (Streptococcus thermophilus as starter culture) acidification, with or without addition of probiotics. As determined by random amplified polymorphic DNA-PCR and 16S rRNA gene analyses, the cell density of L. delbrueckii ssp. bulgaricus SP5 and L. paracasei BGP1 in chemically or biologically acidified Fior di Latte cheese was approximately 8.0 log10 cfu/g. Microbiological, compositional, biochemical, and sensory analyses (panel test by 30 untrained judges) showed that the use of L. delbrueckii ssp. bulgaricus SP5 and L. paracasei BGP1 enhanced flavor formation and shelf-life of Fior di Latte cheeses. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Mafart, P; Leguérinel, I; Couvert, O; Coroller, L
2010-08-01
The assessment and optimization of food heating processes require knowledge of the thermal resistance of target spores. Although the concept of spore resistance may seem simple, the establishment of a reliable quantification system for characterizing the heat resistance of spores has proven far more complex than imagined by early researchers. This paper points out the main difficulties encountered by reviewing the historical works on the subject. During an early period, the concept of individual spore resistance had not yet been considered and the resistance of a strain of spore-forming bacterium was related to a global population regarded as alive or dead. A second period was opened by the introduction of the well-known D parameter (decimal reduction time) associated with the previously introduced z-concept. The present period has introduced three new sources of complexity: consideration of non log-linear survival curves, consideration of environmental factors other than temperature, and awareness of the variability of resistance parameters. The occurrence of non log-linear survival curves makes spore resistance dependent on heating time. Consequently, spore resistance characterisation requires at least two parameters. While early resistance models took only heating temperature into account, new models consider other environmental factors such as pH and water activity ("horizontal extension"). Similarly the new generation of models also considers certain environmental factors of the recovery medium for quantifying "apparent heat resistance" ("vertical extension"). Because the conventional F-value is no longer additive in cases of non log-linear survival curves, the decimal reduction ratio should be preferred for assessing the efficiency of a heating process. Copyright 2010 Elsevier Ltd. All rights reserved.
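The z-concept mentioned above quantifies how D changes with temperature: z is the temperature increase producing a tenfold drop in D, i.e. the negative reciprocal slope of log10 D versus T. A sketch with hypothetical D-values:

```python
import numpy as np

# hypothetical D-values (min) at three heating temperatures (deg C)
T = np.array([110.0, 115.0, 121.1])
D = np.array([30.0, 9.5, 3.0])

slope, _ = np.polyfit(T, np.log10(D), 1)
z = -1.0 / slope        # temperature rise giving a tenfold drop in D
print(round(z, 1))      # ~11 deg C for these made-up values
```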
NASA Astrophysics Data System (ADS)
Singh, Man
Viscosities (η, N s m-2) and surface tensions (γ, N m-1) of methanol, ethanol, glycerol, ethyl acetate, n-hexane, diethyl ether, chloroform, benzene, carbon tetrachloride (CCl4), tetrahydrofuran (THF), dimethylformamide (DMF), dimethylsulfoxide (DMSO), acetonitrile, and formic acid have been measured with survismeter and compared with the data obtained by Ubbehold viscometer and stalagmometer, respectively. The ±1.1 × 10-5 N s m-2 and ±1.3 × 10-6 N m-1 deviations are noted in the data, in fact literature data of surface tension and viscosity are available to 2nd and 3rd place of decimals, respectively, while the survismeter measures them to 3rd and 4th place of decimals, respectively. The survismeter is 2-in-1 for viscosity and surface tension measurements together with high accuracies several times better than those of the separately measured data. Viscosities and surface tensions of aqueous DMSO, THF, DMF, and acetonitrile from 0.01 to 0.20 mol kg-1 and mannitol from 0.005 to 0.02 mol kg-1 have been measured with survismeter with ±1.2 × 10-5 N s m-2 and ±1.3 × 10-6 N m-1 deviations, respectively. The data are used for friccohesity and dipole moment determination, the lower viscosities, surface tension, and friccohesity values are noted for mannitol as compared to DMSO, THF, DMF, and acetonitrile solutions. The weaker molecular interactions are noted for mannitol. As compared to viscometer and stalagmometer individually, it is inexpensive and minimizes 2/3rd of consumables, human efforts, and infrastructure with 10 times better accuracies.
Pearce, Lindsay E.; Truong, H. Tuan; Crawford, Robert A.; Yates, Gary F.; Cavaignac, Sonia; de Lisle, Geoffrey W.
2001-01-01
A pilot-scale pasteurizer operating under validated turbulent flow (Reynolds number, 11,050) was used to study the heat sensitivity of Mycobacterium avium subsp. paratuberculosis added to raw milk. The ATCC 19698 type strain, ATCC 43015 (Linda, human isolate), and three bovine isolates were heated in raw whole milk for 15 s at 63, 66, 69, and 72°C in duplicate trials. No strains survived at 72°C for 15 s; and only one strain survived at 69°C. Means of pooled D values (decimal reduction times) at 63 and 66°C were 15.0 ± 2.8 s (95% confidence interval) and 5.9 ± 0.7 s (95% confidence interval), respectively. The mean extrapolated D72°C was <2.03 s. This was equivalent to a >7 log10 kill at 72°C for 15 s (95% confidence interval). The mean Z value (degrees required for the decimal reduction time to traverse one log cycle) was 8.6°C. These five strains showed similar survival whether recovery was on Herrold's egg yolk medium containing mycobactin or by a radiometric culture method (BACTEC). Milk was inoculated with fresh fecal material from a high-level fecal shedder with clinical Johne's disease. After heating at 72°C for 15 s, the minimum M. avium subsp. paratuberculosis kill was >4 log10. Properly maintained and operated equipment should ensure the absence of viable M. avium subsp. paratuberculosis in retail milk and other pasteurized dairy products. An additional safeguard is the widespread commercial practice of pasteurizing 1.5 to 2° above 72°C. PMID:11525992
NASA Astrophysics Data System (ADS)
Alshaery, Aisha; Ebaid, Abdelhalim
2017-11-01
Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we will solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrated a rapid convergence of the obtained approximate solutions which are displayed in tables and graphs. Also, it has been shown in this paper that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
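For comparison, Kepler's equation M = E - e sin E is also routinely solved by Newton iteration; a sketch of that standard method (not the paper's ADM series), with assumed textbook values for Earth's orbital elements:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Newton iteration for Kepler's equation M = E - e*sin(E)."""
    E = M if e < 0.8 else math.pi          # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Newton iteration did not converge")

a_km, e = 1.496e8, 0.0167                  # assumed semi-major axis (km) and eccentricity
for frac in (0.0, 0.25, 0.5):              # fractions of one revolution
    E = solve_kepler(2 * math.pi * frac, e)
    print(frac, a_km * (1 - e * math.cos(E)))   # r runs from ~1.47e8 km (perihelion) to ~1.52e8 km (aphelion)
```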
Iconicity and the Emergence of Combinatorial Structure in Language
ERIC Educational Resources Information Center
Verhoef, Tessa; Kirby, Simon; de Boer, Bart
2016-01-01
In language, recombination of a discrete set of meaningless building blocks forms an unlimited set of possible utterances. How such combinatorial structure emerged in the evolution of human language is increasingly being studied. It has been shown that it can emerge when languages culturally evolve and adapt to human cognitive biases. How the…
Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System
NASA Technical Reports Server (NTRS)
Crocker, Andy; Greene, William D.
2017-01-01
Goals of NASA's Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort are to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. The SLS Block 1 vehicle is being designed to carry 70 mT to LEO and uses two five-segment solid rocket boosters (SRBs) similar to the boosters that helped power the space shuttle to orbit. The evolved 130 mT payload class rocket requires an advanced booster with more thrust than any existing U.S. liquid- or solid-fueled boosters.
50 CFR 216.93 - Tracking and verification program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... canning company in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment... short tons to the fourth decimal, ocean area of capture (ETP, western Pacific, Indian, eastern and...
50 CFR 216.93 - Tracking and verification program.
Code of Federal Regulations, 2014 CFR
2014-10-01
... canning company in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment... short tons to the fourth decimal, ocean area of capture (ETP, western Pacific, Indian, eastern and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caignard, Gregory; Guerbois, Mathilde; Labernardiere, Jean-Louis
2007-11-25
Viruses have evolved various strategies to escape the antiviral activity of type I interferons (IFN-α/β). For measles virus, this function is carried by the polycistronic gene P that encodes, by an unusual editing strategy, the phosphoprotein P and the virulence factor V (MV-V). MV-V prevents STAT1 nuclear translocation by either sequestration or phosphorylation inhibition, thereby blocking the IFN-α/β pathway. We show that both the N- and C-terminal domains of MV-V (PNT and VCT) contribute to the inhibition of IFN-α/β signaling. Using the two-hybrid system and co-affinity purification experiments, we identified STAT1 and Jak1 as interactors of MV-V and demonstrate that MV-V can block the direct phosphorylation of STAT1 by Jak1. A deleterious mutation within the PNT domain of MV-V (Y110H) impaired its ability to interact and block STAT1 phosphorylation. Thus, MV-V interacts with at least two components of the IFN-α/β receptor complex to block downstream signaling.
Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network
NASA Astrophysics Data System (ADS)
Nasution, T. H.; Andayani, U.
2017-03-01
Coffee beans have characteristic appearances at each roast level, but some people cannot recognize the roast level by eye. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The normalized feature values become the input to the backpropagation classifier, which we use to recognize the roast level. The results showed that the proposed method is able to identify the roast level with an accuracy of 97.5%.
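Decimal scaling normalization divides each feature by the smallest power of ten that brings its magnitude below 1; a sketch with hypothetical GLCM feature values:

```python
import numpy as np

def decimal_scaling(x):
    """Divide each feature column by the smallest power of 10 making max |value| < 1."""
    j = np.ceil(np.log10(np.max(np.abs(x), axis=0)))   # assumes nonzero columns
    return x / 10.0**j

feats = np.array([[412.0, 0.83, 57.1],     # hypothetical GLCM feature vectors
                  [198.0, 0.41, 73.9]])
print(decimal_scaling(feats))               # e.g. 412 -> 0.412
```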
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
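For context, an M-th order comb (CIC) decimation stage has magnitude |H(f)| = |sin(pi f M) / (M sin(pi f))|, and the traditional Kaiser-Hamming sharpening the authors benchmark against applies the polynomial p(H) = 3H^2 - 2H^3 to flatten the passband droop and deepen the stopband. A short numpy sketch of that baseline; the paper's optimized compensated sharpening needs the derived polynomial coefficients, which are not reproduced here.

```python
import numpy as np

M = 16                                  # decimation factor
f = np.linspace(1e-6, 0.5, 2048)        # normalized frequency

# Magnitude response of a single comb (CIC) stage
H = np.abs(np.sin(np.pi * f * M) / (M * np.sin(np.pi * f)))

# Classic Kaiser-Hamming sharpening polynomial p(H) = 3H^2 - 2H^3:
# flattens the passband droop and deepens the stopband nulls
H_sharp = 3 * H**2 - 2 * H**3

droop = lambda h: -20 * np.log10(h[np.searchsorted(f, 0.25 / M)])
print(f"droop at f = 1/(4M): comb {droop(H):.2f} dB, "
      f"sharpened {droop(H_sharp):.2f} dB")
```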
Reasoning strategies with rational numbers revealed by eye tracking.
Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J
2017-07-01
Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.
NASA Astrophysics Data System (ADS)
Bailey, David H.; Frolov, Alexei M.
2003-12-01
Since the above paper was published we have received a suggestion from T K Rebane that our variational energy, -402.261 928 652 266 220 998 au, for the 3S(L = 0) state from table 4 (right-hand column) is wrong in the fourth and fifth decimal digits. Our original variational energies were E(2000) = -402.192 865 226 622 099 583 au and E(3000) = -402.192 865 226 622 099 838 au. Unfortunately, table 4 contains a simple typographic error. The first two digits after the decimal point (26) in the published energies must be removed. Then the results exactly coincide with the original energies. These digits (26) were left in table 4 from the original version, which also included the 2S(L = 0) states of the helium-muonic atoms. A similar typographic error was found in table 4 of another paper by A M Frolov (2001 J. Phys. B: At. Mol. Opt. Phys. 34 3813). The computed ground state energy for the ppµ muonic molecular ion was -0.494 386 820 248 934 546 94 mau. In table 4 of that paper the first figure '8' (fifth digit after the decimal point) was lost from the energy value presented in this table. We wish to thank T K Rebane of the Fock Physical Institute in St Petersburg for pointing out the misprint related to the helium(4)-muonic atom.
Koehl, Patrice; Poitevin, Frédéric; Navaza, Rafael; Delarue, Marc
2017-03-14
Understanding the dynamics of biomolecules is the key to understanding their biological activities. Computational methods ranging from all-atom molecular dynamics simulations to coarse-grained normal-mode analyses based on simplified elastic networks provide a general framework for studying these dynamics. Despite recent successes in studying very large systems with up to 100,000,000 atoms, those methods are currently limited to small- to medium-sized molecular systems due to computational limitations. One solution to circumvent these limitations is to reduce the size of the system under study. In this paper, we argue that coarse-graining, the standard approach to such size reduction, must define a hierarchy of models of decreasing sizes that are consistent with each other, i.e., such that each model contains the information of the dynamics of its predecessor. We propose a new method, Decimate, for generating such a hierarchy within the context of elastic networks for normal-mode analysis. This method is based on the concept of the renormalization group developed in statistical physics. We highlight the details of its implementation, with a special focus on its scalability to large systems of up to millions of atoms. We illustrate its application on two large systems, the capsid of a virus and the ribosome translation complex. We show that highly decimated representations of those systems, containing down to 1% of their original number of atoms, still capture their dynamics qualitatively and quantitatively. Decimate is available as an open-source resource.
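For elastic networks, removing nodes while preserving the harmonic response of the retained ones is classically a Schur complement of the stiffness (Hessian) matrix, which is the renormalization-style step underlying this kind of decimation. A sketch of that reduction on a toy spring chain; this is our illustration of the idea, not Decimate's implementation.

```python
import numpy as np

def decimate_hessian(K, keep):
    """Effective stiffness on the kept nodes: the Schur complement
    K_kk - K_kd @ inv(K_dd) @ K_dk, which preserves the harmonic
    response of the retained degrees of freedom."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(K.shape[0]), keep)
    K_kk = K[np.ix_(keep, keep)]
    K_kd = K[np.ix_(keep, drop)]
    K_dd = K[np.ix_(drop, drop)]
    return K_kk - K_kd @ np.linalg.solve(K_dd, K_kd.T)

# Toy 1D chain of unit springs (fixed ends); keep every other node
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K_eff = decimate_hessian(K, keep=[0, 2, 4])
print(np.round(K_eff, 3))
```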
2010-01-01
Background Sexual selection theory predicts that females, being the limiting sex, invest less in courtship signals than males. However, when chemical signals are involved it is often the female that initiates mating by producing stimuli that inform about sex and/or receptivity. This apparent contradiction has been discussed in the literature as 'the female pheromone fallacy'. Because the release of chemical stimuli may not have evolved to elicit the male's courtship response, whether these female stimuli represent signals remains an open question. Using techniques to visualise and block release of urine, we studied the role of urine signals during fighting and mating interactions of crayfish (Pacifastacus leniusculus). Test individuals were blindfolded to exclude visual disturbance from dye release and artificial urine introduction. Results Staged female-male pairings during the reproductive season often resulted in male mating attempts. Blocking female urine release in such pairings prevented any male courtship behaviour. Artificial introduction of female urine re-established male mating attempts. Urine visualisation showed that female urine release coincides with aggressive behaviours but not with female submissive behaviour in reproductive interactions as well as in intersexual and intrasexual fights. In reproductive interactions, females predominately released urine during precopulatory aggression; males subsequently released significantly less urine during mating than in fights. Conclusions Urine-blocking experiments demonstrate that female urine contains sex-specific components that elicit male mating behaviour. The coincidence of chemical signalling and aggressive behaviour in both females and males suggests that urine release has evolved as an aggressive signal in both sexes of crayfish. By limiting urine release to aggressive behaviours in reproductive interactions females challenge their potential mating partners at the same time as they trigger a sexual response. These double messages should favour stronger males that are able to overcome the resistance of the female. We conclude that the difference between the sexes in disclosing urine-borne information reflects their conflicting interests in reproduction. Males discontinue aggressive urine signalling in order to increase their chances of mating. Females resume urine signalling in connection with aggressive behaviour, potentially repelling low quality or sexually inactive males while favouring reproduction with high quality males. PMID:20353555
Evolutionary history predicts plant defense against an invasive pest.
Desurmont, Gaylord A; Donoghue, Michael J; Clement, Wendy L; Agrawal, Anurag A
2011-04-26
It has long been hypothesized that invasive pests may be facilitated by the evolutionary naïveté of their new hosts, but this prediction has never been examined in a phylogenetic framework. To address the hypothesis, we have been studying the invasive viburnum leaf beetle (Pyrrhalta viburni), which is decimating North American native species of Viburnum, a clade of worldwide importance as understory shrubs and ornamentals. In a phylogenetic field experiment using 16 species of Viburnum, we show that old-world Viburnum species that evolved in the presence of Pyrrhalta beetles mount a massive defensive wound response that crushes eggs of the pest insect; in contrast, naïve North American species that share no evolutionary history with Pyrrhalta beetles show a markedly lower response. This convergent continental difference in the defensive response of Viburnum spp. against insect oviposition contrasts with little difference in the quality of leaves for beetle larvae. Females show strong oviposition preferences that correspond with larval performance regardless of continental origin, which has facilitated colonization of susceptible North American species. Thus, although much attention has been paid to escape from enemies as a factor in the establishment and spread of nonnative organisms, the colonization of undefended resources seems to play a major role in the success of invasive species such as the viburnum leaf beetle.
The evolution of air resonance power efficiency in the violin and its ancestors
Nia, Hadi T.; Jain, Ankita D.; Liu, Yuming; Alam, Mohammad-Reza; Barnas, Roman; Makris, Nicholas C.
2015-01-01
The fact that acoustic radiation from a violin at air-cavity resonance is monopolar and can be determined by pure volume change is used to help explain related aspects of violin design evolution. By determining the acoustic conductance of arbitrarily shaped sound holes, it is found that air flow at the perimeter rather than the broader sound-hole area dominates acoustic conductance, and coupling between compressible air within the violin and its elastic structure lowers the Helmholtz resonance frequency from that found for a corresponding rigid instrument by roughly a semitone. As a result of the former, it is found that as sound-hole geometry of the violin's ancestors slowly evolved over centuries from simple circles to complex f-holes, the ratio of inefficient, acoustically inactive to total sound-hole area was decimated, roughly doubling air-resonance power efficiency. F-hole length then slowly increased by roughly 30% across two centuries in the renowned workshops of Amati, Stradivari and Guarneri, favouring instruments with higher air-resonance power, through a corresponding power increase of roughly 60%. By evolution-rate analysis, these changes are found to be consistent with mutations arising within the range of accidental replication fluctuations from craftsmanship limitations with subsequent selection favouring instruments with higher air-resonance power. PMID:25792964
Lessons from applied ecology: cancer control using an evolutionary double bind.
Gatenby, Robert A; Brown, Joel; Vincent, Thomas
2009-10-01
Because the metastatic cascade is largely governed by the ability of malignant cells to adapt and proliferate at the distant tissue site, we propose that disseminated cancers are analogous in many important ways to the evolutionary and ecological dynamics of exotic species. Although pests can be decimated through the application of chemical toxins, this strategy virtually never achieves robust control as evolution of resistant phenotypes typically permits population recovery to pretreatment levels. In general, biological strategies that introduce predators, parasitoids, or pathogens have achieved more durable control of pest populations even after emergence of resistant phenotypes. From this we propose that long term outcome from any treatment strategy for invasive pests, including cancer, is not limited by evolution of resistance, but rather by the phenotypic cost of that resistance. If a cancerous cell's adaptation to therapy is achieved by upregulating xenobiotic metabolism or a redundant signaling pathway, the required investment in resources is small, and the original malignant phenotype remains essentially intact. As a result, the cancer cells' initial high level of fitness is little changed and unconstrained proliferation will resume once resistance evolves. Robust population control is possible if resistance to therapy requires a substantial and costly phenotypic adaptation that also significantly reduces the organism's fitness in its original niche: an evolutionary double bind.
NASA Astrophysics Data System (ADS)
Riddick, Thomas; Brovkin, Victor; Hagemann, Stefan; Mikolajewicz, Uwe
2017-04-01
The continually evolving large ice sheets present in the Northern Hemisphere during the last glacial cycle caused significant changes to river pathways, both by directly blocking rivers and through glacial isostatic adjustment. These river pathway changes are believed to have had a significant impact on the evolution of ocean circulation by changing the pattern of fresh water discharge into the oceans. A fully coupled ESM simulation of the last glacial cycle thus requires a hydrological discharge model that uses a set of river pathways that evolve with the Earth's changing orography while still reproducing the known present-day river network given the present-day orography. Here we present a method for dynamically modelling hydrological discharge that meets such requirements by applying relative manual corrections to an evolving fine-scale orography (accounting for the changing ice sheets and isostatic rebound) each time the river directions are recalculated. The corrected orography thus produced is then used to create a set of fine-scale river pathways, which are then upscaled to a coarse scale. An existing present-day hydrological discharge model within the JSBACH3 land surface model is run using the coarse-scale river pathways generated. This method will be used in fully coupled paleoclimate runs made with MPI-ESM1 as part of the PalMod project. Tests show this procedure reproduces the known present-day river network to a sufficient degree of accuracy.
50 CFR 216.93 - Tracking and verification program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment of ETP..., dressed, gilled and gutted, other), weight in short tons to the fourth decimal, ocean area of capture (ETP...
50 CFR 216.93 - Tracking and verification program.
Code of Federal Regulations, 2010 CFR
2010-10-01
... in the 50 states, Puerto Rico, or American Samoa receives a domestic or imported shipment of ETP..., dressed, gilled and gutted, other), weight in short tons to the fourth decimal, ocean area of capture (ETP...
25 CFR 169.11 - Affidavit and certificate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of road construction, and a certificate by the State or county engineer or other authorized State or... decimals, the line of route for which the right-of-way application is made. (b) Maps covering roads built...
Furman, Benjamin L. S.; Evans, Ben J.
2016-01-01
Sexual differentiation is fundamentally important for reproduction, yet the genetic triggers of this developmental process can vary, even between closely related species. Recent studies have uncovered, for example, variation in the genetic triggers for sexual differentiation within and between species of African clawed frogs (genus Xenopus). Here, we extend these discoveries by demonstrating that yet another sex determination system exists in Xenopus, specifically in the species Xenopus borealis. This system evolved recently in an ancestor of X. borealis that had the same sex determination system as X. laevis, a system which itself is newly evolved. Strikingly, the genomic region carrying the sex determination factor in X. borealis is homologous to that of therian mammals, including humans. Our results offer insights into how the genetic underpinnings of conserved phenotypes evolve, and suggest an important role for cooption of genetic building blocks with conserved developmental roles. PMID:27605520
Identifying and exploiting genes that potentiate the evolution of antibiotic resistance.
Gifford, Danna R; Furió, Victoria; Papkou, Andrei; Vogwill, Tom; Oliver, Antonio; MacLean, R Craig
2018-06-01
There is an urgent need to develop novel approaches for predicting and preventing the evolution of antibiotic resistance. Here, we show that the ability to evolve de novo resistance to a clinically important β-lactam antibiotic, ceftazidime, varies drastically across the genus Pseudomonas. This variation arises because strains possessing the ampR global transcriptional regulator evolve resistance at a high rate. This does not arise because of mutations in ampR. Instead, this regulator potentiates evolution by allowing mutations in conserved peptidoglycan biosynthesis genes to induce high levels of β-lactamase expression. Crucially, blocking this evolutionary pathway by co-administering ceftazidime with the β-lactamase inhibitor avibactam can be used to eliminate pathogenic P. aeruginosa populations before they can evolve resistance. In summary, our study shows that identifying potentiator genes that act as evolutionary catalysts can be used to both predict and prevent the evolution of antibiotic resistance.
Spectral interpolation - Zero fill or convolution. [image processing
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
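Zero fill in this sense appends zeros to the time-domain record before the FFT, so the spectrum is sampled on a proportionally finer grid; the paper's alternative, interpolation by repetitive convolution, computes comparable values with fewer operations. A numpy sketch of the zero-fill baseline only:

```python
import numpy as np

def zero_fill_spectrum(x, factor):
    """Append zeros to the record before the FFT so the spectrum
    is sampled on a grid `factor` times finer."""
    N = len(x)
    padded = np.concatenate([x, np.zeros(N * (factor - 1))])
    return np.fft.rfft(padded)

t = np.arange(64)
x = np.cos(2 * np.pi * 0.26 * t)        # peak falls between FFT bins
coarse = np.abs(np.fft.rfft(x))
fine = np.abs(zero_fill_spectrum(x, 8))
print(coarse.argmax() / 64)             # 0.2656: nearest coarse bin
print(fine.argmax() / (64 * 8))         # ~0.26: finer grid locates the peak
```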
NASA Astrophysics Data System (ADS)
Iseki, Sachiko; Ohta, Takayuki; Aomatsu, Akiyoshi; Ito, Masafumi; Kano, Hiroyuki; Higashijima, Yasuhiro; Hori, Masaru
2010-04-01
A promising, environmentally safe method for inactivating fungal spores of Penicillium digitatum, a difficult-to-inactivate food spoilage microorganism, was developed using a high-density nonequilibrium atmospheric pressure plasma (NEAPP). The NEAPP employing Ar gas had a high electron density on the order of 10^15 cm^-3. The spores were successfully and rapidly inactivated using the NEAPP, with a decimal reduction time in spores (D value) of 1.7 min. The contributions of ozone and UV radiation on the inactivation of the spores were evaluated and concluded to be not dominant, which was fundamentally different from the conventional sterilizations.
Pulsation of late B-type stars
NASA Technical Reports Server (NTRS)
Beardsley, W. R.; Worek, T. F.; King, M. W.
1980-01-01
Radial velocity observations of three of the brightest stars in the Pleiades, Alcyone, Maia and Taygeta, made during the course of one night, 25 October 1976, are discussed. All three stars were discovered to be pulsating with periods of a few hours. Analysis of all published radial velocities for each star, covering more than 70 years and approximately 100,000 cycles, has established the value of the periods to eight decimal places, and demonstrated constancy of the periods. However, amplitudes of the radial velocity variations change over long time intervals, and changes in spectral line intensities are observed in phase with the pulsation. All three stars may also be members of binary systems.
An algorithm to compute the sequency ordered Walsh transform
NASA Technical Reports Server (NTRS)
Larsen, H.
1976-01-01
A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminated Gray-code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering, computing logical autocorrelations, and selective bit reversing.
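For orientation, a minimal in-place fast Walsh-Hadamard transform in natural (Hadamard) order shows the butterfly structure such algorithms modify; producing bit-reversed sequency order, as the paper's algorithm does, amounts to reorganizing these butterflies rather than post-sorting. A sketch, not the paper's algorithm:

```python
import numpy as np

def fwht_hadamard(a):
    """In-place fast Walsh-Hadamard transform in natural (Hadamard)
    order, radix-2 butterflies. The transform is its own inverse
    up to a factor of len(a)."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
# Hadamard order -> sequency order classically needs bit-reversal
# plus Gray-code reordering; the algorithms above fold such
# reordering into the butterfly structure instead.
print(fwht_hadamard(x))
```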
NASA Technical Reports Server (NTRS)
Saltzman, Barry
1987-01-01
A variety of observational and theoretical studies were performed to clarify the relationship between satellite measurements of cloud and radiation and the evolution of transient and stationary circulation in middle latitudes. Satellite outgoing longwave radiation data are used to: (1) estimate the generation of available potential energy due to infrared radiation, and (2) show the extent to which these data can provide the signature of high- and low-frequency weather phenomena, including blocking. In a significant series of studies, the nonlinear, energetic, and predictability properties of these blocking situations, and the relationship of blocking to the planetary-scale longwave structure, are described. These studies form the background for continuing efforts to describe and theoretically account for these low-frequency planetary wave phenomena in terms of their bimodal properties.
McCarter, Renee L.; Fodor, R.V.; Trusdell, Frank A.
2006-01-01
Explosive eruptions at Mauna Loa summit ejected coarse-grained blocks (free of lava coatings) from Moku'aweoweo caldera. Most are gabbronorites and gabbros that have 0–26 vol.% olivine and 1–29 vol.% oikocrystic orthopyroxene. Some blocks are ferrogabbros and diorites with micrographic matrices, and diorite veins (≤2 cm) cross-cut some gabbronorites and gabbros. One block is an open-textured dunite. The MgO of the gabbronorites and gabbros ranges ∼7–21 wt.%. Those with MgO >10 wt.% have some incompatible-element abundances (Zr, Y, REE; positive Eu anomalies) lower than those in Mauna Loa lavas of comparable MgO; gabbros (MgO <10 wt.%) generally overlap lava compositions. Olivines range Fo83–58, clinopyroxenes have Mg#s ∼83–62, and orthopyroxene Mg#s are 84–63 — all evolved beyond the mineral Mg#s of Mauna Loa lavas. Plagioclase is An75–50. Ferrogabbro and diorite blocks have ∼3–5 wt.% MgO (TiO2 3.2–5.4%; K2O 0.8–1.3%; La 16–27 ppm), and a diorite vein is the most evolved (SiO2 59%, K2O 1.5%, La 38 ppm). They have clinopyroxene Mg#s 67–46 and plagioclase An57–40. The open-textured dunite has olivine ∼Fo83.5. Isotope ratios are 87Sr/86Sr 0.70394–0.70374 and 143Nd/144Nd 0.51293–0.51286, and identify the suite as belonging to the Mauna Loa system. Gabbronorites and gabbros originated in solidification zones of Moku'aweoweo lava lakes, where they acquired orthocumulate textures and incompatible-element depletions. These features suggest deeper and slower-cooling lakes than the lava-lake paradigm, Kilauea Iki, which is basalt and picrite. Clinopyroxene geobarometry suggests crystallization at <1 kbar P. Highly evolved mineral Mg#s, <75, are largely explained by cumulus phases exposed to evolving intercumulus liquids causing compositional 'shifts.' Ferrogabbro and diorite represent segregation veins from differentiated intercumulus liquids filter-pressed into rigid zones of cooling lakes. Clinopyroxene geobarometry suggests <300 bar P. The open-textured dunite represents an olivine-melt mush, precursor to vertical olivine-rich bodies (as in Kilauea Iki). Its Fo83.5 identifies the most primitive lake magma as ∼8.3 wt.% MgO. Mass balancing and MELTS show that such a magma could have yielded both ferrogabbro and diorite by ≥50% fractional crystallization, but under different fO2: <FMQ (250 bar) led to diorite, and FMQ (250 bar) yielded ferrogabbro. These segregation veins, documented as similar to those of Kilauea, testify to appreciable volumes of 'rhyolitic' liquid forming in oceanic environments. Namely, SiO2-rich veins are intrinsic to all shields that reached the caldera stage to accommodate various-sized cooling, differentiating lava lakes.
ERIC Educational Resources Information Center
Cannon, Kristi B.; Hammer, Tonya R.; Reicherzer, Stacee; Gilliam, Billie J.
2012-01-01
Relational-cultural theory (RCT) is an evolving feminist model of human development that places emphasis on growth-fostering relationships as building blocks for wellness. This article demonstrates the use of RCT in addressing relational aggression, including cyberbullying, in counseling a group of adolescent girls. The group counselor's…
San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung
2014-08-01
This paper focuses on a hybridization of rough set concepts and neural computing for decision and classification purposes. Based on rough set properties, the lower region and boundary region are defined to partition the input signal into a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which consists mainly of the inconsistent part of the applied input signal that causes inaccurate modeling of the data set. Because different neural network (NN) applications have different characteristics, the same conventional NN structure might not give the optimal solution. Based on the characteristics of the application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and its adaptability in dynamic environments. This architecture systematically incorporates the characteristics of the application into the structure of the hybrid rough block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation, is introduced for parameter optimization of the proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated in an application to medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus, and was compared with some existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.
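On discrete attributes, the lower-region/boundary-region split can be realized very simply: a feature tuple that always maps to one label is consistent (predictable), while a tuple observed with conflicting labels falls in the boundary that is handed to the neural classifier. A toy sketch under that reading; names and data are ours:

```python
from collections import defaultdict

def split_consistent_boundary(samples, labels):
    """Rough-set style split: feature tuples that always map to one
    label form the consistent (lower-region) part; tuples seen with
    conflicting labels form the boundary region."""
    seen = defaultdict(set)
    for x, y in zip(samples, labels):
        seen[tuple(x)].add(y)
    consistent, boundary = [], []
    for x, y in zip(samples, labels):
        (consistent if len(seen[tuple(x)]) == 1 else boundary).append((x, y))
    return consistent, boundary

X = [(0, 1), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]          # (0, 1) is observed with both labels
cons, bnd = split_consistent_boundary(X, y)
print(len(cons), "consistent,", len(bnd), "boundary")  # 2 consistent, 2 boundary
```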
Catta-Preta, Carolina M. C.; Brum, Felipe L.; da Silva, Camila C.; Zuma, Aline A.; Elias, Maria C.; de Souza, Wanderley; Schenkman, Sergio; Motta, Maria Cristina M.
2015-01-01
Mutualism is defined as a beneficial relationship for the associated partners and usually assumes that the symbiont number is controlled. Some trypanosomatid protozoa co-evolve with a bacterial symbiont that divides in coordination with the host in a way that results in its equal distribution between daughter cells. The mechanism that controls this synchrony is largely unknown, and its comprehension might provide clues to understand how eukaryotic cells evolved when acquiring symbionts that later became organelles. Here, we approached this question by studying the effects of inhibitors that affect the host exclusively in two symbiont-bearing trypanosomatids, Strigomonas culicis and Angomonas deanei. We found that inhibiting host protein synthesis using cycloheximide or host DNA replication using aphidicolin did not affect the duplication of bacterial DNA. Although the bacteria had autonomy to duplicate their DNA when host protein synthesis was blocked by cycloheximide, they could not complete cytokinesis. Aphidicolin promoted the inhibition of the trypanosomatid cell cycle in the G1/S phase, leading to symbiont filamentation in S. culicis but not in A. deanei. Treatment with camptothecin blocked the host protozoa cell cycle in the G2 phase and induced the formation of filamentous symbionts in both species. Oryzalin, which affects host microtubule polymerization, blocked trypanosomatid mitosis and abrogated symbiont division. Our results indicate that host factors produced during the cell division cycle are essential for symbiont segregation and may control the bacterial cell number. PMID:26082757
A collocation--Galerkin finite element model of cardiac action potential propagation.
Rogers, J M; McCulloch, A D
1994-08-01
A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis including: Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.
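To illustrate the underlying excitation/recovery system, the sketch below integrates a plain FitzHugh-Nagumo cable with explicit finite differences and mirrored (no-flux) boundaries; the paper's contribution is the collocation-Galerkin finite element treatment, which this toy does not attempt, and all parameter values are generic:

```python
import numpy as np

# 1D FitzHugh-Nagumo cable, explicit finite differences.
# v: excitation variable, w: recovery variable.
nx, dx, dt = 200, 0.5, 0.01
D, a, eps, gamma = 1.0, 0.13, 0.01, 0.5

v, w = np.zeros(nx), np.zeros(nx)
v[:10] = 1.0                                  # stimulate the left end

for _ in range(10000):
    lap = np.empty(nx)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2        # no-flux boundaries
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    dv = D * lap + v * (v - a) * (1.0 - v) - w
    dw = eps * (v - gamma * w)
    v, w = v + dt * dv, w + dt * dw

# The excitation pulse has propagated away from the stimulated end
print("peak v =", round(v.max(), 3), "at cell", int(v.argmax()))
```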
Evaluation of students' knowledge about paediatric dosage calculations.
Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak
2018-01-01
Medication errors are common and may jeopardize patient safety. Because paediatric dosages are calculated from the child's age and weight, the risk of error in dosage calculations is elevated. In paediatric patients, an overdose prescribed without regard to the child's weight, age and clinical picture may cause excessive toxicity and even mortality, while underdosing may delay treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covered all 148 third-year students of the bachelor's degree programme in May 2015. Exam papers containing three open-ended dosage calculation problems addressing five variables were distributed to the students, and their responses were evaluated by the researchers. In the evaluation of the data, frequencies and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, the most common method in paediatrics and the one that ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it incorrectly and 3.4% left it blank. 69.6% of the students succeeded in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers regarding the ml/dzy calculation were correct. Moreover, the students' skills in the four basic operations were assessed, and 68.2% were found to have given the correct answer. When the relations among the medication questions were examined, significant correlations were found between them. The students failed most often in calculating ml/dzy (decimal values). Because dosage calculations rest on decimal values, a misplaced decimal point makes the calculated dose wrong by a factor of ten. The students also lacked basic mathematical knowledge of the four operations and of calculating a safe dose range, and the correlations among the questions suggest that a student who miscalculates one dosage is likely to make other errors as well. Additional courses, exercises or different teaching techniques may be suggested to eliminate these deficiencies in basic maths knowledge, problem-solving skills and correct dosage calculation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Remainder Wheels and Group Theory
ERIC Educational Resources Information Center
Brenton, Lawrence
2008-01-01
Why should prospective elementary and high school teachers study group theory in college? This paper examines applications of abstract algebra to the familiar algorithm for converting fractions to repeating decimals, revealing ideas of surprising substance beneath an innocent facade.
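The algorithm in question is long division with remainder tracking: when a remainder recurs, the digits produced since its first appearance form the repetend, and for denominators coprime to 10 the repetend length is the multiplicative order of 10 modulo the denominator, which is where the group theory enters. A short sketch:

```python
def repeating_decimal(numerator, denominator):
    """Long division, tracking remainders: when a remainder recurs,
    the digits since its first appearance form the repetend."""
    n = numerator % denominator
    digits, seen = [], {}
    while n and n not in seen:
        seen[n] = len(digits)
        n *= 10
        digits.append(str(n // denominator))
        n %= denominator
    whole = str(numerator // denominator)
    if not n:                              # terminating decimal
        return whole + "." + "".join(digits) if digits else whole
    i = seen[n]
    return whole + "." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

print(repeating_decimal(1, 7))    # 0.(142857): 10 has order 6 mod 7
print(repeating_decimal(5, 12))   # 0.41(6)
```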
ERIC Educational Resources Information Center
Flannery, Carol A.
This manuscript provides information and problems for teaching mathematics to vocational education students. Problems reflect applications of mathematical concepts to specific technical areas. The materials are organized into six chapters. Chapter 1 covers basic arithmetic, including fractions, decimals, ratio and proportions, percentages, and…
Rodrigo, D; Barbosa-Cánovas, G V; Martínez, A; Rodrigo, M
2003-12-01
The effects of pulsed electric fields (PEFs) on pectin methyl esterase (PME), molds and yeast, and total flora in fresh (nonpasteurized) mixed orange and carrot juice were studied. The PEF effect was more extensive when juices with high levels of initial PME activity were subjected to treatment and when PEF treatment (at 25 kV/cm for 340 µs) was combined with a moderate temperature (63 degrees C), with the maximum level of PME inactivation being 81.4%. These conditions produced 3.7 decimal reductions in molds and yeast and 2.4 decimal reductions in total flora. Experimental inactivation data for PME, molds and yeast, and total flora were fitted to Bigelow, Hülsheger, and Weibull inactivation models by nonlinear regression. The best fit (lowest mean square error) was obtained with the Weibull model.
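Of the three models, the Weibull is the simplest to state and fit: log10(N/N0) = -(t/delta)^p, with delta the time for the first decimal reduction and p the shape parameter. A scipy sketch with invented survival data, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Weibull inactivation model: log10(N/N0) = -(t/delta)**p."""
    return -(t / delta) ** p

# Invented survival data: decimal reductions vs treatment time
t = np.array([50.0, 100.0, 170.0, 250.0, 340.0])       # microseconds
logS = np.array([-0.4, -1.0, -1.7, -2.6, -3.7])        # log10(N/N0)

(delta, p), _ = curve_fit(weibull_log_survival, t, logS, p0=(100.0, 1.0))
print(f"delta = {delta:.1f} us (first decimal reduction), p = {p:.2f}")
```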
General Entanglement Scaling Laws from Time Evolution
NASA Astrophysics Data System (ADS)
Eisert, Jens; Osborne, Tobias J.
2006-10-01
We establish a general scaling law for the entanglement of a large class of ground states and dynamically evolving states of quantum spin chains: we show that the geometric entropy of a distinguished block saturates, and hence follows an entanglement-boundary law. These results apply to any ground state of a gapped model resulting from dynamics generated by a local Hamiltonian, as well as, dually, to states that are generated via a sudden quench of an interaction as recently studied in the case of dynamics of quantum phase transitions. We achieve these results by exploiting ideas from quantum information theory and tools provided by Lieb-Robinson bounds. We also show that there exist noncritical fermionic systems and equivalent spin chains with rapidly decaying interactions violating this entanglement-boundary law. Implications for the classical simulatability are outlined.
We reside in the sun's atmosphere.
Kamide, Y
2005-10-01
The Sun is the origin of all activities of the Earth, including its solid, liquid, and gaseous states, as well as life on the Earth's surface. Life arose on this planet and evolved through long physical and chemical processes, so that life here matches what the planet requires. This paper contends that the Earth is located within the solar atmosphere, although we do not feel this in daily life because the Earth's magnetic field and atmosphere block the solar atmosphere from entering the Earth's domain directly. The paper emphasizes that we should not attempt to change the quality of the natural environment that the delicate interactions between the Sun and the Earth have established for us over a long time.
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, W.; Almgren, A.; Bell, J.
We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
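The split can be miniaturized to one dimension to see the structure: an explicit hyperbolic update followed by a backward-Euler solve for the diffusion part. The sketch below (a toy, not CASTRO's flux-limited AMR solver) shows the implicit half with zero-flux boundaries, which conserves the total radiation energy:

```python
import numpy as np

def implicit_diffusion_step(E, D, dt, dx):
    """Backward Euler for dE/dt = d/dx(D dE/dx):
    solve (I - dt*L) E_new = E_old with zero-flux boundaries."""
    n = len(E)
    r = dt * D / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    A[0, 0] = A[-1, -1] = 1 + r        # reflecting (zero-flux) ends
    return np.linalg.solve(A, E)

E = np.zeros(100)
E[45:55] = 1.0                         # hot patch of radiation energy
for _ in range(10):
    # (an explicit hyperbolic/advection step would go here)
    E = implicit_diffusion_step(E, D=1.0, dt=0.1, dx=0.1)
print(E.max(), E.sum())                # peak spreads; total energy conserved
```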
NASA's Space Launch System Program Update
NASA Technical Reports Server (NTRS)
May, Todd; Lyles, Garry
2015-01-01
Hardware and software for the world's most powerful launch vehicle for exploration is being welded, assembled, and tested today in high bays, clean rooms and test stands across the United States. NASA's Space Launch System (SLS) continued to make significant progress in the past year, including firing tests of both main propulsion elements, manufacturing of flight hardware, and the program Critical Design Review (CDR). Developed with the goals of safety, affordability, and sustainability, SLS will deliver unmatched capability for human and robotic exploration. The initial Block 1 configuration will deliver more than 70 metric tons (t) (154,000 pounds) of payload to low Earth orbit (LEO). The evolved Block 2 design will deliver some 130 t (286,000 pounds) to LEO. Both designs offer enormous opportunity and flexibility for larger payloads, simplifying payload design as well as ground and on-orbit operations, shortening interplanetary transit times, and decreasing overall mission risk. Over the past year, every vehicle element has manufactured or tested hardware, including flight hardware for Exploration Mission 1 (EM-1). This paper will provide an overview of the progress made over the past year and provide a glimpse of upcoming milestones on the way to a 2018 launch readiness date.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.
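At its smallest, such a two-stage program is a first-stage decision plus probability-weighted recourse, and can be solved directly as the deterministic equivalent; nested decomposition with non-dominated cuts matters only at realistic scale. A toy capacity/recourse example with invented numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage stochastic LP, deterministic equivalent.
# First stage: build capacity x at cost 3/unit.
# Second stage, per scenario s: buy shortfall y_s at cost 8/unit
# to meet random demand d_s. All numbers are invented.
demands = np.array([40.0, 70.0, 100.0])
probs = np.array([0.3, 0.5, 0.2])

# Variables [x, y_1, y_2, y_3]; minimize 3x + sum_s p_s * 8 * y_s
c = np.concatenate([[3.0], 8.0 * probs])

# Constraints x + y_s >= d_s  ->  -x - y_s <= -d_s
A_ub = np.zeros((3, 4))
A_ub[:, 0] = -1.0
for s in range(3):
    A_ub[s, 1 + s] = -1.0

res = linprog(c, A_ub=A_ub, b_ub=-demands, bounds=[(0, None)] * 4)
print("capacity:", res.x[0], "recourse:", res.x[1:])
```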
Reconstructing the Thermo-tectonic history of the Rwenzori Mountains, D. R. Congo
NASA Astrophysics Data System (ADS)
Mansour, S.; Bauer, F.; Glasmacher, P. D. U. A. A.; Grobe, R. W.; Starz, M.
2014-12-01
The Albertine Rift forms the northern section of the western rift of the East African Rift System (EARS). The Rwenzori Mtns evolved along the eastern rift shoulder of the Albertine Rift, rising to form a striking feature within the rift valley with elevations reaching 5109 m a.s.l. The scarcity of volcanic activity in the Western Rift has raised questions about the origin of the Rwenzori Mtns and how it fits into the general evolution of the Albertine Rift and the EARS. The detailed thermochronologic study of Bauer et al. (2013) on the eastern side of the Rwenzori Mtns differentiated the range into northern and southern blocks. The northern block cooled faster, to ~120 °C, in Carboniferous to Permian times; a second cooling event to ~70 °C occurred in Mesozoic time, and a third cooling event to surface temperature occurred in the Neogene. The southern block shows an earlier onset of cooling, at >400 Ma, with temperatures of about 70 °C reached in Silurian to Devonian times. During this study, 33 samples were collected from the western side of the central Rwenzori. Zircon and apatite fission-track and (U-Th)/He techniques were applied to these samples. The apatite fission-track data fall into three age groups: ~45±11, ~25±5 and ~12±2 Ma. These results reveal the difference in thermo-tectonic history between the eastern and western flanks of the Rwenzori Mtns and support the tilted-uplift geometry hypothesis (e.g. Pickford et al., 1993). References: Bauer, F.U., Glasmacher, U.A., Ring, U., Karl, M., Schumann, A., Nagudi, B., 2013. Tracing the exhumation history of the Rwenzori Mountains, Albertine Rift, Uganda, using low-temperature thermochronology. Tectonophysics 599, 8-28. http://dx.doi.org/10.1016/j.tecto.2013.03.032. Pickford, M., Senut, B., Hadoto, D., 1993. Geology and Palaeobiology of the Albertine Rift Valley, Uganda-Zaire, vol. 1: Geology. CIFEG Occasional Publication 24, Orleans, pp. 1-190.
An Event Related Field Study of Rapid Grammatical Plasticity in Adult Second-Language Learners
Bastarrika, Ainhoa; Davidson, Douglas J.
2017-01-01
The present study used magnetoencephalography (MEG) to investigate how Spanish adult learners of Basque respond to morphosyntactic violations after a short period of training on a small fragment of Basque grammar. Participants (n = 17) were exposed to violation and control phrases in three phases (pretest, training, generalization test). In each phase participants listened to short Basque phrases and judged whether they were correct or incorrect. During the pretest and generalization test, participants did not receive any feedback. During the training blocks feedback was provided after each response. We also ran two Spanish control blocks before and after training. We analyzed the event-related magnetic field (ERF) responses recorded to a critical word during all three phases. In the pretest, classification was below chance and we found no electrophysiological differences between violation and control stimuli. Then participants were explicitly taught a Basque grammar rule. From the first training block participants were able to correctly classify control and violation stimuli, and an evoked violation response was present. Although the timing of the electrophysiological responses matched participants' L1 effect, the effect size was smaller for L2 and the topographical distribution differed from the L1. While the L1 effect was bilaterally distributed over the auditory sensors, the L2 effect was present at right frontal sensors. During training blocks two and three, the violation-control effect size increased and the topography evolved toward a more L1-like pattern. Moreover, this pattern was maintained in the generalization test. We conclude that rapid changes in neuronal responses can be observed in adult learners of a simple morphosyntactic rule, and that native-like responses can be achieved at least for small fragments of a second language. PMID:28174530
Robust Fuzzy Controllers Using FPGAs
NASA Technical Reports Server (NTRS)
Monroe, Gene S., Jr.
2007-01-01
Electro-mechanical device controllers typically come in one of three forms: proportional (P), proportional derivative (PD), and proportional integral derivative (PID). Two methods of control are discussed in this paper: (1) the classical technique, which requires an in-depth mathematical use of poles and zeros, and (2) the fuzzy logic (FL) technique, which is similar to the way humans think and make decisions. FL controllers are used in multiple industries; examples include control engineering, computer vision, pattern recognition, statistics, and data analysis. Presented is a study on the development of a PD motor controller written in very high speed integrated circuit hardware description language (VHDL) and implemented in FL. Four distinct abstractions compose the FL controller: the fuzzifier, the rule base, the fuzzy inference system (FIS), and the defuzzifier. FL is similar to, but different from, Boolean logic: the output value may equal 0 or 1, but it may also take any decimal value between them. This controller is unique because of its VHDL implementation, which uses integer mathematics. To compensate for VHDL's inability to synthesize floating point numbers, a scale factor equal to 10^(N/4) is utilized, where N equals the data word size. The scaling factor shifts the decimal digits to the left of the decimal point for increased precision. PD controllers are ideal for use with servo motors, where position control is effective. This paper discusses control methods for motion-base platforms where a constant velocity equivalent to a spectral resolution of 0.25 cm^-1 is required; however, the control capability of this controller extends to various other platforms.
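The integer-scaling trick is portable beyond VHDL: multiply coefficients and states by the scale factor, divide by it after every multiply, and the implied decimal point stays fixed. A Python sketch of a scaled-integer PD update; the word size, gains, and signal values are illustrative, not from the paper:

```python
# Integer-only PD control using a scale factor of 10**(N/4).
# With N = 8, SCALE = 100, so every quantity carries two implied
# decimal digits. All values are illustrative.
SCALE = 10 ** (8 // 4)          # 100

KP = int(1.25 * SCALE)          # 125 -> represents 1.25
KD = int(0.50 * SCALE)          # 50  -> represents 0.50

def pd_update(error, prev_error, dt_scaled):
    """All quantities are scaled integers; divide by SCALE after
    each multiply to keep two implied decimal places."""
    derivative = (error - prev_error) * SCALE // dt_scaled
    return (KP * error + KD * derivative) // SCALE

err, prev = int(0.30 * SCALE), int(0.50 * SCALE)    # 0.30, 0.50
u = pd_update(err, prev, dt_scaled=int(0.10 * SCALE))
print(u / SCALE)    # -0.63 (floating point gives -0.625; floor rounding)
```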
The risks of innovation in health care.
Enzmann, Dieter R
2015-04-01
Innovation in health care creates risks that are unevenly distributed. An evolutionary analogy using species to represent business models helps categorize innovation experiments and their risks. This classification reveals two qualitative categories: early and late diversification experiments. Early diversification has prolific innovations with high risk because they encounter a "decimation" stage, during which most experiments disappear. Participants face high risk. The few decimation survivors can be sustaining or disruptive according to Christensen's criteria. Survivors enter late diversification, during which they again expand, but within a design range limited to variations of the previous surviving designs. Late diversifications carry lower risk. The exception is when disruptive survivors "diversify," which amplifies their disruption. Health care and radiology will experience both early and late diversifications, often simultaneously. Although oversimplifying Christensen's concepts, early diversifications are likely to deliver disruptive innovation, whereas late diversifications tend to produce sustaining innovations. Current health care consolidation is a manifestation of late diversification. Early diversifications will appear outside traditional care models and physical health care sites, as well as with new science such as molecular diagnostics. They warrant attention because decimation survivors will present both disruptive and sustaining opportunities to radiology. Radiology must participate in late diversification by incorporating sustaining innovations to its value chain. Given the likelihood of disruptive survivors, radiology should seriously consider disrupting itself rather than waiting for others to do so. Disruption entails significant modifications of its value chain, hence, its business model, for which lessons may become available from the pharmaceutical industry's current simultaneous experience with early and late diversifications. Copyright © 2015. Published by Elsevier Inc.
A Model for Determining the Effect of the Wind Velocity on 100 m Sprinting Performance.
Janjic, Natasa; Kapor, Darko; Doder, Dragan; Petrovic, Aleksandar; Doder, Radoslava
2017-06-01
This paper introduces an equation for determining the instantaneous and final velocity of a sprinter in a 100 m run completed with a wind ranging from 0.1 to 4.5 m/s. The validity of the equation was verified using the data of three world-class sprinters: Carl Lewis, Maurice Greene, and Usain Bolt. For a given constant wind velocity of +0.9 or +1.1 m/s, the wind contribution to the change of sprinter velocity was the same for the maximum as well as for the final velocity. This study assessed how wind velocity influences the change of sprinting velocity. The analysis led to the conclusion that the official limit for safely neglecting the wind influence could be chosen as 1 m/s instead of 2 m/s if velocities were reported to three, instead of two, decimal digits. This implies that wind velocity should be rounded off to two decimal places instead of the present practice of one decimal place. In particular, the results indicated that the influence of wind on the change of sprinting velocity in the range of up to 2 m/s is of the order of magnitude of 10^-3 m/s. This supports the IAAF Competition Rules in neglecting the influence of the wind at such velocities. However, for wind velocities over 2 m/s, the wind influence is of order 10^-2 m/s and cannot be neglected.
Local Structure Theory for Cellular Automata.
NASA Astrophysics Data System (ADS)
Gutowitz, Howard Andrew
The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
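The order-2 local-structure estimate is Markovian: P(abc) is approximated by P(ab)P(bc)/P(b). The sketch below simulates an elementary CA and compares empirical 3-block frequencies against that estimate; the rule and lattice size are arbitrary choices:

```python
import numpy as np
from collections import Counter

# Elementary CA: neighborhood (a, b, c) maps to the corresponding
# bit of the rule number. Rule 18 is an arbitrary chaotic example.
rule = 18
table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 4000)
for _ in range(200):                       # iterate with periodic ends
    x = np.array([table[(x[i - 1], x[i], x[(i + 1) % len(x)])]
                  for i in range(len(x))])

def block_probs(x, n):
    counts = Counter(tuple(x[i:i + n]) for i in range(len(x) - n + 1))
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

p1, p2, p3 = block_probs(x, 1), block_probs(x, 2), block_probs(x, 3)
for (a, b, c), p in sorted(p3.items()):
    est = p2.get((a, b), 0) * p2.get((b, c), 0) / max(p1.get((b,), 0), 1e-12)
    print((a, b, c), f"empirical {p:.3f}  local-structure {est:.3f}")
```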
Drafting: Current Trends and Future Practices
ERIC Educational Resources Information Center
Jensen, C.
1976-01-01
Various research findings are reported on drafting trends which the author feels should be incorporated into teaching drafting: (1) true position and geometric tolerancing, (2) decimal and metric dimensioning, (3) functional drafting, (4) automated drafting, and (5) drawing reproductions. (BP)
40 CFR 60.424 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... to conduct the run, liter/min. B=acid density (a function of acid strength and temperature), g/cc. C=acid strength, decimal fraction. K1=conversion factor, 0.0808 (Mg-min-cc)/(g-hr-liter) [0.0891 (ton...
40 CFR 60.424 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... to conduct the run, liter/min. B=acid density (a function of acid strength and temperature), g/cc. C=acid strength, decimal fraction. K1=conversion factor, 0.0808 (Mg-min-cc)/(g-hr-liter) [0.0891 (ton...
40 CFR 60.424 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to conduct the run, liter/min. B=acid density (a function of acid strength and temperature), g/cc. C=acid strength, decimal fraction. K1=conversion factor, 0.0808 (Mg-min-cc)/(g-hr-liter) [0.0891 (ton...
Refractive Outcomes of 20 Eyes Undergoing ICL Implantation for Correction of Hyperopic Astigmatism.
Coskunseven, Efekan; Kavadarli, Isilay; Sahin, Onurcan; Kayhan, Belma; Pallikaris, Ioannis
2017-09-01
To analyze 1-week, 1-month, and 12-month postoperative refractive outcomes of eyes that underwent ICL implantation to correct hyperopic astigmatism. The study enrolled 20 eyes of patients with an average age of 32 years (range: 21 to 40 years). The outcomes of spherical and cylindrical refraction, uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), vault, and angle parameters were evaluated 1 week, 1 month, and 12 months postoperatively. The preoperative mean UDVA was 0.15 ± 0.11 (decimal) (20/133 Snellen) and increased to 0.74 ± 0.25 (20/27 Snellen) postoperatively, with a change of 0.59 (decimal) (20/33.9 Snellen) (P < .0001), which was statistically significant. The preoperative mean CDVA was 0.74 ± 0.25 (decimal) (20/27 Snellen) and increased to 0.78 ± 0.21 (20/25 Snellen), with a change of 0.03 (decimal) (20/666 Snellen) (P < .052), which was not statistically significant. The mean preoperative sphere was 6.86 ± 1.77 diopters (D) and the mean preoperative cylinder was -1.44 ± 0.88 D. The mean 12-month postoperative sphere decreased to 0.46 ± 0.89 D (P < .001) and cylinder decreased to -0.61 ± 0.46 D (P < .001), with a change of 6.40 D, both of which were statistically significant. The mean 1-month postoperative vault was 0.65 ± 0.13 mm and decreased to 0.613 ± 0.10 mm at 1 year postoperatively, a change of 0.04 mm (P < .003). The preoperative/12-month and 1-month/12-month trabecular-iris angle (TIA), trabecular-iris space area 500 µm from the scleral spur (TISA500), and angle opening distance 500 µm from the scleral spur (AOD500) values were analyzed nasally, temporally, and inferiorly. All differences were statistically significant in the preoperative/12-month analysis. The only differences between the 1- and 12-month analyses were in the TISA500 inferior (P < .002) and AOD500 nasal (P = .031) values. ICL hyperopic toric implantation is a safe method and provides stable refractive outcomes in patients with high hyperopia (up to 10.00 D) and astigmatism (up to 6.00 D). [J Refract Surg. 2017;33(9):604-609.]. Copyright 2017, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Baturin, A. P.; Votchel, I. A.
2013-12-01
The problem of asteroid motion simulation has been considered. At present this simulation is performed by numerical integration, taking into account the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain coefficients of Chebyshev polynomials for a great number of equal interpolation intervals. However, the ephemerides have been constructed to preserve, at the junctions of adjacent intervals, the continuity of the coordinates and their first derivatives only (and only in the 16-digit decimal format corresponding to 64-bit floating-point numbers). The second- and higher-order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of the numerical integration. If the 34-digit format (128-bit floating-point numbers) is considered, the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the junctions of the interpolation intervals. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the coefficients of the Chebyshev polynomials, the conditions being equality of the coordinates and of the derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions do not lie within a step but always coincide with its end. This way may be applied only at 16-digit decimal precision because it assumes continuity of the planets' coordinates and their first derivatives. Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14 by means of 15th- and 31st-order Everhart methods at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in their original and smoothed forms) were used for the calculation of perturbations. The results indicate that the integration step correction increases the numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.
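A toy illustration of the junction breaks described here, assuming a stand-in function rather than real DE-series data: fit Chebyshev series on two adjacent intervals, store the coefficients with 16 significant digits as ephemeris files do, and measure the jumps in the value and derivatives at the shared junction.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.cos  # stand-in for a planetary coordinate as a function of time

def fit_interval(a, b, deg=12):
    """Fit a Chebyshev series on [a, b] and emulate the finite (16-digit)
    precision with which ephemeris files store the coefficients."""
    x = np.linspace(a, b, 400)
    c = C.Chebyshev.fit(x, f(x), deg)
    rounded = np.array([float(f"{v:.15e}") for v in c.coef])
    return C.Chebyshev(rounded, domain=c.domain)

left, right = fit_interval(0.0, 1.0), fit_interval(1.0, 2.0)
for order in range(4):   # value, 1st, 2nd, 3rd derivative at the junction
    dl = left if order == 0 else left.deriv(order)
    dr = right if order == 0 else right.deriv(order)
    print(f"derivative order {order}: jump at junction = {dr(1.0) - dl(1.0):.3e}")
```

The printed jumps grow by roughly two orders of magnitude per derivative order, which is why breaks that are negligible in the coordinates can still degrade a high-order, high-precision integrator.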
Comparison of heavy-ion transport simulations: Collision integral in a box
NASA Astrophysics Data System (ADS)
Zhang, Ying-Xun; Wang, Yong-Jia; Colonna, Maria; Danielewicz, Pawel; Ono, Akira; Tsang, Manyee Betty; Wolter, Hermann; Xu, Jun; Chen, Lie-Wen; Cozma, Dan; Feng, Zhao-Qing; Das Gupta, Subal; Ikeno, Natsumi; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Mallik, Swagata; Nara, Yasushi; Ogawa, Tatsuhiko; Ohnishi, Akira; Oliinychenko, Dmytro; Papa, Massimo; Petersen, Hannah; Su, Jun; Song, Taesoo; Weil, Janus; Wang, Ning; Zhang, Feng-Shou; Zhang, Zhen
2018-03-01
Simulations by transport codes are indispensable for extracting valuable physical information from heavy-ion collisions. In order to understand the origins of discrepancies among different widely used transport codes, we compare 15 such codes under controlled conditions of a system confined to a box with periodic boundaries, initialized with Fermi-Dirac distributions at saturation density and temperatures of either 0 or 5 MeV. In such calculations, one is able to check separately the different ingredients of a transport code. In this second publication of the code evaluation project, we only consider the two-body collision term; i.e., we perform cascade calculations. When the Pauli blocking is artificially suppressed, the collision rates are found to be consistent for most codes (to within 1% or better) with analytical results, or with completely controlled results of a basic cascade code. In order to reach that goal, it was necessary to eliminate correlations within the same pair of colliding particles that can be present depending on the adopted collision prescription. In calculations with active Pauli blocking, the blocking probability was found to deviate from the expected reference values. The reason is found in substantial phase-space fluctuations and smearing tied to numerical algorithms and model assumptions in the representation of phase space. This results in a reduction of the blocking probability in most transport codes, so that the simulated system gradually evolves away from the Fermi-Dirac toward a Boltzmann distribution. Since the numerical fluctuations are weaker in the Boltzmann-Uehling-Uhlenbeck codes, the Fermi-Dirac statistics is maintained there for a longer time than in the quantum molecular dynamics codes. As a result of this investigation, we are able to make judgements about the most effective strategies in transport simulations for determining the collision probabilities and the Pauli blocking. Investigations in a similar vein of other ingredients of transport calculations, like the mean-field propagation or the production of nucleon resonances and mesons, will be discussed in future publications.
IMS/Satellite Situation Center report. Predicted orbit plots for IMP-H-1976. [Explorer 47 satellite]
NASA Technical Reports Server (NTRS)
1975-01-01
Predicted orbit plots are shown in three projections. The time period covered by each set of projections is 12 days 6 hours, corresponding approximately to the period of the IMP-H satellite. The three coordinate systems used are the Geocentric Solar Ecliptic system (GSE), the Geocentric Solar Magnetospheric system (GSM), and the Solar Magnetic system (SM). For each of the three projections, time ticks and codes are given on the satellite trajectories. The codes are interpreted in the table at the base of each plot. Time is given in the table as year/day/decimal hour. The total time covered by each plot is shown at the bottom of each table. An additional variable is given in the table for each time tick. For the GSM and SM projections this variable is the geocentric distance to the satellite in earth radii, and for the GSE projection the variable is the satellite ecliptic latitude in degrees.
Hybrid Grid Techniques for Propulsion Applications
NASA Technical Reports Server (NTRS)
Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.
1996-01-01
During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block and unstructured procedures. Structured grids tend to be computationally efficient and have the high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient, and resolving viscous layers with them is difficult. Recently, hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
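A minimal sketch of fitting the negative exponential model at the single-trial level, with synthetic time-on-target data; the parameterization (asymptote, gain, rate) and all values are illustrative, not the study's multilevel specification.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, asymptote, gain, rate):
    """Negative exponential learning curve: performance approaches an
    asymptote at an exponentially decaying rate over trials."""
    return asymptote - gain * np.exp(-rate * (trial - 1))

rng = np.random.default_rng(0)
trials = np.arange(1, 81)                  # 80 trials = four 20-trial blocks
true = neg_exp(trials, asymptote=0.8, gain=0.5, rate=0.05)
observed = true + rng.normal(0, 0.04, trials.size)   # synthetic performance

params, _ = curve_fit(neg_exp, trials, observed, p0=(0.7, 0.4, 0.1))
print("asymptote, gain, rate:", np.round(params, 3))
```

In a multilevel version these three parameters become person-level random effects, which is what lets learning parameters be correlated with age and cognitive resources across participants.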
FPT- FORTRAN PROGRAMMING TOOLS FOR THE DEC VAX
NASA Technical Reports Server (NTRS)
Ragosta, A. E.
1994-01-01
The FORTRAN Programming Tools (FPT) are a series of tools used to support the development and maintenance of FORTRAN 77 source codes. Included are a debugging aid, a CPU time monitoring program, source code maintenance aids, print utilities, and a library of useful, well-documented programs. These tools assist in reducing development time and encouraging high quality programming. Although intended primarily for FORTRAN programmers, some of the tools can be used on data files and other programming languages. BUGOUT is a series of FPT programs that have proven very useful in debugging a particular kind of error and in optimizing CPU-intensive codes. The particular type of error is the illegal addressing of data or code as a result of subtle FORTRAN errors that are not caught by the compiler or at run time. A TRACE option also allows the programmer to verify the execution path of a program. The TIME option assists the programmer in identifying the CPU-intensive routines in a program to aid in optimization studies. Program coding, maintenance, and print aids available in FPT include: routines for building standard format subprogram stubs; cleaning up common blocks and NAMELISTs; removing all characters after column 72; displaying two files side by side on a VT-100 terminal; creating a neat listing of a FORTRAN source code including a Table of Contents, an Index, and Page Headings; converting files between VMS internal format and standard carriage control format; changing text strings in a file without using EDT; and replacing tab characters with spaces. The library of useful, documented programs includes the following: time and date routines; a string categorization routine; routines for converting between decimal, hex, and octal; routines to delay process execution for a specified time; a Gaussian elimination routine for solving a set of simultaneous linear equations; a curve fitting routine for least squares fit to polynomial, exponential, and sinusoidal forms (with a screen-oriented editor); a cubic spline fit routine; a screen-oriented array editor; routines to support parsing; and various terminal support routines. These FORTRAN programming tools are written in FORTRAN 77 and ASSEMBLER for interactive and batch execution. FPT is intended for implementation on DEC VAX series computers operating under VMS. This collection of tools was developed in 1985.
Code of Federal Regulations, 2012 CFR
2012-07-01
... deterioration factor to one more significant figure than the emission standard. You may use assigned... rounding the adjusted figure to the same number of decimal places as the emission standard. Compare the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... deterioration factor to one more significant figure than the emission standard. You may use assigned... rounding the adjusted figure to the same number of decimal places as the emission standard. Compare the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... deterioration factor to one more significant figure than the emission standard. You may use assigned... rounding the adjusted figure to the same number of decimal places as the emission standard. Compare the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... deterioration factor to one more significant figure than the emission standard. You may use assigned... rounding the adjusted figure to the same number of decimal places as the emission standard. Compare the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... deterioration factor to one more significant figure than the emission standard. You may use assigned... rounding the adjusted figure to the same number of decimal places as the emission standard. Compare the...
Programs for Fundamentals of Chemistry.
ERIC Educational Resources Information Center
Gallardo, Julio; Delgado, Steven
This document provides computer programs, written in BASIC PLUS, for presenting fundamental or remedial college chemistry students with chemical problems in a computer assisted instructional program. Programs include instructions, a sample run, and 14 separate practice sessions covering: mathematical operations, using decimals, solving…
40 CFR 98.276 - Data reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CO2, CH4, biogenic CH4, N2O, and biogenic N2O (metric tons per year). (b) Annual quantities fossil... weight, expressed as a decimal fraction, e.g., 95% = 0.95). (g) Annual quantities of fossil fuels by type...
Yield table for hardwood bark residue
Jeffery L. Wartluft
1974-01-01
Bark residue weights are tabulated for eight species of hardwood sawlogs according to log volume by the Doyle, International 1/4-inch, and Scribner decimal C log rules. Factors are provided for converting from weight in pounds to volume in cubic yards.
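Of the three log rules named here, only the Doyle rule has a simple closed form; the International 1/4-inch and Scribner Decimal C rules are conventionally read from tables ("Decimal C" records volumes rounded to the nearest 10 board feet with the final zero dropped). A minimal sketch of the standard Doyle formula, not taken from this report:

```python
def doyle_board_feet(diam_in, length_ft):
    """Doyle log rule: board-foot volume from the small-end diameter
    inside bark (inches) and the log length (feet)."""
    return ((diam_in - 4) / 4.0) ** 2 * length_ft

# e.g. a 16-ft log with a 20-inch small-end diameter
print(doyle_board_feet(20, 16))   # -> 256.0 board feet
```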
Ten Essential Concepts for Remediation in Mathematics.
ERIC Educational Resources Information Center
Roseman, Louis
1985-01-01
Ten crucial mathematical concepts with which errors are made are listed, with methods used to teach them to high school students. The concepts concern order, place values, inverse operations, multiplication and division, remainders, identity elements, fractions, conversions, decimal points, and percentages. (MNS)
Space Launch System Advanced Development Office, FY 2013 Annual Report
NASA Technical Reports Server (NTRS)
Crumbly, C. M.; Bickley, F. P.; Hueter, U.
2013-01-01
The Advanced Development Office (ADO), part of the Space Launch System (SLS) program, provides SLS with the advanced development needed to evolve the vehicle from an initial Block 1 payload capability of 70 metric tons (t) to an eventual Block 2 capability of 130 t, with intermediate evolution options possible. ADO takes existing technologies and matures them to the point that their insertion into the mainline program minimizes risk. The ADO portfolio of tasks covers a broad range of technical development activities. The ADO portfolio supports the development of advanced boosters, upper stages, and other advanced development activities benefiting the SLS program. A total of 34 separate tasks were funded by ADO in FY 2013.
Mazzola, Priscila G; Martins, Alzira MS; Penna, Thereza CV
2006-01-01
Background Purified water for pharmaceutical purposes must be free of microbial contamination and pyrogens. Even with the additional sanitary and disinfecting treatments applied to the system (sequential operational stages), Pseudomonas aeruginosa, Pseudomonas fluorescens, Pseudomonas alcaligenes, Pseudomonas pickettii, Flavobacterium aureum, Acinetobacter lwoffii and Pseudomonas diminuta were isolated and identified from a thirteen-stage purification system. To evaluate the efficacy, against the identified bacteria, of the chemical agents used in the disinfecting process and of those used to adjust the chemical characteristics of the system, the decimal reduction time (D-value), the killing time necessary to inactivate 90% of the initial bioburden, was experimentally determined. Methods Pseudomonas aeruginosa, Pseudomonas fluorescens, Pseudomonas alcaligenes, Pseudomonas pickettii, Flavobacterium aureum, Acinetobacter lwoffii and Pseudomonas diminuta were called in-house (wild) bacteria. Pseudomonas diminuta ATCC 11568, Pseudomonas alcaligenes INCQS, Pseudomonas aeruginosa ATCC 15442, Pseudomonas fluorescens ATCC 3178, Pseudomonas pickettii ATCC 5031, Bacillus subtilis ATCC 937 and Escherichia coli ATCC 25922 were used as 'standard' bacteria to evaluate resistance at 25°C against either 0.5% citric acid, 0.5% hydrochloric acid, 70% ethanol, 0.5% sodium bisulfite, 0.4% sodium hydroxide, 0.5% sodium hypochlorite, or a mixture of 2.2% hydrogen peroxide (H2O2) and 0.45% peracetic acid. Results The efficacy of the sanitizers in reducing the population by n decimal logarithmic (log10) cycles varied with concentration and contact time. For P. aeruginosa, the time necessary to kill 90% of the initial population (one log10 cycle, the D-value) was: (i) in 0.5% citric acid, D = 3.8 min; (ii) in 0.5% hydrochloric acid, D = 6.9 min; (iii) in 70% ethanol, D = 9.7 min; (iv) in 0.5% sodium bisulfite, D = 5.3 min; (v) in 0.4% sodium hydroxide, D = 14.2 min; (vi) in 0.5% sodium hypochlorite, D = 7.9 min; (vii) in the mixture of hydrogen peroxide (2.2%) plus peracetic acid (0.45%), D = 5.5 min. Conclusion With a contact time of 180 min between the system and the mixture of H2O2 plus peracetic acid, a total theoretical reduction of 6 log10 cycles was attained in the purified water storage tank and distribution loop. The contact time between the water purification system (WPS) and the sanitary agents should be reviewed to reach sufficient bioburden reduction (over 6 log10). PMID:16914053
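The D-value used throughout is the time for a one-log10 (90%) reduction. A minimal sketch of how it is estimated from a survivor curve, assuming log-linear inactivation; the counts below are synthetic, chosen only to reproduce a D near the 3.8 min reported for P. aeruginosa in citric acid.

```python
import numpy as np

def d_value(times_min, counts_cfu):
    """Estimate the decimal reduction time D (minutes per 1-log10 drop)
    as the negative reciprocal slope of log10(count) versus time."""
    slope, _intercept = np.polyfit(times_min, np.log10(counts_cfu), 1)
    return -1.0 / slope

# synthetic survivor curve with an initial bioburden of 1e6 CFU
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
n = 1e6 * 10 ** (-t / 3.8)
print(f"D-value = {d_value(t, n):.2f} min")   # -> 3.80
```

The "6 log10 cycles in 180 min" conclusion follows the same arithmetic: 180 min divided by a D-value of about 5.5 min gives well over six decimal reductions.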
Investigation from Japanese MAGSAT team
NASA Technical Reports Server (NTRS)
Fukushima, N. (Principal Investigator)
1981-01-01
The acquisition of tapes which contain vector and scalar data decimated at an interval of 0.5 sec, together with time and position data, is reported. Progress in the study of magnetic anomalies in the vicinity of Japan and in electric currents in the ionosphere and magnetosphere is also reported. MAGSAT data was used in obtaining a map of total force anomaly for the area of latitude 10-70 deg N and longitude 110-170 deg E. One of the outstanding features in the map of the magnetic anomaly is a negative magnetic anomaly in the Okhotsk Sea, which is of geophysical interest because of its possible connection with high heat flow values in that area.
Classification System and Information Services in the Library of SAO RAS
NASA Astrophysics Data System (ADS)
Shvedova, G. S.
The classification system used at SAO RAS is described. It includes both special determinants from the UDC (Universal Decimal Classification) and newer tables with astronomical terms from the Library-Bibliographical Classification (LBC). The classification tables are continually modified, and new astronomical terms are introduced. At present, information services for the scientists are provided with the help of the Abstract Journal Astronomy, Astronomy and Astrophysics Abstracts, and the catalogues and card indexes of the library. Based on our classification system and The Astronomy Thesaurus compiled by R.M. Shobbrook and R.R. Shobbrook, the development of a database for the library has been started, which allows prompt service to the observatory's staff members.
Ordered fast Fourier transforms on a massively parallel hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Tong, Charles; Swarztrauber, Paul N.
1991-01-01
Alternative designs of ordered radix-2 decimation-in-frequency FFT algorithms for massively parallel hypercube multiprocessors are evaluated, with attention to reducing the communication that dominates computation time. A combination of the ordering and computational phases of the FFT is accordingly employed, in conjunction with sequence-to-processor maps which reduce communication. Two orderings, 'standard' and 'cyclic', in which the order of the transform is the same as that of the input sequence, can be implemented with ease on the Connection Machine (where orderings are determined by geometries and priorities). A parallel method for trigonometric coefficient computation is presented which does not employ trigonometric functions or interprocessor communication.
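For reference, the decimation-in-frequency recursion itself, independent of the hypercube mappings and communication optimizations that are the subject of the paper, can be sketched in a few lines. Interleaving the even- and odd-indexed outputs during recombination is what yields a naturally ordered result without a separate bit-reversal pass; this is a serial sketch, not the parallel implementation.

```python
import numpy as np

def dif_fft(x):
    """Radix-2 decimation-in-frequency FFT with ordered output.
    The even/odd interleave at each level plays the role of the explicit
    ordering phase; the length must be a power of two."""
    n = len(x)
    if n == 1:
        return x.copy()
    half = n // 2
    w = np.exp(-2j * np.pi * np.arange(half) / n)   # twiddle factors
    top, bot = x[:half], x[half:]
    even = dif_fft(top + bot)          # X[2k]   = FFT of the sums
    odd = dif_fft((top - bot) * w)     # X[2k+1] = FFT of twiddled differences
    out = np.empty(n, dtype=complex)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.default_rng(1).standard_normal(16) + 0j
print(np.allclose(dif_fft(x), np.fft.fft(x)))   # -> True
```

On a hypercube, each recursion level corresponds to an exchange along one cube dimension, which is where the choice of sequence-to-processor map determines the communication cost.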
Multiple-scale neuroendocrine signals connect brain and pituitary hormone rhythms
Romanò, Nicola; Guillou, Anne; Martin, Agnès O; Mollard, Patrice
2017-01-01
Small assemblies of hypothalamic “parvocellular” neurons release their neuroendocrine signals at the median eminence (ME) to control long-lasting pituitary hormone rhythms essential for homeostasis. How such rapid hypothalamic neurotransmission leads to slowly evolving hormonal signals remains unknown. Here, we show that the temporal organization of dopamine (DA) release events in freely behaving animals relies on a set of characteristic features that are adapted to the dynamic dopaminergic control of pituitary prolactin secretion, a key reproductive hormone. First, locally generated DA release signals are organized over more than four orders of magnitude (0.001 Hz–10 Hz). Second, these DA events are finely tuned within and between frequency domains as building blocks that recur over days to weeks. Third, an integration time window is detected across the ME and consists of high-frequency DA discharges that are coordinated within the minutes range. Thus, a hierarchical combination of time-scaled neuroendocrine signals displays local–global integration to connect brain–pituitary rhythms and pace hormone secretion. PMID:28193889
Code of Federal Regulations, 2011 CFR
2011-07-01
... beneath the earth's surface. The principal hydrocarbon constituent is methane. Onshore means all... sulfur compounds means H2S, carbonyl sulfide (COS), and carbon disulfide (CS2). Sulfur production rate... efficiency achieved in percent, carried to one decimal place. S=The sulfur production rate, kilograms per hour...
Code of Federal Regulations, 2010 CFR
2010-07-01
... beneath the earth's surface. The principal hydrocarbon constituent is methane. Onshore means all... sulfur compounds means H2S, carbonyl sulfide (COS), and carbon disulfide (CS2). Sulfur production rate... efficiency achieved in percent, carried to one decimal place. S=The sulfur production rate, kilograms per hour...
Code of Federal Regulations, 2012 CFR
2012-07-01
... beneath the earth's surface. The principal hydrocarbon constituent is methane. Onshore means all... sulfur compounds means H2S, carbonyl sulfide (COS), and carbon disulfide (CS2). Sulfur production rate... efficiency achieved in percent, carried to one decimal place. S=The sulfur production rate, kilograms per hour...
16 CFR 500.11 - Measurement of commodity length, how expressed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... follows: (a) If less than 1 foot, in terms of inches and fractions thereof. (b) If 1 foot or more, in... decimal fractions of the foot or yard, except that it shall be optional to express the length in the...
Promoting Decimal Number Sense and Representational Fluency
ERIC Educational Resources Information Center
Suh, Jennifer M.; Johnston, Chris; Jamieson, Spencer; Mills, Michelle
2008-01-01
The abstract nature of mathematics requires the communication of mathematical ideas through multiple representations, such as words, symbols, pictures, objects, or actions. Building representational fluency involves using mathematical representations flexibly and being able to interpret and translate among these different models and mathematical…
Extending the Regular Curriculum through Creative Problem Solving.
ERIC Educational Resources Information Center
Bohan, Harry; Bohan, Susan
1993-01-01
Uses ancient Egyptian numeration system in a new setting to extend the concepts of base, place value, and correspondence. Discusses similarities and differences between the Egyptian and decimal systems. Students are asked to propose changes to make the Egyptian system easier. (LDR)
Workplace Math I: Easing into Math.
ERIC Educational Resources Information Center
Wilson, Nancy; Goschen, Claire
This basic skills learning module includes instruction in performing basic computations, using general numerical concepts such as whole numbers, fractions, decimals, averages, ratios, proportions, percentages, and equivalents in practical situations. The problems are relevant to all aspects of the printing and manufacturing industry, with emphasis…
How can we eradicate chlamydia?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rousser, Margaret; He, Wei
Chlamydia is the most commonly contracted STI and affects millions of people worldwide, but it's not just hurting humans--it's also decimating koala populations! Find out how researchers at the Lab are working toward developing the first vaccine against chlamydia--good news for humans and koalas.
The Native American Holocaust.
ERIC Educational Resources Information Center
Thornton, Russell
1989-01-01
Describes the American Indian "Holocaust," decimation of Indian populations following European discovery of the Americas. European and African diseases, warfare with Europeans, and genocide reduced native populations from 75 million to only a few million. Discusses population statistics and demographic effects of epidemics, continuing infection,…
Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology
NASA Astrophysics Data System (ADS)
Litvay, Robyn Olson
Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times can result in huge financial losses and, with profit margins for airline operations currently almost nonexistent, can negate any possible profit. Although optimization techniques have produced many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Actual U.S. domestic flight data from a major airline were used to develop a model that predicts airline block times with increased accuracy and smaller variance of the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, together with historic block time distributions. The estimation of block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline's network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average actual block times while minimizing the variation. Predictions of block times for the third quarter months of July and August of 2011 were calculated using this new model. The resulting actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major U.S. domestic airline.
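A minimal sketch of the general idea, a deterministic regression base plus a probabilistic pad drawn from the historical residual distribution. The features, the 85th-percentile padding rule, and all data below are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic history: stage distance (nmi), departure hour, actual block (min)
n = 2000
dist = rng.uniform(200, 2200, n)
hour = rng.integers(6, 22, n)
block = (25 + 0.12 * dist
         + (hour > 16) * rng.gamma(2.0, 4.0, n)   # evening congestion
         + rng.normal(0, 6, n))

# Deterministic base: ordinary least squares on simple features
X = np.column_stack([np.ones(n), dist, hour > 16])
beta, *_ = np.linalg.lstsq(X, block, rcond=None)

# Probabilistic component: pad the base prediction with a quantile of the
# historical residual distribution (an illustrative 85th percentile).
resid = block - X @ beta
pad = np.percentile(resid, 85)

def predict_block(distance_nmi, dep_hour):
    base = beta @ np.array([1.0, distance_nmi, dep_hour > 16])
    return base + pad

print(f"predicted block time: {predict_block(1000, 18):.1f} min")
```

Choosing the quantile is the operational trade-off: a higher pad improves on-time reliability but inflates scheduled block, crew cost, and fuel uplift.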
Fractally Fourier decimated homogeneous turbulent shear flow in noninteger dimensions.
Fathali, Mani; Khoei, Saber
2017-02-01
Time evolution of the fully resolved incompressible homogeneous turbulent shear flow in noninteger Fourier dimensions is numerically investigated. The Fourier dimension of the flow field is extended from the integer value 3 to noninteger values by projecting the Navier-Stokes equation on the fractal set of the active Fourier modes with dimensions 2.7≤d≤3.0. The results of this study reveal that the dynamics of both large and small scale structures are nontrivially influenced by changing the Fourier dimension d. While both turbulent production and dissipation are significantly hampered as d decreases, the evolution of their ratio is almost independent of the Fourier dimension. The mechanism of energy distribution among different spatial directions is also impeded by decreasing d. Due to this deficient energy distribution, the turbulent field shows a higher level of large-scale anisotropy in lower Fourier dimensions. In addition, the persistence of the vortex stretching mechanism and the forward spectral energy transfer, which are three-dimensional turbulence characteristics, are examined as d changes, from the standard case d=3.0 to the strongly decimated flow field for d=2.7. As the Fourier dimension decreases, these forward energy transfer mechanisms are strongly suppressed, which in turn reduces both the small-scale intermittency and the deviation from Gaussianity. Besides the energy exchange intensity, the variations of d considerably modify the relative weights of local to nonlocal triadic interactions. It is found that the contribution of the nonlocal triads to the total turbulent kinetic energy exchange increases as the Fourier dimension increases.
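A minimal sketch of how a fractal set of active Fourier modes can be constructed, assuming each mode k is kept independently with probability proportional to |k|^(d-3), so the number of active modes below wavenumber k scales like k^d; grid size and constants are illustrative, and the real simulations then project the Navier-Stokes fields onto this frozen mask at every time step.

```python
import numpy as np

def fractal_mode_mask(n, d, rng):
    """Boolean mask over an n^3 Fourier grid keeping mode k with
    probability min(1, |k|**(d - 3)), d <= 3; the mask is drawn once
    and frozen for the whole simulation."""
    k = np.fft.fftfreq(n, d=1.0 / n)                # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag_safe = np.where(kmag > 0, kmag, 1.0)       # always keep the mean mode
    prob = np.minimum(1.0, kmag_safe ** (d - 3.0))
    return rng.random(prob.shape) < prob

rng = np.random.default_rng(0)
for d in (3.0, 2.9, 2.7):
    m = fractal_mode_mask(64, d, rng)
    print(f"d = {d}: active mode fraction = {m.mean():.3f}")
```

Because decimated modes stay identically zero, many triads lose one leg, which is the mechanism behind the suppressed energy transfer and altered local/nonlocal triad weights discussed above.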
Outcomes of transconjunctival sutureless 27-gauge vitrectomy for vitreoretinal diseases.
Li, Jie; Liu, San-Mei; Dong, Wen-Tao; Li, Fang; Zhou, Cai-Hong; Xu, Xiao-Dan; Zhong, Jie
2018-01-01
To evaluate the safety and efficacy profile of 27-gauge (27G) pars plana vitrectomy (PPV) for the treatment of various vitreoretinal diseases. The clinical outcomes of 61 eyes (58 patients) with various vitreoretinal diseases following 27G PPV were retrospectively reviewed. Surgical indications included rhegmatogenous retinal detachment (n=24), full-thickness macular hole (n=12), diabetic retinopathy (n=11), vitreous hemorrhage (n=6), Eales disease (n=4), pathological myopia-related vitreous floaters (n=2), and macular epiretinal membrane (n=2). The mean follow-up was 166.4±61.3d (range 98-339d). The mean logMAR best-corrected visual acuity (BCVA) improved from 1.7±1.1 [0.02 decimal visual acuity (VA) equivalent] preoperatively to 1.2±1.0 (0.06 decimal VA equivalent) at the last postoperative visit (P<0.001). The mean operative time was 49.9min. With the exception of a complicated cataract in one eye, no intraoperative complications were encountered. No case required conversion to conventional 20-, 23- or 25G instrumentation for any surgical maneuver except silicone oil infusion, which required a 25G oil injection syringe. Postoperative complications included transient ocular hypertension, vitreous hemorrhage, persistent intraocular pressure elevation, subconjunctival oil leakage, and recurrent retinal detachment. No cases of hypotony, endophthalmitis, or sclerotomy-related tears were observed. The current results suggest that the 27G PPV system is a safe and effective treatment for various vitreoretinal diseases. When learning to perform 27G PPV, surgeons may encounter a learning curve and should gradually expand surgical indications from easy to pathologically complicated cases.
NASA Astrophysics Data System (ADS)
Imola, Molnar; Judit, Papp; Alpar, Simon; Sorin, Dan Anghel
2013-06-01
This paper presents a study of the effect of a low temperature atmospheric helium dielectric barrier discharge (DBD) on Streptococcus mutans biofilms formed on tooth surfaces. Pig jaws were also treated with plasma to determine whether there was any harmful effect on the gingiva. The plasma was characterized using optical emission spectroscopy. Experimental data indicated that the discharge is very effective in deactivating Streptococcus mutans biofilms: it destroys them with an average decimal reduction time (D-time) of 19 s, and about 98% of the bacteria were killed after a treatment time of 30 s. According to the survival-curve kinetics, an overall treatment time of 32 s would be necessary to achieve complete sterilization. The experimental results presented in this study indicate that the helium dielectric barrier discharge, in a plane-parallel electrode configuration, could be a very effective tool for the deactivation of oral bacteria and might be a promising technique in various dental clinical applications.
Evolving neural networks with genetic algorithms to study the string landscape
NASA Astrophysics Data System (ADS)
Ruehle, Fabian
2017-08-01
We study possible applications of artificial neural networks to examine the string landscape. Since the field of application is rather versatile, we propose to dynamically evolve these networks via genetic algorithms. This means that we start from basic building blocks and combine them such that the neural network performs best for the application we are interested in. We study three areas in which neural networks can be applied: to classify models according to a fixed set of (physically) appealing features, to find a concrete realization for a computation for which the precise algorithm is known in principle but very tedious to actually implement, and to predict or approximate the outcome of some involved mathematical computation which is too inefficient to apply directly, e.g. in model scans within the string landscape. We present simple examples that arise in string phenomenology for all three types of problems and discuss how they can be addressed by evolving neural networks from genetic algorithms.
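As a minimal sketch of the genetic-algorithm side, the toy below evolves only the weights of a fixed 2-4-1 network on XOR, standing in for "classify models by a fixed set of features"; the paper evolves the networks' building blocks themselves, which is more elaborate, and the population size, mutation scale, and selection rule here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(w, x):
    """Fixed 2-4-1 network; w is a flat vector of 17 parameters."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)       # negative MSE

pop = rng.normal(0, 1, (60, 17))
for _ in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-15:]]           # selection: top quarter
    parents = elite[rng.integers(0, 15, (60, 2))]
    cut = rng.integers(1, 16, 60)                   # one-point crossover
    pop = np.where(np.arange(17) < cut[:, None], parents[:, 0], parents[:, 1])
    pop += rng.normal(0, 0.1, pop.shape)            # mutation
    pop[0] = elite[-1]                              # elitism

print("best fitness:", fitness(pop[0]))
print("outputs:", np.round(forward(pop[0], X), 2))
```

Evolving topologies rather than weights replaces the flat vector with a variable-length genome of building blocks, but the select/crossover/mutate loop is the same.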
Hot Gas in Merging Subgroups; Probing the Early Stages of Structure Formation
NASA Astrophysics Data System (ADS)
Machacek, Marie
2014-08-01
To fully understand the growth of large scale structure in hierarchical cosmological models, we must first understand how their building blocks, low mass galaxy subgroups, evolve through mergers. These galaxy subgroups are X-ray faint and difficult to observe at high redshift. Nearby subgroup mergers may therefore be used as templates to gain insight into the dominant dynamical processes at work in the early universe. We use Chandra observations of edges, tails and wings in a sample of nearby galaxy groups (Pavo, Telescopium, Pegasus, NGC7618/UGC12491) to measure the properties of the diffuse gas, merger velocities, shocks and non-hydrostatic gas 'sloshing', as their common ICM envelopes evolve.
USDA-ARS?s Scientific Manuscript database
Plants and animals both independently evolved the ability to recognize flagellin (also called FliC), the building block of the bacterial flagellum, as part of their innate immune response. Most plants recognize one or two short epitopes of FliC: flg22 and flgII-28. However, since most research in pl...
Research Notes - An Introduction to Openness and Evolvability Assessment
2016-08-01
importance of different business and technical characteristics that combine to achieve an open solution. The complexity of most large-scale systems of...process characteristic) Granularity of the architecture (size of functional blocks) Modularity (cohesion and coupling) Support for multiple...Description) OV-3 (Operational Information Exchange Matrix) SV-1 (Systems Interface Description) TV-1 (Technical Standards Profile). Note that there
A concept to standardize raw biosignal transmission for brain-computer interfaces.
Breitwieser, Christian; Neuper, Christa; Müller-Putz, Gernot R
2011-01-01
With this concept we introduce a proposed standard interface, called TiA, for transmitting raw biosignals. TiA is able to deal with multirate and block-oriented data transmission. Data is distinguished by different signal types (e.g., EEG, EOG, NIRS, …), whereby those signals can be acquired at the same time from different acquisition devices. TiA is built as a client-server model. Multiple clients can connect to one server. Information is exchanged via a control connection and a separate data connection. Control commands and meta information are transmitted over the control connection. Raw biosignal data is delivered over the data connection in a unidirectional way. For this purpose a standardized handshaking protocol and raw data packet have been developed. Thus, an abstraction layer between hardware devices and data processing was created, facilitating standardization.
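A sketch of what such a block-oriented raw-data packet might look like. The field layout below (packet counter, block count, then per-block signal type, channel count, and block size) is hypothetical, chosen only to illustrate multirate, multi-signal framing; the actual TiA packet format and handshaking protocol are defined in the paper and its specification.

```python
import struct

SIGNAL_EEG, SIGNAL_EOG = 1, 2   # hypothetical signal-type codes

def pack_data_packet(counter, blocks):
    """Serialize one data packet.  blocks is a list of
    (signal_type, channel_count, samples) tuples, where samples holds
    channel_count * block_size float32 values."""
    out = struct.pack("<II", counter, len(blocks))      # fixed header
    for sig_type, channels, samples in blocks:
        block_size = len(samples) // channels           # samples per channel
        out += struct.pack("<HHH", sig_type, channels, block_size)
        out += struct.pack(f"<{len(samples)}f", *samples)
    return out

# one EEG block (2 channels x 4 samples) and one EOG block (1 x 2):
pkt = pack_data_packet(42, [(SIGNAL_EEG, 2, [0.1] * 8),
                            (SIGNAL_EOG, 1, [0.5, 0.6])])
print(len(pkt), "bytes")
```

Carrying each signal type in its own sized block is what lets devices with different sampling rates share one unidirectional data stream.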
NASA Astrophysics Data System (ADS)
Congy, T.; Ivanov, S. K.; Kamchatnov, A. M.; Pavloff, N.
2017-08-01
We consider the space-time evolution of initial discontinuities of depth and flow velocity for an integrable version of the shallow water Boussinesq system introduced by Kaup. We focus on a specific version of this "Kaup-Boussinesq model" for which a flat water surface is modulationally stable; below we speak of the "positive dispersion" model. This model also appears as an approximation to the equations governing the dynamics of polarisation waves in two-component Bose-Einstein condensates. We describe its periodic solutions and the corresponding Whitham modulation equations. The self-similar, one-phase wave structures are composed of different building blocks, which are studied in detail. This makes it possible to establish a classification of all the possible wave configurations evolving from initial discontinuities. The analytic results are confirmed by numerical simulations.
NASA Astrophysics Data System (ADS)
Meintanis, Evangelos Anastasios
We have extended the HOLA molecular dynamics (MD) code to run slider-on-block friction experiments for Al and Cu. Both objects are allowed to evolve freely and show marked deformation despite the hardness difference. We recover realistic coefficients of friction and verify the importance of cold-welding and plastic deformations in dry sliding friction. Our first data also show a mechanism for decoupling between load and friction at high velocities. Such a mechanism can explain an increase in the coefficient of friction of metals with velocity. The study of the effects of currents on our system required the development of a suitable electrodynamic (ED) solver, as the disparity of MD and ED time scales threatened the efficiency of our code. Our first simulations combining ED and MD are presented.
Ince, Ilker; Aksoy, Mehmet; Celik, Mine
2016-01-01
Objective: Distal nerve blocks are used as rescue techniques in the event of unsuccessful blocks. The primary purpose of this study was to determine whether a distal nerve block provides sufficient anesthesia without the need for deep sedation or general anesthesia. The secondary purpose was to compare block performance times, block onset times, and patient and surgeon satisfaction. Materials and Methods: Patients who underwent hand surgery associated with the innervation area of the radial and median nerves were included in the study. Thirty-four patients who were 18–65 years old and American Society of Anesthesiologists grade I–III and who were scheduled for elective hand surgery under conscious nerve block anesthesia were randomly assigned to an infraclavicular block group (Group I, n=17) or a radial plus median block group (Group RM, n=17). The block performance time, block onset time, satisfaction of the patient and surgeon, and number of fentanyl administrations were recorded. Results: The numbers of patients who needed fentanyl administration and conversion to general anesthesia were the same in Group I and Group RM, with no statistically significant difference (p>0.05). The demographics, surgery times, tourniquet times, block performance times, and patient and surgeon satisfaction of the groups were similar, with no statistically significant differences (p>0.05). There was a statistically significant difference in block onset times between the groups (p<0.05). Conclusions: Conscious hand surgery can be performed under distal nerve block anesthesia safely and successfully. PMID:28149139
Koyama, Takuya; Ito, Hiromu; Fujisawa, Tomochika; Ikeda, Hiroshi; Kakishima, Satoshi; Cooley, John R; Simon, Chris; Yoshimura, Jin; Sota, Teiji
2016-11-01
Life history evolution spurred by post-Pleistocene climatic change is hypothesized to be responsible for the present diversity in periodical cicadas (Magicicada), but the mechanism of life cycle change has been controversial. To understand the divergence process of 13-year and 17-year cicada life cycles, we studied genetic relationships between two synchronously emerging, parapatric 13-year periodical cicada species in the Decim group, Magicicada tredecim and M. neotredecim. The latter was hypothesized to be of hybrid origin or to have switched from a 17-year cycle via developmental plasticity. Phylogenetic analysis using restriction-site-associated DNA sequences for all Decim species and broods revealed that the 13-year M. tredecim lineage is genomically distinct from 17-year Magicicada septendecim but that 13-year M. neotredecim is not. We detected no significant introgression between M. tredecim and M. neotredecim/M. septendecim thus refuting the hypothesis that M. neotredecim are products of hybridization between M. tredecim and M. septendecim. Further, we found that introgressive hybridization is very rare or absent in the contact zone between the two 13-year species evidenced by segregation patterns in single nucleotide polymorphisms, mitochondrial lineage identity and head width and abdominal sternite colour phenotypes. Our study demonstrates that the two 13-year Decim species are of independent origin and nearly completely reproductively isolated. Combining our data with increasing observations of occasional life cycle change in part of a cohort (e.g. 4-year acceleration of emergence in 17-year species), we suggest a pivotal role for developmental plasticity in Magicicada life cycle evolution. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2011-07-01
The results of smoothing the major planets' and Moon's ephemerides by cubic polynomials are presented. The ephemerides considered are DE405, DE406, DE408, DE421, DE423 and DE722. The goal of the smoothing is the elimination of discontinuous behavior of the interpolated coordinates and their derivatives at the junctions of adjacent interpolation intervals when calculations are made with 34-digit decimal accuracy. The reason for this behavior is the limited 16-digit decimal accuracy of the coefficients of the interpolating Chebyshev polynomials in the ephemerides. Such discontinuity of the perturbing bodies' coordinates significantly reduces the advantages of 34-digit calculations, because the accuracy of numerical integration of the asteroids' equations of motion increases in this case by just 3 orders of magnitude compared with 16-digit calculations. It is demonstrated that the cubic-polynomial smoothing of the ephemerides eliminates the jumps of the perturbing bodies' coordinates and their derivatives. This increases the numerical integration accuracy by 7-9 orders of magnitude. All calculations in this work were made with 34-digit decimal accuracy on the computer cluster "Skif Cyberia" of Tomsk State University.
NASA Astrophysics Data System (ADS)
Miki, Nobuhiko; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao
In the Evolved UTRA (UMTS Terrestrial Radio Access) downlink, Orthogonal Frequency Division Multiplexing (OFDM) based radio access was adopted because of its inherent immunity to multipath interference and flexible accommodation of different spectrum arrangements. This paper presents the optimum adaptive modulation and channel coding (AMC) scheme when multiple resource blocks (RBs) are simultaneously assigned to the same user under frequency- and time-domain channel-dependent scheduling in the downlink OFDMA radio access with single-antenna transmission. We start by presenting selection methods for the modulation and coding scheme (MCS) employing mutual information, both for RB-common and RB-dependent modulation schemes. Simulation results show that, irrespective of the application of power adaptation to RB-dependent modulation, the improvement in the achievable throughput of the RB-dependent modulation scheme compared to that of the RB-common modulation scheme is slight, i.e., 4 to 5%. In addition, the number of required control signaling bits in the RB-dependent modulation scheme is greater than that for the RB-common modulation scheme. Therefore, we conclude that the RB-common modulation and channel coding rate scheme is preferred when multiple RBs of the same coded stream are assigned to one user in the case of single-antenna transmission.
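A minimal sketch of mutual-information-based MCS selection for the RB-common scheme, assuming the Gaussian-input capacity log2(1+SNR) as a stand-in for the true modulation-constrained mutual information, plus an illustrative MCS table and backoff; the paper's actual MCS set and mapping differ.

```python
import numpy as np

# Illustrative MCS table: (name, spectral efficiency in bits/symbol)
MCS = [("QPSK 1/3", 0.66), ("QPSK 2/3", 1.33), ("16QAM 1/2", 2.0),
       ("16QAM 3/4", 3.0), ("64QAM 2/3", 4.0), ("64QAM 5/6", 5.0)]

def select_mcs_rb_common(snr_db_per_rb, backoff_bits=0.5):
    """Pick one MCS for all assigned RBs: average the per-RB mutual
    information, then take the largest MCS whose spectral efficiency
    fits under that average minus a backoff margin."""
    snr = 10 ** (np.asarray(snr_db_per_rb) / 10.0)
    mean_mi = np.mean(np.log2(1.0 + snr))        # bits/symbol, averaged
    feasible = [m for m in MCS if m[1] <= mean_mi - backoff_bits]
    return feasible[-1] if feasible else MCS[0]

print(select_mcs_rb_common([3.0, 8.0, 12.0, 15.0]))   # per-RB SNRs in dB
```

Averaging in the mutual-information domain rather than in raw SNR is what makes one common MCS nearly as good as per-RB adaptation when the scheduler has already picked the user's best RBs.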
Brown, K
2008-12-01
This article provides an historical overview of developments in veterinary entomology during the late nineteenth and early twentieth centuries. During that period, state-employed entomologists and veterinary scientists discovered that ticks were responsible for transmitting a number of livestock diseases in South Africa. Diseases such as heartwater, redwater and gallsickness were endemic to the country. They had a detrimental effect on pastoral output, which was a mainstay of the national economy. Then in 1902 the decimating cattle disease East Coast fever arrived, making the search for cures or preventatives all the more urgent. Vaccine technologies against tick-borne diseases remained elusive overall, and on the basis of scientific knowledge the South African state recommended regularly dipping animals in chemical solutions to destroy the ticks. Dipping, along with quarantines and culls, resulted in the eradication of East Coast fever from South Africa in the early 1950s. However, from the 1930s some ticks evolved a resistance to the chemical dips, meaning that diseases like redwater were unlikely to be eliminated by that means. Scientists toiled to improve upon existing dipping technologies and also carried out ecological surveys to enhance their ability to predict outbreaks. Over the longer term dipping was not a panacea, and ticks continue to present a major challenge to pastoral farming.
Improved algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2013-02-01
Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places. New version program summary Program title: CHANDRAS_v2 Catalogue identifier: AEMC_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 976 No. of bytes in distributed program, including test data, etc.: 11416 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a Fortran 90 compiler Operating system: Windows 7, Windows XP, Unix/Linux RAM: 0.7 MB Classification: 2.4, 7.2 Catalogue identifier of previous version: AEMC_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773 Does the new version supersede the old program: Yes Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is obtained by mixing these two algorithms, selecting for each range of the argument omega the one with the fastest performance. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of the Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20. Simultaneously, the accuracy of the results was not affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x,omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature.
It has been found, however, that the use of Gauss-Legendre quadrature considerably improves the performance of both algorithms. Two conditions have to be satisfied: (i) the number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with an accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Because the implemented tables are rather extensive, they were recalculated using the Rybicki algorithm (Ref. [5], pp. 183-184) and rechecked. No errors or misprints were found. (2) In the integral representation of the H function derived by Davidović et al. [3], the positive root ν0 of the so-called dispersion function needs to be calculated with an accuracy of at least 10 decimal places (see Ref. [6], pp. 61-64 and Ref. [1], Eqs. (5) and (29)). For small values of the argument omega and values of omega close to unity, the nonlinear equation in one unknown, ν0, can be solved analytically. New simple analytical expressions were derived here that can be used to calculate the root efficiently. (3) The above modifications of the code considerably decreased the running time of both algorithms A and B. The results are summarized in Fig. 1; the time of calculation is the CPU time in microseconds on a computer equipped with an Intel Xeon processor (3.46 GHz) using Lahey-Fujitsu Fortran v. 7.2. [Fig. 1 caption: Time of calculation of the H(x,omega) function averaged over different pairs of arguments x and omega. (a) 400 pairs uniformly distributed in the ranges 0<=x<=0.05 and 0<=omega<=1; (b) 400 pairs uniformly distributed in the ranges 0.05<=x<=1 and 0<=omega<=1.] The shortest execution time averaged over values of the argument x exceeding 0.05 was observed for algorithm B with Gauss-Legendre quadrature and the number of abscissas equal to 64 (23.2 μs). Compared with Romberg quadrature, the execution time was shortened by a factor of 22.5. For small x values, below 0.05, both algorithms A and B are considerably faster if Gauss-Legendre quadrature is used. For N=64, the average execution time of algorithm B is decreased with respect to Romberg quadrature by a factor close to 30. However, in that range of the argument x, algorithm A exhibits much faster performance. Furthermore, the average execution time of algorithm A, equal to about 100 μs, is practically independent of the number of abscissas N. (4) For Romberg quadrature, to optimize the performance, the mixed algorithm C was proposed, in which algorithm A is used for argument x smaller than or equal to x0=0.4, while algorithm B is used for x larger than 0.4 [1]. For Gauss-Legendre quadrature, the limit x0 was found to depend on the number of abscissas N. For each value of N considered, the time of calculation of the H function was determined for pairs of arguments uniformly distributed in the ranges 0<=x<=0.05 and 0<=omega<=1, and for pairs uniformly distributed in the ranges 0.05<=x<=1 and 0<=omega<=1. As shown in Fig. 2 for N=64, algorithm A is faster than algorithm B for x smaller than or equal to 0.0225. [Fig. 2 caption: Comparison of the running times of algorithms A and B. Open circles: algorithm B is faster than algorithm A; full circles: algorithm A is faster than algorithm B.]
Thus, the value of x0=0.0225 is proposed for the mixed algorithm C when Gauss-Legendre quadrature with N=64 is used. Similar computer experiments performed for other values of N are summarized below.

L    N    x0
1    16   0.25
2    20   0.15
3    24   0.10
4    32   0.050
5    40   0.030
6    48   0.045
7    64   0.0225 (recommended)
8    80   0.0125
9    96   0.020

The flag L is one of the input parameters for the subroutine GAUSS. In the programs implementing algorithms A, B, and C (CHANDRA, CHANDRB, and CHANDRC), Gauss-Legendre quadrature with N=64 is currently set. As follows from Fig. 1, algorithm B (and consequently algorithm C) is the fastest in that case. It is still possible to change the number of abscissas; the flag L then has to be modified in lines 165, 169, 185, 189, and 304 of program CHANDRAS_v2, and the value of x0 in line 111 has to be adjusted according to the table above. (5) The above modifications of the code did not affect the accuracy of the calculated Chandrasekhar function, as compared to the original code [1]. For the pairs of arguments shown in Fig. 2, the accuracy of the H function calculated from algorithms A and B reached at least 12 decimal digits; in the majority of cases, the accuracy is 13 decimal digits. Restrictions: Two input parameters for the Chandrasekhar function, x and omega, are restricted to the ranges 0<=x<=1 and 0<=omega<=1, which is sufficient in numerous applications. Running time: between 15 and 100 μs for one pair of arguments of the Chandrasekhar function.
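For orientation, the sketch below shows how Gauss-Legendre quadrature can enter a computation of H(x,omega). It solves the classical nonlinear integral equation H(x) = 1 + (omega/2) x H(x) * integral from 0 to 1 of H(t)/(x+t) dt by fixed-point iteration on the quadrature nodes. This is a minimal illustration of the quadrature idea only, not the published algorithms A or B, and the simple iteration slows down as omega approaches 1.

```python
import numpy as np

def chandrasekhar_H(x, omega, n=64, tol=1e-13, max_iter=10000):
    """Solve H(mu) = 1 + (omega/2)*mu*H(mu) * int_0^1 H(t)/(mu+t) dt by
    fixed-point iteration on Gauss-Legendre nodes mapped to [0, 1].
    Illustrative only; the published CHANDRAS_v2 code is far more refined."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t, w = 0.5 * (t + 1.0), 0.5 * w             # map to [0, 1]
    K = 1.0 / (t[:, None] + t[None, :])         # kernel 1/(t_i + t_j)
    H = np.ones(n)
    for _ in range(max_iter):
        H_new = 1.0 / (1.0 - 0.5 * omega * t * (K @ (w * H)))
        done = np.max(np.abs(H_new - H)) < tol
        H = H_new
        if done:
            break
    # evaluate the converged nodal solution at the requested argument x
    return 1.0 / (1.0 - 0.5 * omega * x * np.sum(w * H / (x + t)))

print(chandrasekhar_H(0.5, 0.9))   # sample point; convergence slows as omega -> 1
```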
A New Model of the Early Paleozoic Tectonics and Evolutionary History in the Northern Qinling, China
NASA Astrophysics Data System (ADS)
Dong, Yunpeng; Zhang, Guowei; Yang, Zhao; Qu, Hongjun; Liu, Xiaoming
2010-05-01
The Qinling Orogenic Belt extends from the Qinling Mountains in the west to the Dabie Mountains in the east. It lies between the North China and South China Blocks, and is bounded on the north by the Lushan fault and on the south by the Mianlue-Bashan-Xiangguang fault (Zhang et al., 2000). The Qinling Orogenic Belt itself is divided into the North and South Qinling Terranes by the Shangdan suture zone. Although the Shangdan zone is thought to represent the major suture separating the two blocks, the timing and mechanism of convergence between them remain debated. For instance, some authors suggested an Early Paleozoic collision between the North China Block and South China Block (Ren et al., 1991; Kroner et al., 1993; Zhai et al., 1998). Others postulated left-lateral strike-slip faulting along the Shangdan suture at ca. 315 Ma and inferred a pre-Devonian collision between the two blocks (Mattauer et al., 1985; Xu et al., 1988). Geochemistry of fine-grained sediments in the Qinling Mountains was used to argue for a Silurian-Devonian collision (Gao et al., 1995). A Late Triassic collision has also been proposed (Sengor, 1985; Hsu et al., 1987; Wang et al., 1989), based on the formation of ultrahigh-pressure metamorphic rocks in the easternmost part of the Qinling Orogenic Belt at ~230 Ma (e.g., Li et al., 1993; Ames et al., 1996). Paleomagnetic data favor a Late Triassic-Middle Jurassic amalgamation of the North China and South China Blocks (Zhao and Coe, 1987; Enkin et al., 1992). It is clear that most authors regard the Qinling Mountains as a collisional orogen, even though they disagree about the timing of the orogeny. Based on new detailed investigations, we propose a new model of the Early Paleozoic tectonics and evolutionary history between the North China and South China Blocks along the Shangdan suture. The Shangdan suture is marked by a great number of ophiolites, island-arc volcanic rocks and other related rock assemblages. Our new geological and geochemical data reveal numerous ophiolitic mélanges along the Shangdan suture, in the Guojiagou, Ziyu, Xiaowangjian, Yanwan, Tangzang, Guanzizhen and Wushan areas from east to west. The ophiolite assemblage in the Guojiagou and Ziyu areas consists mainly of blocks of E-MORB-type and IAB-type basalts, while the pillow lavas from Xiaowangjian are IAB-type basalts. The basalts from the ophiolite assemblages in the Yanwan, Tangzang and Wushan areas possess E-MORB geochemical compositions. Zircons from a gabbro of the Yanwan ophiolitic mélange yield a U-Pb age of 516±3.8 Ma, which represents the formation age of the Yanwan ophiolite. Meanwhile, the basalts in the Guanzizhen ophiolitic mélange show an N-MORB-type geochemical signature, and zircons from a gabbro yield a U-Pb age of 471±1.4 Ma, which constrains the formation age of the mature oceanic crust. Additionally, a U-Pb age of 523±26 Ma (Lu et al., 2003) and Cambrian-Ordovician radiolarians in cherts interlayered within the volcanic rocks of the Guojiagou ophiolitic mélange (Cui et al., 1995) have been reported. All this geochemical and geochronological evidence indicates an oceanic basin, and its subduction, separating the North China Block from the South China Block during 523-471 Ma.
Consistent with this ocean and its subduction, an active continental margin and island-arc setting existed on the north side of the Shangdan ophiolitic mélange, marked by a series of intermediate to basic igneous intrusions along the Sifangtai-Lajimiao area (Li et al., 1993) and the Fushui area (Dong et al., 1997). In addition, a great number of subduction- to collision-related granites intrude the island-arc basement along the active continental margin. Zircons from the Fushui intrusion yield a U-Pb age of 514±1.3 Ma (Chen et al., 2004), which constrains the timing of the subduction. Moreover, a growing body of data suggests that a back-arc basin existed on the northern side of the island-arc terrane. To the east, it is represented by the Erlangping Group in the Xixia area, which consists mainly of clastic sediments, carbonate rocks and basic volcanic rocks. The geochemistry of the basalts shows that they were formed in a back-arc basin setting (Sun et al., 1996), and radiolarians from the interlayered cherts indicate an Ordovician-Silurian age (Wang et al., 1995). Our new investigations reveal new tectonic assemblages exposed in the Yinggerzui and Qinghusi areas to the west; detailed geochemical studies indicate that they, too, were formed in a back-arc basin. All the above evidence suggests an Early Paleozoic subduction system, consisting of a subduction trench, island arc and back-arc basin, along the northern Qinling zone. It also indicates that the paleo-ocean underwent a complete evolutionary sequence, including initial spreading (E-MORB ophiolite), mature extension (N-MORB ophiolite) and subduction (island-arc volcanic rocks). However, it is notable that large-scale Devonian clastic sediments are distributed south of the Shangdan suture, and that the pre-Mesozoic rocks of the South Qinling are either unmetamorphosed or locally underwent only low-greenschist-facies metamorphism. This contrasts with the North Qinling Terrane, which consists mainly of Precambrian rocks that underwent amphibolite-facies metamorphism at ~1.0 Ga and greenschist-facies metamorphism at ~400 Ma (Liu et al., 1993; Zhang et al., 1994). Accordingly, we prefer a model in which the Shangdan oceanic crust was subducted only from south to north along the Shangdan suture, south of the North Qinling Terrane. Furthermore, the Piaochi and Anjiping granites, with syn-collisional granite geochemistry and formation ages of 450-486 Ma (Chen et al., 1991; Zhang et al., 1996), indicate a collisional event between the North Qinling island-arc terrane and the North China Block caused by closure of the Early Paleozoic back-arc basin. Additionally, metamorphic studies show two zones of high/ultrahigh-pressure metamorphic rocks cropping out along both sides of the North Qinling island-arc terrane. On the north, the zone is characterized by eclogite and coesite cropping out in the Guanpo area, with metamorphic zircon U-Pb ages of 507±38 Ma and 509±12 Ma obtained by SHRIMP (Yang et al., 2002). Meanwhile, high-pressure basic granulites (Liu et al., 1995) and felsic granulites (Liu et al., 1996) are distributed along the Xigou fault on the southern margin of the North Qinling island-arc terrane.
Zircon U-Pb ages of 485±3.3 Ma obtained by LA-ICP-MS (Chen et al., 2004) and 518±12 Ma obtained by SHRIMP (Liu et al., 2003) constrain the timing of this metamorphism. All these metamorphic data suggest that the North Qinling island-arc terrane underwent deep subduction during 518-485 Ma. Based on all the above evidence, we infer a new model for the tectonics and evolutionary history of the North Qinling Terrane. We emphasize that the Early Paleozoic tectonic framework between the North China and South China Blocks comprised an ocean, an island arc and a back-arc basin, and evolved through four stages: 1) initial spreading along the Shangdan zone during 523-516 Ma; 2) mature ocean along the Shangdan zone during 516-471 Ma; 3) subduction along the south side of the North Qinling Terrane and formation of the back-arc basin along its north side during 518-514 Ma; 4) closure of the back-arc basin, collision between the North Qinling island-arc terrane and the North China Block, and deep subduction of the North Qinling island-arc terrane during 518-485 Ma. This work was supported by NSFC (40772140 & 40972140).
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem: one that always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality. In this sense, the DFTCOMM method outperforms the competing pruning techniques reported in the literature. Finally, we provide a comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We show that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique is robust against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
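As a toy illustration of output pruning (the general idea only; the DFTCOMM commutation logic itself is described above at a higher level), the following Python snippet evaluates just a requested subset of DFT bins directly. The cost is O(NK) for K bins against O(N log N) for a full FFT, which is exactly the regime where pruning pays off.

```python
import numpy as np

def pruned_dft(x, bins):
    """Direct evaluation of a chosen subset of DFT output bins.
    Cost is O(N*K) for K bins, vs O(N log N) for a full FFT."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    return np.array([np.dot(x, np.exp(-2j * np.pi * k * n / x.size)) for k in bins])

x = np.random.default_rng(0).normal(size=1024)
bins = [0, 3, 97]
assert np.allclose(pruned_dft(x, bins), np.fft.fft(x)[bins])
```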
The Mathematics of Computer Error.
ERIC Educational Resources Information Center
Wood, Eric
1988-01-01
Why a computer error occurred is considered by analyzing the binary system and decimal fractions. How the computer stores numbers is then described. Knowledge of the mathematics behind computer operation is important if one wishes to understand and have confidence in the results of computer calculations. (MNS)
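A concrete instance of the kind of error the article analyzes, shown in Python (any language with IEEE 754 doubles behaves the same way): the decimal fraction 0.1 has no finite binary expansion, so the stored value is slightly off, and the discrepancy surfaces in ordinary arithmetic.

```python
from decimal import Decimal

print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(0.1 + 0.2 == 0.3)    # False -- both operands carry binary rounding error
print((0.1).hex())         # 0x1.999999999999ap-4: the repeating binary pattern
```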
Code of Federal Regulations, 2014 CFR
2014-07-01
... formations beneath the earth's surface. The principal hydrocarbon constituent is methane. Onshore means all... sulfur compounds means H2S, carbonyl sulfide (COS), and carbon disulfide (CS2). Sulfur production rate... efficiency achieved in percent, carried to one decimal place. SThe sulfur production rate, kilograms per hour...
Code of Federal Regulations, 2013 CFR
2013-07-01
... formations beneath the earth's surface. The principal hydrocarbon constituent is methane. Onshore means all... sulfur compounds means H2S, carbonyl sulfide (COS), and carbon disulfide (CS2). Sulfur production rate... efficiency achieved in percent, carried to one decimal place. SThe sulfur production rate, kilograms per hour...
ERIC Educational Resources Information Center
Bell, Garry
1991-01-01
Mechanical devices offer an alternative to computers to explore mathematical concepts in different curricula areas. Described is the discograph, a series of pulleys and wheels, that can be used to teach mathematical principles in pattern drawing, locus, rotation in geared systems, gearing, rotational symmetry, regular plane figures, decimal, and…
Wavelet-Based Processing for Fiber Optic Sensing Systems
NASA Technical Reports Server (NTRS)
Hamory, Philip J. (Inventor); Parker, Allen R., Jr. (Inventor)
2016-01-01
The present invention is an improved method of processing conglomerate data. The method employs a Triband Wavelet Transform that decomposes and decimates the conglomerate signal to obtain a final result. The invention may be employed to improve performance of Optical Frequency Domain Reflectometry systems.
Laurel wilt: Understanding an unusual and exotic vascular wilt disease
USDA-ARS?s Scientific Manuscript database
Laurel wilt kills American members of the Lauraceae plant family (Laurales, Magnoliid complex). These include significant components of Coastal Plain forest communities in the southeastern USA, most importantly redbay, as well as the commercial crop avocado. The disease has decimated redbay, swamp ...
Crisis in Cataloging Revisited: The Year's Work in Subject Analysis, 1990.
ERIC Educational Resources Information Center
Young, James Bradford
1991-01-01
Reviews the 1990 literature that concerns subject analysis. Issues addressed include subject cataloging, including Library of Congress Subject Headings (LCSH); classification, including Dewey Decimal Classification (DDC), Library of Congress Classification, and classification in online systems; subject access, including the online use of…
ERIC Educational Resources Information Center
Berman, Sanford
1980-01-01
Criticizes the 19th edition of "Dewey Decimal Classification" for violating traditional classification goals for library materials and ignoring the desires of libraries and other users. A total reform is proposed to eliminate Phoenix schedules and to accept only those relocations approved by an editorial board of users. (RAA)
Technical Mathematics: Restructure of Technical Mathematics.
ERIC Educational Resources Information Center
Flannery, Carol A.
Designed to accompany a series of videotapes, this textbook provides information, examples, problems, and solutions relating to mathematics and its applications in technical fields. Chapter I deals with basic arithmetic, providing information on fractions, decimals, ratios, proportions, percentages, and order of operations. Chapter II focuses on…
ERIC Educational Resources Information Center
Mercer County Community Coll., Trenton, NJ.
This document offers instructional materials for a 60-hour course on basic math operations involving decimals, fractions, and proportions as applied in the workplace. The course, part of a workplace literacy project developed by Mercer County Community College (New Jersey) and its partners, contains the following: course outline; 17 lesson…
77 FR 76572 - Decimalization Roundtable
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... discuss the impact of tick sizes on small and mid-sized companies, market professionals, investors, and U...) of the Securities and Exchange Commission headquarters at 100 F Street NE., in Washington, DC. The... sizes on small and middle capitalization companies, the economic consequences (including the costs and...
Impact of Math Snacks Games on Students' Conceptual Understanding
ERIC Educational Resources Information Center
Winburg, Karin; Chamberlain, Barbara; Valdez, Alfred; Trujillo, Karen; Stanford, Theodore B.
2016-01-01
This "Math Snacks" intervention measured 741 fifth grade students' gains in conceptual understanding of core math concepts after game-based learning activities. Teachers integrated four "Math Snacks" games and related activities into instruction on ratios, coordinate plane, number systems, fractions and decimals. Using a…
Should We Limit the Number of Astronomy Students?
ERIC Educational Resources Information Center
Bachmann, Kurt T.; Boyce, Peter B.
1994-01-01
Presents two views about the future of astronomy. Explains that government budget cuts and an oversupply of young scientists have decimated the employment prospects. Encourages students to train for a wide variety of careers and to become entrepreneurs who bring technologies to the consumer. (DDR)
40 CFR 98.336 - Data reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... carbon analysis (percent by weight, expressed as a decimal fraction). (11) Whether carbon content of the...) Carbon content of each carbon-containing input material charged to each kiln or furnace (including zinc bearing material, flux materials, and other carbonaceous materials) from the annual carbon analysis for...
NASA Astrophysics Data System (ADS)
Natraj, Vijay; Li, King-Fai; Yung, Yuk L.
2009-02-01
Tables that have been used as a reference for nearly 50 years for the intensity and polarization of reflected and transmitted light in Rayleigh scattering atmospheres have been found to be inaccurate, even to four decimal places. We convert the integral equations describing the X and Y functions into a pair of coupled integro-differential equations that can be efficiently solved numerically. Special care has been taken in evaluating Cauchy principal value integrals and their derivatives that appear in the solution of the Rayleigh scattering problem. The new approach gives results accurate to eight decimal places for the entire range of tabulation (optical thicknesses 0.02-1.0, surface reflectances 0-0.8, solar and viewing zenith angles 0°-88.85°, and relative azimuth angles 0°-180°), including the most difficult case of direct transmission in the direction of the sun. Revised tables have been created and stored electronically for easy reference by the planetary science and astrophysics community.
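The care needed with Cauchy principal value integrals can be illustrated with the standard singularity-subtraction device sketched below in Python; this is a generic textbook technique, not necessarily the authors' exact scheme.

```python
import numpy as np

def cauchy_pv(f, x, a, b, n=200):
    """Principal value of int_a^b f(t)/(t - x) dt for a < x < b, via
        PV = int_a^b (f(t) - f(x))/(t - x) dt + f(x)*log((b - x)/(x - a)).
    The regularized integrand is finite (it tends to f'(x) at t = x), so
    ordinary Gauss-Legendre quadrature applies."""
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * t + 0.5 * (b + a)     # map nodes to [a, b]
    w = 0.5 * (b - a) * w
    reg = (f(t) - f(x)) / (t - x)             # nodes never hit x in practice
    return np.sum(w * reg) + f(x) * np.log((b - x) / (x - a))

# Example: PV of t/(t - 0.3) on [0, 1]; exact value is 1 + 0.3*log(0.7/0.3)
print(cauchy_pv(lambda t: t, 0.3, 0.0, 1.0))
```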
Utah FORGE Gravity Data Shapefile
Joe Moore
2016-03-13
This is a zipped GIS-compatible shapefile of gravity data points used in the Milford, Utah FORGE project as of March 21st, 2016. The shapefile is native to ArcGIS, but can be used with many GIS software packages. Additionally, there is a .dbf (dBase) file that contains the dataset, which can be read with Microsoft Excel. The data were downloaded from PACES (Pan American Center for Earth and Environmental Studies), hosted by the University of Texas at El Paso (http://research.utep.edu/Default.aspx?alias=research.utep.edu/paces). Explanation: Source: data source code if available; LatNAD83: latitude in NAD83 [decimal degrees]; LonNAD83: longitude in NAD83 [decimal degrees]; zWGS84: elevation in WGS84 (ellipsoidal) [m]; OBSless976: observed gravity minus 976000 mGal; IZTC: inner zone terrain correction [mGal]; OZTC: outer zone terrain correction [mGal]; FA: Free Air anomaly value [mGal]; CBGA: Complete Bouguer gravity anomaly value [mGal]
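A minimal Python sketch for loading and subsetting these points with geopandas; the filename and bounding box are hypothetical (use whatever name the downloaded zip contains), while the column names follow the field list above.

```python
import geopandas as gpd

# Hypothetical filename -- substitute the actual .shp inside the downloaded zip.
gdf = gpd.read_file("utah_forge_gravity.shp")

# Complete Bouguer anomaly statistics near Milford, UT (illustrative bounding box)
near = gdf[gdf.LatNAD83.between(38.3, 38.6) & gdf.LonNAD83.between(-113.1, -112.7)]
print(near[["LatNAD83", "LonNAD83", "CBGA"]].describe())
```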
Anatomy and histology as socially networked learning environments: some preliminary findings.
Hafferty, Frederic W; Castellani, Brian; Hafferty, Philip K; Pawlina, Wojciech
2013-09-01
An exploratory study to better understand the "networked" life of the medical school as a learning environment. In a recent academic year, the authors gathered data during two six-week blocks of a sequential histology and anatomy course at a U.S. medical college. An eight-item questionnaire captured different dimensions of student interactions. The student cohort/network was 48 first-year medical students. Using social network analysis (SNA), the authors focused on (1) the initial structure and the evolution of informal class networks over time, (2) how informal class networks compare to formal in-class small-group assignments in influencing student information gathering, and (3) how peer assignment of professionalism role model status is shaped more by informal than formal ties. In examining these latter two issues, the authors explored not only how formal group assignment persisted over time but also how it functioned to prevent the tendency for groupings based on gender or ethnicity. The study revealed an evolving dynamic between the formal small-group learning structure of the course blocks and the emergence of informal student networks. For example, whereas formal group membership did influence in-class questions and did prevent formation of groups of like gender and ethnicity, outside-class questions and professionalism were influenced more by informal group ties where gender and, to a much lesser extent, ethnicity influence student information gathering. The richness of these preliminary findings suggests that SNA may be a useful tool in examining an array of medical student learning encounters.
Evolution of Bow-Tie Architectures in Biology
Friedlander, Tamar; Mayo, Avraham E.; Tlusty, Tsvi; Alon, Uri
2015-01-01
Bow-tie or hourglass structure is a common architectural feature found in many biological systems. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when the information in the evolutionary goal can be compressed. Mathematically speaking, bow-ties evolve when the rank of the input-output matrix describing the evolutionary goal is deficient. The maximal compression possible (the rank of the goal) determines the size of the narrowest part of the network—that is the bow-tie. A further requirement is that a process is active to reduce the number of links in the network, such as product-rule mutations, otherwise a non-bow-tie solution is found in the evolutionary simulations. This offers a mechanism to understand a common architectural principle of biological systems, and a way to quantitate the effective rank of the goals under which they evolved. PMID:25798588
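The rank criterion is easy to make concrete. In the Python sketch below (illustrative numbers only), a 6-input, 6-output goal matrix built from two latent factors has rank 2, so by the paper's argument a network evolved toward it can funnel all required information through a two-node waist.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2))   # outputs driven by 2 latent factors
B = rng.normal(size=(2, 6))   # latent factors built from 6 inputs
G = A @ B                     # rank-deficient 6x6 input-output goal

# The goal's rank bounds the narrowest layer (the bow-tie waist) needed
print(np.linalg.matrix_rank(G))   # 2
```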
Narasimhulu, D M; Scharfman, L; Minkoff, H; George, B; Homel, P; Tyagaraj, K
2018-04-27
Injection of local anesthetic into the transversus abdominis plane (TAP block) decreases systemic morphine requirements after abdominal surgery. We compared intraoperative surgeon-administered TAP block (surgical TAP) to anesthesiologist-administered transcutaneous ultrasound-guided TAP block (conventional TAP) for post-cesarean analgesia. We hypothesized that surgical TAP blocks would take less time to perform than conventional TAP blocks. We performed a randomized trial, recruiting 41 women undergoing cesarean delivery under neuraxial anesthesia and assigning them to either surgical TAP block (n=20) or conventional TAP block (n=21). Time taken to perform the block was the primary outcome, while postoperative pain scores and 24-hour opioid requirements were secondary outcomes. Student's t-test was used to compare block times, and the Kruskal-Wallis test to compare opioid consumption and pain scores. Time taken to perform the block (2.4 vs 12.1 min, P <0.001) and time spent in the operating room after delivery (55.3 vs 77.9 min, P <0.001) were significantly shorter for surgical TAP. The 24 h morphine consumption (P=0.17) and postoperative pain scores at 4, 8, 24 and 48 h were not significantly different between the groups. Surgical TAP blocks are feasible and less time consuming than conventional TAP blocks, while providing comparable analgesia after cesarean delivery. Copyright © 2018 Elsevier Ltd. All rights reserved.
Al-Holy, M; Quinde, Z; Guan, D; Tang, J; Rasco, B
2004-02-01
Differences in the come-up times and thermal inactivation parameters of Listeria innocua in salmon (Oncorhynchus keta) caviar containing 2.5% salt using conventional thermal-death-time (TDT) glass tubes and a novel aluminum tube were tested and compared. Generally, the come-up times and decimal reduction times (D-values) were shorter and the change in temperature required to change the D-value (z-value) was longer in the aluminum than in the glass tubes. The D-values at 60, 63, and 65 degrees C for the aluminum TDT tubes were 2.97, 0.77, and 0.40 min, respectively, and for the glass TDT tubes, these values were 3.55, 0.84, and 0.41 min. The z-values were 5.7 degrees C in the aluminum and 5.3 degrees C in the glass. Because of the shorter come-up time, the aluminum TDT tubes may provide a more precise measurement of microbial thermal inactivation than the glass TDT tubes, particularly for viscous materials, solid foods, and foods containing particulate matter.
Unexpected delayed complete atrioventricular block after Cardioband implantation.
Sorini Dini, Carlotta; Landi, Daniele; Meucci, Francesco; Di Mario, Carlo
2018-03-06
The Cardioband system is a transcatheter direct annuloplasty device that is implanted in patients with severe symptomatic functional mitral regurgitation (MR) due to annulus dilatation and high surgical risk. This device covers the posterior two-thirds of the annulus, from the anterolateral to the posteromedial commissure, and is implanted in close proximity to the left circumflex artery, the atrioventricular (AV) conduction system, and the coronary sinus. We present the case of an 80-year-old gentleman with prohibitive surgical risk, treated with Cardioband implantation for functional MR with an evident P1-P2 cleft and P2-P3 indentation, a relative contraindication to MitraClip implantation. We achieved procedural success with significant mitral annulus reduction (30% anteroposterior reduction, from 37 to 26 mm) and MR reduction (from grade 4 to grade 1-2). A late-onset Mobitz 2 AV block developed after 26 hr and evolved to complete AV block on the following day, requiring a definitive biventricular pacemaker (PM). Fewer than 200 Cardioband implantations have been performed but, to our knowledge, this is the first reported AV block, possibly facilitated by the pre-existing bifascicular block, suggesting that prolonged ECG monitoring is warranted after Cardioband implantation, as after any other mechanical transcatheter structural intervention that may affect the AV conduction system. © 2018 Wiley Periodicals, Inc.
Keller, Johannes; Grön, Georg
2016-01-01
Previously, experimentally induced flow experiences have been demonstrated with perfusion imaging during activation blocks of 3 min length, chosen to accommodate the putatively slowly evolving “mood” characteristics of flow. Here, we used functional magnetic resonance imaging (fMRI) in a sample of 23 healthy, male participants to investigate flow in the context of a typical fMRI block design with block lengths as short as 30 s. To induce flow, demands of arithmetic tasks were automatically and continuously adjusted to the individual skill level. Compared against conditions of boredom and overload, experience of flow was evident from individuals’ reported subjective experiences and changes in electrodermal activity. Neural activation was relatively increased during flow, particularly in the anterior insula, inferior frontal gyri, basal ganglia and midbrain. Relative activation decreases during flow were observed in medial prefrontal and posterior cingulate cortex, and in the medial temporal lobe including the amygdala. Present findings suggest that even in the context of comparably short activation blocks flow can be reliably experienced and is associated with changes in neural activation of the brain regions previously described. Possible mechanisms of interacting brain regions are outlined, awaiting further investigation, which should now be possible given the greater temporal resolution compared with previous perfusion imaging. PMID:26508774
Controlling the Pore Size of Mesoporous Carbon Thin Films through Thermal and Solvent Annealing.
Zhou, Zhengping; Liu, Guoliang
2017-04-01
Herein an approach to controlling the pore size of mesoporous carbon thin films from metal-free polyacrylonitrile-containing block copolymers is described. A high-molecular-weight poly(acrylonitrile-block-methyl methacrylate) (PAN-b-PMMA) is synthesized via reversible addition-fragmentation chain transfer (RAFT) polymerization. The authors systematically investigate the self-assembly behavior of PAN-b-PMMA thin films during thermal and solvent annealing, as well as the pore size of mesoporous carbon thin films after pyrolysis. The as-spin-coated PAN-b-PMMA is microphase-separated into uniformly spaced globular nanostructures, and these globular nanostructures evolve into various morphologies after thermal or solvent annealing. Surprisingly, through thermal annealing and subsequent pyrolysis of PAN-b-PMMA into mesoporous carbon thin films, the pore size and center-to-center spacing increase significantly with thermal annealing temperature, different from most block copolymers. In addition, the choice of solvent in solvent annealing strongly influences the block copolymer nanostructure and the pore size of mesoporous carbon thin films. The discoveries herein provide a simple strategy to control the pore size of mesoporous carbon thin films by tuning thermal or solvent annealing conditions, instead of synthesizing a series of block copolymers of various molecular weights and compositions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gadsden, Jeffrey; Ayad, Sabry; Gonzales, Jeffrey J; Mehta, Jaideep; Boublik, Jan; Hutchins, Jacob
2015-01-01
Transversus abdominis plane (TAP) infiltration is a regional anesthesia technique that has been demonstrated to be effective for management of postsurgical pain after abdominal surgery. There are several different clinical variations in the approaches used for achieving analgesia via TAP infiltration, and methods for identification of the TAP have evolved considerably since the landmark-guided technique was first described in 2001. Many factors impact the analgesic outcomes following TAP infiltration, and the various nuances of this technique have led to debate regarding the procedural classification of TAP infiltration. Based on our current understanding of the fascial and neuronal anatomy of the anterior abdominal wall, as well as available evidence from studies assessing local anesthetic spread and cutaneous sensory block following TAP infiltration, it is clear that TAP infiltration techniques are appropriately classified as field blocks. While the objectives of peripheral nerve block and TAP infiltration are similar, in that both approaches block sensory response in order to achieve analgesia, the technical components of the two procedures are different. Unlike peripheral nerve block, which involves identification or stimulation of a specific nerve or nerve plexus followed by administration of a local anesthetic in close proximity, TAP infiltration involves administration and spread of local anesthetic within an anatomical plane of the surgical site.
Marjanovic, Josip; Weiger, Markus; Reber, Jonas; Brunner, David O; Dietrich, Benjamin E; Wilm, Bertram J; Froidevaux, Romain; Pruessmann, Klaas P
2018-02-01
For magnetic resonance imaging of tissues with very short transverse relaxation times, radio-frequency excitation must be immediately followed by data acquisition with fast spatial encoding. In zero-echo-time (ZTE) imaging, excitation is performed while the readout gradient is already on, causing data loss due to an initial dead time. One major dead time contribution is the settling time of the filters involved in signal down-conversion. In this paper, a multi-rate acquisition scheme is proposed to minimize dead time due to filtering. Short filters and high output bandwidth are used initially to minimize settling time. With increasing time since the signal onset, longer filters with better frequency selectivity enable stronger signal decimation. In this way, significant dead time reduction is accomplished at only a slight increase in the overall amount of output data. Multi-rate acquisition was implemented with a two-stage filter cascade in a digital receiver based on a field-programmable gate array. In ZTE imaging in a phantom and in vivo, dead time reduction by multi-rate acquisition is shown to improve image quality and expand the feasible bandwidth while increasing the amount of data collected by only a few percent.
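The multi-rate idea, stripped to its essence: decimate lightly with a short (fast-settling) filter first, then strongly with a longer, more selective filter. Below is a hedged Python sketch using scipy, with an illustrative sampling rate and filter orders; it mirrors the two-stage cascade concept only, not the authors' FPGA implementation.

```python
import numpy as np
from scipy.signal import decimate

# Hypothetical raw ADC stream at 10 MS/s: a decaying 50 kHz signal
fs = 10e6
t = np.arange(20000) / fs
raw = np.cos(2 * np.pi * 50e3 * t) * np.exp(-t / 2e-3)

# Stage 1: mild decimation with a short FIR filter -> short settling time,
# so the earliest samples after signal onset remain usable.
stage1 = decimate(raw, q=5, ftype="fir", n=8)

# Stage 2: stronger decimation with a longer, more frequency-selective FIR
# filter, applied once enough time has elapsed since onset.
stage2 = decimate(stage1, q=10, ftype="fir", n=64)
print(len(raw), len(stage1), len(stage2))
```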
Study on the mesophase development of pressure-responsive ABC triblock copolymers
NASA Astrophysics Data System (ADS)
Cho, Junhan
Here we focus on revealing new nanoscale morphologies for a molten compressible polymeric surfactant through a compressible self-consistent field approach. A linear ABC block copolymer is set up to allow a disparity in the propensities for curved interfaces and in the pressure responses of ij-pairs. Under these conditions, the copolymer evolves into novel morphologies at selected segregation levels, such as networks with tetrapod connections, rectangularly packed cylinders in a 2-dimensional array, and also body-centered cubic phases. These new structures are considered to arise from the interplay between the disparity in the densities of block domains and packing frustration. Comparison with the classical mesophase structures is also given. The author acknowledges the support from the Center for Photofunctional Energy Materials (GRRC).
NASA Technical Reports Server (NTRS)
Crumbly, C. M.; Bickley, F. P.; Hueter, U.
2015-01-01
The Advanced Development Office (ADO), part of the Space Launch System (SLS) program, provides SLS with the advanced development needed to evolve the vehicle from an initial Block 1 payload capability of 70 metric tons (t) to an eventual Block 2 capability of 130 t, with intermediate evolution options possible. ADO takes existing technologies and matures them to the point that insertion into the mainline program minimizes risk. The ADO portfolio covers a broad range of technical development tasks, supporting advanced boosters, upper stages, and other advanced development activities benefiting the SLS program. A total of 36 separate tasks were funded by ADO in FY 2014.
Development of a Real-Time GPS/Seismic Displacement Meter: GPS Component
NASA Astrophysics Data System (ADS)
Bock, Y.; Canas, J.; Andrew, A.; Vernon, F.
2002-12-01
We report on the status of the Orange County Real-Time GPS Network (OCRTN), an upgrade of the SCIGN sites in Orange County and Catalina Island to low latency (1 sec), high-rate (1 Hz) data streaming, analysis, and dissemination. The project is a collaborative effort of the California Spatial Reference Center (CSRC) and the Orange County Dept. of Geomatics, with partners from the geophysical community (SCIGN), local and state government, and the private sector. As part of Phase 1 of the project, nine sites are streaming data by dedicated, point-to-point radio modems to a central data server located in Santa Ana. Instantaneous positions are computed for each site. Data are converted from 1 Hz Ashtech binary MBEN format to (1) 1 Hz RTCM format, and (2) decimated (15 sec) RINEX format. A second computer outside a firewall and located in another building at the Orange County's Computer Center is a TCP-based client of RTCM data (messages 18, 19, 3, and 22) from the data server, as well as a TCP-based server of RTCM data to the outside world. An external computer can access the RTCM data from all active sites through an IP socket connection. Data latency, in the best case, is less than 1 sec from real-time. Once a day, the decimated RINEX data are transferred by ftp from the data server to the SOPAC-CSRC archive at Scripps. Data recovery is typically 99-100%. As part of the second phase of the project, the RTCM server provides data to field receivers to perform RTK surveying. On connection to the RTCM server the user gets a list of active stations, and can then choose from which site to retrieve RTCM data. This site then plays the role of the RTK base station and a CDPD-based wireless Internet device plays the role of the normal RTK radio link. If an Internet connection is available, we will demonstrate how the system operates. This system will serve as a prototype for the GPS component of the GPS/seismic displacement meter.
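The 1 Hz to 15 s decimation step described above is simple to mirror offline; here is a minimal Python sketch with a hypothetical epoch layout (the real pipeline works on Ashtech MBEN records before RINEX conversion).

```python
# One hour of 1 Hz epochs; 'obs' stands in for the per-epoch observables.
epochs = [{"t": i, "obs": None} for i in range(3600)]

# Keep epochs on 15 s boundaries, mirroring the 1 Hz -> 15 s RINEX decimation.
decimated = [e for e in epochs if e["t"] % 15 == 0]
print(len(epochs), "->", len(decimated))   # 3600 -> 240
```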
Competency Based Business Education: Business Math/Related Rules.
ERIC Educational Resources Information Center
Wisconsin Univ., Madison. Wisconsin Vocational Studies Center.
Modules on fractions, decimals, percentages, discounts, interest, the adding machine, and the calculation of a depreciation are included. Each module contains objectives, learning activities, pre-practice exercises, practice exercises, and post-practice exercises. At the beginning of each module, the importance of the module is explained. (MK)
Letting Your Students "Fly" in the Classroom.
ERIC Educational Resources Information Center
Adams, Thomas
1997-01-01
Students investigate the concept of motion by making simple paper airplanes and flying them in the classroom. Students are introduced to conversion factors to calculate various speeds. Additional activities include rounding decimal numbers, estimating, finding averages, making bar graphs, and solving problems. Offers ideas for extension such as…
THE SHIFTING BASELINE OF NORTHERN FUR SEAL ECOLOGY IN THE NORTHEAST PACIFIC OCEAN
Historical data provide a baseline against which to judge the significance of recent ecological shifts and guide conservation strategies, especially for species decimated by pre-20th century harvesting. Northern fur seals (NFS; Callorhinus ursinus) are a common pinniped species i...
Mathematics for Commercial Foods.
ERIC Educational Resources Information Center
Wersan, Norman
A review of basic mathematics operations is presented with problems and examples applied to activities in the food service industry. The text is divided into eight units: measurement, fractions, arithmetic operations, money and decimals, percentage, ratio and proportion, wages and taxes, and business records. Each unit contains a series of lessons…
Aviation Technician Training I and Task Analyses: Semester II. Field Review Copy.
ERIC Educational Resources Information Center
Upchurch, Richard
This guide for aviation technician training begins with a course description, resource information, and a course outline. Tasks/competencies are categorized into 16 concept/duty areas: understanding technical symbols and abbreviations; understanding mathematical terms, symbols, and formulas; computing decimals; computing fractions; computing ratio…
Workplace Math II: Math Works!
ERIC Educational Resources Information Center
Wilson, Nancy; Goschen, Claire
This learning module, a continuation of the math I module, provides review and practice of the concepts explored in the earlier module at an intermediate level involving workplace problems. The following concepts are covered: instruction in performing basic computations, using general numerical concepts such as whole numbers, fractions, decimals,…
ERIC Educational Resources Information Center
International Federation of Library Associations, The Hague (Netherlands).
The papers in this compilation focus on cataloging, classification, and indexing: (1) "Bibliographic Relationships in Library Catalogs" (Barbara B. Tillett, United States); (2) "Bibliographic Description: Past, Present, and Future" (Michael Gorman, United States); (3) "The Dewey Decimal Classification Enters the Computer…
ERIC Educational Resources Information Center
Sisk, Diane
This autoinstructional program, developed as part of a general science course, is offered for students in the middle schools. Mathematics of fractions and decimals is considered to be prerequisite knowledge. The behavioral objectives are directed toward mastery of determining volumes of solid objects using the water displacement method as well as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, Banasri; Bandyopadhyay, Pratul; Majumdar, Priyadarshi
We have studied quantum phase transition induced by a quench in different one-dimensional spin systems. Our analysis is based on the dynamical mechanism which envisages nonadiabaticity in the vicinity of the critical point. This causes spin fluctuation which leads to the random fluctuation of the Berry phase factor acquired by a spin state when the ground state of the system evolves in a closed path. The two-point correlation of this phase factor is associated with the probability of the formation of defects. In this framework, we have estimated the density of defects produced in several one-dimensional spin chains. At the critical region, the entanglement entropy of a block of L spins with the rest of the system is also estimated, which is found to increase logarithmically with L. The dependence on the quench time puts a constraint on the block size L. It is also pointed out that the Lipkin-Meshkov-Glick model in point-splitting regularized form appears as a combination of the XXX model and the Ising model with a magnetic field along the negative z axis. This unveils the underlying conformal symmetry at criticality, which is lost in the sharp-point limit. Our analysis shows that the density of defects as well as the scaling behavior of the entanglement entropy follows a universal behavior in all these systems.
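For reference, logarithmic block-entropy growth at criticality is conventionally fitted with the scaling ansatz below; this standard form is stated here as an assumption, since the record itself gives no explicit formula.

```latex
S(L) \;=\; \frac{c_{\mathrm{eff}}}{3}\,\ln L \;+\; k
```

Here c_eff plays the role of an effective central charge and k is a non-universal constant.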
ERIC Educational Resources Information Center
Goff, Wilhelmina D.; Johnson, Norman J.
2008-01-01
Over thousands of years the brain has evolved. Our ability to change its structure is quite limited. What we can do is change the way we work with the brain and appeal to it. These notions are the building blocks for this paper. Three strands of intellectual work (neuroscience to include social intelligence, pedagogy, and environment/culture) are…
An effective algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2012-08-01
Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be as fast as possible and also accurate; e.g., an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are prospective for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from two algorithms, depending on the value of one of the arguments. Catalogue identifier: AEMC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 567 No. of bytes in distributed program, including test data, etc.: 4444 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a FORTRAN 90 compiler Operating system: Linux, Windows 7, Windows XP RAM: 0.6 MB Classification: 2.4, 7.2 Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is constructed by mixing these two algorithms, selecting the ranges of the argument ω in which each performs fastest. Restrictions: Two input parameters for the Chandrasekhar function, x and ω (notation used in the code), are restricted to the ranges 0⩽x⩽1 and 0⩽ω⩽1, which is sufficient in numerous applications. Unusual features: The program uses Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval; furthermore, the integrand is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied. In effect, one can conveniently control the accuracy of integration. Although the desired fractional accuracy was set at 10^-10, the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places. Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function.
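The Romberg scheme named in the summary is repeated Richardson extrapolation of the trapezoidal rule; a minimal generic Python sketch (not the published Fortran routine) follows.

```python
import numpy as np

def romberg(f, a, b, max_k=20, tol=1e-10):
    """Romberg quadrature: halve the trapezoid step each level, then
    Richardson-extrapolate along the tableau diagonal until converged."""
    R = np.zeros((max_k, max_k))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, max_k):
        h *= 0.5
        x = a + h * np.arange(1, 2**k, 2)          # new midpoints of prior grid
        R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(x))
        for j in range(1, k + 1):
            R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
        if abs(R[k, k] - R[k - 1, k - 1]) < tol:
            return R[k, k]
    return R[max_k - 1, max_k - 1]

print(romberg(np.sin, 0.0, np.pi))   # ~2.0
```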
GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Blewitt, G.
2016-12-01
The Hilton Creek Fault, in Long Valley, California, is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded, time-variable velocity field. We then estimate the fault slip rate at each time using a new technique that forms ad hoc block representations honoring fault geometries, network complexity, and connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time-invariant long-term seismic hazard is proportional to the long-term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long-term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time-variable slip rate estimated from the evolving GPS velocity field.
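For intuition, here is a simplified MIDAS-style estimator in Python: take slopes over data pairs separated by about one year, then take their median, which cancels seasonal cycles and resists outliers and steps. This is a sketch of the idea only, not the published MIDAS algorithm.

```python
import numpy as np

def midas_like_rate(t, y, pair_dt=1.0, tol=0.01):
    """Median of slopes over pairs ~pair_dt (years) apart: annual cycles
    cancel within each pair, and the median resists outliers and steps."""
    slopes = []
    for i, ti in enumerate(t):
        j = np.argmin(np.abs(t - (ti + pair_dt)))      # partner ~1 yr later
        if i != j and abs(t[j] - ti - pair_dt) < tol:
            slopes.append((y[j] - y[i]) / (t[j] - ti))
    return np.median(slopes)

# Synthetic daily series: 3 mm/yr trend + annual cycle + noise
t = np.arange(0, 5, 1 / 365.25)
rng = np.random.default_rng(1)
y = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.0, t.size)
print(midas_like_rate(t, y))   # ~3 mm/yr despite the seasonal term
```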
NASA Astrophysics Data System (ADS)
Wang, Yubin; Ismail, Marliya; Farid, Mohammed
2017-10-01
Currently, baby food is sterilized using retort processing, which gives an extended shelf life. However, this type of heat processing leads to a reduction of organoleptic and nutritional value. Alternatively, the combination of pressure and heat could be used to achieve sterilization at reduced temperatures. This study investigates the potential of pressure-assisted thermal sterilization (PATS) technology for baby food sterilization. Here, baby food (apple puree) inoculated with Bacillus subtilis spores was treated using PATS at different operating temperatures, pressures and times, and was compared with thermal-only treatment. The results revealed that the decimal reduction time of B. subtilis under PATS treatment was lower than that under thermal-only treatment. At a similar level of spore inactivation, the retention of ascorbic acid in the PATS-treated sample was higher than in the thermally treated sample. The results indicated that PATS could be a potential technology for baby food processing while minimizing quality deterioration.
A 920-kilometer optical fiber link for frequency metrology at the 19th decimal place.
Predehl, K; Grosche, G; Raupach, S M F; Droste, S; Terra, O; Alnis, J; Legero, Th; Hänsch, T W; Udem, Th; Holzwarth, R; Schnatz, H
2012-04-27
Optical clocks show unprecedented accuracy, surpassing that of previously available clock systems by more than one order of magnitude. Precise intercomparisons will enable a variety of experiments, including tests of fundamental quantum physics and cosmology and applications in geodesy and navigation. Well-established, satellite-based techniques for microwave dissemination are not adequate to compare optical clocks. Here, we present phase-stabilized distribution of an optical frequency over 920 kilometers of telecommunication fiber. We used two antiparallel fiber links to determine their fractional frequency instability (modified Allan deviation) to 5 × 10^-15 in a 1-second integration time, reaching 10^-18 in less than 1000 seconds. For long integration times τ, the deviation from the expected frequency value has been constrained to within 4 × 10^-19. The link may serve as part of a Europe-wide optical frequency dissemination network.
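The modified Allan deviation quoted above can be computed from phase data with the standard estimator; below is a minimal Python sketch, illustrated on synthetic white phase noise (for which it falls roughly as tau^(-3/2)).

```python
import numpy as np

def mdev(x, tau0, m):
    """Modified Allan deviation at tau = m*tau0 from phase data x (seconds):
    second differences at lag m, averaged in moving windows of m samples."""
    N = len(x)
    d = x[2 * m:] - 2 * x[m:N - m] + x[:N - 2 * m]   # second differences
    c = np.cumsum(np.concatenate(([0.0], d)))
    s = c[m:] - c[:-m]                               # moving sums of m terms
    return np.sqrt(np.mean(s**2) / (2.0 * (m * tau0)**2 * m**2))

rng = np.random.default_rng(0)
x = rng.normal(0, 1e-15, 100000)                     # white phase noise
for m in (1, 10, 100):
    print(m, mdev(x, 1.0, m))
```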
Investigation and evaluation of a computer program to minimize three-dimensional flight time tracks
NASA Technical Reports Server (NTRS)
Parke, F. I.
1981-01-01
The program for the DC 8-D3 flight planning was slightly modified for three-dimensional flight planning for DC-10 aircraft. Several test runs of the modified program over the North Atlantic and North America were made to verify the program. While geopotential height and temperature were used as meteorological data in a previous program, the modified program uses wind direction, wind speed, and temperature received from the National Weather Service. A scanning program was written to collect the required weather information from the raw data, which is received in a packed decimal format. Two sets of weather data, the 12-hour and 24-hour forecasts based on 0000 GMT, are used for dynamic processes in test runs. To save computing time, only the weather data for the North Atlantic and North America are stored in advance in a PCF file and then scanned one by one.
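Packed decimal (BCD) stores two digits per byte with the sign in the final nibble; below is a small Python decoder illustrating the common IBM-style convention. The exact convention used by the National Weather Service feed is an assumption here, since the report does not spell it out.

```python
def unpack_packed_decimal(data: bytes) -> int:
    """Decode IBM-style packed decimal: two digits per byte, with the low
    nibble of the last byte as the sign (0xC/0xF positive, 0xD negative).
    Illustrative convention only, not the original scanning program."""
    digits = []
    for b in data:
        digits.append((b >> 4) & 0xF)
        digits.append(b & 0xF)
    sign_nibble = digits.pop()                 # last nibble is the sign
    value = int("".join(str(d) for d in digits))
    return -value if sign_nibble == 0xD else value

print(unpack_packed_decimal(bytes([0x12, 0x34, 0x5C])))   # 12345
print(unpack_packed_decimal(bytes([0x98, 0x7D])))         # -987
```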
Identification of a unique Ca2+-binding site in rat acid-sensing ion channel 3.
Zuo, Zhicheng; Smith, Rachel N; Chen, Zhenglan; Agharkar, Amruta S; Snell, Heather D; Huang, Renqi; Liu, Jin; Gonzales, Eric B
2018-05-25
Acid-sensing ion channels (ASICs) evolved to sense changes in extracellular acidity with the divalent cation calcium (Ca2+) as an allosteric modulator and channel blocker. The channel-blocking activity is most apparent in ASIC3, as removing Ca2+ results in channel opening, with the site's location remaining unresolved. Here we show that a ring of rat ASIC3 (rASIC3) glutamates (Glu435), located above the channel gate, modulates proton sensitivity and contributes to the formation of the elusive Ca2+ block site. Mutation of this residue to glycine, the equivalent residue in chicken ASIC1, diminished the rASIC3 Ca2+ block effect. Atomistic molecular dynamics simulations corroborate the involvement of this acidic residue in forming a high-affinity Ca2+ site atop the channel pore. Furthermore, the reported observations provide clarity for past controversies regarding ASIC channel gating. Our findings enhance understanding of ASIC gating mechanisms and provide structural and energetic insights into this unique calcium-binding site.
Lievens, P; Verbinnen, B; Bollaert, P; Alderweireldt, N; Mertens, G; Elsen, J; Vandecasteele, C
2011-10-01
Blocking of the collection hoppers of the baghouse filters in a fluidized bed incinerator for co-incineration of high calorific industrial solid waste and sludge was observed. The composition of the flue gas cleaning residue (FGCR), both from a blocked hopper and from a normal hopper, was investigated by (differential) thermogravimetric analysis, quantitative X-ray powder diffraction and wet chemical analysis. The lower elemental carbon concentration and the higher calcium carbonate concentration of the agglomerated sample was the result of oxidation of carbon and subsequent reaction of CO2 with CaO. The evolved heat causes a temperature increase, with the decomposition of CaOHCl as a consequence. The formation of calcite and calcium chloride and the evolution of heat caused agglomeration of the FGCR. Activated lignite coke was replaced by another adsorption agent with less carbon, so the auto-ignition temperature increased; since then no further block formation has occurred.
Model gives a 3-month warning of Amazonian forest fires
NASA Astrophysics Data System (ADS)
Schultz, Colin
2011-08-01
The widespread drought suffered by the Amazon rain forest in the summer of 2005 was heralded at the time as the drought of the century. Because of the dehydrated conditions, supplemented by slash and burn agricultural practices, the drought led to widespread forest fires throughout the western Amazon, a portion of the rain forest usually too lush to support spreading wildfires. Only 5 years later, the 2005 season was outdone by even more widespread drought, with fires decimating more than 3000 square kilometers of western Amazonian rain forest. Blame for the wildfires has been consistently laid on deforestation and agricultural practices, but a convincing climatological explanation exists as well. (Geophysical Research Letters, doi:10.1029/2011GL047392, 2011)
49 CFR 178.346-2 - Material and thickness of material.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Material... Thickness of Shell Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel...
49 CFR 178.347-2 - Material and thickness of material.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Volume... (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.346-2 - Material and thickness of material.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Material... Thickness of Shell Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel...
49 CFR 178.346-2 - Material and thickness of material.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Material... Thickness of Shell Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel...
49 CFR 178.348-2 - Material and thickness of material.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Bulkheads and Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming... Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.346-2 - Material and thickness of material.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Material... Thickness of Shell Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel...
49 CFR 178.348-2 - Material and thickness of material.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Bulkheads and Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming... Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.347-2 - Material and thickness of material.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Volume... (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.348-2 - Material and thickness of material.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Bulkheads and Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming... Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.347-2 - Material and thickness of material.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Volume... (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.347-2 - Material and thickness of material.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Volume... (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.347-2 - Material and thickness of material.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming Volume... (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.348-2 - Material and thickness of material.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Bulkheads and Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming... Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
49 CFR 178.348-2 - Material and thickness of material.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Bulkheads and Baffles When Used as Tank Reinforcement) Using Mild Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL)—Expressed in Decimals of an Inch After Forming... Steel (MS), High Strength Low Alloy Steel (HSLA), Austenitic Stainless Steel (SS), or Aluminum (AL...
25 CFR 163.22 - Payment for forest products.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) Terms and conditions for payment of forest products under lump sum (predetermined volume) sales shall be... Forest Management and Operations § 163.22 Payment for forest products. (a) The basis of volume determination for forest products sold shall be the Scribner Decimal C log rules, cubic volume, lineal...
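For orientation only: the Scribner rule cited above is tabular in practice, but a commonly quoted closed-form approximation is (0.79 D^2 - 2 D - 4) board feet per 16-ft log, and the "Decimal C" variant reports volume in tens of board feet. A minimal sketch under those assumptions (the coefficients and rounding are illustrative, not the regulation's own tables):

```python
def scribner_decimal_c(diameter_in, length_ft):
    """Approximate Scribner Decimal C volume for one log.

    Uses the commonly cited Scribner approximation
    (0.79*D^2 - 2*D - 4) board feet per 16-ft log, scaled by log
    length and expressed in tens of board feet ("Decimal C").
    These coefficients are an approximation, not the official
    rule tables referenced in 25 CFR 163.22.
    """
    board_feet = (0.79 * diameter_in**2 - 2 * diameter_in - 4) * length_ft / 16.0
    return round(board_feet / 10.0)  # tens of board feet

# Example: a 16-ft log with a 20-inch small-end diameter
print(scribner_decimal_c(20, 16))  # -> 27, i.e. ~270 board feet
```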
ERIC Educational Resources Information Center
Seltz-Petrash, Ann, Ed.; Wolff, Kathryn, Ed.
Currently available American 16mm films in the areas of pure science, applied science and technology, and science and society are identified and listed. Included are films that are available from commercial, government, university, and industry producers. The first section of the catalog lists in Dewey Decimal order films intended for junior high…
Decimals, Denominators, Demons, Calculators, and Connections
ERIC Educational Resources Information Center
Sparrow, Len; Swan, Paul
2005-01-01
The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…
40 CFR 98.317 - Records that must be retained.
Code of Federal Regulations, 2012 CFR
2012-07-01
... coke purchases. (2) Annual operating hours for each titanium dioxide process line. (b) If a CEMS is not... paragraph: (1) Records of all calcined petroleum coke purchases (tons). (2) Records of all analyses and... content of consumed calcined petroleum coke (percent by weight expressed as a decimal fraction). (4...
40 CFR 98.317 - Records that must be retained.
Code of Federal Regulations, 2011 CFR
2011-07-01
... coke purchases. (2) Annual operating hours for each titanium dioxide process line. (b) If a CEMS is not... paragraph: (1) Records of all calcined petroleum coke purchases (tons). (2) Records of all analyses and... content of consumed calcined petroleum coke (percent by weight expressed as a decimal fraction). (4...
40 CFR 98.317 - Records that must be retained.
Code of Federal Regulations, 2014 CFR
2014-07-01
... coke purchases. (2) Annual operating hours for each titanium dioxide process line. (b) If a CEMS is not... paragraph: (1) Records of all calcined petroleum coke purchases (tons). (2) Records of all analyses and... content of consumed calcined petroleum coke (percent by weight expressed as a decimal fraction). (4...
40 CFR 98.317 - Records that must be retained.
Code of Federal Regulations, 2013 CFR
2013-07-01
... coke purchases. (2) Annual operating hours for each titanium dioxide process line. (b) If a CEMS is not... paragraph: (1) Records of all calcined petroleum coke purchases (tons). (2) Records of all analyses and... content of consumed calcined petroleum coke (percent by weight expressed as a decimal fraction). (4...
ERIC Educational Resources Information Center
Sisk, Diane
This autoinstructional program, developed for high, medium and low level achievers, is directed toward a course in general science in middle schools. Mathematics of fractions and decimals is described as a prerequisite to the use of the packet. Two behavioral objectives are listed. Both involve the students' determining mass, first to the nearest…
Color Your Classroom II. A Math Curriculum Guide.
ERIC Educational Resources Information Center
Mississippi State Dept. of Education, Jackson.
This math curriculum guide, correlated with the numerical coding of the Math Skills List published by the Migrant Student Record Transfer System, covers 10 learning areas: readiness, number meaning, whole numbers, fractions, decimals, percent, measurement, geometry, probability and statistics, and sets. Each exercise is illustrated by a large…
Developing Basic Math Skills for Marketing. Student Manual and Laboratory Guide.
ERIC Educational Resources Information Center
Klewer, Edwin D.
Field tested with students in grades 10-12, this manual is designed to teach students in marketing courses basic mathematical concepts. The instructional booklet contains seven student assignments covering the following topics: why basic mathematics is so important, whole numbers, fractions, decimals, percentages, weights and measures, and dollars…
Optical triple-in digital logic using nonlinear optical four-wave mixing
NASA Astrophysics Data System (ADS)
Widjaja, Joewono; Tomita, Yasuo
1995-08-01
A new programmable optical processor is proposed for implementing triple-in combinatorial digital logic that uses four-wave mixing. Binary-coded decimal-to-octal decoding is experimentally demonstrated by use of a photorefractive BaTiO3 crystal. The result confirms the feasibility of the proposed system.
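Functionally, the demonstrated decoding is a one-hot 3-to-8 (binary-to-octal) decoder. A minimal software sketch of that truth table, purely to illustrate the logic realized optically via four-wave mixing (the function name and interface are hypothetical):

```python
def bcd_to_octal_decoder(b2, b1, b0):
    """One-hot 3-to-8 decoder: three input bits select one of
    eight output lines, the combinational logic demonstrated
    optically in the paper. Returns a tuple with a single 1 at
    the selected line."""
    index = (b2 << 2) | (b1 << 1) | b0  # binary value 0..7
    return tuple(1 if i == index else 0 for i in range(8))

# Example: input bits 1,0,1 (decimal 5) activate output line 5
print(bcd_to_octal_decoder(1, 0, 1))  # (0, 0, 0, 0, 0, 1, 0, 0)
```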
40 CFR 90.509 - Calculation and reporting of test results.
Code of Federal Regulations, 2013 CFR
2013-07-01
... results. 90.509 Section 90.509 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Selective Enforcement Auditing § 90.509 Calculation and reporting of test results. (a) Initial test results... manufacturer shall round these results, in accordance with ASTM E29-93a, to the number of decimal places...
40 CFR 90.509 - Calculation and reporting of test results.
Code of Federal Regulations, 2010 CFR
2010-07-01
... results. 90.509 Section 90.509 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Selective Enforcement Auditing § 90.509 Calculation and reporting of test results. (a) Initial test results... manufacturer shall round these results, in accordance with ASTM E29-93a, to the number of decimal places...
40 CFR 90.509 - Calculation and reporting of test results.
Code of Federal Regulations, 2012 CFR
2012-07-01
... results. 90.509 Section 90.509 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Selective Enforcement Auditing § 90.509 Calculation and reporting of test results. (a) Initial test results... manufacturer shall round these results, in accordance with ASTM E29-93a, to the number of decimal places...
ERIC Educational Resources Information Center
Alabama State Dept. of Education, Montgomery. Div. of Instructional Services.
Topics covered in the first part of this document include eight advantages of the metric system; a summary of metric instruction; the International System of Units (SI) style and usage; metric decimal tables; the metric system; and conversion tables. An alphabetized list of organizations which market metric materials for educators is provided with…
American chestnut (Castanea dentata) was once a dominant overstory tree in the eastern United States but was decimated by chestnut blight (Cryphonectria parasitica). Blight resistant chestnut is being developed as part of a concerted restoration effort to bring this heritage tree...
ERIC Educational Resources Information Center
Usiskin, Zalman P.
2007-01-01
In the 1970s, the movement to the metric system (which has still not completely occurred in the United States) and the advent of hand-held calculators led some to speculate that decimal representation of numbers would render fractions obsolete. This provocative proposition stimulated Zalman Usiskin to write "The Future of Fractions" in 1979. He…
16 CFR 500.17 - Fractions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices Fractions. 500.17 Section 500.17 Commercial... LABELING ACT § 500.17 Fractions. (a) SI metric declarations of net quantity of contents of any consumer commodity may contain only decimal fractions. Other declarations of net quantity of contents may contain...
Analysis of a Bibliographic Database Enhanced with a Library Classification.
ERIC Educational Resources Information Center
Drabenstott, Karen Markey; And Others
1990-01-01
Describes a project that examined the effects of incorporating subject terms from the Dewey Decimal Classification (DDC) into a bibliographic database. It is concluded that the incorporation of DDC and possibly other library classifications into online catalogs can enhance subject access and provide additional subject searching strategies. (11…
ERIC Educational Resources Information Center
Buchter, Holli
2013-01-01
In this article, St. Vrain Valley (CO) School District (SVVSD) librarian Holli Buchter describes what took place in the district's school libraries when its newest elementary school, Red Hawk, opened its doors. Red Hawk asked and answered the question: "Is the Dewey Decimal Classification system still the best way for students to locate…
Characteristics of the Class of 1982
1978-08-01
classification systems have you used? A. Dewey Decimal System. B. Library of Congress System. C. Both. D. Neither. 93. Have you consulted periodical... [United States Military Academy, West Point, New York: Characteristics of the Class of 1982]
ERIC Educational Resources Information Center
Bogdany, Melvin
The curriculum guide offers a course of training in the fundamentals of mathematics as applied to baking. Problems specifically related to the baking trade are included to maintain a practical orientation. The course is designed to help the student develop proficiency in the basic computation of whole numbers, fractions, decimals, percentage,…
Grandfather Tang Goes to High School
ERIC Educational Resources Information Center
Johnson, Iris DeLoach
2006-01-01
This article describes how the children's literature book, Grandfather Tang's Story, which is commonly used in the elementary grades, may be used at the high school level to engage students in an exploration of area and perimeter which includes basic operations with square roots, ordering numbers (decimal approximations, and their exact…
48 CFR 14.407-2 - Apparent clerical mistakes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CONTRACTING METHODS AND CONTRACT TYPES SEALED BIDDING Opening of Bids and Award of Contract 14.407-2 Apparent... contracting officer before award. The contracting officer first shall obtain from the bidder a verification of the bid intended. Examples of apparent mistakes are— (1) Obvious misplacement of a decimal point; (2...
48 CFR 14.407-2 - Apparent clerical mistakes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CONTRACTING METHODS AND CONTRACT TYPES SEALED BIDDING Opening of Bids and Award of Contract 14.407-2 Apparent... contracting officer before award. The contracting officer first shall obtain from the bidder a verification of the bid intended. Examples of apparent mistakes are— (1) Obvious misplacement of a decimal point; (2...
Algebra Students' Difficulty with Fractions: An Error Analysis
ERIC Educational Resources Information Center
Brown, George; Quinn, Robert J.
2006-01-01
An analysis of the 1990 National Assessment of Educational Progress (NAEP) found that only 46 percent of all high school seniors demonstrated success with a grasp of decimals, percentages, fractions and simple algebra. This article investigates error patterns that emerge as students attempt to answer questions involving the ability to apply…
An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision
ERIC Educational Resources Information Center
Johansson, B. Tomas
2018-01-01
The cosine function is evaluated via a simple CORDIC-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function having hundreds of correct decimals are presented, with a discussion of errors and implementation.
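To give a flavor of arbitrary-precision evaluation, here is a minimal sketch using Python's decimal module with a plain Taylor series, a deliberately simpler stand-in for the paper's CORDIC-like Matlab scheme; the precision settings are arbitrary:

```python
from decimal import Decimal, getcontext

def cos_highprec(x, digits=100):
    """Cosine of x (a Decimal, in radians) via its Taylor series,
    evaluated with arbitrary-precision decimal arithmetic. A
    stand-in for the paper's CORDIC-like scheme: terms are added
    until they fall below the working precision."""
    getcontext().prec = digits + 10          # guard digits
    term, total, n = Decimal(1), Decimal(1), 0
    while abs(term) > Decimal(10) ** -(digits + 5):
        n += 2
        term *= -x * x / (n * (n - 1))       # next Taylor term
        total += term
    getcontext().prec = digits
    return +total                            # round to `digits`

print(cos_highprec(Decimal(1)))  # cos(1) to ~100 decimals
```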
Basic Applied Mathematics Part 1.
ERIC Educational Resources Information Center
New York City Board of Education, Brooklyn, NY. Div. of Curriculum and Instruction.
This guide, published by the New York City Board of Education, presents 62 lesson plans in basic mathematics for tenth grade students. Lesson plans and performance objectives focus on the following areas: (1) fundamental operations with signed numbers; (2) linear, weight and temperature measurements; (3) fractions, decimals and percents; (4)…