Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are seldom used this way, since direct table lookup was thought to be ineffective for them. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
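The substring pigeonhole idea behind multi-index hashing can be sketched in a few lines. The sizes here (32-bit codes split into four 8-bit substrings) and all names are invented for illustration and are not taken from the paper: if two codes are within Hamming distance r, they must agree to within floor(r/m) bits on at least one of the m substrings, so probing each substring table with all perturbations of that budget finds every true neighbor, and the small candidate set is then verified exactly.

```python
# Sketch of multi-index hashing for exact r-neighbor search in Hamming space.
# Toy sizes: 32-bit codes, M = 4 substrings of BITS = 8 bits each.
from itertools import combinations
from collections import defaultdict

M = 4          # number of substrings / hash tables
BITS = 8       # bits per substring

def substrings(code):
    return [(code >> (BITS * i)) & 0xFF for i in range(M)]

def build_index(codes):
    tables = [defaultdict(list) for _ in range(M)]
    for idx, code in enumerate(codes):
        for i, sub in enumerate(substrings(code)):
            tables[i][sub].append(idx)
    return tables

def perturbations(sub, radius):
    # the substring itself, then all versions with up to `radius` bits flipped
    yield sub
    for d in range(1, radius + 1):
        for bits in combinations(range(BITS), d):
            flipped = sub
            for b in bits:
                flipped ^= 1 << b
            yield flipped

def query(tables, codes, q, r):
    budget = r // M                     # per-substring search radius (pigeonhole)
    candidates = set()
    for i, sub in enumerate(substrings(q)):
        for p in perturbations(sub, budget):
            candidates.update(tables[i].get(p, []))
    # verify full Hamming distance on the (small) candidate set
    return sorted(i for i in candidates if bin(codes[i] ^ q).count('1') <= r)
```

The verification pass is what makes the search exact rather than approximate: the hash tables only narrow the candidate set, never decide membership.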
Horizontal decomposition of data table for finding one reduct
NASA Astrophysics Data System (ADS)
Hońko, Piotr
2018-04-01
Attribute reduction, being one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing superreducts and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The proposed method for removing unnecessary attributes from superreducts runs relatively fast on larger databases.
A fast exact simulation method for a class of Markov jump processes.
Li, Yao; Hu, Lili
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
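The hash-table-like bucket sort at the heart of the method can be illustrated schematically. This toy (names and structure invented, not the authors' implementation) only shows the core property: hashing firing times into buckets of width tau means only one bucket's few entries ever need sorting, so the cost per event does not grow with the total number of clocks.

```python
# Toy sketch of a bucketed event schedule: times are hashed into buckets
# of width tau; buckets are visited in order ("leaping"), and only the
# entries within the current bucket are sorted against each other.
import math
from collections import defaultdict

def bucketed_order(times, tau):
    buckets = defaultdict(list)
    for t in times:
        buckets[math.floor(t / tau)].append(t)   # O(1) placement per event
    ordered = []
    for key in sorted(buckets):                  # leap bucket by bucket
        ordered.extend(sorted(buckets[key]))     # sort only within a bucket
    return ordered
```

Because bucket boundaries respect the global time order, visiting buckets in index order and sorting only within each bucket reproduces the fully sorted event sequence.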
Ontology-Based Peer Exchange Network (OPEN)
ERIC Educational Resources Information Center
Dong, Hui
2010-01-01
In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically resolve a user query rather fast in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…
Exact Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data.
Lin, Yan; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett; Lipshultz, Steven
2017-01-01
Altham (Altham PME. Exact Bayesian analysis of a 2 × 2 contingency table, and Fisher's "exact" significance test. J R Stat Soc B 1969; 31: 261-269) showed that a one-sided p-value from Fisher's exact test of independence in a 2 × 2 contingency table is equal to the posterior probability of negative association in the 2 × 2 contingency table under a Bayesian analysis using an improper prior. We derive an extension of Fisher's exact test p-value in the presence of missing data, assuming the missing data mechanism is ignorable (i.e., missing at random or completely at random). Further, we propose Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data using alternative priors; we also present results from a simulation study exploring the Type I error rate and power of the proposed exact test p-values. An example, using data on the association between blood pressure and a cardiac enzyme, is presented to illustrate the methods.
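For reference, the classical one-sided Fisher exact p-value that the paper extends can be computed directly by summing hypergeometric probabilities over tables at least as extreme, with the margins held fixed. This minimal sketch covers only that classical quantity, not the paper's missing-data extension or its Bayesian p-values:

```python
# One-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]:
# the probability, under the hypergeometric distribution with fixed
# margins, of observing a count in cell (1,1) as small as or smaller
# than a.
from math import comb

def fisher_one_sided(a, b, c, d):
    row1, col1, n = a + b, a + c, a + b + c + d
    def pmf(k):
        # hypergeometric probability of k in cell (1,1); math.comb
        # returns 0 when k exceeds n, so impossible tables contribute 0
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return sum(pmf(k) for k in range(a + 1))
```

Per Altham's result cited above, this tail probability coincides with a posterior probability of negative association under a particular improper prior, which is the bridge the paper generalizes.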
FAST TRACK COMMUNICATION Time-dependent exact solutions of the nonlinear Kompaneets equation
NASA Astrophysics Data System (ADS)
Ibragimov, N. H.
2010-12-01
Time-dependent exact solutions of the Kompaneets photon diffusion equation are obtained for several approximations of this equation. One of the approximations describes the case when the induced scattering is dominant. In this case, the Kompaneets equation has an additional symmetry which is used for constructing some exact solutions as group invariant solutions.
The yield of colorectal cancer among fast track patients with normocytic and microcytic anaemia.
Panagiotopoulou, I G; Fitzrol, D; Parker, R A; Kuzhively, J; Luscombe, N; Wells, A D; Menon, M; Bajwa, F M; Watson, M A
2014-05-01
We receive fast track referrals on the basis of iron deficiency anaemia (IDA) for patients with normocytic anaemia or for patients with no iron studies. This study examined the yield of colorectal cancer (CRC) among fast track patients to ascertain whether awaiting confirmation of IDA is necessary prior to performing bowel investigations. A review was undertaken of 321 and 930 consecutive fast track referrals from Centre A and Centre B respectively. Contingency tables were analysed using Fisher's exact test. Logistic regression analyses were performed to investigate significant predictors of CRC. Overall, 229 patients were included from Centre A and 689 from Centre B. The odds ratio for microcytic anaemia versus normocytic anaemia in the outcome of CRC was 1.3 (95% confidence interval [CI]: 0.5-3.9) for Centre A and 1.6 (95% CI: 0.8-3.3) for Centre B. In a logistic regression analysis (Centre B only), no significant difference in CRC rates was seen between microcytic and normocytic anaemia (adjusted odds ratio: 1.9, 95% CI: 0.9-3.9). There was no statistically significant difference in the yield of CRC between microcytic and normocytic anaemia (p=0.515, Fisher's exact test) in patients with anaemia only and no colorectal symptoms. Finally, CRC cases were seen in both microcytic and normocytic groups with or without low ferritin. There is no significant difference in the yield of CRC between fast track patients with microcytic and normocytic anaemia. This study provides insufficient evidence to support awaiting confirmation of IDA in fast track patients with normocytic anaemia prior to requesting bowel investigations.
ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function
Moraes, Walas Jhony Lopes; Rodrigues, Thiago de Souza; Bartholomeu, Daniella Castanheira
2015-01-01
Repetitive element sequences are adjacent, repeating patterns, also called motifs, and can be of different lengths; repetitions can involve their exact or approximate copies. They have been widely used as molecular markers in population biology. Given the sizes of sequenced genomes, various bioinformatics tools have been developed for the extraction of repetitive elements from DNA sequences. However, currently available tools do not provide options for identifying repetitive elements in the genome or proteome, displaying a user-friendly web interface, and performing exhaustive searches. ProGeRF is a web site for extracting repetitive regions from genome and proteome sequences. It was designed to be an efficient, fast, accurate, and above all user-friendly web tool, allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which the users can download the source code, and as a web tool. It was developed using the hash table approach to extract perfect and imperfect repetitive regions in a (multi)FASTA file, while allowing a linear time complexity. PMID:25811026
Some Exact Conditional Tests of Independence for R X C Cross-Classification Tables
ERIC Educational Resources Information Center
Agresti, Alan; Wackerly, Dennis
1977-01-01
Exact conditional tests of independence in cross-classification tables are formulated based on chi square and other statistics with stronger operational interpretations, such as some nominal and ordinal measures of association. Guidelines for table dimensions and sample sizes for which the tests are economically implemented on a computer are…
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) among point-based methods, and the full analytical method and the one-step method among polygon-based methods. In this presentation, we overview various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method based on 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
On Determining the Rise, Size, and Duration Classes of a Sunspot Cycle
NASA Astrophysics Data System (ADS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-09-01
The behavior of ascent duration, maximum amplitude, and period for cycles 1 to 21 suggests that they are not mutually independent. Analysis of the resultant three-dimensional contingency table for cycles divided according to rise time (ascent duration), size (maximum amplitude), and duration (period) yields a chi-square statistic (= 18.59) that is larger than the test statistic (= 9.49 for 4 degrees of freedom at the 5-percent level of significance), implying that the null hypothesis (mutual independence) can be rejected. Analysis of individual 2 by 2 contingency tables (based on Fisher's exact test) for these parameters shows that, while ascent duration is strongly related to maximum amplitude in the negative sense (inverse correlation) - the Waldmeier effect - it also is related (marginally) to period, but in the positive sense (direct correlation). No significant (or marginally significant) correlation is found between period and maximum amplitude. Using cycle 22 as a test case, we show that by the 12th month following conventional onset, cycle 22 appeared highly likely to be a fast-rising, larger-than-average-size cycle. Because of the inferred correlation between ascent duration and period, it also seems likely that it will have a shorter-than-average period.
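The mutual-independence chi-square used above can be reproduced for any three-way table: expected counts are products of the one-way margins, E_ijk = R_i C_j T_k / N^2, with IJK - I - J - K + 2 degrees of freedom, which gives the 4 degrees of freedom quoted for a 2 x 2 x 2 classification. The counts below are invented for illustration, not the cycle data from the paper:

```python
# Chi-square statistic for mutual independence in a three-way
# contingency table, with expected counts from the one-way margins.
def mutual_independence_chi2(table):  # table[i][j][k] holds counts
    I, J, K = len(table), len(table[0]), len(table[0][0])
    N = sum(table[i][j][k] for i in range(I) for j in range(J) for k in range(K))
    R = [sum(table[i][j][k] for j in range(J) for k in range(K)) for i in range(I)]
    C = [sum(table[i][j][k] for i in range(I) for k in range(K)) for j in range(J)]
    T = [sum(table[i][j][k] for i in range(I) for j in range(J)) for k in range(K)]
    chi2 = 0.0
    for i in range(I):
        for j in range(J):
            for k in range(K):
                expected = R[i] * C[j] * T[k] / N**2
                chi2 += (table[i][j][k] - expected) ** 2 / expected
    df = I * J * K - I - J - K + 2
    return chi2, df
```

A perfectly balanced table yields a statistic of zero, and any departure from the product-of-margins structure inflates it.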
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit of kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem in a set of two-level subsystems, we find a natural variable step size, that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
33 CFR 165.1191 - Safety zones: Northern California annual fireworks events.
Code of Federal Regulations, 2011 CFR
2011-07-01
... established for the events listed in Table 1 of this section. Further information on exact dates, times, and...,000 feet off Incline Village, Nevada in Crystal Bay. Regulated Area That area of navigable waters...
33 CFR 165.1191 - Safety zones: Northern California annual fireworks events.
Code of Federal Regulations, 2010 CFR
2010-07-01
... established for the events listed in Table 1 of this section. Further information on exact dates, times, and...,000 feet off Incline Village, Nevada in Crystal Bay. Regulated Area That area of navigable waters...
Geiss, Karla; Meyer, Martin
2013-09-01
Standardized mortality ratios and standardized incidence ratios are widely used in cohort studies to compare mortality or incidence in a study population to that in the general population on an age- and calendar-time-specific basis, but their computation is not included in standard statistical software packages. Here we present a user-friendly Microsoft Windows program for computing standardized mortality ratios and standardized incidence ratios based on calculation of exact person-years at risk stratified by sex, age and calendar time. The program offers flexible import of different file formats for input data and easy handling of general population reference rate tables, such as mortality or incidence tables exported from cancer registry databases. The application of the program is illustrated with two examples using empirical data from the Bavarian Cancer Registry. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
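The underlying calculation of an indirectly standardized ratio is short: observed events divided by the events expected when stratum-specific reference rates are applied to the cohort's person-years at risk. A minimal sketch, with invented strata keyed by (sex, age band, calendar period):

```python
# Standardized mortality (or incidence) ratio: observed / expected,
# where expected = sum over strata of person-years * reference rate.
def smr(observed_deaths, person_years, reference_rates):
    # person_years and reference_rates: dicts keyed by (sex, age, period)
    expected = sum(person_years[s] * reference_rates[s] for s in person_years)
    return observed_deaths / expected
```

The program described above automates exactly this kind of stratified expected-count accumulation; the hard part in practice is the exact person-year bookkeeping per stratum, which this sketch takes as given.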
Shark: Fast Data Analysis Using Coarse-grained Distributed Memory
2013-05-01
Shark stores metadata in a metastore (often MySQL or Derby) with a namespace for tables, table metadata, and partition information. Table data is stored in an HDFS directory, saving time and space for large data sets through support for custom SerDe (serialization/deserialization) Java interface implementations.
The exact analysis of contingency tables in medical research.
Mehta, C R
1994-01-01
A unified view of exact nonparametric inference, with special emphasis on data in the form of contingency tables, is presented. While the concept of exact tests has been in existence since the early work of R.A. Fisher, the computational complexity involved in actually executing such tests precluded their use until fairly recently. Modern algorithmic advances, combined with the easy availability of inexpensive computing power, have renewed interest in exact methods of inference, especially because they remain valid in the face of small, sparse, imbalanced, or heavily tied data. After defining exact p-values in terms of the permutation principle, we reference algorithms for computing them. Several data sets are then analysed by both exact and asymptotic methods. We end with a discussion of the available software.
Exact and approximate solutions to the oblique shock equations for real-time applications
NASA Technical Reports Server (NTRS)
Hartley, T. T.; Brandis, R.; Mossayebi, F.
1991-01-01
The derivation of exact solutions for determining the characteristics of an oblique shock wave in a supersonic flow is investigated. Specifically, an explicit expression for the oblique shock angle in terms of the free stream Mach number, the centerbody deflection angle, and the ratio of the specific heats is derived. A simpler approximate solution is obtained and compared to the exact solution. The primary objective of obtaining these solutions is to provide a fast algorithm that can run in a real-time environment.
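The relation in question is the classical theta-beta-M equation. A sketch of solving it numerically for the weak-shock angle is given below for orientation; the paper's explicit closed-form and approximate solutions are not reproduced here, and an attached shock (deflection below the maximum) is assumed:

```python
# Solve the theta-beta-M oblique shock relation for the weak-shock
# wave angle beta, given upstream Mach number M1 and deflection theta.
import math

def theta_from_beta(M1, beta, gamma=1.4):
    # tan(theta) = 2 cot(beta) (M1^2 sin^2(beta) - 1) / (M1^2 (gamma + cos 2beta) + 2)
    num = 2.0 / math.tan(beta) * (M1**2 * math.sin(beta)**2 - 1.0)
    den = M1**2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(num / den)

def beta_weak(M1, theta, gamma=1.4, iters=80):
    # weak branch: theta rises monotonically from 0 at the Mach angle
    # up to its peak, so bisection between those two angles suffices
    lo = math.asin(1.0 / M1) + 1e-12
    grid = [lo + k * (math.pi / 2 - lo) / 1000 for k in range(1001)]
    hi = max(grid, key=lambda b: theta_from_beta(M1, b, gamma))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if theta_from_beta(M1, mid, gamma) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For real-time use, the iteration count trades accuracy for speed, which is presumably the motivation for the paper's approximate closed form.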
F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.
ERIC Educational Resources Information Center
Overall, John E.; Starbuck, Robert R.
1983-01-01
An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)
Analysis of Multiple Contingency Tables by Exact Conditional Tests for Zero Partial Association.
ERIC Educational Resources Information Center
Kreiner, Svend
The tests for zero partial association in a multiple contingency table have gained new importance with the introduction of graphical models. It is shown how these may be performed as exact conditional tests, using as test criteria either the ordinary likelihood ratio, the standard x squared statistic, or any other appropriate statistics. A…
Exact one-sided confidence bounds for the risk ratio in 2 x 2 tables with structural zero.
Lloyd, Chris J; Moldovan, Max V
2007-12-01
This paper examines exact one-sided confidence limits for the risk ratio in a 2 x 2 table with structural zero. Starting with four approximate lower and upper limits, we adjust each using the algorithm of Buehler (1957) to arrive at lower (upper) limits that have exact coverage properties and are as large (small) as possible subject to coverage and ordering constraints. Different Buehler limits are compared by their mean size, since all are exact in their coverage. Buehler limits based on the signed root likelihood ratio statistic are found to have the best performance and are recommended for practical use. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
JANUS: a bit-wise reversible integrator for N-body dynamics
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2018-01-01
Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
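The key mechanism, integer state plus increments that depend only on variables left unchanged by the current substep, can be shown in a toy one-dimensional leapfrog. The scale, force law, and step size below are invented for illustration; this is not the JANUS implementation, only the bit-wise reversibility idea:

```python
# Toy bit-wise reversible leapfrog: positions and velocities are integers
# (fixed point). Each drift adds round(v*dt/2) with v untouched, and each
# kick adds round(a(x)*dt) with x untouched, so the inverse map subtracts
# the exact same integer and time reversal is exact, not approximate.
SCALE = 10**9  # fixed-point units per unit length (illustrative)

def accel(x_int):
    # harmonic force a = -x, evaluated in fixed point (illustrative)
    return -x_int

def step(x, v, dt):
    x += round(v * dt / 2)      # drift: increment depends only on v
    v += round(accel(x) * dt)   # kick:  increment depends only on x
    x += round(v * dt / 2)      # drift
    return x, v

def step_back(x, v, dt):
    # inverse substeps applied in reverse order undo `step` exactly
    x -= round(v * dt / 2)
    v -= round(accel(x) * dt)
    x -= round(v * dt / 2)
    return x, v
```

Floating point only ever appears inside the rounded increments, and the same inputs produce the same rounded integer in both directions, which is why the reversal is exact to the last bit.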
Multiple-Frequency Ultrasonic Pulse-Echo Display System.
1982-09-28
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.; Collins, Stuart A., Jr.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
Exact numerical calculation of fixation probability and time on graphs.
Hindersin, Laura; Möller, Marius; Traulsen, Arne; Bauer, Benedikt
2016-12-01
The Moran process on graphs is a popular model to study the dynamics of evolution in a spatially structured population. Exact analytical solutions for the fixation probability and time of a new mutant have been found for only a few classes of graphs so far. Simulations are time-expensive and many realizations are necessary, as the variance of the fixation times is high. We present an algorithm that numerically computes these quantities for arbitrary small graphs by an approach based on the transition matrix. The advantage over simulations is that the calculation has to be executed only once. Building the transition matrix is automated by our algorithm. This enables a fast and interactive study of different graph structures and their effect on fixation probability and time. We provide a fast implementation in C with this note (Hindersin et al., 2016). Our code is very flexible, as it can handle two different update mechanisms (Birth-death or death-Birth), as well as arbitrary directed or undirected graphs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
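For the special case of the complete graph, where the mutant count is a sufficient state, the transition-matrix approach reduces to a tridiagonal linear system that can be checked against the known closed form (1 - 1/r) / (1 - 1/r^N). Arbitrary graphs need the full set-valued state space handled by the authors' C code; this sketch deliberately omits that and uses only the Birth-death update:

```python
# Fixation probability of a single mutant with fitness r in a Birth-death
# Moran process on the complete graph with N nodes, obtained by solving
# phi_i = T+_i phi_{i+1} + T-_i phi_{i-1} + (1 - T+_i - T-_i) phi_i
# with phi_0 = 0, phi_N = 1 (Thomas algorithm on the tridiagonal system).
def fixation_probability(N, r):
    def t_plus(i):  return (r * i / (r * i + N - i)) * ((N - i) / N)
    def t_minus(i): return ((N - i) / (r * i + N - i)) * (i / N)
    a = [t_minus(i) for i in range(1, N)]                 # sub-diagonal
    b = [-(t_plus(i) + t_minus(i)) for i in range(1, N)]  # diagonal
    c = [t_plus(i) for i in range(1, N)]                  # super-diagonal
    d = [0.0] * (N - 1)
    d[-1] = -c[-1]              # boundary phi_N = 1 moves to the RHS
    for i in range(1, N - 1):   # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    phi = [0.0] * (N - 1)
    phi[-1] = d[-1] / b[-1]
    for i in range(N - 3, -1, -1):  # back substitution
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return phi[0]   # fixation probability starting from one mutant
```

As the abstract notes, the appeal over simulation is that one linear solve replaces many noisy realizations; here the solve is O(N) because the complete-graph chain is tridiagonal.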
Why bigger may in fact be better... in the context of table tennis
NASA Astrophysics Data System (ADS)
Truscott, Tadd; Pan, Zhao; Belden, Jesse
2014-11-01
We submit that table tennis is too fast. Because of the high ball velocities relative to the small table size, players are required to act extremely quickly, often exceeding the limits of human reaction time. Additionally, the Magnus effect resulting from large rotation rates introduces dramatically curved paths and causes rapid direction changes after striking the table or paddle, which effectively reduces reaction time further. Moreover, watching a professional game is often uninteresting and even tiring because the ball is moving too quickly to follow with the naked eye and the action of the players is too subtle to resolve from a distance. These facts isolate table tennis from our quantitatively defined "fun game club," and make it less widely appealing than sports like baseball and soccer. Over the past 100 years, the rules of table tennis have changed several times in an effort to make the game more attractive to players and spectators alike, but the game continues to lose popularity. Here, we experimentally quantify the historic landmark equipment changes of table tennis from a fluid dynamics perspective. Based on theory and observation, we suggest a larger diameter ball for table tennis to make the game more appealing to both spectators and amateur players.
Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.
1983-06-01
The component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients, with the result recovered by the fast inverse transform.
Belkić, Dzevad
2006-12-21
This study deals with the most challenging numerical aspect for solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it could be feasible to carry out a rigorous computation within finite arithmetics to reconstruct exactly all the machine accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of the level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve the spectral convergence, which represents the exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with the exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^-11 ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12 digit accuracy with the exponentially fast rate of convergence.
This is the critical proof-of-principle for the high-resolution power of the FPT for machine accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.
A Large Class of Exact Solutions to the One-Dimensional Schrodinger Equation
ERIC Educational Resources Information Center
Karaoglu, Bekir
2007-01-01
A remarkable property of a large class of functions is exploited to generate exact solutions to the one-dimensional Schrodinger equation. The method is simple and easy to implement. (Contains 1 table and 1 figure.)
Fast and Exact Continuous Collision Detection with Bernstein Sign Classification
Tang, Min; Tong, Ruofeng; Wang, Zhendong; Manocha, Dinesh
2014-01-01
We present fast algorithms to perform accurate CCD queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bézier curves and reduces the problem to evaluating signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. Our algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits. PMID:25568589
Optimum Vessel Performance in Evolving Nonlinear Wave Fields
2012-11-01
TEMPEST, the new nonlinear, time-domain ship motion code being developed by the Navy. The radiation and diffraction forces in the level 3.0 version of TEMPEST will be computed by the body-exact strip theory. The nonlinear responses of a ship to a seaway are being incorporated into version 3 of TEMPEST.
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a "mix table," which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
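One way to avoid a linear scan over existing mixtures, the source of the naive cost, is to key a lookup structure by a rounded, normalized composition, so duplicate voxel compositions resolve to an existing mix id immediately. The data layout and tolerance below are invented for illustration; this is a sketch of the deduplication idea, not the ADVANTG implementation:

```python
# Toy mix table builder: each voxel's material volume fractions are
# normalized, rounded to a fixed tolerance, and used as a dictionary key,
# so voxels with matching compositions share one mix table entry.
def build_mix_table(voxel_fractions, decimals=6):
    # voxel_fractions: list of dicts {material_id: volume_fraction}
    mixtures = []      # mix table: one entry per unique mixture
    index = {}         # composition key -> mix id
    voxel_to_mix = []
    for fracs in voxel_fractions:
        total = sum(fracs.values())
        key = tuple(sorted((m, round(f / total, decimals))
                           for m, f in fracs.items()))
        if key not in index:
            index[key] = len(mixtures)
            mixtures.append(dict(key))
        voxel_to_mix.append(index[key])
    return mixtures, voxel_to_mix
```

The rounding tolerance controls how aggressively near-identical compositions are merged, the same knob that governs mix table sparsity in the discretization described above.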
Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem
Wang, Hefeng; Wu, Lian-Ao
2016-01-01
An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions. PMID:26923834
Onimaru, Koh; Motone, Fumio; Kiyatake, Itsuki; Nishida, Kiyonori
2018-01-01
Background: Studying cartilaginous fishes (chondrichthyans) has helped us understand vertebrate evolution and diversity. However, resources such as genome sequences, embryos, and detailed staging tables are limited for species within this clade. To overcome these limitations, we have focused on a species, the brownbanded bamboo shark (Chiloscyllium punctatum), which is a relatively common aquarium species that lays eggs continuously throughout the year. In addition, because of its relatively small genome size, this species is promising for molecular studies. Results: To enhance biological studies of cartilaginous fishes, we establish a normal staging table for the embryonic development of the brownbanded bamboo shark. Bamboo shark embryos take around 118 days to reach the hatching period at 25°C, approximately 1.5 times faster than the small-spotted catshark (Scyliorhinus canicula). Our staging table divides the embryonic period into 38 stages. Furthermore, we found culture conditions that allow early embryos to grow in partially opened egg cases. Conclusions: In addition to the embryonic staging table, we show that bamboo shark embryos exhibit relatively fast embryonic growth and are amenable to culture, key characteristics that enhance their experimental utility. Therefore, the present study is a foundation for cartilaginous fish research. Developmental Dynamics 247:712–723, 2018. © 2017 Wiley Periodicals, Inc. PMID:29396887
NASA Astrophysics Data System (ADS)
Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.
2011-02-01
The current success of the continuous cellular automata for the simulation of anisotropic wet chemical etching of silicon in microengineering applications is based on a relatively fast, approximate, constant time stepping implementation (CTS), whose accuracy against the exact algorithm—a computationally slow, variable time stepping implementation (VTS)—has not been previously analyzed in detail. In this study we show that the CTS implementation can generate moderately wrong etch rates and overall etching fronts. This justifies a novel, exact reformulation of the VTS implementation based on a new state variable, referred to as the predicted removal time (PRT), and on a self-balanced binary search tree that stores the PRT values and provides efficient access to them in each time step, so that the corresponding surface atom(s) can be removed quickly. The proposed PRT method reduces the simulation cost of the exact implementation from O(N^{5/3}) to O(N^{3/2} log N) without introducing any model simplifications. This enables more precise simulations (limited only by numerical precision errors) with affordable computational times, similar to those of the less precise CTS implementation and even faster for low-reactivity systems.
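The role of the PRT structure can be illustrated with a minimal event-driven sketch. Here a binary heap stands in for the paper's self-balanced binary search tree (both give O(log N) access to the atom with the smallest predicted removal time), and all names are invented for the example.

```python
import heapq

def etch_order(prt, n_removals):
    """Remove surface atoms in order of predicted removal time (PRT).

    prt: dict mapping atom id -> predicted removal time.
    Returns the ids of the first n_removals atoms removed. In a full
    simulator, each removal would also recompute the PRTs of newly
    exposed neighbour atoms before the next pop.
    """
    heap = [(t, atom) for atom, t in prt.items()]
    heapq.heapify(heap)          # O(N) build; each pop is O(log N)
    removed = []
    while heap and len(removed) < n_removals:
        _, atom = heapq.heappop(heap)
        removed.append(atom)
    return removed
```

The point of the exact formulation is that removal order follows the PRT values themselves rather than a fixed global time step.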
Solving LR Conflicts Through Context Aware Scanning
NASA Astrophysics Data System (ADS)
Leon, C. Rodriguez; Forte, L. Garcia
2011-09-01
This paper presents a new algorithm to compute the exact list of tokens expected by any LR syntax analyzer at any point of the scanning process. The lexer can, at any time, compute the exact list of valid tokens and return only tokens in this set. When more than one matching token is in the valid set, the lexer can resort to a nested LR parser to disambiguate. Allowing nested LR parsing requires some slight modifications when building the LR parsing tables. We also show how LR parsers can parse conflicting and inherently ambiguous languages using a combination of nested parsing and context-aware scanning. These expanded lexical analyzers can be generated from high-level specifications.
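The core query, asking the parser's tables which terminals are currently acceptable, can be sketched as below. The flat dict layout of the ACTION table is a hypothetical simplification invented for the example; the paper's exact computation also accounts for the parse stack (e.g. through default reductions), which this per-state lookup ignores.

```python
def expected_tokens(action_table, state):
    """Terminals with a shift/reduce/accept entry for `state`.

    action_table: dict keyed by (state, terminal) -> parser action,
    a hypothetical layout for illustration. Terminals absent from the
    table in this state would trigger a syntax error, so they are not
    valid lookaheads.
    """
    return sorted({term for (s, term) in action_table if s == state})
```

A context-aware lexer would then attempt to match only the returned terminals, falling back to nested parsing when several of them match.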
Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J
2011-05-01
Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
40 CFR Table 1 to Subpart E of... - Product-Weighted Reactivity Limits by Coating Category
Code of Federal Regulations, 2014 CFR
2014-07-01
... Primers ABP 1.55 Automotive Bumper and Trim Products ABT 1.75 Aviation or Marine Primers AMP 2.00 Aviation... Finish—Engine Enamel EEE 1.70 Exact Match Finish—Automotive EFA 1.50 Exact Match Finish—Industrial EFI 2...
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
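For orientation, the familiar normal-approximation sample-size formula for McNemar's test can be computed as below. This is a textbook asymptotic formula sketched for illustration; it is not claimed to be any of the exact expressions compared in the study, and the function name is invented.

```python
import math
from statistics import NormalDist

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for McNemar's test via the normal approximation.

    p10, p01: hypothesized discordant proportions (off-diagonal cells
    of the paired 2x2 table). Uses the standard formula
        n = [z_{1-a/2}*sqrt(pd) + z_{1-b}*sqrt(pd - d^2)]^2 / d^2
    with pd = p10 + p01 (total discordance) and d = p10 - p01.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pd, d = p10 + p01, p10 - p01
    n = (z_a * math.sqrt(pd) + z_b * math.sqrt(pd - d * d)) ** 2 / d**2
    return math.ceil(n)
```

For example, hypothesized discordant proportions of 0.2 and 0.1 at 80% power and two-sided alpha 0.05 give 234 pairs.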
NASA Astrophysics Data System (ADS)
Vítková, Gabriela; Prokeš, Lubomír; Novotný, Karel; Pořízka, Pavel; Novotný, Jan; Všianský, Dalibor; Čelko, Ladislav; Kaiser, Jozef
2014-11-01
From a historical perspective, during archeological excavations or the restoration of buildings and other structures built from bricks, it is important to determine the locality of origin of the bricks, preferably in situ and in real time. Fast classification of bricks on the basis of Laser-Induced Breakdown Spectroscopy (LIBS) spectra is possible using multivariate statistical methods; a combination of principal component analysis (PCA) and linear discriminant analysis (LDA) was applied in this case. LIBS was used to classify 29 brick samples from 7 different localities. A comparative study using two different LIBS setups, stand-off and table-top, shows that stand-off LIBS has great potential for archeological in-field measurements.
NASA Astrophysics Data System (ADS)
Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu
2017-03-01
Protein fluorescence has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, Monte Carlo based methods yield the highest accuracy. In this work, we have generated a lookup table for Monte Carlo simulation of fluorescence emission by protein and fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.
31 CFR 306.35 - Computation of interest.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the discount rates on Treasury bills. Also included are tables of computation of interest on... 6 months, the accrued interest is computed by determining the daily rate of accrual on the basis of the exact number of days in the full interest period and multiplying the daily rate by the exact...
Singular perturbations and time scales in the design of digital flight control systems
NASA Technical Reports Server (NTRS)
Naidu, Desineni S.; Price, Douglas B.
1988-01-01
Results are presented from applying the methodology of Singular Perturbations and Time Scales (SPATS) to the control of digital flight systems. A block diagonalization method is described to decouple a full-order, two-time-scale (slow and fast) discrete control system into reduced-order slow and fast subsystems. Basic properties and numerical aspects of the method are discussed. A composite, closed-loop, suboptimal control system is constructed as the sum of the slow and fast optimal feedback controls. The application of this technique to an aircraft model shows close agreement between the exact solutions and the decoupled (or composite) solutions. The main advantage of the method is the considerable reduction in the overall computational requirements for the evaluation of optimal guidance and control laws, which makes it suitable for real-time, onboard simulation. A brief survey of digital flight systems is also presented.
SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.
Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile
2015-01-01
In recent years we have witnessed growth in sequencing yield and in the number of samples sequenced, and as a result the growth of publicly maintained sequence databases. This increase in available data has placed high demands on protein similarity search algorithms, with two opposing goals: keeping running times acceptable while maintaining a high enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, aligning a query against the whole database is usually too slow. Therefore, most protein similarity search methods apply heuristics before the exact local alignment to reduce the number of candidate sequences in the database. However, there is still a need to align a query sequence against a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times as a standalone tool are comparable to those of BLAST, it is primarily intended for the exact local alignment phase, in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and, at the time of writing, was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++, and more than 20 times faster than SSW, using multiple queries on the Swiss-Prot and UniRef90 databases.
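The exact alignment step named above, Smith-Waterman, can be sketched in a few lines. This is a plain scoring-only reference version with a linear gap penalty and illustrative default parameters, not SW#db's GPU/CPU-parallel implementation.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Exact local alignment score, O(len(a) * len(b)) time.

    Dynamic programming over one rolling row: each cell is the best
    score of a local alignment ending at that pair of positions, and
    the 0 option lets an alignment restart anywhere (the 'local' part).
    """
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            score = max(
                0,
                prev[j - 1] + (match if ca == cb else mismatch),  # diagonal
                prev[j] + gap,                                    # gap in b
                cur[j - 1] + gap,                                 # gap in a
            )
            cur.append(score)
            best = max(best, score)
        prev = cur
    return best
```

The quadratic inner loop is exactly what makes whole-database scans slow and motivates both the heuristic prefilters and the parallelized exact phase.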
Fast mix table construction for material discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S. R.
2013-07-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels x log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation. (authors)
Behavioral Context of Call Production by Eastern North Pacific Blue Whales
2007-01-25
pairs occurring in a repeated song sequence; B calls from a different blue whale are also evident; spectrogram parameters: fast Fourier transform (FFT)...Acoustic data were viewed in spectrogram form (fast Fourier transform [FFT] length 1 s, 80% overlap, Hanning window) to determine the presence of calls...duration to song A and B units (Table 2), but the intermittent timing clearly distinguishes them from song. Whales producing singular calls were
Diagnostic articulation tables
NASA Astrophysics Data System (ADS)
Mikhailov, V. G.
2002-09-01
In recent years, considerable progress has been made in the development of instrumental methods for evaluating general speech quality and intelligibility on the basis of modeling the auditory perception of speech and measuring the signal-to-noise ratio. Despite certain advantages (fast measurement procedures with low labor requirements), these methods are not universal and are, in essence, secondary, because they rely on calibration based on subjective-statistical measurements. At the same time, some specific problems of speech quality evaluation, such as diagnosing the factors responsible for deviations of speech quality from the standard (e.g., accent features of a speaker or individual voice distortions), can be solved by psycholinguistic methods. This paper considers different kinds of diagnostic articulation tables: tables of minimal pairs of monosyllabic words (DRT) based on the Jacobson differential features, tables consisting of multisyllabic quartets of Russian words (the choice method), and tables of incomplete monosyllables of the _VC/CV_ type (the supplementary note method). Comparative estimates of the tables are presented along with recommendations concerning their application.
Variation in the iodine concentration of foods: considerations for dietary assessment
USDA-ARS?s Scientific Manuscript database
Food composition tables are used to estimate the nutritional content of foods. Because the nutrient content of each food is given as a single summary value, it is likely that the actual food consumed by a survey participant will have a nutrient content that is not exactly equal to the table value. ...
Press, William H.
2006-01-01
Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155
Press, William H
2006-12-19
Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.
1989-09-16
SWOTHR was conceived to be an organic asset capable of providing early detection and tracking of fast, surface-skimming threats, such as cruise missiles...distributed real-time processing and threat tracking system. Specific project goals were to verify detection performance predictions for small, fast targets...means that enlarging the ground plane would have been a fruitless exercise in any event. Table B-1 summarizes the calculated parameters of
One-step trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into the TSD form. The coding scheme enables parallel one-step TSD arithmetic operations. The proposed scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only one capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
... Statistics Tables for U.S. Adults: National Health Interview Survey, 2016, Table A-11c [PDF – 133 KB] Alcohol ... on data from the 2016 National Health Interview Survey, data table for figure 9.2 [PDF – 1. ...
Aerospace plane guidance using geometric control theory
NASA Technical Reports Server (NTRS)
Van Buren, Mark A.; Mease, Kenneth D.
1990-01-01
A reduced-order method employing decomposition, based on time-scale separation, of the 4-D state space in a 2-D slow manifold and a family of 2-D fast manifolds is shown to provide an excellent approximation to the full-order minimum-fuel ascent trajectory. Near-optimal guidance is obtained by tracking the reduced-order trajectory. The tracking problem is solved as regulation problems on the family of fast manifolds, using the exact linearization methodology from nonlinear geometric control theory. The validity of the overall guidance approach is indicated by simulation.
ERIC Educational Resources Information Center
Reeves, Sue; Wake, Yvonne; Zick, Andrea
2011-01-01
Objective: To investigate meals, price, nutritional content, and nutrition and portion size information available on children's menus in fast-food and table-service chain restaurants in London, since the United Kingdom does not currently require such information but may be initiating a voluntary guideline. Methods: Children's menus were assessed…
Equation-of-State Scaling Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-28
Equation-of-state scaling factors are needed when using a tabular EOS in which the user-defined material isotopic fractions differ from the actual isotopic fractions used by the table. Additionally, if a material is dynamically changing its isotopic structure, then an EOS scaling will again be needed and will vary in time and location. The procedure that allows use of a table to obtain information about a similar material with average atomic mass Ms and average atomic number Zs is described below. The procedure is exact for a fully ionized ideal gas. However, if the atomic number is replaced by the effective ionization state, the procedure can be applied to partially ionized material as well, which extends the applicability of the scaling approximation continuously from low to high temperatures.
Wang, Ling; Xia, Jie-lai; Yu, Li-li; Li, Chan-juan; Wang, Su-zhen
2008-06-01
To explore several numerical methods for ordinal variables in one-way ordinal contingency tables and their interrelationships, and to compare the corresponding statistical analysis methods such as Ridit analysis and the rank sum test. Formula deduction was based on five simplified grading approaches: rank_r(i), ridit_r(i), ridit_r(ci), ridit_r(mi), and table scores. A practical data set from clinical practice (testing the effect of Shiwei solution in the treatment of chronic tracheitis) was verified with SAS 8.2. Because of the linear relationship rank_r(i) = N ridit_r(i) + 1/2 = N ridit_r(ci) = (N + 1) ridit_r(mi), the exact chi-square values in Ridit analysis based on ridit_r(i), ridit_r(ci), and ridit_r(mi) were completely the same, and they were equivalent to the Kruskal-Wallis H test. Traditional Ridit analysis was based on ridit_r(i), and its corresponding chi-square value, calculated with an approximate variance (1/12), was conservative. The exact chi-square test of Ridit analysis should be used when comparing multiple groups in clinical research because of its special merits, such as the distribution of the mean ridit value on (0,1) and clear graphical expression. The exact chi-square test of Ridit analysis can be output directly by PROC FREQ of SAS 8.2 with the RIDIT and MODRIDIT options of SCORES=. The exact chi-square test of Ridit analysis is equivalent to the Kruskal-Wallis H test and should be used when comparing multiple groups in clinical research.
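The linear relationship quoted above between mid-ranks and ridits, rank_r(i) = N·ridit_r(i) + 1/2, is easy to verify numerically. The sketch below (helper names invented for the example) computes both quantities from category frequencies.

```python
def mid_ridits(freqs):
    """Ridit of each ordered category: (count below + half own count) / N."""
    n = sum(freqs)
    out, below = [], 0
    for f in freqs:
        out.append((below + f / 2) / n)
        below += f
    return out

def mid_ranks(freqs):
    """Mid-rank shared by all observations in each ordered category."""
    out, below = [], 0
    for f in freqs:
        out.append(below + (f + 1) / 2)
        below += f
    return out
```

For frequencies [3, 2, 5] (N = 10), the ridits are [0.15, 0.4, 0.75] and the mid-ranks [2.0, 4.5, 8.0], matching rank = N·ridit + 1/2 term by term; this is the identity that makes the exact chi-square Ridit analysis coincide with the rank-based Kruskal-Wallis H test.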
Ferrer, Imma; Thurman, E Michael
2012-10-12
A straightforward methodology for the chromatographic separation and accurate mass identification of 100 pharmaceuticals including some of their degradation products was developed using liquid chromatography/quadrupole time-of-flight mass spectrometry (LC/Q-TOF-MS). A table compiling the protonated or deprotonated exact masses for all compounds, as well as the exact mass of several fragment ions obtained by MS-MS is included. Excellent chromatographic separation was achieved by using 3.5 μm particle size columns and a slow and generic 30-min gradient. Isobaric and isomeric compounds (same nominal mass and same exact mass, respectively) were distinguished by various methods, including chromatography separation, MS-MS fragmentation, and isotopic signal identification. Method reporting limits of detection ranged from 1 to 1000 ng/L, after solid-phase extraction of 100mL aqueous samples. The methodology was successfully applied to the analysis of surface water impacted by wastewater effluent by identifying many of the pharmaceuticals and metabolites included in the list. Examples are given for some of the most unusual findings in environmental samples. This paper is meant to serve as a guide for those doing analysis of pharmaceuticals in environmental samples, by providing exact mass measurements of several well known, as well as newly identified and environmentally relevant pharmaceuticals in water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Palmesi, P.; Exl, L.; Bruckner, F.; Abert, C.; Suess, D.
2017-11-01
The long-range magnetic field is the most time-consuming part of micromagnetic simulations. Computational improvements can relieve problems related to this bottleneck. This work presents an efficient implementation of the Fast Multipole Method (FMM) for the magnetic scalar potential as used in micromagnetics. The novelty lies in extending FMM to linearly magnetized tetrahedral sources, making it interesting also for other areas of computational physics. We treat the near field directly and use (exact) numerical integration on the multipole expansion in the far field. This approach tackles important issues like the vectorial and continuous nature of the magnetic field. By using FMM, the calculations scale linearly in time and memory.
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
Magdalena M. Wiedermann; Evan S. Kane; Lynette R. Potvin; Erik A. Lilleskov
2017-01-01
Peatland decomposition may be altered by hydrology and plant functional groups (PFGs), but exactly how the latter influences decomposition is unclear, as are potential interactions of these factors. We used a factorial mesocosm experiment with intact 1 m3 peat monoliths to explore how PFGs (sedges vs Ericaceae) and water table level individually...
NASA Astrophysics Data System (ADS)
Li, H.; Wong, Wai-Hoi; Zhang, N.; Wang, J.; Uribe, J.; Baghaei, H.; Yokoyama, S.
1999-06-01
Electronics for a prototype high-resolution PET camera with eight position-sensitive detector modules has been developed. Each module has 16 BGO (Bi4Ge3O12) blocks (each block is composed of 49 crystals). The design goals are component and space reduction. The electronics is composed of five parts: front-end analog processing, digital position decoding, fast timing, coincidence processing, and master data acquisition. The front-end analog circuit is a zone-based structure (each zone has 3×3 PMTs). Nine ADCs digitize integration signals of an active zone identified by eight trigger clusters; each cluster is composed of six photomultiplier tubes (PMTs). A trigger corresponding to a gamma ray is sent to a fast timing board to obtain a time-mark, and the nine digitized signals are passed to the position decoding board, where a real block (four PMTs) can be picked out from the zone for position decoding. Lookup tables are used for energy discrimination and to identify the gamma-hit crystal location. The coincidence board opens a 70-ns initial timing window, followed by two 20-ns true/accidental time-mark lookup table windows. The data output from the coincidence board can be acquired either in sinogram mode or in list mode with a Motorola/IRONICS VME-based system.
26 CFR 1.7701(l)-0 - Table of contents.
Code of Federal Regulations, 2011 CFR
2011-04-01
... arrangements. § 1.7701(l)-3Recharacterizing financing arrangements involving fast-pay stock. (a) Purpose and scope. (b) Definitions. (1) Fast-pay arrangement. (2) Fast-pay stock. (i) Defined. (ii) Determination. (3) Benefited stock. (c) Recharacterization of certain fast-pay arrangements. (1) Scope. (2...
ERIC Educational Resources Information Center
Schultz, James E.; Waters, Michael S.
2000-01-01
Discusses representations in the context of solving a system of linear equations. Views representations (concrete, tables, graphs, algebraic, matrices) from perspectives of understanding, technology, generalization, exact versus approximate solution, and learning style. (KHR)
Scheltema, Richard Alexander; Hauschild, Jan-Peter; Lange, Oliver; Hornburg, Daniel; Denisov, Eduard; Damoc, Eugen; Kuehn, Andreas; Makarov, Alexander; Mann, Matthias
2014-01-01
The quadrupole Orbitrap mass spectrometer (Q Exactive) made a powerful proteomics instrument available in a benchtop format. It significantly boosted the number of proteins analyzable per hour and has now evolved into a proteomics analysis workhorse for many laboratories. Here we describe the Q Exactive Plus and Q Exactive HF mass spectrometers, which feature several innovations in comparison to the original Q Exactive instrument. A low-resolution pre-filter has been implemented within the injection flatapole, preventing unwanted ions from entering deep into the system, and thereby increasing its robustness. A new segmented quadrupole, with higher fidelity of isolation efficiency over a wide range of isolation windows, provides an almost 2-fold improvement of transmission at narrow isolation widths. Additionally, the Q Exactive HF has a compact Orbitrap analyzer, leading to higher field strength and almost doubling the resolution at the same transient times. With its very fast isolation and fragmentation capabilities, the instrument achieves overall cycle times of 1 s for a top 15 to 20 higher energy collisional dissociation method. We demonstrate the identification of 5000 proteins in standard 90-min gradients of tryptic digests of mammalian cell lysate, an increase of over 40% for detected peptides and over 20% for detected proteins. Additionally, we tested the instrument on peptide phosphorylation enriched samples, for which an improvement of up to 60% class I sites was observed. PMID:25360005
VizieR Online Data Catalog: Speckle interferometry at SOAR in 2015 (Tokovinin+, 2016)
NASA Astrophysics Data System (ADS)
Tokovinin, A.; Mason, B. D.; Hartkopf, W. I.; Mendez, R. A.; Horch, E. P.
2018-01-01
The observations reported here were obtained with the high-resolution camera (HRCam), a fast imager designed to work at the 4.1 m SOAR telescope. For practical reasons, the camera was mounted on the SOAR Adaptive Module (SAM). We mostly used the Stromgren y filter (543/22nm) and the near-infrared I filter (788/132nm). The observing time for this program was allocated through NOAO (three nights, programs 15A-0097 and 15B-0009, PI A.T.) and by the Chilean National Time Allocation Committee (three nights in 2015B, program CN2015B-6, PI R.A.M.). All observations were made by A.T. Table 2 lists 1303 measures of 924 resolved binary stars and subsystems, including 27 newly resolved pairs. Table 3 contains the data on 360 unresolved stars, some of which are listed as binaries in the WDS or resolved here in other filters. Table 4 lists 27 newly resolved pairs. (5 data files).
Advanced Joining of Aerospace Metallic Materials.
1986-07-01
uniaxial tensile test with varying temperature and cyclic loading. This simple test problem exercises many aspects of the phenomena. ... evidence, the second configuration appears more unfavorable. 5.3. Lessons on the dynamics of melt pools: in practice it turned out that the ... a scanning system for fast and exact alignment of the EB-gun is used. In a fixture the cleaned detail parts are positioned exactly and clamped for welding. At
... Department Summary Tables, table 27 [PDF – 676 KB]. Mortality: number of deaths: 51,811; deaths per 100, ... States, 2015. Centers for Disease Control and Prevention: Pneumonia ...
Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Qiheng; Zhang, Jianlin
2011-11-01
Efficient algorithms for blind image deconvolution and their high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are: modularization of the structure for good implementation feasibility, reduction of the data computation and of the dependency on 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system exceeds 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time application.
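The segmented look-up-table idea for accelerating the power operation can be sketched as follows. This is a minimal illustration of the general technique, not the authors' DSP implementation; the segment count and exponent here are arbitrary choices:

```python
def build_pow_lut(exponent, n_segments=256):
    """Precompute x**exponent at segment boundaries on [0, 1]."""
    return [(i / n_segments) ** exponent for i in range(n_segments + 1)]

def lut_pow(x, lut):
    """Approximate x**exponent for x in [0, 1] by linear interpolation
    between the two nearest precomputed segment boundaries."""
    n = len(lut) - 1
    pos = x * n
    i = min(int(pos), n - 1)   # segment index
    frac = pos - i             # fractional position inside the segment
    return lut[i] + frac * (lut[i + 1] - lut[i])

lut = build_pow_lut(0.5)       # table for the square root
approx = lut_pow(0.3, lut)
exact = 0.3 ** 0.5
```

The table trades a small, bounded interpolation error for replacing each transcendental power evaluation with one multiply-add, which is the kind of saving that matters in a fixed-point DSP pipeline.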
... Department Summary Tables, table 11 [PDF – 676 KB]. Mortality: number of suicide deaths: 42,773; suicide deaths ... Centers for Disease Control and Prevention: Mental ...
On the multiple depots vehicle routing problem with heterogeneous fleet capacity and velocity
NASA Astrophysics Data System (ADS)
Hanum, F.; Hartono, A. P.; Bakhtiar, T.
2018-03-01
This manuscript concerns the optimization problem arising in route determination for product distribution. The problem is formulated as a multiple-depot, time-windowed vehicle routing problem with heterogeneous fleet capacity and velocity. The model includes constraints such as route continuity, multiple-depot availability, and serving time, in addition to generic constraints. To deal with the unique feature of heterogeneous velocity, we generate a number of velocity profiles along the road segments, which are then converted into traveling-time tables. An illustrative example of rice distribution among villages by a bureau of logistics is provided. An exact approach is utilized to determine the optimal solution in terms of vehicle routes and starting times of service.
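The conversion of velocity profiles into traveling-time tables can be sketched roughly like this; the arc names and profile values are hypothetical, and the paper's actual model embeds these times in a full VRP formulation:

```python
def travel_time(segment_lengths_km, segment_speeds_kmh):
    """Convert a velocity profile along road segments into total hours:
    each segment contributes length / speed."""
    return sum(l / v for l, v in zip(segment_lengths_km, segment_speeds_kmh))

def build_time_table(arcs):
    """arcs: {(origin, dest): (lengths_km, speeds_kmh)} -> {(origin, dest): hours}"""
    return {arc: travel_time(lengths, speeds)
            for arc, (lengths, speeds) in arcs.items()}

# Hypothetical network: two arcs with piecewise-constant speed profiles
arcs = {
    ("depot", "village_A"): ([10.0, 5.0], [60.0, 30.0]),  # 10 km @ 60 km/h, 5 km @ 30 km/h
    ("village_A", "village_B"): ([12.0], [40.0]),
}
table = build_time_table(arcs)
```

Once the heterogeneous velocities are folded into such a table, the routing model only needs per-arc times, which is what makes an exact solution approach tractable.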
FastStats: Self-Inflicted Injury/Suicide
NCHS: Suicide and Self-Inflicted Injury. ... Tables, table 17 [PDF – 676 KB]. Mortality, all suicides: number of deaths: 44,193; deaths per 100, ...
A New Method for Generating Probability Tables in the Unresolved Resonance Region
Holcomb, Andrew M.; Leal, Luiz C.; Rahnema, Farzad; ...
2017-04-18
A new method for constructing probability tables in the unresolved resonance region (URR) has been developed. This new methodology is an extensive modification of the single-level Breit-Wigner (SLBW) pseudo-resonance pair sequence method commonly used to generate probability tables in the URR. The new method uses a Monte Carlo process to generate many pseudo-resonance sequences by first sampling the average resonance parameter data in the URR and then converting the sampled resonance parameters to the more robust R-matrix limited (RML) format. Furthermore, for each sampled set of pseudo-resonance sequences, the temperature-dependent cross sections are reconstructed on a small grid around the energy of reference using the Reich-Moore formalism and the Leal-Hwang Doppler broadening methodology. We then use the effective cross sections calculated at the energies of reference to construct probability tables in the URR. The RML cross-section reconstruction algorithm has been rigorously tested for a variety of isotopes, including 16O, 19F, 35Cl, 56Fe, 63Cu, and 65Cu. The new URR method also produced normalized cross-section factor probability tables for 238U that were found to be in agreement with current standards. The modified 238U probability tables were shown to produce results in excellent agreement with several standard benchmarks, including the IEU-MET-FAST-007 (BIG TEN), IEU-MET-FAST-003, and IEU-COMP-FAST-004 benchmarks.
NASA Astrophysics Data System (ADS)
Murguía, Gabriela; Raya, Alfredo
2010-10-01
We derive the exact Foldy-Wouthuysen transformation for Dirac fermions in a time-independent external electromagnetic field in the basis of the Ritus eigenfunctions, namely the eigenfunctions of the operator (γ·Π)^2, with Π_μ = p_μ - eA_μ. On this basis, the transformation acquires a free form involving the dynamical quantum numbers induced by the field.
Kovalev, S; Green, B; Golz, T; Maehrlein, S; Stojanovic, N; Fisher, A S; Kampfrath, T; Gensch, M
2017-03-01
Understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.
OPTRAN- OPTIMAL LOW THRUST ORBIT TRANSFERS
NASA Technical Reports Server (NTRS)
Breakwell, J. V.
1994-01-01
OPTRAN is a collection of programs that solve the problem of optimal low thrust orbit transfers between non-coplanar circular orbits for spacecraft with chemical propulsion systems. The programs are set up to find Hohmann-type solutions, with burns near the perigee and apogee of the transfer orbit. They will solve both fairly long burn-arc transfers and "divided-burn" transfers. Program modeling includes a spherical earth gravity model and propulsion system models for either constant thrust or constant acceleration. The solutions obtained are optimal with respect to fuel use: i.e., final mass of the spacecraft is maximized with respect to the controls. The controls are the direction of thrust and the thrust on/off times. Two basic types of programs are provided in OPTRAN. The first type is for "exact solutions" which results in complete, exact time-histories. The exact spacecraft position, velocity, and optimal thrust direction are given throughout the maneuver, as are the optimal thrust switch points, the transfer time, and the fuel costs. Exact solution programs are provided in two versions for non-coplanar transfers and in a fast version for coplanar transfers. The second basic type is for "approximate solutions" which results in approximate information on the transfer time and fuel costs. The approximate solution is used to estimate initial conditions for the exact solution. It can be used in divided-burn transfers to find the best number of burns with respect to time. The approximate solution is useful by itself in relatively efficient, short burn-arc transfers. These programs are written in FORTRAN 77 for batch execution and have been implemented on a DEC VAX series computer with the largest program having a central memory requirement of approximately 54K of 8 bit bytes. The OPTRAN programs were developed in 1983.
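As a reference point for the class of transfers OPTRAN optimizes, the impulsive Hohmann transfer between circular coplanar orbits has a closed-form delta-v. This is textbook astrodynamics, not OPTRAN's low-thrust or divided-burn solution:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, spherical-Earth gravitational parameter

def hohmann_dv(r1, r2):
    """Delta-v (m/s) of the two impulsive burns of a Hohmann transfer
    between circular coplanar orbits of radii r1 and r2 (meters)."""
    a = (r1 + r2) / 2.0                       # transfer-ellipse semi-major axis
    v_circ1 = math.sqrt(MU_EARTH / r1)        # circular speed at r1
    v_circ2 = math.sqrt(MU_EARTH / r2)        # circular speed at r2
    # vis-viva speed on the transfer ellipse at each radius
    v_peri = math.sqrt(MU_EARTH * (2.0 / r1 - 1.0 / a))
    v_apo = math.sqrt(MU_EARTH * (2.0 / r2 - 1.0 / a))
    return abs(v_peri - v_circ1), abs(v_circ2 - v_apo)

dv1, dv2 = hohmann_dv(6678e3, 42164e3)  # ~300 km LEO to geostationary radius
```

For this LEO-to-GEO case the two burns total roughly 3.9 km/s, the kind of figure OPTRAN's approximate solution would first estimate before the exact time-history is computed.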
Daring to venture beyond the bench.
Wylie, Catarina
2011-02-15
Few people are exactly where they thought they would be 20 years into their careers. Careers, like life, are full of twists and turns. Brains and an inviolable work ethic are table stakes in virtually every profession involving science. Beyond these basics, however, each of us brings our personal blend of talents and skills to creating a career. Some of us know exactly what we want and chart a direct course. Others are masters at seizing opportunities. Still others go with the natural flow of events. Since success and security can only come with time, begin by choosing your adventure. Follow your interests and passions. Radically rewrite your resume, network, take people to coffee, get out of your comfort zone. Sticking to the same things you've already tried simply means you travel the same path over and over. Stepping out into the unknown can be scary, but it can also lead to unexpected places. Take a look.
Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y
1991-07-01
Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program, written in SAS/IML, described in this paper uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
How Fast Do Trees Grow? Using Tables and Graphs to Explore Slope
ERIC Educational Resources Information Center
Joram, Elana; Oleson, Vicki
2007-01-01
This article describes a lesson unit in which students constructed tables and graphs to represent the growth of different trees. Students then compared the graphs to develop an understanding of slope.
Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors
2001-03-01
LIST OF TABLES (front-matter excerpt): Table 1, Heats of Vaporization Parameter for Two-liner Phase Transformation - Complete Liner Sublimation and/or Combined Liner ... Acronyms: 1-D, one-dimensional; 2-D, two-dimensional; ALE3D, Arbitrary-Lagrange-Eulerian (3-D) computer code; ALEGRA, 3-D Arbitrary-Lagrange-Eulerian computer code for ... case-liner bond areas and in the grain inner bore to explore the pre-ignition and ignition phases, as well as burning evolution in rocket motor fast
Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.
Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F
2011-03-01
This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied to the computation of geometric moments of homogeneous objects. This advantage and this restriction are shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with computational complexity N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
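The boundary-based idea behind such algorithms can be illustrated with the lowest-order moments: for a closed, outward-oriented triangle mesh, summing signed tetrahedra against the origin gives the exact volume and first moments. This is a generic divergence-theorem sketch, not the paper's higher-order algorithm:

```python
def mesh_moments(triangles):
    """Exact volume and first geometric moments (integrals of x, y, z)
    of a closed, outward-oriented triangle mesh, via signed tetrahedra
    formed by each triangle and the origin."""
    vol = 0.0
    m = [0.0, 0.0, 0.0]
    for (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) in triangles:
        det = (x0 * (y1 * z2 - z1 * y2)
               - y0 * (x1 * z2 - z1 * x2)
               + z0 * (x1 * y2 - y1 * x2))   # 6 * signed tetrahedron volume
        vol += det / 6.0
        # the signed tetra's centroid is (v0 + v1 + v2 + origin) / 4
        m[0] += det / 6.0 * (x0 + x1 + x2) / 4.0
        m[1] += det / 6.0 * (y0 + y1 + y2) / 4.0
        m[2] += det / 6.0 * (z0 + z1 + z2) / 4.0
    return vol, m

# Unit right tetrahedron with all faces oriented outward
tet = [
    ((0, 0, 0), (0, 1, 0), (1, 0, 0)),
    ((0, 0, 0), (1, 0, 0), (0, 0, 1)),
    ((0, 0, 0), (0, 0, 1), (0, 1, 0)),
    ((1, 0, 0), (0, 1, 0), (0, 0, 1)),
]
vol, m = mesh_moments(tet)
centroid = [mi / vol for mi in m]
```

Because every term is a polynomial in the vertex coordinates, the result is exact for homogeneous objects, which is precisely the restriction the abstract notes for boundary-based methods.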
Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models.
Drugowitsch, Jan
2016-02-11
We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead leverages known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation.
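For contrast, the slow baseline the paper improves on, simulating the stochastic differential equation in discrete time steps, looks roughly like this (a minimal sketch; the drift, bound, and step size are illustrative):

```python
import random

def simulate_choice_rt(drift, bound, dt=1e-3, sigma=1.0, rng=random):
    """Euler-Maruyama simulation of a Wiener diffusion started at 0
    until it crosses +bound or -bound; returns (choice, reaction_time)."""
    x, t = 0.0, 0.0
    step_sd = sigma * dt ** 0.5          # noise scales with sqrt(dt)
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (1 if x >= bound else 0), t

rng = random.Random(42)
samples = [simulate_choice_rt(0.0, 1.0, rng=rng) for _ in range(500)]
upper_fraction = sum(c for c, _ in samples) / len(samples)
```

With zero drift the process should hit either bound equally often; the discrete step also overshoots the boundary slightly, which is one source of the bias the exact density-based sampler avoids.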
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
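The core of such binary-coded scanning can be sketched as follows, using a plain (unweighted) binary code for simplicity; the paper's method uses a weighted code and real camera images, so this only illustrates the log2(N)-pattern encode/decode idea:

```python
def encode_patterns(n_positions):
    """Build a stack of binary patterns: pattern k lights input position p
    iff bit k of p is 1, so log2(N) patterns identify N positions."""
    n_bits = max(1, (n_positions - 1).bit_length())
    return [[(p >> k) & 1 for p in range(n_positions)] for k in range(n_bits)]

def decode_position(bits_seen):
    """Recover the input position index from the on/off sequence one
    output fiber observed across the pattern stack."""
    return sum(b << k for k, b in enumerate(bits_seen))

patterns = encode_patterns(16)   # 4 patterns suffice for 16 input positions
# simulate what an output fiber mapped to input position 11 observes
observed = [pat[11] for pat in patterns]
recovered = decode_position(observed)
```

Repeating this per output fiber fills the in-out correspondence (the Reconstruction Table) in O(log N) captures instead of the O(N) captures needed by spot or line scanning.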
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In the era of high-throughput DNA sequencing, performance-capable analysis of DNA sequences is of high importance, and computer-supported DNA analysis remains a time-intensive task. In this paper we explore the potential of a new in-memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL, to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures for exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, the performance analysis between HANA and MySQL was made by comparing the execution times of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which suggests high potential in the new in-memory concepts, leading to further developments of DNA analysis procedures in the future.
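Independent of the database platform, exact read matching against a reference can be illustrated with a simple k-mer index. This sketch is not BWA (which uses the Burrows-Wheeler transform), and the sequences are toy data:

```python
from collections import defaultdict

def build_index(reference, k):
    """Map every length-k substring of the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def exact_align(read, reference, index, k):
    """Exact-match a read: seed with its first k-mer, then verify
    the full read at each candidate position."""
    return [pos for pos in index.get(read[:k], ())
            if reference[pos:pos + len(read)] == read]

ref = "ACGTACGTTAGC"
idx = build_index(ref, 4)
hits = exact_align("ACGTTAGC", ref, idx, 4)
```

Expressed as SQL, the index lookup becomes a join against a precomputed k-mer table, which is the shape of workload an in-memory column store can execute very quickly.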
NASA Astrophysics Data System (ADS)
An, Hao; Wang, Changhong; Fidan, Baris
2017-10-01
This paper presents a backstepping procedure to design an adaptive controller for the air-breathing hypersonic flight vehicle (AHFV) subject to external disturbances and actuator saturations. In each step, a sliding mode exact disturbance observer (SMEDO) is exploited to exactly estimate the lumped disturbance in finite time. Specific dynamics are introduced to handle the possible actuator saturations. Based on the SMEDO and the introduced dynamics, an adaptive control law is designed, with consideration of the "explosion of complexity" in backstepping design. The developed controller is equipped with fast disturbance rejection and great capability to accommodate the saturated actuators, which also leads to a wider application scope. A simulation study is provided to show the effectiveness and superiority of the proposed controller.
Children, adolescents, obesity, and the media.
Strasburger, Victor C
2011-07-01
Obesity has become a worldwide public health problem. Considerable research has shown that the media contribute to the development of child and adolescent obesity, although the exact mechanism remains unclear. Screen time may displace more active pursuits, advertising of junk food and fast food increases children's requests for those particular foods and products, snacking increases while watching TV or movies, and late-night screen time may interfere with getting adequate amounts of sleep, which is a known risk factor for obesity. Sufficient evidence exists to warrant a ban on junk-food or fast-food advertising in children's TV programming. Pediatricians need to ask 2 questions about media use at every well-child or well-adolescent visit: (1) How much screen time is being spent per day? and (2) Is there a TV set or Internet connection in the child's bedroom? Copyright © 2011 by the American Academy of Pediatrics.
Numbers of center points appropriate to blocked response surface experiments
NASA Technical Reports Server (NTRS)
Holms, A. G.
1979-01-01
Tables are given for the numbers of center points to be used with blocked sequential designs of composite response surface experiments as used in empirical optimum seeking. The star-point radii for exact orthogonal blocking are presented. The center-point options varied from a lower limit of one to an upper limit equal to the numbers proposed by Box and Hunter for approximate rotatability and uniform variance, and exact orthogonal blocking. Some operating characteristics of the proposed options are described.
W-tree indexing for fast visual word generation.
Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao
2013-03-01
The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is the visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word for a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a certain index structure (e.g., a KD-tree and a K-means tree), in order to re-direct the searching path to be close to its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the fast library for approximate nearest neighbors and the random KD-trees on the Oxford data set. Thorough experimental results suggest the efficiency and effectiveness of the new scheme.
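Building a per-word co-occurrence table of the kind described can be sketched as follows; the image/word data are toy values, and the normalization here is a simplification of the paper's probabilistic weighting:

```python
from collections import Counter, defaultdict
from itertools import combinations

def cooccurrence_table(images):
    """For each visual word, count how often every other word appears
    in the same image, then normalize the counts to probabilities."""
    counts = defaultdict(Counter)
    for words in images:
        for a, b in combinations(set(words), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    table = {}
    for word, c in counts.items():
        total = sum(c.values())
        table[word] = {other: n / total for other, n in c.items()}
    return table

# Toy dataset: each inner list is the set of visual words seen in one image
images = [[1, 2, 3], [1, 2], [2, 3], [1, 3]]
table = cooccurrence_table(images)
```

In the paper's scheme, such per-word probabilities are then attached to tree nodes so the search descends toward likely co-occurring words, cutting the number of backtrackings needed.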
Tables for pressure of air on coming to rest from various speeds
NASA Technical Reports Server (NTRS)
Zahm, A F; Louden, F A
1930-01-01
In Technical Report no. 247 of the National Advisory Committee for Aeronautics theoretical formulas are given from which was computed a table for the pressure of air on coming to rest from various speeds, such as those of aircraft and propeller blades. In that report, the table gave incompressible and adiabatic stop pressures of air for even-speed intervals in miles per hour and for some even-speed intervals in knots per hour. Table II of the present report extends the above-mentioned table by including the stop pressures of air for even-speed intervals in miles per hour, feet per second, knots per hour, kilometers per hour, and meters per second. The pressure values in table II are also more exact than values given in the previous table. To furnish the aeronautical engineer with ready numerical formulas for finding the pressure of air on coming to rest, table I has been derived for the standard values specified below it. This table first presents the theoretical pressure-speed formulas and their working forms in C. G. S. Units as given in NACA Technical Report No. 247, then furnishes additional working formulas for several special units of speed. (author)
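The tabulated quantities can be reproduced from the standard formulas: the incompressible stop pressure is q = ρv²/2, and the adiabatic stop pressure follows from the subsonic isentropic pitot relation. A sketch using modern SI standard-atmosphere constants, not the report's exact standard values:

```python
import math

RHO0 = 1.225      # kg/m^3, sea-level standard air density (modern SI value)
P0 = 101325.0     # Pa, sea-level standard static pressure
GAMMA = 1.4       # ratio of specific heats for air
A0 = math.sqrt(GAMMA * P0 / RHO0)   # speed of sound, ~340 m/s

def incompressible_stop_pressure(v):
    """Dynamic pressure q = 1/2 * rho * v^2, in Pa, for speed v in m/s."""
    return 0.5 * RHO0 * v * v

def adiabatic_stop_pressure(v):
    """Isentropic pitot pressure rise p0 - p (Pa) for subsonic speed v:
    p0/p = (1 + (gamma - 1)/2 * M^2) ** (gamma / (gamma - 1))."""
    mach = v / A0
    return P0 * ((1.0 + (GAMMA - 1.0) / 2.0 * mach ** 2)
                 ** (GAMMA / (GAMMA - 1.0)) - 1.0)

q = incompressible_stop_pressure(50.0)   # speed typical of early aircraft
qc = adiabatic_stop_pressure(50.0)
```

At low Mach number the two values nearly coincide, with the adiabatic value slightly larger, which is why the report tabulates both and why the difference matters for propeller-tip speeds.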
40 CFR 211.207 - Computation of the noise -reduction rating (NRR).
Code of Federal Regulations, 2012 CFR
2012-07-01
... mathematics to determine the combined value of protected ear levels (Step #8) which is used in Step #9 to exactly derive the NRR; or use the following table as a substitute for logarithmic mathematics to...
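The "logarithmic mathematics" the rule refers to is the usual decibel combination of sound levels. A minimal sketch of that computation (illustrative only, not the regulation's full step-by-step NRR worksheet or its substitute table):

```python
import math

def combine_levels(levels_db):
    """Logarithmically combine sound levels in dB:
    L_total = 10 * log10(sum over levels of 10 ** (L / 10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

combined = combine_levels([80.0, 80.0])   # two equal 80 dB bands
```

Two equal levels combine to about 3 dB above either one, which is the kind of result the regulation's substitute table lets a user read off without doing logarithms by hand.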
Josephson A/D Converter Development.
1981-10-01
by Zappe and A. Landman [20]. They conclude that the simple model of the Josephson effect is applicable up to frequencies at least as high as 300 GHz. B. Time-Domain Experiments. The early high-frequency experiments with Josephson devices suggested their use as very fast logic switches ... exactly as for the phenomenological model. The tunneling and capacitive current paths dominate the circuit at high frequencies; the current is the sum of two
Numerical solution of the exact cavity equations of motion for an unstable optical resonator.
Bowers, M S; Moody, S E
1990-09-20
We solve numerically, we believe for the first time, the exact cavity equations of motion for a realistic unstable resonator with a simple gain saturation model. The cavity equations of motion, first formulated by Siegman ["Exact Cavity Equations for Lasers with Large Output Coupling," Appl. Phys. Lett. 36, 412-414 (1980)], and which we term the dynamic coupled modes (DCM) method of solution, solve for the full 3-D time dependent electric field inside the optical cavity by expanding the field in terms of the actual diffractive transverse eigenmodes of the bare (gain free) cavity with time varying coefficients. The spatially varying gain serves to couple the bare cavity transverse modes and to scatter power from mode to mode. We show that the DCM method numerically converges with respect to the number of eigenmodes in the basis set. The intracavity intensity in the numerical example shown reaches a steady state, and this steady state distribution is compared with that computed from the traditional Fox and Li approach using a fast Fourier transform propagation algorithm. The output wavefronts from both methods are quite similar, and the computed output powers agree to within 10%. The usefulness and advantages of using this method for predicting the output of a laser, especially pulsed lasers used for coherent detection, are discussed.
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
Fast Track Teaching: Beginning the Experiment in Accelerated Leadership Development
ERIC Educational Resources Information Center
Churches, Richard; Hutchinson, Geraldine; Jones, Jeff
2009-01-01
This article provides an overview of the development of the Fast Track teaching programme and personalised nature of the training and support that has been delivered. Fast Track teacher promotion rates are compared to national statistics demonstrating significant progression for certain groups, particularly women. (Contains 3 tables and 3 figures.)
Senington, Billy; Lee, Raymond Y; Williams, Jonathan Mark
2018-03-09
Fast bowlers display a high risk of lower back injury and pain. Studies report factors that may increase this risk; however, the exact mechanisms remain unclear. This review provides a contemporary analysis of the literature, up to April 2016, regarding fast bowling, spinal kinematics, ground reaction force (GRF), lower back pain (LBP) and pathology. Key terms including biomechanics, bowling, spine and injury were searched within MEDLINE, Google Scholar, SPORTDiscus, Science Citation Index, OAIster, CINAHL, Academic Search Complete, Science Direct and Scopus. Following application of inclusion criteria, 56 studies (reduced from 140) were appraised for quality and pooled for further analysis. A twelve times greater risk of lumbar injury was reported in bowlers displaying excessive shoulder counter-rotation (SCR); however, SCR is a surrogate measure that may not describe actual spinal movement. Little is known about LBP specifically. Weighted averages of 5.8 ± 1.3 times body weight (BW) vertically and 3.2 ± 1.1 BW horizontally were calculated for peak GRF during fast bowling. No quantitative synthesis of kinematic data was possible due to heterogeneity of reported results. Fast bowling is highly injurious, especially with excessive SCR. Studies adopted similar methodologies, constrained to laboratory settings. Future studies should focus on methods to determine biomechanics during live play.
Measured Thermal and Fast Neutron Fluence Rates for ATF-1 Holders During ATR Cycle 157D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Larry Don; Miller, David Torbet
This report contains the thermal (2200 m/s) and fast (E>1MeV) neutron fluence rate data for the ATF-1 holders located in core for ATR Cycle 157D which were measured by the Radiation Measurements Laboratory (RML) as requested by the Power Reactor Programs (ATR Experiments) Radiation Measurements Work Order. This report contains measurements of the fluence rates corresponding to the particular elevations relative to the 80-ft. core elevation. The data in this report consist of (1) a table of the ATR power history and distribution, (2) a hard copy listing of all thermal and fast neutron fluence rates, and (3) plots of both the thermal and fast neutron fluence rates. The fluence rates reported are for the average power levels given in the table of power history and distribution.
Simpson, Matthew J; Baker, Ruth E
2015-09-07
Unlike standard applications of transport theory, the transport of molecules and cells during embryonic development often takes place within growing multidimensional tissues. In this work, we consider a model of diffusion on uniformly growing lines, disks, and spheres. An exact solution of the partial differential equation governing the diffusion of a population of individuals on the growing domain is derived. Using this solution, we study the survival probability, S(t). For the standard non-growing case with an absorbing boundary, we observe that S(t) decays to zero in the long time limit. In contrast, when the domain grows linearly or exponentially with time, we show that S(t) decays to a constant, positive value, indicating that a proportion of the diffusing substance remains on the growing domain indefinitely. Comparing S(t) for diffusion on lines, disks, and spheres indicates that there are minimal differences in S(t) in the limit of zero growth and minimal differences in S(t) in the limit of fast growth. In contrast, for intermediate growth rates, we observe modest differences in S(t) between different geometries. These differences can be quantified by evaluating the exact expressions derived and presented here.
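The qualitative behavior described, S(t) decaying to a positive constant on a growing domain but to zero on a static one, can be checked with a crude Monte Carlo random walk. The domain size, diffusivity, growth rate, and time step below are assumed for illustration and are not the paper's values:

```python
import random

def survival(growth_rate, t_max=20.0, dt=0.02, n=500, L0=1.0, D=0.1, seed=1):
    """Monte Carlo estimate of S(t_max): the fraction of Brownian walkers not yet
    absorbed at the ends of a uniformly growing interval [0, L(t)], with
    L(t) = L0 * (1 + growth_rate * t). Material points are advected with the
    growth (positions rescale with the domain), as in the uniform-growth model."""
    rng = random.Random(seed)
    alive = [0.5 * L0 for _ in range(n)]          # all walkers start at the centre
    t = 0.0
    while t < t_max:
        L_now = L0 * (1.0 + growth_rate * t)
        L_next = L0 * (1.0 + growth_rate * (t + dt))
        sigma = (2.0 * D * dt) ** 0.5             # diffusion step size
        survivors = []
        for x in alive:
            x = x * L_next / L_now                # advection by uniform growth
            x += rng.gauss(0.0, sigma)            # diffusion
            if 0.0 < x < L_next:                  # absorbing ends
                survivors.append(x)
        alive = survivors
        t += dt
    return len(alive) / n

s_static = survival(0.0)    # no growth: essentially everything is absorbed
s_growing = survival(0.5)   # linear growth: a positive fraction survives
print(s_static, s_growing)
```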
Jia, Chen
2017-09-01
Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
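The core idea, folding indirect transitions through a fast state into effective direct rates, can be sketched for a small finite chain. The paper's construction covers infinite state spaces; the three-state chain and rates below are made-up illustrations:

```python
def eliminate_fast_state(Q, f):
    """Remove fast state f from a rate matrix Q (off-diagonal rates only):
    each indirect transition i -> f -> j is folded into the direct rate
    i -> j with weight = the probability that f jumps to j when it leaves.
    A minimal sketch of fast-state removal, not the paper's full construction."""
    n = len(Q)
    total_f = sum(Q[f][k] for k in range(n) if k != f)  # total leaving rate of f
    keep = [i for i in range(n) if i != f]
    reduced = [[0.0] * len(keep) for _ in keep]
    for a, i in enumerate(keep):
        for b, j in enumerate(keep):
            if i != j:
                reduced[a][b] = Q[i][j] + Q[i][f] * Q[f][j] / total_f
    return reduced

# 3 states: 0 -> 1 (fast intermediate) -> 0 or 2.
# State 1 leaves at rates 1000 (back to 0) and 3000 (on to 2).
Q = [[0.0,    2.0,    0.0],
     [1000.0, 0.0, 3000.0],
     [0.0,    0.0,    0.0]]
R = eliminate_fast_state(Q, 1)
print(R)  # effective 0 -> 2 rate is 2.0 * 3000/4000 = 1.5
```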
Storm-time Convection Dynamics Viewed from Optical Auroras: from Streamer to Patchy Pulsating Aurora
NASA Astrophysics Data System (ADS)
Yang, B.; Donovan, E.; Liang, J.; Grono, E.
2016-12-01
In a series of statistical and event studies we have demonstrated that the motion of patches in regions of Patchy Pulsating Aurora (PPA) is very close to, if not exactly, the convection motion. Thus, 2D maps of PPA motion provide the opportunity to remote-sense magnetospheric convection with relatively high space and time resolution, subject to uncertainties associated with mapping between the ionosphere and magnetosphere. In this study, we use THEMIS ASI aurora observations (streamers and patchy pulsating aurora) combined with SuperDARN convection measurements, Swarm ion drift velocity measurements, and RBSP electric field measurements to explore the convection dynamics during storm time. From 0500 UT to 0600 UT on March 19, 2015, convection observations across 5 magnetic local time (MLT) sectors inferred from the motion of PPA patches and SuperDARN measurements show that a westward SAPS (Subauroral Polarization Streams) enhancement occurs after an auroral streamer. This suggests that plasma sheet fast flows can affect the inner magnetospheric convection, and possibly trigger very fast flows in the inner magnetosphere.
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
Wide use of automatic process control systems, and of high-performance systems containing a number of computers (processors), creates opportunities for high-quality, high-speed production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and the processing of big data arrays all require a high level of productivity and, at the same time, minimal time for data handling and for obtaining results. To achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are brought to light, and some considerations on their use when developing software for automatic process control systems are given.
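As one concrete instance of the multi-machine scheduling techniques surveyed, here is a minimal sketch of longest-processing-time (LPT) list scheduling on identical machines; this is a classic heuristic chosen for illustration, not necessarily one of the paper's methods:

```python
import heapq

def lpt_schedule(jobs, m):
    """Longest-Processing-Time list scheduling: sort jobs in decreasing order,
    then repeatedly assign the next job to the currently least-loaded machine.
    Returns the makespan (maximum machine load)."""
    machines = [(0, i, []) for i in range(m)]   # (load, machine id, assigned jobs)
    heapq.heapify(machines)
    for job in sorted(jobs, reverse=True):
        load, i, assigned = heapq.heappop(machines)
        heapq.heappush(machines, (load + job, i, assigned + [job]))
    return max(load for load, _, _ in machines)

print(lpt_schedule([7, 5, 4, 3, 3, 2], 2))  # → 12
```

LPT is attractive in control settings because it is fast (O(n log n)) and has a known worst-case bound of 4/3 times the optimal makespan.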
An Algorithm for the Calculation of Exact Term Discrimination Values.
ERIC Educational Resources Information Center
Willett, Peter
1985-01-01
Reports an algorithm for the calculation of term discrimination values that is sufficiently fast in operation to permit the use of exact values. Evidence is presented to show that the relationship between term discrimination and term frequency is crucially dependent upon the type of inter-document similarity measure used for the calculation of discrimination values. (13…
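A hedged sketch of the underlying quantity, following the standard space-density definition of term discrimination; the toy documents and the choice of cosine similarity are assumptions for illustration, not Willett's exact algorithm:

```python
import math

def density(docs):
    """Average pairwise cosine similarity between term-frequency vectors
    (documents as {term: frequency} dicts)."""
    def cos(a, b):
        num = sum(a[t] * b.get(t, 0) for t in a)
        den = math.sqrt(sum(v * v for v in a.values())) * \
              math.sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0
    pairs = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))]
    return sum(cos(docs[i], docs[j]) for i, j in pairs) / len(pairs)

def discrimination_value(docs, term):
    """DV(term) = density with the term removed minus the baseline density.
    Good discriminators have positive DV (removing them packs the space);
    broad, ubiquitous terms have negative DV."""
    removed = [{t: f for t, f in d.items() if t != term} for d in docs]
    return density(removed) - density(docs)

docs = [{"fast": 2, "search": 1}, {"fast": 1, "index": 2}, {"fast": 1, "table": 3}]
# "fast" occurs in every document, so it is a poor discriminator: removing it
# spreads the documents apart and its DV is negative.
print(discrimination_value(docs, "fast") < 0)  # → True
```

The cost issue the abstract addresses is visible here: a naive recomputation of density for every term is quadratic in the number of documents per term, which is why a fast exact algorithm matters.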
Monasson, Remi; Cocco, Simona
2011-10-01
We present two Bayesian procedures to infer the interactions and external currents in an assembly of stochastic integrate-and-fire neurons from the recording of their spiking activity. The first procedure is based on the exact calculation of the most likely time courses of the neuron membrane potentials conditioned by the recorded spikes, and is exact for a vanishing noise variance and for an instantaneous synaptic integration. The second procedure takes into account the presence of fluctuations around the most likely time courses of the potentials, and can deal with moderate noise levels. The running time of both procedures is proportional to the number S of spikes multiplied by the squared number N of neurons. The algorithms are validated on synthetic data generated by networks with known couplings and currents. We also reanalyze previously published recordings of the activity of the salamander retina (including from 32 to 40 neurons, and from 65,000 to 170,000 spikes). We study the dependence of the inferred interactions on the membrane leaking time; the differences and similarities with the classical cross-correlation analysis are discussed.
BeamDyn: a high-fidelity wind turbine blade solver in the FAST modular framework
Wang, Qi; Sprague, Michael A.; Jonkman, Jason; ...
2017-03-14
Here, this paper presents a numerical implementation of the geometrically exact beam theory based on the Legendre-spectral-finite-element (LSFE) method. The displacement-based geometrically exact beam theory is presented, and the special treatment of three-dimensional rotation parameters is reviewed. An LSFE is a high-order finite element with nodes located at the Gauss-Legendre-Lobatto points. These elements can be an order of magnitude more computationally efficient than low-order finite elements for a given accuracy level. The new module, BeamDyn, is implemented in the FAST modularization framework for dynamic simulation of highly flexible composite-material wind turbine blades within the FAST aeroelastic engineering model. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples are provided to validate BeamDyn and examine the LSFE performance as well as the coupling algorithm in the FAST modularization framework. BeamDyn can also be used as a stand-alone high-fidelity beam tool.
Investigation of multiple scattering effects in aerosols
NASA Technical Reports Server (NTRS)
Deepak, A.
1980-01-01
The results are presented of investigations on the various aspects of multiple scattering effects on visible and infrared laser beams transversing dense fog oil aerosols contained in a chamber (4' x 4' x 9'). The report briefly describes: (1) the experimental details and measurements; (2) analytical representation of the aerosol size distribution data by two analytical models (the regularized power law distribution and the inverse modified gamma distribution); (3) retrieval of aerosol size distributions from multispectral optical depth measurements by two methods (the two and three parameter fast table search methods and the nonlinear least squares method); (4) modeling of the effects of aerosol microphysical (coagulation and evaporation) and dynamical processes (gravitational settling) on the temporal behavior of aerosol size distribution, and hence on the extinction of four laser beams with wavelengths 0.44, 0.6328, 1.15, and 3.39 micrometers; and (5) the exact and approximate formulations for four methods for computing the effects of multiple scattering on the transmittance of laser beams in dense aerosols, all of which are based on the solution of the radiative transfer equation under the small angle approximation.
Tensor network method for reversible classical computation
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.
2018-03-01
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
Equilibration in finite Bose systems
NASA Astrophysics Data System (ADS)
Wolschin, Georg
2018-06-01
The equilibration of a finite Bose system is modeled using a gradient expansion of the collision integral that leads to a nonlinear transport equation. For constant transport coefficients, it is solved in closed form through a nonlinear transformation. Using schematic initial conditions, the exact solution and the equilibration time are derived and compared to the corresponding case for fermions. Applications to the fast equilibration of the gluon system created initially in relativistic heavy-ion collisions, and to cold quantum gases are envisaged.
RadVel: General toolkit for modeling Radial Velocities
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
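A minimal sketch of the kind of Kepler's-equation solver such a package implements in C: a plain Newton iteration. The starting guess and tolerance are illustrative choices, not RadVel's actual code:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E
    (radians) by Newton iteration. M is the mean anomaly, e the eccentricity."""
    E = M if e < 0.8 else math.pi      # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)
print(E, E - 0.3 * math.sin(E))  # the second value should recover M = 1.0
```

Solving this equation is the inner loop of any RV model evaluation, once per epoch per MCMC step, which is why RadVel pushes it down into C.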
NASA Technical Reports Server (NTRS)
Majda, George
1986-01-01
One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.
Zhang, Zhenbin; Dovichi, Norman J
2018-02-25
The effects of MS1 injection time, MS2 injection time, dynamic exclusion time, intensity threshold, and isolation width were investigated on the numbers of peptide and protein identifications for single-shot bottom-up proteomics analysis using CZE-MS/MS analysis of a Xenopus laevis tryptic digest. An electrokinetically pumped nanospray interface was used to couple a linear-polyacrylamide coated capillary to a Q Exactive HF mass spectrometer. A sensitive method that used a 1.4 Th isolation width, 60,000 MS2 resolution, 110 ms MS2 injection time, and a top 7 fragmentation produced the largest number of identifications when the CZE loading amount was less than 100 ng. A programmable autogain control method (pAGC) that used a 1.4 Th isolation width, 15,000 MS2 resolution, 110 ms MS2 injection time, and top 10 fragmentation produced the largest number of identifications for CZE loading amounts greater than 100 ng; 7218 unique peptides and 1653 protein groups were identified from 200 ng by using the pAGC method. The effect of mass spectrometer conditions on the performance of UPLC-MS/MS was also investigated. A fast method that used a 1.4 Th isolation width, 30,000 MS2 resolution, 45 ms MS2 injection time, and top 12 fragmentation produced the largest number of identifications for a 200 ng UPLC loading amount (6025 unique peptides and 1501 protein groups). This is the first report where the identification number for CZE surpasses that of UPLC at the 200 ng loading level. However, more peptides (11476) and protein groups (2378) were identified by using UPLC-MS/MS when the sample loading amount was increased to 2 μg with the fast method. To exploit the fast scan speed of the Q Exactive HF mass spectrometer, higher sample loading amounts are required for single-shot bottom-up proteomics analysis using CZE-MS/MS.
Scully, Jason Y; Vernez Moudon, Anne; Hurvitz, Philip M; Aggarwal, Anju; Drewnowski, Adam
2017-01-01
To assess differences between GPS and self-reported measures of location, we examined visits to fast food restaurants and supermarkets using a spatiotemporal framework. Data came from 446 participants who responded to a survey, filled out travel diaries of places visited, and wore a GPS receiver for seven consecutive days. Provided by Public Health Seattle King County, addresses from food permit data were matched to King County tax assessor parcels in a GIS. A three-step process was used to verify travel-diary reported visits using GPS records: (1) GPS records were temporally matched if their timestamps were within the time window created by the arrival and departure times reported in the travel diary; (2) the temporally matched GPS records were then spatially matched if they were located in a food establishment parcel of the same type reported in the diary; (3) the travel diary visit was then GPS-sensed if the name of food establishment in the parcel matched the one reported in the travel diary. To account for errors in reporting arrival and departure times, GPS records were temporally matched to three time windows: the exact time, +/- 10 minutes, and +/- 30 minutes. One third of the participants reported 273 visits to fast food restaurants; 88% reported 1,102 visits to supermarkets. Of these, 77.3 percent of the fast food and 78.6 percent supermarket visits were GPS-sensed using the +/-10-minute time window. At this time window, the mean travel-diary reported fast food visit duration was 14.5 minutes (SD 20.2), 1.7 minutes longer than the GPS-sensed visit. For supermarkets, the reported visit duration was 23.7 minutes (SD 18.9), 3.4 minutes longer than the GPS-sensed visit. Travel diaries provide reasonably accurate information on the locations and brand names of fast food restaurants and supermarkets participants report visiting.
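Step (1) of the verification procedure, temporal matching of GPS records against a diary window widened by a slack, can be sketched as follows; the times and window below are illustrative, not study data:

```python
from datetime import datetime, timedelta

def temporally_matched(gps_times, arrival, departure, slack_minutes=10):
    """Keep GPS records whose timestamps fall within the diary-reported visit
    window [arrival, departure], widened by +/- slack_minutes — mirroring the
    exact, 10-minute, and 30-minute windows used in the study."""
    slack = timedelta(minutes=slack_minutes)
    lo, hi = arrival - slack, departure + slack
    return [t for t in gps_times if lo <= t <= hi]

fmt = "%H:%M"
gps = [datetime.strptime(s, fmt) for s in ["12:05", "12:25", "13:00"]]
visit = temporally_matched(gps,
                           datetime.strptime("12:10", fmt),   # diary arrival
                           datetime.strptime("12:20", fmt))   # diary departure
print([t.strftime(fmt) for t in visit])  # → ['12:05', '12:25']
```

The slack matters because self-reported arrival and departure times are rounded; too narrow a window rejects genuine visits, too wide a window admits unrelated GPS fixes.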
NASA Astrophysics Data System (ADS)
Jo, Hyunho; Sim, Donggyu
2014-06-01
We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
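The EPB removal that the BsPU's bitstream-access instruction performs can be sketched in software. Per the H.264 design, an encoder inserts 0x03 after every 0x00 0x00 pair so that start codes cannot appear inside a NAL unit, and decoding drops those bytes again. This is a simple byte-loop sketch of the operation, not the BsPU's hardware method:

```python
def remove_epb(nal):
    """Strip H.264 emulation prevention bytes from a NAL unit payload:
    a 0x03 that follows two consecutive 0x00 bytes is dropped."""
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0              # the 0x03 is consumed and breaks the zero run
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

print(remove_epb(bytes([0x00, 0x00, 0x03, 0x01, 0xAB])).hex())  # → 000001ab
```

A naive software loop like this touches every byte, which is exactly the overhead the designated instruction avoids (no initial delay, repeated memory accesses, or extra buffer).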
Using Bitmap Indexing Technology for Combined Numerical and TextQueries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stockinger, Kurt; Cieslewicz, John; Wu, Kesheng
2006-10-16
In this paper, we describe a strategy of using compressed bitmap indices to speed up queries on both numerical data and text documents. By using an efficient compression algorithm, these compressed bitmap indices are compact even for indices with millions of distinct terms. Moreover, bitmap indices can be used very efficiently to answer Boolean queries over text documents involving multiple query terms. Existing inverted indices for text searches are usually inefficient for corpora with a very large number of terms as well as for queries involving a large number of hits. We demonstrate that our compressed bitmap index technology overcomes both of those shortcomings. In a performance comparison against a commonly used database system, our indices answer queries 30 times faster on average. To provide full SQL support, we integrated our indexing software, called FastBit, with MonetDB. The integrated system MonetDB/FastBit provides not only efficient searches on a single table as FastBit does, but also answers join queries efficiently. Furthermore, MonetDB/FastBit also provides a very efficient retrieval mechanism for result records.
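The core idea, one bitset per distinct term so that a Boolean query over documents reduces to a few bitwise operations, in toy form. Plain Python ints stand in for the bitmaps; this illustrates the principle only and is not FastBit's compressed on-disk format:

```python
class BitmapIndex:
    """Tiny bitmap index: for each term, an integer whose bit d is set iff
    document d contains the term. AND/OR over terms answers Boolean queries."""
    def __init__(self):
        self.bitmaps = {}

    def add(self, doc_id, terms):
        for t in terms:
            self.bitmaps[t] = self.bitmaps.get(t, 0) | (1 << doc_id)

    def query_and(self, *terms):
        """Documents containing every term (conjunctive Boolean query)."""
        result = ~0                           # all-ones mask
        for t in terms:
            result &= self.bitmaps.get(t, 0)
        return [i for i in range(result.bit_length()) if result >> i & 1]

idx = BitmapIndex()
idx.add(0, ["fast", "index"])
idx.add(1, ["fast", "query"])
idx.add(2, ["index", "query"])
print(idx.query_and("fast", "index"))  # → [0]
```

The appeal for queries with many hits is visible even here: the AND costs the same however many documents match, whereas merging inverted lists grows with the hit count.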
BeamDyn: A High-Fidelity Wind Turbine Blade Solver in the FAST Modular Framework: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Q.; Sprague, M.; Jonkman, J.
2015-01-01
BeamDyn, a Legendre-spectral-finite-element implementation of geometrically exact beam theory (GEBT), was developed to meet the design challenges associated with highly flexible composite wind turbine blades. In this paper, the governing equations of GEBT are reformulated into a nonlinear state-space form to support its coupling within the modular framework of the FAST wind turbine computer-aided engineering (CAE) tool. Different time integration schemes (implicit and explicit) were implemented and examined for wind turbine analysis. Numerical examples are presented to demonstrate the capability of this new beam solver. An example analysis of a realistic wind turbine blade, the CX-100, is also presented as validation.
Testing Spatial Symmetry Using Contingency Tables Based on Nearest Neighbor Relations
Ceyhan, Elvan
2014-01-01
We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided, or only the Monte Carlo randomized version should be used. Under RL (random labeling), we derive the asymptotic distribution for Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate their finite sample performance by extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set.
ERIC Educational Resources Information Center
Cardinali, Mario Emilio; Giomini, Claudio
1989-01-01
Proposes a simple procedure based on an expansion of the exponential terms of Raoult's law by applying it to the case of the benzene-toluene mixture. The results with experimental values are presented as a table. (YP)
NASA Technical Reports Server (NTRS)
Zahm, A F
1924-01-01
This report gives the description and the use of a specially designed aerodynamic plane table. For the accurate and expeditious geometrical measurement of models in an aerodynamic laboratory, and for miscellaneous truing operations, there is frequent need for a specially equipped plane table. For example, one may have to measure truly to 0.001 inch the offsets of an airfoil at many parts of its surface. Or the offsets of a strut, airship hull, or other carefully formed figure may require exact calipering. Again, a complete airplane model may have to be adjusted for correct incidence at all parts of its surfaces or verified in those parts for conformance to specifications. Such work, if but occasional, may be done on a planing or milling machine; but if frequent, justifies the provision of a special table. For this reason it was found desirable in 1918 to make the table described in this report and to equip it with such gauges and measures as the work should require.
Lin, Jyh-Jiuan; Chang, Ching-Hui; Pal, Nabendu
2015-01-01
To test the mutual independence of two qualitative variables (or attributes), it is a common practice to follow the Chi-square tests (Pearson's as well as the likelihood ratio test) based on data in the form of a contingency table. However, it should be noted that these popular Chi-square tests are asymptotic in nature and are useful when the cell frequencies are "not too small." In this article, we explore the accuracy of the Chi-square tests through an extensive simulation study and then propose their bootstrap versions, which appear to work better than the asymptotic Chi-square tests. The bootstrap tests are useful even for small cell frequencies as they maintain the nominal level quite accurately. Also, the proposed bootstrap tests are more convenient than Fisher's exact test, which is often criticized for being too conservative. Finally, all test methods are applied to a few real-life datasets for demonstration purposes.
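A parametric-bootstrap version of the chi-square independence test along these general lines might look like the sketch below. The table, the replicate count, and the add-one p-value correction are all illustrative assumptions, not the authors' procedure:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def bootstrap_chi2_pvalue(table, n_boot=2000):
    """Parametric-bootstrap p-value for independence in a contingency table."""
    table = np.asarray(table, dtype=float)
    n = int(table.sum())
    # Null model: cell probabilities from the product of the observed margins.
    p_null = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum() ** 2
    stat_obs = chi2_contingency(table, correction=False)[0]
    exceed = 0
    for _ in range(n_boot):
        sim = rng.multinomial(n, p_null.ravel()).reshape(table.shape)
        # Skip degenerate resamples with an empty row/column (tiny tables only).
        if (sim.sum(axis=0) == 0).any() or (sim.sum(axis=1) == 0).any():
            continue
        if chi2_contingency(sim, correction=False)[0] >= stat_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)   # add-one correction avoids p = 0

p = bootstrap_chi2_pvalue([[3, 1], [2, 6]])
print(f"bootstrap p = {p:.3f}")
```

The point of the resampling is that the reference distribution is generated from the null model itself rather than taken from the asymptotic chi-square law, so the nominal level holds even with small cells.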
A MODFLOW Infiltration Device Package for Simulating Storm Water Infiltration.
Jeppesen, Jan; Christensen, Steen
2015-01-01
This article describes a MODFLOW Infiltration Device (INFD) Package that can simulate infiltration devices and their two-way interaction with groundwater. The INFD Package relies on a water balance including inflow of storm water, leakage-like seepage through the device faces, overflow, and change in storage. The water balance for the device can be simulated in multiple INFD time steps within a single MODFLOW time step, and infiltration from the device can be routed through the unsaturated zone to the groundwater table. A benchmark test shows that the INFD Package's analytical solution for stage computes exact results for transient behavior. To achieve similar accuracy by the numerical solution of the MODFLOW Surface-Water Routing (SWR1) Process requires many small time steps. Furthermore, the INFD Package includes an improved representation of flow through the INFD sides that results in lower infiltration rates than simulated by SWR1. The INFD Package is also demonstrated in a transient simulation of a hypothetical catchment where two devices interact differently with groundwater. This simulation demonstrates that device and groundwater interaction depends on the thickness of the unsaturated zone because a shallow groundwater table (a likely result from storm water infiltration itself) may occupy retention volume, whereas a thick unsaturated zone may cause a phase shift and a change of amplitude in groundwater table response to a change of infiltration. We thus find that the INFD Package accommodates the simulation of infiltration devices and groundwater in an integrated manner on small as well as large spatial and temporal scales. © 2014, National Ground Water Association.
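The device water balance described above (inflow of storm water, leakage-like seepage, overflow, change in storage) can be caricatured with a toy explicit-step model. The linear leakage law and every parameter value below are invented for illustration and have nothing to do with the actual INFD formulation:

```python
def infd_step(storage, inflow, dt, k_leak, s_max):
    """One explicit step of the toy balance dS/dt = inflow - leakage - overflow,
    with leakage taken proportional to storage and spill above capacity s_max."""
    leakage = k_leak * storage
    storage += (inflow - leakage) * dt
    overflow = max(0.0, storage - s_max)
    storage = min(storage, s_max)
    return storage, leakage, overflow

s = 0.0
for _ in range(1000):                    # 10 time units at dt = 0.01
    s, leak, over = infd_step(s, inflow=2.0, dt=0.01, k_leak=0.5, s_max=3.0)
print(round(s, 3))  # 3.0: the device fills to capacity and begins to overflow
```

With these numbers the leakage cannot keep up with the inflow (the uncapped steady state would exceed s_max), so the stage climbs to capacity and the surplus leaves as overflow, the same qualitative regime the article simulates.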
Geometric Heat Engines Featuring Power that Grows with Efficiency.
Raz, O; Subaşı, Y; Pugatch, R
2016-04-22
Thermodynamics places a limit on the efficiency of heat engines, but not on their output power or on how the power and efficiency change with the engine's cycle time. In this Letter, we develop a geometrical description of the power and efficiency as a function of the cycle time, applicable to an important class of heat engine models. This geometrical description is used to design engine protocols that attain both the maximal power and maximal efficiency at the fast driving limit. Furthermore, using this method, we also prove that no protocol can exactly attain the Carnot efficiency at nonzero power.
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2015-09-01
Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues and may eventually be included in event-based models of space radiation risks. Moreover, as many reactions in biological systems are of this type, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
NASA Technical Reports Server (NTRS)
Khazanov, George V.; Khabibrakhmanov, Ildar K.; Glocer, Alex
2012-01-01
We present the results of a finite difference implementation of the kinetic Fokker-Planck model with an exact form of the nonlinear collisional operator. The model is time dependent and three-dimensional: one spatial dimension and two in velocity space. The spatial dimension is aligned with the local magnetic field, and the velocity space is defined by the magnitude of the velocity and the cosine of pitch angle. An important new feature of the model, the concept of integration along the particle trajectories, is discussed in detail. Integration along the trajectories combined with the operator time splitting technique results in a solution scheme which accurately accounts for both the fast convection of the particles along the magnetic field lines and the relatively slow collisional process. We present several tests of the model's performance and also discuss simulation results of the evolution of the plasma distribution for realistic conditions in Earth's plasmasphere under different scenarios.
Analytical properties of time-of-flight PET data.
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M
2008-06-07
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
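For context, the exact stochastic simulation algorithm that the hybrid method builds on can be sketched in a few lines for a toy fast/slow chain A -> B -> C. The rate constants are invented, and this is the textbook direct method, not the paper's hybrid scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def ssa(x0, t_end, k_fast=10.0, k_slow=0.1):
    """Gillespie direct method for the chain A -> B (fast), B -> C (slow)."""
    x = np.array(x0, dtype=int)          # counts of [A, B, C]
    changes = np.array([[-1, 1, 0],      # A -> B
                        [0, -1, 1]])     # B -> C
    t = 0.0
    while t < t_end:
        a = np.array([k_fast * x[0], k_slow * x[1]])   # mass-action propensities
        a0 = a.sum()
        if a0 == 0:                      # nothing left to react
            break
        t += rng.exponential(1.0 / a0)   # exponentially distributed waiting time
        x += changes[rng.choice(2, p=a / a0)]
    return x

x = ssa([100, 0, 0], t_end=50.0)
print(x)
```

Note that every single reaction event costs one iteration, which is exactly why a fast reaction dominates the runtime and motivates approximating it with a chemical Langevin equation as described above.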
Lundström, Johan N.; Gordon, Amy R.; Alden, Eva C.; Boesveldt, Sanne; Albrecht, Jessica
2010-01-01
Many human olfactory experiments call for fast and stable stimulus-rise times as well as exact and stable stimulus-onset times. Due to these temporal demands, an olfactometer is often needed. However, an olfactometer is a piece of equipment that either comes with a high price tag or requires a high degree of technical expertise to build and/or to run. Here, we detail the construction of an olfactometer that is constructed almost exclusively with “off-the-shelf” parts, requires little technical knowledge to build, has relatively low price tags, and is controlled by E-Prime, a turnkey-ready and easily-programmable software commonly used in psychological experiments. The olfactometer can present either solid or liquid odor sources, and it exhibits a fast stimulus-rise time and a fast and stable stimulus-onset time. We provide a detailed description of the olfactometer construction, a list of its individual parts and prices, as well as potential modifications to the design. In addition, we present odor onset and concentration curves as measured with a photoionization detector, together with corresponding GC/MS analyses of signal-intensity drop (5.9%) over a longer period of use. Finally, we present data from behavioral and psychophysiological recordings demonstrating that the olfactometer is suitable for use during event-related EEG experiments. PMID:20688109
A rapid local singularity analysis algorithm with applications
NASA Astrophysics Data System (ADS)
Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits
2015-04-01
The local singularity model developed by Cheng is fast gaining popularity in characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, one of the conventional algorithms, which involves computing moving-average values at different scales, is time-consuming, especially when analyzing a large dataset. The summed area table (SAT), also called an integral image, is a fast algorithm used within the Viola-Jones object detection framework in computer vision. Historically, the principle of the SAT is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variant, the rotated summed area table, into isotropic, anisotropic and directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be obtained at any scale or location in constant time: the sum over any rectangular region in the image requires only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, MATLAB and C++ are implemented to serve different applications, especially big-data analysis. Several large geochemical and remote sensing datasets are tested. A wide variety of scale changes (linear spacing or log spacing) for non-iterative or iterative approaches are adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than and superior to the traditional approach in identifying anomalies.
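The constant-time rectangle-sum idea is easy to demonstrate. This minimal NumPy sketch is only the generic SAT primitive, not the authors' singularity-mapping code:

```python
import numpy as np

def summed_area_table(img):
    """2D prefix sums, zero-padded on the first row/column for clean indexing."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    sat[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return sat

def rect_sum(sat, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via 4 array accesses: O(1) per query."""
    return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

img = np.arange(12).reshape(3, 4)
sat = summed_area_table(img)
print(rect_sum(sat, 1, 1, 3, 3))  # 30, the sum of img[1:3, 1:3]
```

Because every window sum costs the same four lookups, sweeping a whole range of window scales over a large raster becomes cheap, which is precisely what multi-scale singularity mapping needs.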
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
NASA Astrophysics Data System (ADS)
Torres, J. M.; Sadurní, E.; Seligman, T. H.
2010-05-01
We address the problem of two interacting atoms of different species inside a cavity and find the explicit solutions of the corresponding eigenvalues and eigenfunctions using a new variant. This model encompasses various commonly used models. By way of example we obtain closed expressions for concurrence and purity as a function of time for the case where the cavity is prepared in a number state. We discuss the behaviour of these quantities and their relative behaviour in the concurrence-purity plane.
CANCER CONTROL AND POPULATION SCIENCES FAST STATS
Fast Stats links to tables, charts, and graphs of cancer statistics for all major cancer sites by age, sex, race, and geographic area. The statistics include incidence, mortality, prevalence, and the probability of developing or dying from cancer. A large set of statistics is ava...
Heer, D M
1986-01-01
The impact of the number, order, and spacing of siblings on child and adult outcomes has been the topic of research by scholars in 4 separate fields (human biology, psychology, sociology, and economics), and the barriers to communication between academic disciplines are strong. Also, most researchers have had to work with data sets gathered for other purposes. This has resulted in a relative inadequacy of research. Social scientists have 3 theories concerning the relationship between the number, order, and spacing of siblings and child and adult outcomes: that an increase in the number of siblings or a decrease in the spacing between them dilutes the time and material resources that parents can give to each child, and that these resource dilutions hinder the outcome for each child; that account must be taken not only of parental resources but also of the resources given to each child by his/her siblings; and that there is no causal relationship between the number, order, and spacing of siblings and child outcomes, and that any apparent relationships are spurious. In light of these theories, the question arises as to how the sibling variables should be measured. The most important aspect of sibling number is that it is a variable over time. Yet the proper measurement of sibling number has an additional complication. According to all existing theories, the ages of the other siblings are relevant for the outcome for the given child. All of the relevant information is available only when it is possible to construct a matrix in which the rows present the age of the given child and the columns the age grouping of the siblings for whom a count of sibling number will be made. Many such matrices could be developed, some much more elaborate than others.
For illustrative purposes, Table 1 presents the matrix of the number of siblings for a child who is the first-born among 5 children, all of whom are spaced exactly 3 years apart and all of whom are financially dependent only up to exact age 21. Table 2 presents the matrix for the last-born child among 5 children with characteristics identical to those in Table 1. It can be inferred from these tables that the oldest child in the family, as compared to the youngest child, probably will suffer from a diminution of parental resources, most likely financial resources, in adolescence. The youngest will suffer from a reduction of parental resources, probably time resources, in infancy and early childhood. Research concerned with the consequences of the number and spacing of children should be based on data sets for which some version of this matrix can be constructed.
What Exactly is Space Logistics?
2011-01-01
series, movies, and video games. Such phrases as "the final frontier" (from the opening lines of Star Trek) or "the ultimate high ground" (from...years as NASA, DoD, and commercial space launch customers brought individual requirements to the table; there was no single, focused development
Herold, Christian; Ueberreiter, Klaus; Busche, Marc N; Vogt, Peter M
2013-04-01
Autologous fat transplantation has gained great recognition in aesthetic and reconstructive surgery. Two main aspects are of predominant importance for progress control after autologous fat transplantation to the breast: quantitative information about the rate of fat survival in terms of effective volume persistence, and qualitative information about the breast tissue to exclude potential complications of autologous fat transplantation. Several tools are available for evaluating the rate of volume survival; they are extensively compared in this review. The anthropometric method, thermoplastic casts, and Archimedes' principle of water displacement are no longer up to date because of major drawbacks, first and foremost their limited reproducibility and accuracy. They have been replaced by more exact and reproducible tools such as MRI volumetry or 3D body surface scans. For qualitative and quantitative progress control, MRI volumetry offers all the necessary information: evaluation of fat survival and diagnostically valuable imaging to exclude possible complications of autologous fat transplantation. For frequent follow-up, e.g., monthly volume analysis, repeated MRI exams would burden the patient and are not cost-effective. In these cases, 3D surface imaging is a good tool and especially helpful in a private practice setting where fast data acquisition is needed. This tool also offers the possibility of simulating the results of autologous fat transplantation. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
NASA Technical Reports Server (NTRS)
Barnes, A.
1983-01-01
An exact nonlinear solution is found to the relativistic kinetic and electrodynamic equations (in their hydromagnetic limit) that describes the large-amplitude fast-mode magnetoacoustic wave propagating normal to the magnetic field in a collisionless, previously uniform plasma. It is pointed out that a wave of this kind will be generated by transverse compression of any collisionless plasma. The solution is in essence independent of the detailed form of the particle momentum distribution functions. The solution is obtained, in part, through the method of characteristics; the wave exhibits the familiar properties of steepening and shock formation. A detailed analysis is given of the ultrarelativistic limit of this wave.
Exact and approximate solutions for transient squeezing flow
NASA Astrophysics Data System (ADS)
Lang, Ji; Santhanam, Sridhar; Wu, Qianhong
2017-10-01
In this paper, we report two novel theoretical approaches to examine a fast-developing flow in a thin fluid gap, which is widely observed in industrial applications and biological systems. The problem is characterized by a very small Reynolds number and Strouhal number, making the fluid convective acceleration negligible, while its local acceleration is not. We have developed an exact solution for this problem which shows that the flow starts with an inviscid limit when the viscous effect has no time to appear and is followed by a subsequent developing flow, in which the viscous effect continues to penetrate into the entire fluid gap. An approximate solution is also developed using a boundary layer integral method. This solution precisely captures the general behavior of the transient fluid flow process and agrees very well with the exact solution. We also performed numerical simulation using Ansys-CFX. Excellent agreement between the analytical and the numerical solutions is obtained, indicating the validity of the analytical approaches. The study presented herein fills the gap in the literature and will have a broad impact on industrial and biomedical applications.
Preventing Serious Conduct Problems in School-Age Youth: The Fast Track Program
ERIC Educational Resources Information Center
Slough, Nancy M.; McMahon, Robert J.; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Foster, E. Michael; Greenberg, Mark T.; Lochman, John E.; McMahon, Robert J.; Pinderhughes, Ellen E.
2008-01-01
Children with early-starting conduct problems have a very poor prognosis and exact a high cost to society. The Fast Track project is a multisite, collaborative research project investigating the efficacy of a comprehensive, long-term, multicomponent intervention designed to "prevent" the development of serious conduct problems in high-risk…
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
Minimizing irreversible losses in quantum systems by local counterdiabatic driving
Sels, Dries; Polkovnikov, Anatoli
2017-01-01
Counterdiabatic driving protocols have been proposed [Demirplak M, Rice SA (2003) J Chem Phys A 107:9937–9945; Berry M (2009) J Phys A Math Theor 42:365303] as a means to make fast changes in the Hamiltonian without exciting transitions. Such driving in principle allows one to realize arbitrarily fast annealing protocols or implement fast dissipationless driving, circumventing standard adiabatic limitations requiring infinitesimally slow rates. These ideas were tested and used both experimentally and theoretically in small systems, but in larger chaotic systems, it is known that exact counterdiabatic protocols do not exist. In this work, we develop a simple variational approach allowing one to find the best possible counterdiabatic protocols given physical constraints, like locality. These protocols are easy to derive and implement both experimentally and numerically. We show that, using these approximate protocols, one can drastically suppress heating and increase fidelity of quantum annealing protocols in complex many-particle systems. In the fast limit, these protocols provide an effective dual description of adiabatic dynamics, where the coupling constant plays the role of time and the counterdiabatic term plays the role of the Hamiltonian. PMID:28461472
Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation
ERIC Educational Resources Information Center
Ayre, Colin; Scally, Andrew John
2014-01-01
The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
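Lawshe's ratio and an exact-binomial critical value in the spirit of the article can be sketched as follows. The one-tailed alpha = 0.05 convention here is an assumption; the article's tables remain the definitive reference:

```python
from scipy.stats import binom

def cvr(n_essential, n_panel):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    return (n_essential - n_panel / 2) / (n_panel / 2)

def cvr_critical(n_panel, alpha=0.05):
    """CVR of the smallest 'essential' count that is one-tailed significant
    under the null that each panelist votes 'essential' with probability 0.5."""
    for n_e in range(n_panel + 1):
        if binom.sf(n_e - 1, n_panel, 0.5) <= alpha:   # P(X >= n_e)
            return cvr(n_e, n_panel)
    return 1.0

print(cvr(8, 10), cvr_critical(10))  # 0.6 0.8
```

The exact binomial tail replaces the normal approximation, which is what makes critical values well defined even for very small panels.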
A short note on calculating the adjusted SAR index
USDA-ARS?s Scientific Manuscript database
A simple algebraic technique is presented for computing the adjusted SAR Index proposed by Suarez (1981). The statistical formula presented in this note facilitates the computation of the adjusted SAR without the use of either a look-up table, custom computer software or the need to compute exact a...
Reducing Router Forwarding Table Size Using Aggregation and Caching
ERIC Educational Resources Information Center
Liu, Yaoqing
2013-01-01
The fast growth of global routing table size has been causing concerns that the Forwarding Information Base (FIB) will not be able to fit in existing routers' expensive line-card memory, and upgrades will lead to a higher cost for network operators and customers. FIB Aggregation, a technique that merges multiple FIB entries into one, is probably…
Kim, Jaewook; Woo, Sung Sik; Sarpeshkar, Rahul
2018-04-01
The analysis and simulation of complex interacting biochemical reaction pathways in cells is important in all of systems biology and medicine. Yet, the dynamics of even a modest number of noisy or stochastic coupled biochemical reactions is extremely time consuming to simulate. In large part, this is because of the expensive cost of random number and Poisson process generation and the presence of stiff, coupled, nonlinear differential equations. Here, we demonstrate that we can amplify inherent thermal noise in chips to emulate randomness physically, thus alleviating these costs significantly. Concurrently, molecular flux in thermodynamic biochemical reactions maps to thermodynamic electronic current in a transistor such that stiff nonlinear biochemical differential equations are emulated exactly in compact, digitally programmable, highly parallel analog "cytomorphic" transistor circuits. For even small-scale systems involving just 80 stochastic reactions, our 0.35-μm BiCMOS chips yield a 311× speedup in the simulation time of Gillespie's stochastic algorithm over COPASI, a fast biochemical-reaction software simulator that is widely used in computational biology; they yield a 15 500× speedup over equivalent MATLAB stochastic simulations. The chip emulation results are consistent with these software simulations over a large range of signal-to-noise ratios. Most importantly, our physical emulation of Poisson chemical dynamics does not involve any inherently sequential processes and updates such that, unlike prior exact simulation approaches, they are parallelizable, asynchronous, and enable even more speedup for larger-size networks.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
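The residue-arithmetic representation underlying the design is easy to illustrate in software: numbers are held as residues modulo pairwise-coprime bases, multiplication is carry-free and proceeds independently per channel (which is what position-coded look-up tables can exploit), and the Chinese remainder theorem recovers the conventional result. The moduli below are arbitrary small examples, not those of the actual processor:

```python
from math import prod

MODULI = (5, 7, 9)               # pairwise coprime; dynamic range = 315

def to_residue(x):
    return tuple(x % m for m in MODULI)

def mul(a, b):
    """Channel-parallel, carry-free multiplication: one small modular
    product per modulus, each realizable as a tiny look-up table."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_residue(r):
    """Chinese remainder theorem reconstruction of the conventional integer."""
    M = prod(MODULI)
    return sum(ri * (M // m) * pow(M // m, -1, m) for ri, m in zip(r, MODULI)) % M

print(from_residue(mul(to_residue(13), to_residue(17))))  # 221
```

Because the channels never exchange carries, all of them can be evaluated in parallel in one pass, which is how the optical design achieves a fixed computation time regardless of operand size within the dynamic range.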
Portable real-time color night vision
NASA Astrophysics Data System (ADS)
Toet, Alexander; Hogervorst, Maarten A.
2008-03-01
We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
LETTER TO THE EDITOR: Exhaustive search for low-autocorrelation binary sequences
NASA Astrophysics Data System (ADS)
Mertens, S.
1996-09-01
Binary sequences with low autocorrelations are important in communication engineering and in statistical mechanics as ground states of the Bernasconi model. Computer searches are the main tool in the construction of such sequences. Owing to the exponential size 2^N of the configuration space, exhaustive searches are limited to short sequences. We discuss an exhaustive search algorithm with an exponential but faster-than-2^N run-time characteristic and apply it to compile a table of exact ground states of the Bernasconi model up to N = 48. The data suggest F > 9 for the optimal merit factor in the limit N → ∞.
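The quantities involved are compact enough to state in code. For a ±1 sequence of length N, the aperiodic autocorrelations are C_k = Σ_i s_i s_{i+k}, the sidelobe energy is E = Σ_{k≥1} C_k², and the merit factor is F = N²/(2E); a brute-force search over all 2^N sequences (feasible only for small N, unlike the pruned search of the paper) looks like this:

```python
from itertools import product

def merit_factor(seq):
    """Merit factor F = N^2 / (2E) of a ±1 sequence, where
    E = sum_{k=1}^{N-1} C_k^2 and C_k is the aperiodic autocorrelation."""
    n = len(seq)
    energy = 0
    for k in range(1, n):
        c_k = sum(seq[i] * seq[i + k] for i in range(n - k))
        energy += c_k * c_k
    return n * n / (2 * energy)

def exhaustive_best(n):
    """Naive exhaustive search over all 2^n sequences; the paper's algorithm
    prunes this tree to reach N = 48, which plain enumeration cannot."""
    return max(product((-1, 1), repeat=n), key=merit_factor)

# Barker sequence of length 13: all sidelobes have |C_k| <= 1, F = 169/12.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```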
Analytical Properties of Time-of-Flight PET Data
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.
2015-01-01
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the “bow-tie” property of the 2D Radon transform to the time of flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data. PMID:18460746
nu-Anomica: A Fast Support Vector Based Novelty Detection Technique
NASA Technical Reports Server (NTRS)
Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.
2009-01-01
In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In nu-Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by 5-20 times.
A numerical spectral approach to solve the dislocation density transport equation
NASA Astrophysics Data System (ADS)
Djaka, K. S.; Taupin, V.; Berbenni, S.; Fressengeas, C.
2015-09-01
A numerical spectral approach is developed to solve, in a fast, stable and accurate fashion, the quasi-linear hyperbolic transport equation governing the spatio-temporal evolution of the dislocation density tensor in the mechanics of dislocation fields. The approach relies on using the Fast Fourier Transform algorithm. Low-pass spectral filters are employed to control both the high frequency Gibbs oscillations inherent to the Fourier method and the fast-growing numerical instabilities resulting from the hyperbolic nature of the transport equation. The numerical scheme is validated by comparison with an exact solution in the 1D case corresponding to dislocation dipole annihilation. The expansion and annihilation of dislocation loops in 2D and 3D settings are also produced and compared with finite element approximations. The spectral solutions are shown to be stable, more accurate for low Courant numbers, and much less time-consuming than the finite element technique based on an explicit Galerkin-least squares scheme.
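The mechanics can be illustrated on the simplest hyperbolic transport problem, 1-D linear advection u_t + v u_x = 0 on a periodic domain: each Fourier mode is advanced exactly by a phase factor, and a sharp low-pass filter stands in for the paper's Gibbs-control filters. This is a generic sketch of the FFT approach, not the authors' scheme for the dislocation density tensor equation.

```python
import numpy as np

def advect_spectral(u0, v, dt, nsteps, L=2 * np.pi, filter_frac=1.0):
    """Advance u_t + v u_x = 0 on a periodic domain with an FFT scheme.
    Each mode is transported exactly by a phase factor per step; a sharp
    low-pass filter (keep a fraction of modes) mimics spectral filtering."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # wavenumbers
    keep = np.abs(k) <= filter_frac * np.abs(k).max()  # low-pass mask
    u_hat = np.fft.fft(u0)
    for _ in range(nsteps):
        u_hat *= np.exp(-1j * v * k * dt)  # exact transport of each mode
        u_hat = u_hat * keep               # apply spectral filter
    return np.fft.ifft(u_hat).real
```

With `filter_frac=1.0` the scheme reproduces the exact translated solution to machine precision, which is the kind of 1-D validation (against an exact solution) the abstract describes.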
Camblin, C. Christine; Ledoux, Kerry; Boudewyn, Megan; Gordon, Peter C.; Swaab, Tamara Y.
2006-01-01
Previous research has shown that the process of establishing coreference with a repeated name can affect basic repetition priming. Specifically, repetition priming on some measures can be eliminated for repeated names that corefer with an entity that is prominent in the discourse model. However, the exact nature and timing of this modulating effect of discourse are not yet understood. Here, we present two ERP studies that further probe the nature of repeated name coreference by using naturally produced connected speech and fast-rate RSVP methods of presentation. With speech we found that repetition priming was eliminated for repeated names that coreferred with a prominent antecedent. In contrast, with fast-rate RSVP, we found a main effect of repetition that did not interact with sentence context. This indicates that the creation of a discourse model during comprehension can affect repetition priming, but the nature of this effect may depend on input speed. PMID:16904078
A new pre-classification method based on associative matching method
NASA Astrophysics Data System (ADS)
Katsuyama, Yutaka; Minagawa, Akihiro; Hotta, Yoshinobu; Omachi, Shinichiro; Kato, Nei
2010-01-01
Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten processing time, recognition is usually split into separate preclassification and recognition stages. For high overall recognition performance, the pre-classification stage must both have very high classification accuracy and return only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage is also critical. The associative matching (AM) method has often been used for fast pre-classification, because its use of a hash table and reliance solely on logical bit operations to select categories makes it highly efficient. However, a certain level of redundancy exists in the hash table because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take account of the distribution of the data. We propose a modified associative matching method that satisfies the performance criteria described above but in a fraction of the time by modifying the hash table to reflect the underlying distribution of training characters. Furthermore, we show that our approach outperforms pre-classification by clustering, ANN and conventional AM in terms of classification accuracy, discriminative power and speed. Compared to conventional associative matching, the proposed approach results in a 47% reduction in total processing time across an evaluation test set comprising 116,528 Japanese character images.
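A simplified sketch of the conventional min/max-based AM table may help: each axis is quantized into bins, each category marks the bins covered by its [min, max] range with a bitmask, and a lookup intersects the per-axis masks with bitwise AND. The bin count and data layout here are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def build_am_table(train_vectors, labels, bins=8, lo=0.0, hi=1.0):
    """Associative-matching-style pre-classifier (simplified sketch).
    Per axis and per quantized bin, store a bitmask of the categories whose
    [min, max] range on that axis covers the bin."""
    cats = sorted(set(labels))
    dim = len(train_vectors[0])
    edges = np.linspace(lo, hi, bins + 1)
    table = [[0] * bins for _ in range(dim)]
    for ci, c in enumerate(cats):
        pts = np.array([v for v, l in zip(train_vectors, labels) if l == c])
        for ax in range(dim):
            b_lo = np.searchsorted(edges, pts[:, ax].min(), 'right') - 1
            b_hi = np.searchsorted(edges, pts[:, ax].max(), 'right') - 1
            for b in range(max(b_lo, 0), min(b_hi, bins - 1) + 1):
                table[ax][b] |= 1 << ci
    return table, edges, cats

def candidates(x, table, edges, cats):
    """Return the small candidate category set for feature vector x,
    using only hash-table lookups and logical bit operations."""
    mask = (1 << len(cats)) - 1
    for ax, val in enumerate(x):
        b = min(max(np.searchsorted(edges, val, 'right') - 1, 0),
                len(table[ax]) - 1)
        mask &= table[ax][b]  # intersect per-axis category masks
    return [c for i, c in enumerate(cats) if mask >> i & 1]
```

The paper's modification replaces the min/max ranges with bins derived from the actual training distribution, shrinking the redundant coverage this sketch exhibits.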
Improved cache performance in Monte Carlo transport calculations using energy banding
NASA Astrophysics Data System (ADS)
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
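The reuse idea can be sketched abstractly: bucket lookups by energy band, then resolve one band at a time, so each slice of the large read-only table stays cache-resident while it is reused. This toy (constant cross section per band, made-up numbers) only illustrates the access pattern, not the MC physics or the paper's implementation.

```python
import bisect
from collections import defaultdict

def banded_xs(energies, xs_by_band, band_edges):
    """Energy-banding sketch: group lookups by energy band and process one
    band at a time, maximizing temporal reuse of each table slice."""
    buckets = defaultdict(list)
    for i, e in enumerate(energies):
        buckets[bisect.bisect_right(band_edges, e) - 1].append(i)
    out = [0.0] * len(energies)
    for b in sorted(buckets):       # one pass per band
        sigma = xs_by_band[b]       # this band's table slice, now cache-hot
        for i in buckets[b]:
            out[i] = sigma
    return out
```

In a real MC code the per-band slice would be an interpolation table rather than a scalar, but the band-by-band traversal is the point: only one band's data needs to be hot at a time.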
Worman, A.; Packman, A.I.; Marklund, L.; Harvey, J.W.; Stone, S.H.
2006-01-01
It has long been known that land surface topography governs groundwater flow patterns both at the regional-to-continental scale and at smaller scales, such as in the hyporheic zone of streams. Here we show that the surface topography can be separated into a Fourier-series spectrum that provides an exact solution of the underlying three-dimensional groundwater flows. The new spectral solution offers a practical tool for fast calculation of subsurface flows in different hydrological applications and provides a theoretical platform for advancing conceptual understanding of the effect of landscape topography on subsurface flows. We also show how the spectrum of surface topography influences the residence time distribution for subsurface flows. The study indicates that the subsurface head variation decays exponentially with depth faster than it would with equivalent two-dimensional features, resulting in a shallower flow interaction. Copyright 2006 by the American Geophysical Union.
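The exponential decay of head variation with depth follows from solving Laplace's equation with a harmonic boundary head: a mode of wavenumber k decays as exp(-|k| z). The 1-D transect sketch below illustrates that spectral propagation (the paper treats the full 3-D problem; this reduction is my simplification).

```python
import numpy as np

def head_at_depth(topo, dx, z):
    """Propagate a 1-D topographic head profile downward assuming Laplace
    flow: each Fourier mode of wavenumber k decays as exp(-|k| z).
    (1-D transect sketch of the 3-D spectral solution.)"""
    k = 2 * np.pi * np.fft.fftfreq(len(topo), d=dx)
    return np.fft.ifft(np.fft.fft(topo) * np.exp(-np.abs(k) * z)).real
```

Because short-wavelength (large-k) modes die off fastest, fine topographic detail drives only shallow flow cells while the long-wavelength trend penetrates to depth, which is the qualitative conclusion of the abstract.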
Wienke, B R; O'Leary, T R
2008-05-01
Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
On the time-splitting scheme used in the Princeton Ocean Model
NASA Astrophysics Data System (ADS)
Kamenkovich, V. M.; Nechaev, D. A.
2009-05-01
The analysis of the time-splitting procedure implemented in the Princeton Ocean Model (POM) is presented. The time-splitting procedure uses different time steps to describe the evolution of interacting fast and slow propagating modes. In the general case the exact separation of the fast and slow modes is not possible. The main idea of the analyzed procedure is to split the system of primitive equations into two systems of equations for interacting external and internal modes. By definition, the internal mode varies slowly and the crux of the problem is to determine the proper filter, which excludes the fast component of the external mode variables in the relevant equations. The objective of this paper is to examine properties of the POM time-splitting procedure applied to equations governing the simplest linear non-rotating two-layer model of constant depth. The simplicity of the model makes it possible to study these properties analytically. First, the time-split system of differential equations is examined for two types of the determination of the slow component based on an asymptotic approach or time-averaging. Second, the differential-difference scheme is developed and some criteria of its stability are discussed for centered, forward, or backward time-averaging of the external mode variables. Finally, the stability of the POM time-splitting schemes with centered and forward time-averaging is analyzed. The effect of the Asselin filter on solutions of the considered schemes is studied. It is assumed that questions arising in the analysis of the simplest model are inherent in the general model as well.
Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries by Several Orders of Magnitude
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stockinger, Kurt; Rotem, Doron; Shoshani, Arie
2006-01-28
FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.
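The principle behind bitmap indexing is easy to demonstrate: one bitmap per distinct column value, with multi-column predicates answered by bitwise AND/OR over whole bitmaps. The toy below uses Python integers as uncompressed bitmaps; FastBit itself adds word-aligned compression and other machinery, so this is only a sketch of the query model, and the example column names are invented.

```python
from collections import defaultdict

class BitmapIndex:
    """Minimal uncompressed bitmap index: one bitmap (a Python int) per
    distinct column value; bit r is set iff row r holds that value."""
    def __init__(self, column):
        self.bitmaps = defaultdict(int)
        for row, value in enumerate(column):
            self.bitmaps[value] |= 1 << row

    def query_and(self, other, val_a, val_b):
        """Rows where self's column == val_a AND other's column == val_b,
        answered with a single bitwise AND over the two bitmaps."""
        hits = self.bitmaps[val_a] & other.bitmaps[val_b]
        return [r for r in range(hits.bit_length()) if hits >> r & 1]

# Hypothetical two-column email table: sender and folder.
sender = BitmapIndex(['ann', 'bob', 'ann'])
folder = BitmapIndex(['inbox', 'inbox', 'sent'])
```

Conjunctive queries touch only the relevant bitmaps rather than scanning rows, which is the source of the large speedups reported over row-oriented query execution.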
Weight Maintenance: Determinants of Success
2005-12-15
...inundating the general public. In addition, the heavily promoted sweetened breakfast cereals, salty snacks, candy, desserts, fast food and sugar-containing... "Each year about $20 billion of our taxes are spent to subsidize the production of rice, soybeans, sugar, wheat and -- above all -- corn. No such subsidy..."
49 CFR 171.10 - Units of measure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... measure in this subchapter are expressed using the International System of Units (“SI” or metric). Where... abbreviated. (c) Conversion values. (1) Conversion values are provided in the following table and are based on values provided in ASTM E 380, “Standard for Metric Practice”. (2) If an exact conversion is needed, the...
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
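The exact baseline that the hybrid scheme accelerates, Gillespie's direct-method SSA, fits in a few lines. The birth-death example network and all names below are illustrative choices, not taken from the paper.

```python
import random

def ssa(x0, stoich, propensity, t_end, rng=None):
    """Exact Stochastic Simulation Algorithm (Gillespie's direct method).
    propensity(x) returns one rate per reaction; stoich[j] is the state
    change vector of reaction j. Returns the state at time t_end."""
    rng = rng or random.Random(1)
    t, x = 0.0, list(x0)
    while True:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:
            return x                      # no reaction can fire
        t += rng.expovariate(a0)          # exponentially distributed wait
        if t > t_end:
            return x
        r, acc = rng.random() * a0, 0.0   # pick reaction j with prob a_j/a0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                x = [xi + d for xi, d in zip(x, stoich[j])]
                break

# Birth-death process: 0 -> X at rate 10, X -> 0 at rate 1*x
# (stationary mean 10). A hybrid method would move fast species like this
# into the continuous partition instead of simulating every event.
final = ssa([0], [[+1], [-1]], lambda x: [10.0, 1.0 * x[0]], t_end=50.0)
```

The cost of SSA grows with the total number of reaction events, which is why partitioning fast reactions into a continuous approximation, as the paper does, pays off.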
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets
NASA Astrophysics Data System (ADS)
Juric, Mario
2011-01-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than >10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping ``cells'' by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce ``kernels'' that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 gigarows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
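The payoff of horizontal (lon, lat) partitioning is that a positional query touches only a few cells instead of the whole table. The flat-sky toy below shows the bucketing idea; it ignores spherical geometry (cos-lat scaling, longitude wraparound), the overlap margins, and the temporal axis that LSD's real cells have, so treat every detail as an assumption.

```python
from collections import defaultdict

class SkyCells:
    """Toy horizontal partitioning: rows are bucketed into (lon, lat)
    cells so a positional query scans only nearby cells, not all rows."""
    def __init__(self, cell_deg=1.0):
        self.cell_deg = cell_deg
        self.cells = defaultdict(list)

    def _key(self, lon, lat):
        return (int(lon // self.cell_deg), int(lat // self.cell_deg))

    def insert(self, lon, lat, row):
        self.cells[self._key(lon, lat)].append((lon, lat, row))

    def query(self, lon, lat, radius_deg):
        """Scan only the block of cells that can intersect the search disc."""
        cx, cy = self._key(lon, lat)
        span = int(radius_deg // self.cell_deg) + 1
        hits = []
        for dx in range(-span, span + 1):
            for dy in range(-span, span + 1):
                for plon, plat, row in self.cells.get((cx + dx, cy + dy), []):
                    if (plon - lon) ** 2 + (plat - lat) ** 2 <= radius_deg ** 2:
                        hits.append(row)
        return hits
```

The same cell structure also defines the unit of task distribution: a map-reduce kernel can run per cell because each cell is a self-contained shard.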
NASA Astrophysics Data System (ADS)
Mortensen, Henrik Lund; Sørensen, Jens Jakob W. H.; Mølmer, Klaus; Sherson, Jacob Friis
2018-02-01
We propose an efficient strategy to find optimal control functions for state-to-state quantum control problems. Our procedure first chooses an input state trajectory that can realize the desired transformation by adiabatic variation of the system Hamiltonian. The shortcut-to-adiabaticity formalism then provides a control Hamiltonian that realizes the reference trajectory exactly but on a finite time scale. As the final state is achieved with certainty, we define a cost functional that incorporates the resource requirements and a perturbative expression for robustness. We optimize this functional by systematically varying the reference trajectory. We demonstrate the method by application to population transfer in a laser driven three-level Λ-system, where we find solutions that are fast and robust against perturbations while maintaining a low peak laser power.
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point are reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, of those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
A fast point-cloud computing method based on spatial symmetry of Fresnel field
NASA Astrophysics Data System (ADS)
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Computer-generated holography (CGH) for real-time holographic video display faces the great challenge of producing the required high space-bandwidth product (SBP). This paper builds on the point-cloud method and exploits two properties of Fresnel diffraction: its reversibility along the propagation direction, and the spatial symmetry of the fringe pattern of a point source (the Gabor zone plate), which can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed. First, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored. Second, the Fresnel diffraction fringe pattern at a dummy plane is obtained. Finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments on a Liquid Crystal on Silicon (LCOS) device demonstrate the validity of the proposed method: while preserving the quality of the 3-D reconstruction, it shortens computation time and improves computational efficiency.
Lung Mechanics in Marine Mammals
2013-09-30
system of anesthetized pinnipeds (Table 1, Fig. 1). In some animals where euthanasia was planned, we managed to measure both lung mechanics in vivo...during spontaneous breathing (dynamic) and mechanical ventilation (static), and the static compliance after euthanasia. Table 1. Number of samples...airway and esophageal pressures during voluntary breathing and mechanical ventilation (Fig. 1). Aim 2: In the second year we also used a fast response
Analysis of Coolant Options for Advanced Metal Cooled Nuclear Reactors
2006-12-01
Table 3.3 Hazards of Sodium Reaction Products, Hydride and Oxide... Table 3.4 Chemical Reactivity of Selected... Liquid Metal Fast Breeder Reactor; ORIGEN: Oak Ridge Isotope Generator; ORIGENARP: Oak Ridge Isotope Generator Automated Rapid Processing; PWR ...nuclear reactors, both because of the possibility of increased reactivity due to boiling and the potential loss of effectiveness of coolant heat transfer
[Comments on the use of the "life-table method" in orthopedics].
Hassenpflug, J; Hahne, H J; Hedderich, J
1992-01-01
In the description of long-term results, e.g. of joint replacements, survivorship analysis is used increasingly in orthopaedic surgery. Survivorship analysis describes the frequency of failure over time and is therefore more informative than global percentage statements. The relative probability of failure for fixed intervals is drawn from the number of controlled patients and the frequency of failure. The complementary probabilities of success are linked in their temporal sequence, thus representing the probability of survival at a fixed endpoint. A necessary condition for the use of this procedure is the exact definition of the moment and manner of failure. It is described how to establish survivorship tables.
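The described chaining of interval probabilities is the actuarial life-table calculation. A common convention, assumed here (the note does not fix one), counts patients withdrawn during an interval as at risk for half of it: q_i = d_i / (n_i - w_i/2), with cumulative survivorship the running product of (1 - q_i). The example numbers are invented.

```python
def life_table(at_risk, failures, withdrawals):
    """Actuarial life-table sketch: per interval i, the conditional failure
    probability is q_i = d_i / (n_i - w_i/2) (withdrawals counted half),
    and cumulative survivorship is the running product of (1 - q_i)."""
    surv, out = 1.0, []
    n = at_risk
    for d, w in zip(failures, withdrawals):
        eff = n - w / 2.0            # effective number at risk
        q = d / eff if eff > 0 else 0.0
        surv *= 1.0 - q              # chain the interval success probabilities
        out.append(surv)
        n -= d + w                   # carry survivors to the next interval
    return out

# Hypothetical cohort: 100 joint replacements followed over three intervals.
surv_curve = life_table(100, failures=[2, 3, 1], withdrawals=[10, 8, 12])
```

Because withdrawals reduce the effective denominator rather than counting as failures, the curve is not simply "failures over initial cohort", which is exactly the advantage over a global percentage.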
Never Say No … How the Brain Interprets the Pregnant Pause in Conversation
Bögels, Sara; Kendrick, Kobin H.; Levinson, Stephen C.
2015-01-01
In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay–conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. 'No' responses, however, elicited a late frontal positivity both if they were fast and if they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive–this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate response. PMID:26699335
Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures
NASA Technical Reports Server (NTRS)
Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland
1998-01-01
Given the immanent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
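The mutual-information criterion can be shown concretely: a candidate input set fully determines an element's next state exactly when M(out; in) = H(out), with M(out; in) = H(out) + H(in) - H(out, in). The sketch below searches input sets in order of size, in the spirit of REVEAL (the function names and toy network are mine, not the paper's C implementation).

```python
from collections import Counter
from itertools import combinations, product
from math import log2

def entropy(symbols):
    """Shannon entropy (bits) of an observed symbol sequence."""
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

def reveal_inputs(states_t, states_t1, target, k_max=3):
    """REVEAL-style inference: return the smallest input set whose mutual
    information with the target's next state equals the target's entropy,
    i.e. M(out; in) = H(out), so the inputs fully determine the output."""
    outputs = [s[target] for s in states_t1]
    h_out = entropy(outputs)
    for k in range(1, k_max + 1):
        for inputs in combinations(range(len(states_t[0])), k):
            ins = [tuple(s[i] for i in inputs) for s in states_t]
            # M(out; in) = H(out) + H(in) - H(out, in)
            mi = h_out + entropy(ins) - entropy(list(zip(outputs, ins)))
            if abs(mi - h_out) < 1e-9:
                return inputs
    return None

# Toy 3-gene network with a complete state transition table:
# gene 0's next state is (gene 1 AND gene 2); gene 1 copies gene 0.
states_t = list(product((0, 1), repeat=3))
states_t1 = [(s[1] & s[2], s[0], s[1]) for s in states_t]
```

For the complete transition table the inference is exact and unequivocal, as the abstract states; with incomplete tables the equality test would be applied to the observed subset of transitions.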
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data, including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contain the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our similarity measurement builds upon local features extracted from each node and its neighboring nodes in a graph. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to enable fast similarity query processing. We have implemented our method, named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and index structure scale to large databases, with smaller index size, faster index construction time, and faster query processing time than state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
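The hash-table-of-local-features idea can be illustrated with a minimal sketch. The radius-1 node features (a node's label plus the sorted labels of its neighbors) and the linear kernel below are a deliberate simplification of G-hash's actual feature set, not the published implementation; all names and the toy graphs are ours.

```python
from collections import Counter

def node_features(graph):
    """graph: dict node -> (label, [neighbor ids]).
    Feature per node: its own label plus the sorted labels of its
    neighbors (a radius-1 local feature). The Counter acts as the
    hash table mapping feature -> count."""
    feats = []
    for node, (label, nbrs) in graph.items():
        nbr_labels = tuple(sorted(graph[n][0] for n in nbrs))
        feats.append((label, nbr_labels))
    return Counter(feats)

def graph_kernel(g1, g2):
    """Linear kernel on the hashed local-feature counts: only buckets
    present in both tables contribute, so the cost is bounded by the
    smaller table, not by graph-to-graph alignment."""
    h1, h2 = node_features(g1), node_features(g2)
    return sum(c * h2[f] for f, c in h1.items())
```

For a labeled triangle C-C-O, the self-kernel counts two identical C-nodes (2*2) plus one O-node (1*1), giving 5; a C-O edge graph shares no radius-1 feature with it, so the cross-kernel is 0.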
Understanding Zipf's law of word frequencies through sample-space collapse in sentence formation
Thurner, Stefan; Hanel, Rudolf; Liu, Bo; Corominas-Murtra, Bernat
2015-01-01
The formation of sentences is a highly structured and history-dependent process. The probability of using a specific word in a sentence strongly depends on the ‘history’ of word usage earlier in that sentence. We study a simple history-dependent model of text generation assuming that the sample-space of word usage reduces along sentence formation, on average. We first show that the model explains the approximate Zipf law found in word frequencies as a direct consequence of sample-space reduction. We then empirically quantify the amount of sample-space reduction in the sentences of 10 famous English books, by analysis of corresponding word-transition tables that capture which words can follow any given word in a text. We find a highly nested structure in these transition tables and show that this ‘nestedness’ is tightly related to the power law exponents of the observed word frequency distributions. With the proposed model, it is possible to understand that the nestedness of a text can be the origin of the actual scaling exponent and that deviations from the exact Zipf law can be understood by variations of the degree of nestedness on a book-by-book basis. On a theoretical level, we are able to show that in the case of weak nesting, Zipf's law breaks down in a fast transition. Unlike previous attempts to understand Zipf's law in language the sample-space reducing model is not based on assumptions of multiplicative, preferential or self-organized critical mechanisms behind language formation, but simply uses the empirically quantifiable parameter ‘nestedness’ to understand the statistics of word frequencies. PMID:26063827
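The sample-space reducing mechanism the paper builds on can be simulated in a few lines. The sketch below is our illustration, not the authors' code: each draw restricts the next draw to strictly smaller states, restarting at the full space when the smallest state is reached. The visit frequencies of the states then approach Zipf's law, p(i) ~ 1/i.

```python
import random
from collections import Counter

def ssr_sequence(n_states, n_restarts, seed=0):
    """Sample-space reducing (SSR) process: starting from the full space
    of `n_states` states, each draw is uniform over the states strictly
    below the current one; reaching state 1 ends the cascade and a new
    one begins. Returns visit counts per state."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_restarts):
        i = n_states
        while i > 1:
            i = rng.randint(1, i - 1)   # sample space shrinks each step
            visits[i] += 1
    return visits
```

With 100 states and many restarts, the empirical visit ratio of state 2 to state 1 is close to 1/2 and that of state 10 close to 1/10, the Zipf signature of the SSR process.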
The Birth of Rethinking Schools
ERIC Educational Resources Information Center
Peterson, Bob
2011-01-01
The author often says that Rethinking Schools began on his kitchen table with a can of rubber cement and an Apple IIe computer. But that's not exactly true. In some ways the publication started a year and a half earlier in a study group of teachers and community activists who were struggling to figure out how to apply a generally progressive,…
Efficient generation of holographic news ticker in holographic 3DTV
NASA Astrophysics Data System (ADS)
Kim, Seung-Cheol; Kim, Eun-Soo
2009-08-01
A news ticker is used to show breaking news or headlines in a conventional 2-D broadcasting system. Breaking news must be created quickly, because the information should be sent out without delay, and if holographic 3-D broadcasting begins in the future, news tickers will still be needed. Several approaches to generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate a holographic news ticker in holographic 3DTV or 3-D movies using the N-LUT method. The proposed method consists largely of five steps: construction of the LUT for each character; extraction of the characters in the news ticker; generation and shifting of the CGH pattern for the news ticker using the per-character LUT; composition of the hologram pattern for the 3-D video with the hologram pattern for the news ticker; and reconstruction of the holographic 3-D video with the news ticker. To verify the proposed method, a moving car in front of a castle is used as the 3-D video and the words 'HOLOGRAM CAPTION GENERATOR' are used as the news ticker. The simulation results confirm the feasibility of the proposed method for fast generation of CGH patterns for holographic captions.
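The compose-from-a-character-LUT step can be sketched as follows. This is a toy illustration of the bookkeeping only: fixed random arrays stand in for the real per-character fringe patterns that the N-LUT method would precompute, and all function names are ours.

```python
import numpy as np

def make_char_lut(chars, h=8, w=8, seed=0):
    """Toy stand-in for per-character CGH fringe patterns. In the real
    N-LUT method each entry would be the precomputed hologram of one
    character's point sources; here random patterns illustrate the flow."""
    rng = np.random.default_rng(seed)
    return {c: rng.random((h, w)) for c in chars}

def ticker_hologram(text, lut, h=8, w=8):
    """Compose the caption hologram by shifting each character's LUT
    pattern to its slot position and summing the fringes, so no
    per-frame hologram computation is needed for the caption."""
    out = np.zeros((h, w * len(text)))
    for k, c in enumerate(text):
        out[:, k * w:(k + 1) * w] += lut[c]
    return out
```

Because the per-character patterns are looked up rather than recomputed, generating the ticker for a new headline is a sequence of shifts and additions, which is the source of the speedup claimed for caption generation.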
Modulation and coding for fast fading mobile satellite communication channels
NASA Technical Reports Server (NTRS)
Mclane, P. J.; Wittke, P. H.; Smith, W. S.; Lee, A.; Ho, P. K. M.; Loo, C.
1988-01-01
The performance of Gaussian baseband filtered minimum shift keying (GMSK) using differential detection in fast Rician fading is discussed, with a novel treatment of the inherent intersymbol interference (ISI) leading to an exact solution. Trellis-coded, differentially encoded phase shift keying (DPSK) with a convolutional interleaver is also considered. The channel is a Rician channel with the line-of-sight component subject to a lognormal transformation.
Real-time color image processing for forensic fiber investigations
NASA Astrophysics Data System (ADS)
Paulsson, Nils
1995-09-01
This paper describes a system for automatic fiber debris detection based on color identification. The system offers fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well over 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping-motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI color system (hue-saturation-intensity) and software optimization. High selectivity is achieved by separating the analysis into several steps: the first step is fast, direct color identification of objects in the analyzed video images; the second, more complex and time-consuming step analyzes the detected objects to identify single fiber fragments for subsequent analysis with more selective techniques.
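The fast first-pass color gate can be sketched with the standard RGB-to-HSI conversion. The formulas below are the textbook HSI definitions; the threshold values and function names are our illustrative assumptions, not the paper's actual parameters.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB in [0,1] to (hue in degrees, saturation, intensity)
    using the standard HSI formulas; hue is set to 0 for grey pixels,
    where it is undefined."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

def is_target_fiber(rgb, hue_range, min_sat=0.2, min_int=0.1):
    """First-pass colour gate (illustrative thresholds): keep a pixel only
    if its hue falls in the target interval and it is saturated and bright
    enough for the hue to be reliable."""
    h, s, i = rgb_to_hsi(*rgb)
    lo, hi = hue_range
    return s >= min_sat and i >= min_int and lo <= h <= hi
```

Separating hue from intensity is what makes the fast step robust to illumination changes: a target fiber color is a fixed hue interval regardless of brightness, so the per-pixel test is cheap enough for full video rate.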
NASA Technical Reports Server (NTRS)
Fijany, A.; Roberts, J. A.; Jain, A.; Man, G. K.
1993-01-01
Part 1 of this paper presented the requirements for the real-time simulation of the Cassini spacecraft along with some discussion of the DARTS algorithm. Here, in Part 2, we discuss the development and implementation of the parallel/vectorized DARTS algorithm and architecture for real-time simulation. Development of fast algorithms and architectures for real-time hardware-in-the-loop simulation of spacecraft dynamics is motivated by the fact that it represents a hard real-time problem, in the sense that the correctness of the simulation depends on both the numerical accuracy and the exact timing of the computation. For a given model fidelity, the computation must be completed within a predefined time period. Further reduction in computation time allows increasing the fidelity of the model (i.e., inclusion of more flexible modes) and of the integration routine.
Fast interrupt platform for extended DOS
NASA Technical Reports Server (NTRS)
Duryea, T. W.
1995-01-01
Extended DOS offers the unique combination of a simple operating system which allows direct access to the interrupt tables, 32 bit protected mode access to 4096 MByte address space, and the use of industry standard C compilers. The drawback is that fast interrupt handling requires both 32 bit and 16 bit versions of each real-time process interrupt handler to avoid mode switches on the interrupts. A set of tools has been developed which automates the process of transforming the output of a standard 32 bit C compiler to 16 bit interrupt code which directly handles the real mode interrupts. The entire process compiles one set of source code via a make file, which boosts productivity by making the management of the compile-link cycle very simple. The software components are in the form of classes written mostly in C. A foreground process written as a conventional application which can use the standard C libraries can communicate with the background real-time classes via a message passing mechanism. The platform thus enables the integration of high performance real-time processing into a conventional application framework.
NASA Astrophysics Data System (ADS)
Bates, Jason; Schmitt, Andrew; Klapisch, Marcel; Karasik, Max; Obenschain, Steve
2013-10-01
Modifications to the FAST3D code have been made to enhance its ability to simulate the dynamics of plastic ICF targets with high-Z overcoats. This class of problems is challenging computationally due in part to plasma conditions that are not in a state of local thermodynamic equilibrium and to the presence of mixed computational cells containing more than one material. Recently, new opacity tables for gold, palladium and plastic have been generated with an improved version of the STA code. These improved tables provide smoother, higher-fidelity opacity data over a wider range of temperature and density states than before, and contribute to a more accurate treatment of radiative transfer processes in FAST3D simulations. Furthermore, a new, more efficient subroutine known as "MMEOS" has been installed in the FAST3D code for determining pressure and temperature equilibrium conditions within cells containing multiple materials. We will discuss these topics, and present new simulation results for high-Z planar-target experiments performed recently on the NIKE Laser Facility. Work supported by DOE/NNSA.
NASA Astrophysics Data System (ADS)
Jernsletten, J. A.
2005-05-01
Introduction: The purpose of this study is to evaluate the use of (diffusive) Time Domain Electromagnetics (TEM) for sounding of subsurface water in conductive Mars analog environments. To provide a baseline for such studies, I show data from two field studies: 1) Diffusive sounding data (TEM) from Pima County, Arizona; and 2) Shallower sounding data using the Fast-Turnoff TEM method from Peña de Hierro in the Rio Tinto region of Spain. The latter is data from work conducted under the auspices of the Mars Analog Research and Technology Experiment (MARTE). Pima County TEM Survey: A TEM survey was carried out in Pima County, Arizona, in January 2003. Data was collected using 100 m Tx loops and a ferrite-cored magnetic coil Rx antenna, and processed using commercial software. The survey used a 16 Hz sounding frequency, which is sensitive to slightly salty groundwater. Prominent features in the data from Arizona are the ~500 m depth of investigation and the ~120 m depth to the water table, confirmed by data from four USGS test wells surrounding the field area. Note also the conductive (~20-40 Ω·m) clay-rich soil above the water table. Rio Tinto Fast-Turnoff TEM Survey: During May and June of 2003, a Fast-Turnoff (early time) TEM survey was carried out at the Peña de Hierro field area of the MARTE project, near the town of Nerva, Spain. Data was collected using 20 m and 40 m Tx loop antennae and 10 m loop Rx antennae, with a 32 Hz sounding frequency. Data from Line 4 (of 16) from this survey, collected using 40 m Tx loops, show ~200 m depth of investigation and a conductive high at ~90 m depth below Station 20 (second station of 10 along this line). This is the water table, matching the 431 m MSL elevation of the nearby pit lake. The center of the "pileup" below Station 60 is spatially coincident with the vertical fault plane located here.
Data from Line 15 and Line 14 of the Rio Tinto survey, collected using 20 m Tx loops, achieve ~50 m depth of investigation and show conductive highs at ~15 m depth below Station 50 (Line 15) and Station 30 (Line 14), interpreted as subsurface water flow under mine tailings matching surface flows seen coming out from under the tailings, and shown on maps. Conclusions: Results from the Pima County TEM survey were in good agreement with control data from the four USGS test wells located around the field area. This survey also achieved very acceptable 500+ m depths of investigation. Both of the interpretations from Rio Tinto data (Line 4, and Lines 15 & 14) were confirmed by preliminary results from the MARTE ground truth drilling campaign carried out in September and October 2003. Drill Site 1 was moved ~50 m based on recommendations built on data from Line 15 and Line 14 of the Fast-Turnoff TEM survey.
NASA Astrophysics Data System (ADS)
Jain, Shobhit; Tiso, Paolo; Haller, George
2018-06-01
We apply two recently formulated mathematical techniques, Slow-Fast Decomposition (SFD) and Spectral Submanifold (SSM) reduction, to a von Kármán beam with geometric nonlinearities and viscoelastic damping. SFD identifies a global slow manifold in the full system which attracts solutions at rates faster than typical rates within the manifold. An SSM, the smoothest nonlinear continuation of a linear modal subspace, is then used to further reduce the beam equations within the slow manifold. This two-stage, mathematically exact procedure results in a drastic reduction of the finite-element beam model to a one-degree-of-freedom nonlinear oscillator. We also introduce the technique of spectral quotient analysis, which gives the number of modes relevant for reduction as output rather than input to the reduction process.
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks
Vestergaard, Christian L.; Génois, Mathieu
2015-01-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
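The core loop of the temporal Gillespie idea can be sketched for the SIS model. This is our simplified illustration of the algorithm's structure, not the authors' C++ implementation: a unit-rate exponential "normalized" waiting time is drawn once and consumed against the time-varying total event rate as the contact snapshots change; when it is exhausted, an event fires in proportion to its rate.

```python
import random

def temporal_gillespie_sis(snapshots, dt, beta, mu, infected, seed=0):
    """Temporal Gillespie sketch for SIS dynamics. `snapshots` is a list
    of edge lists, one per time step of length dt; beta is the per-contact
    infection rate and mu the recovery rate."""
    rng = random.Random(seed)
    infected = set(infected)
    tau = rng.expovariate(1.0)              # normalized waiting time
    for edges in snapshots:
        remaining = dt
        while True:
            # transient event list for the current contact snapshot
            events = [(beta, ('inf', v)) for u, v in edges
                      if u in infected and v not in infected]
            events += [(beta, ('inf', u)) for u, v in edges
                       if v in infected and u not in infected]
            events += [(mu, ('rec', n)) for n in infected]
            total = sum(rate for rate, _ in events)
            if total * remaining < tau:     # no event fires in this snapshot
                tau -= total * remaining
                break
            remaining -= tau / total        # advance to the event time
            r = rng.uniform(0, total)       # pick one event, rate-proportional
            for rate, (kind, node) in events:
                r -= rate
                if r <= 0:
                    break
            if kind == 'inf':
                infected.add(node)
            else:
                infected.discard(node)
            tau = rng.expovariate(1.0)
    return infected
```

Because the waiting time is drawn in normalized units, no draws are wasted when the contact network changes between steps, which is where the speedup over rejection sampling comes from.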
Landau-Zener extension of the Tavis-Cummings model: Structure of the solution
Sun, Chen; Sinitsyn, Nikolai A.
2016-09-07
We explore the recently discovered solution of the driven Tavis-Cummings model (DTCM). It describes the interaction of an arbitrary number of two-level systems with a bosonic mode that has a linearly time-dependent frequency. We derive compact and tractable expressions for transition probabilities in terms of well-known special functions. In this form, our formulas are suitable for fast numerical calculations and analytical approximations. As an application, we obtain the semiclassical limit of the exact solution and compare it to prior approximations. Furthermore, we reveal a connection between the DTCM and q-deformed binomial statistics.
Keylogger Application to Monitoring Users Activity with Exact String Matching Algorithm
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Nurdiyanto, Heri; Saleh A, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan
2018-01-01
Technology develops very fast, especially Internet technology, which experiences significant change all the time. The keylogger is among the most widely developed monitoring tools because such applications are very rarely recognized as malicious programs by antivirus software. A keylogger records all activity related to keystrokes; here, the recording process is accomplished using an exact string matching method. Applying string matching to the recorded keystrokes helps the administrator know what users accessed on the computer.
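The abstract does not say which exact string matching algorithm is used, so as an illustration here is Knuth-Morris-Pratt (KMP), a standard exact matcher that scans a keystroke buffer for a target pattern in linear time; the function names are ours.

```python
def kmp_table(pattern):
    """Failure function for Knuth-Morris-Pratt: table[i] is the length of
    the longest proper prefix of pattern[:i+1] that is also its suffix."""
    table = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = table[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        table[i] = k
    return table

def find_all(text, pattern):
    """Return the start index of every exact occurrence of `pattern` in
    `text`, including overlapping ones, in O(len(text)) time."""
    if not pattern:
        return []
    table, k, hits = kmp_table(pattern), 0, []
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = table[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = table[k - 1]    # keep matching for overlaps
    return hits
```

A linear-time matcher matters here because the monitored keystroke stream grows continuously and may be scanned against many patterns.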
Asymptotic solution of Fokker-Planck equation for plasma in Paul traps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Kushal
2010-05-15
An exact analytic solution of the Vlasov equation for the plasma distribution in a Paul trap is known to be a Maxwellian and thus, immune to collisions under the assumption of infinitely fast relaxation [K. Shah and H. S. Ramachandran, Phys. Plasmas 15, 062303 (2008)]. In this paper, it is shown that even for a more realistic situation of finite time relaxation, solutions of the Fokker-Planck equation lead to an equilibrium solution of the form of a Maxwellian with oscillatory temperature. This shows that the rf heating observed in Paul traps cannot be caused due to collisional effects alone.
An iterative solver for the 3D Helmholtz equation
NASA Astrophysics Data System (ADS)
Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir
2017-09-01
We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.
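A minimal 1D periodic analogue of this preconditioning strategy can be sketched as follows. We replace the paper's 3D Krylov solver and ODE-based preconditioner inversion with a constant-coefficient reference operator inverted exactly by FFT, driven by a simple preconditioned Richardson iteration; the grid, wavenumber profile, and all names are illustrative assumptions, not the authors' setup.

```python
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k2 = (5.5 + 0.2 * np.sin(x)) ** 2        # mildly heterogeneous wavenumber^2
k2_ref = k2.mean()                        # constant reference medium

m = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
lap = -m ** 2                                 # symbol of d^2/dx^2

def apply_helmholtz(u):
    """(d^2/dx^2 + k^2(x)) u on a periodic grid, evaluated spectrally."""
    return np.fft.ifft(lap * np.fft.fft(u)) + k2 * u

def apply_precond(r):
    """Exact inverse of the constant-coefficient reference Helmholtz
    operator, diagonal in Fourier space, so one FFT pair suffices."""
    return np.fft.ifft(np.fft.fft(r) / (lap + k2_ref))

# Preconditioned Richardson iteration (a stand-in for the Krylov
# iteration of the paper): u <- u + M^{-1} (f - A u)
f = np.exp(-((x - np.pi) ** 2)).astype(complex)
u = np.zeros(n, dtype=complex)
for _ in range(80):
    u = u + apply_precond(f - apply_helmholtz(u))
```

The iteration contracts because the preconditioned operator differs from the identity only by the (small) medium perturbation; in the paper the same role is played by a preconditioner inverted via FFTs and 1D boundary value problems.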
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a general approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a single high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into two pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot-point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ioannou, J.G.
1977-12-01
The interaction of heavy ion projectiles with the electrons of target atoms gives rise to the production, in the target, of K-, L- or higher shell vacancies which are in turn followed by the emission of characteristic x-rays. The calculation of the theoretical value of the K- and L-shells vacancy production cross section was carried out for heavy ion projectiles of any energy. The transverse component of the cross section is calculated for the first time in detail and extensive tables of its numerical value as a function of its parameters are also given. Experimental work for 4.88 GeV protonsmore » and 3 GeV carbon ions is described. The K vacancy cross section has been measured for a variety of targets from Ti to U. The agreement between the theoretical predictions and experimental results for the 4.88 GeV protons is rather satisfactory. For the 3 GeV carbon ions, however, it is observed that the deviation of the theoretical and experimental values of the K vacancy production becomes larger with the heavier target element. Consequently, the simple scaling law of Z/sub 1//sup 2/ for the cross section of the heavy ion with atomic number Z/sub 1/ to the proton cross section is not true, for the K-shell at least. A dependence on the atomic number Z/sub 2/ of the target of the form (Z/sub 1/ - ..cap alpha..Z/sub 2/)/sup 2/, instead of Z/sub 1//sup 2/, is found to give extremely good agreement between theory and experiment. Although the exact physical meaning of such dependence is not yet clearly understood, it is believed to be indicative of some sort of screening effect of the incoming fast projectile by the fast moving in Bohr orbits K-shell electrons of the target. The enhancement of the K-shell ionization cross section by relativistic heavy ions on heavy targets is also discussed in terms of its practical applications in various branches of science and technology.« less
Minimizing losses by variational counter-diabatic driving
NASA Astrophysics Data System (ADS)
Sels, Dries; Polkovnikov, Anatoli
Despite the time-reversal symmetry of the microscopic dynamics of isolated systems, losses are ubiquitous in any process that tries to manipulate them. Whether it's the heat produced in a car engine or the decoherence of a qubit, all losses arise from our lack of control on the microscopic degrees of freedom of the system. Counter-diabatic driving protocols were proposed as a means to do fast changes in the Hamiltonian without exciting transitions. Such driving in principle allows one to realize arbitrarily fast annealing protocols or implement fast dissipationless driving, circumventing standard adiabatic limitations requiring infinitesimally slow rates. These ideas were tested and used both experimentally and theoretically in small systems, but in larger chaotic systems it is known that exact counter-diabatic protocols do not exist. Here we will present a simple variational approach allowing one to find best physical counter-diabatic protocols. We will show that, while they do not get rid of all transitions, the variational protocols are able to significantly reduce the induced fluctuations in the system. D.S. acknowledges support by the FWO.
Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.
Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M
2015-01-01
The Kalman filter is a well-established approach to get information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to see the deviation of the true trajectory of a rocket from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components of the respective state vector (typically about 10). In all cases the equation of motion for the state vector was known exactly. The fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism movies (XMCD movies), which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10(5)), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters of this equation) is not well known. In the present paper it is shown by theoretical considerations that - nevertheless - there is no principle problem for the use of the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
Vulnerability to Allergic Disorder in Families of Children of Behavioral Inhibition
1990-10-07
third years of life. The temperamentally inhibited child consistently displays an initial timidity, shyness, and emotional restraint when exposed to...with the uninhibited, children, reported a higher prevalence of atopic allergies, especially hayfever and eczema . Although the exact mechanisms...As Table 1 reveals, more relatives of inhibited, compared with uninhibited, children reported having hayfever, eczema , and frequent stomach cramps
Code of Federal Regulations, 2010 CFR
2010-10-01
... rulemaking, comments, reply comments, and other pleadings shall be filed with the Commission. (f) Petitions for reconsideration and responsive pleadings shall be served on parties to the proceeding and on any... expression of interest; (2) The exact nature and amount of any consideration received or promised; (3) An...
NASA Astrophysics Data System (ADS)
Fendley, Paul; Hagendorf, Christian
2010-10-01
We conjecture exact and simple formulas for some physical quantities in two quantum chains. A classic result of this type is Onsager, Kaufman and Yang's formula for the spontaneous magnetization in the Ising model, subsequently generalized to the chiral Potts models. We conjecture that analogous results occur in the XYZ chain when the couplings obey JxJy + JyJz + JxJz = 0, and in a related fermion chain with strong interactions and supersymmetry. We find exact formulas for the magnetization and gap in the former, and the staggered density in the latter, by exploiting the fact that certain quantities are independent of finite-size effects.
Garay-Avendaño, Roger L; Zamboni-Rached, Michel
2014-07-10
In this paper, we propose a method that is capable of describing in exact and analytic form the propagation of nonparaxial scalar and electromagnetic beams. The main features of the method presented here are its mathematical simplicity and the fast convergence in the cases of highly nonparaxial electromagnetic beams, enabling us to obtain high-precision results without the necessity of lengthy numerical simulations or other more complex analytical calculations. The method can be used in electromagnetism (optics, microwaves) as well as in acoustics.
Intact figure-ground segmentation in schizophrenia.
Herzog, Michael H; Kopmann, Sabine; Brand, Andreas
2004-11-30
As revealed by backward masking studies, schizophrenic patients show strong impairments of early visual processing. However, the underlying temporal mechanisms are not yet well understood. To shed light on the exact timing of these deficits, we employed a paradigm in which two masks follow each other. We investigated 16 medicated schizophrenic patients and a matched group of 14 controls with a new backward masking technique, shine-through. In accordance with other masking studies, schizophrenic patients require a dramatically longer processing time to reach a predefined performance level compared with healthy subjects. However, patients are surprisingly sensitive to subtle differences in the timing of the two masks, revealing good temporal resolution. This good temporal resolution indicates intact and fast perceptual grouping and figure-ground segmentation in spite of high susceptibility to masking procedures in schizophrenia.
Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew
2009-01-01
Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition, and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
"Flash" dance: how speed modulates perceived duration in dancers and non-dancers.
Sgouramani, Helena; Vatakis, Argiro
2014-03-01
Speed has been proposed as a modulating factor on duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise on spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience on time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers performed duration judgments through a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was true for the production task. Dancers were significantly less variable in their time estimations as compared to non-dancers. Speed and experience, therefore, affect the participants' estimates of time. Results are discussed in association to the theoretical framework of current models by focusing on the role of attention. © 2013 Elsevier B.V. All rights reserved.
Radiocarbon constraints on the glacial ocean circulation and its impact on atmospheric CO2
NASA Astrophysics Data System (ADS)
Skinner, L. C.; Primeau, F.; Freeman, E.; de La Fuente, M.; Goodwin, P. A.; Gottschalk, J.; Huang, E.; McCave, I. N.; Noble, T. L.; Scrivner, A. E.
2017-07-01
While the ocean's large-scale overturning circulation is thought to have been significantly different under the climatic conditions of the Last Glacial Maximum (LGM), the exact nature of the glacial circulation and its implications for global carbon cycling continue to be debated. Here we use a global array of ocean-atmosphere radiocarbon disequilibrium estimates to demonstrate a ~689 ± 53 14C-yr increase in the average residence time of carbon in the deep ocean at the LGM. A predominantly southern-sourced abyssal overturning limb that was more isolated from its shallower northern counterparts is interpreted to have extended from the Southern Ocean, producing a widespread radiocarbon age maximum at mid-depths and depriving the deep ocean of a fast escape route for accumulating respired carbon. While the exact magnitude of the resulting carbon cycle impacts remains to be confirmed, the radiocarbon data suggest an increase in the efficiency of the biological carbon pump that could have accounted for as much as half of the glacial-interglacial CO2 change.
Fast Preparation of Critical Ground States Using Superluminal Fronts
NASA Astrophysics Data System (ADS)
Agarwal, Kartiek; Bhatt, R. N.; Sondhi, S. L.
2018-05-01
We propose a spatiotemporal quench protocol that allows for the fast preparation of ground states of gapless models with Lorentz invariance. Assuming the system initially resides in the ground state of a corresponding massive model, we show that a superluminally moving "front" that locally quenches the mass leaves behind it (in space) a state arbitrarily close to the ground state of the gapless model. Importantly, our protocol takes time O(L) to produce the ground state of a system of size ~L^d (d spatial dimensions), while a fully adiabatic protocol requires time ~O(L^2) to produce a state with exponential accuracy in L. The physics of the dynamical problem can be understood in terms of relativistic rarefaction of excitations generated by the mass front. We provide proof of concept by solving the proposed quench exactly for a system of free bosons in arbitrary dimensions, and for free fermions in d = 1. We discuss the role of interactions and UV effects on the free-theory idealization, before numerically illustrating the usefulness of the approach via simulations on the quantum Heisenberg spin chain.
A first-order k-space model for elastic wave propagation in heterogeneous media.
Firouzi, K; Cox, B T; Treeby, B E; Saffari, N
2012-09-01
A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure the solution is exact for homogeneous wave propagation for timesteps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Lehua; Oldenburg, Curtis M.
Potential CO2 leakage through existing open wellbores is one of the most significant hazards that need to be addressed in geologic carbon sequestration (GCS) projects. In the framework of the National Risk Assessment Partnership (NRAP), which requires fast computations for uncertainty analysis, rigorous simulation of the coupled wellbore-reservoir system is not practical. We have developed a 7,200-point look-up table reduced-order model (ROM) for estimating the potential leakage rate through open wellbores in response to CO2 injection nearby. The ROM is based on coupled simulations using T2Well/ECO2H, which was run repeatedly for representative conditions relevant to NRAP to create a look-up table response-surface ROM. The ROM applies to a wellbore that fully penetrates a 20-m thick reservoir that is used for CO2 storage. The radially symmetric reservoir is assumed to have initially uniform pressure, temperature, gas saturation, and brine salinity, and it is assumed these conditions are held constant at the far-field boundary (100 m away from the wellbore). In such a system, the leakage can quickly reach quasi-steady state. The ROM table can be used to estimate both the free-phase CO2 and brine leakage rates through an open well as a function of wellbore and reservoir conditions. Results show that injection-induced pressure and reservoir gas saturation play important roles in controlling leakage. Caution must be used in the application of this ROM because well leakage is formally transient and the ROM lookup table was populated using quasi-steady simulation output after 1000 time steps, which may correspond to different physical times for the various parameter combinations of the coupled wellbore-reservoir system.
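The ROM above replaces expensive coupled simulation with interpolation in a precomputed table of leakage rates. A minimal sketch of how such a response-surface table can be queried, using a hypothetical 2-D slice (overpressure × gas saturation) with illustrative values rather than the actual 7,200-point NRAP table:

```python
import numpy as np

# Hypothetical 2-D slice of a look-up table ROM: leakage rate indexed by
# reservoir overpressure (MPa) and gas saturation (-). All values here are
# illustrative placeholders, not NRAP data.
pressures = np.array([0.0, 1.0, 2.0, 4.0])        # grid axis 1
saturations = np.array([0.0, 0.25, 0.5, 1.0])     # grid axis 2
leakage = np.array([[0.0, 0.1, 0.3, 0.9],
                    [0.1, 0.4, 0.8, 1.6],
                    [0.3, 0.9, 1.5, 2.8],
                    [0.8, 1.9, 3.0, 5.2]])        # kg/s; rows follow pressures

def rom_leakage(p, s):
    """Bilinear interpolation in the look-up table, clamped to the grid."""
    i = int(np.clip(np.searchsorted(pressures, p) - 1, 0, len(pressures) - 2))
    j = int(np.clip(np.searchsorted(saturations, s) - 1, 0, len(saturations) - 2))
    tp = (p - pressures[i]) / (pressures[i + 1] - pressures[i])
    ts = (s - saturations[j]) / (saturations[j + 1] - saturations[j])
    tp, ts = np.clip(tp, 0.0, 1.0), np.clip(ts, 0.0, 1.0)
    return ((1 - tp) * (1 - ts) * leakage[i, j]
            + tp * (1 - ts) * leakage[i + 1, j]
            + (1 - tp) * ts * leakage[i, j + 1]
            + tp * ts * leakage[i + 1, j + 1])
```

The real ROM presumably spans more axes (pressure, temperature, salinity, saturation), but the clamped multilinear-interpolation pattern is the same; clamping at the grid edges mirrors the fixed far-field boundary conditions of the underlying simulations.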
LLSURE: local linear SURE-based edge-preserving image filtering.
Qiu, Tianshuang; Wang, Aiqi; Yu, Nannan; Song, Aimin
2013-01-01
In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using the principle of Stein's unbiased risk estimate as an estimator for the mean squared error from the noisy image only, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real-time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.
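The filter is built on a local linear model whose smoothing strength comes from local statistics. A minimal 1-D sketch of that local linear idea, with a fixed regularizer `eps` standing in for the paper's SURE-based parameter choice (window radius and `eps` are illustrative, and this is not the authors' implementation):

```python
import numpy as np

def box(x, r):
    """Mean over a sliding window of radius r (edge windows shrink)."""
    c = np.cumsum(np.concatenate(([0.0], x)))
    n = len(x)
    idx = np.arange(n)
    lo = np.maximum(idx - r, 0)
    hi = np.minimum(idx + r + 1, n)
    return (c[hi] - c[lo]) / (hi - lo)

def local_linear_filter(p, r=4, eps=0.04):
    """q = a*p + b with a, b fit per window from local mean/variance,
    then averaged over overlapping windows (guided-filter-style)."""
    mean_p = box(p, r)
    var_p = box(p * p, r) - mean_p ** 2
    a = var_p / (var_p + eps)     # a -> 1 at strong edges, -> 0 in flat noise
    b = (1.0 - a) * mean_p
    return box(a, r) * p + box(b, r)
```

Because the window statistics are computed with running (box) sums, the cost per sample is independent of the window radius, which is the linear-time property the abstract highlights.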
Exact and Approximate Solutions for Transient Squeezing Flow
NASA Astrophysics Data System (ADS)
Lang, Ji; Santhanam, Sridhar; Wu, Qianhong
2017-11-01
In this paper, we report two novel theoretical approaches to examine a fast-developing flow in a thin fluid gap, which is widely observed in industrial applications and biological systems. The problem features a very small Reynolds number and Strouhal number, making the fluid's convective acceleration negligible while its local acceleration is not. We have developed an exact solution for this problem which shows that the flow starts from an inviscid limit, when the viscous effect has no time to appear, followed by a subsequent developing flow in which the viscous effect continues to penetrate into the entire fluid gap. An approximate solution is also developed using a boundary layer integral method. This solution precisely captures the general behavior of the transient fluid flow process and agrees very well with the exact solution. We also performed numerical simulation using Ansys-CFX. Excellent agreement between the analytical and numerical solutions is obtained, indicating the validity of the analytical approaches. The study presented herein fills a gap in the literature and will have a broad impact in industrial and biomedical applications. This work is supported by National Science Foundation CBET Fluid Dynamics Program under Award #1511096, and supported by the Seed Grant from The Villanova Center for the Advancement of Sustainability in Engineering (VCASE).
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
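Two ingredients named above, spectral evaluation of derivatives via the FFT and the k-space temporal correction that makes time stepping exact in a homogeneous medium, can be illustrated in one dimension. This is a sketch, not the paper's 3-D staggered-grid implementation; the grid size, `c0`, and `dt` are illustrative:

```python
import numpy as np

# 1-D periodic grid spanning one period, so Fourier derivatives are exact.
N, L = 256, 2 * np.pi
x = np.arange(N) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

def spectral_derivative(u, kappa=1.0):
    """d/dx computed in k-space; kappa is an optional k-space correction."""
    return np.real(np.fft.ifft(1j * k * kappa * np.fft.fft(u)))

# k-space temporal correction kappa = sinc(c0*k*dt/2): applied to the spatial
# operator, it compensates the finite-difference time step so propagation in a
# homogeneous medium (sound speed c0) is exact for arbitrarily large dt.
c0, dt = 1.0, 1e-2
kappa = np.sinc(c0 * k * dt / (2 * np.pi))   # = sin(c0*k*dt/2)/(c0*k*dt/2)

du = spectral_derivative(np.sin(x))          # spectrally exact cos(x)
```

In the 3-D method the same two pieces appear per axis, which is why the FFT (and its all-to-all communication) dominates the parallel cost noted in the abstract.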
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Doha, E. H.; Baleanu, D.; Ezz-Eldien, S. S.
2015-07-01
In this paper, an efficient and accurate spectral numerical method is presented for solving second- and fourth-order fractional diffusion-wave equations and fractional wave equations with damping. The proposed method is based on the Jacobi tau spectral procedure together with the Jacobi operational matrix for fractional integrals, described in the Riemann-Liouville sense. The main idea behind this approach is to reduce such problems to systems of algebraic equations in the unknown expansion coefficients of the sought-for spectral approximations. The validity and effectiveness of the method are demonstrated by solving five numerical examples. The numerical examples are presented in tables and graphs to make comparison with the results obtained by other methods and with the exact solutions easier.
The Logical Problem of Language Change.
1995-07-01
tribution Pi. For the most part we will assume in our simulations that this distribution is uniform on degree-0 (unembedded) sentences, exactly as in...following table provides the unembedded (degree-0) sentences from each of the 8 grammars (languages) obtained by setting the 3 parameters of example 1 to different values. The languages are referred to as L1 through L8:
... 2 [PDF – 2.7 MB] Leading causes of death Leading causes of deaths among adolescents aged 15–19 years: Accidents (unintentional injuries) Suicide Homicide Source: Deaths: Leading Causes for 2015, table 1 [PDF – 2. ...
... 7 [PDF – 2.7 MB] Leading causes of death Children aged 1-4 years Accidents (unintentional injuries) ... unintentional injuries) Cancer Intentional self-harm (suicide) Source: Deaths: Final Data for 2015, table 6 [PDF – 2. ...
FastStats: Oral and Dental Health
... NCHS Home Oral and Dental Health Data ... States, 2016, table 60 [PDF – 9.8 MB] Dental visits Percent of children aged 2-17 years ...
NASA Astrophysics Data System (ADS)
Chicurel-Uziel, Enrique
2007-08-01
A pair of closed parametric equations is proposed to represent the Heaviside unit step function. Differentiating the step equations yields two additional parametric equations, also proposed here, to represent the Dirac delta function. These equations are expressed in algebraic terms and are handled by means of elementary algebra and elementary calculus. The proposed delta representation complies exactly with the values of the definition. It also complies with the sifting property and the requisite unit area, and its Laplace transform coincides with the most general form given in the tables. Furthermore, it leads to a very simple method of solution for impulsive vibrating systems, either linear or belonging to a large class of nonlinear problems. Two example solutions are presented.
Effects of Ramadan Fasting on Inspiratory Muscle Function.
Soori, Mohsen; Mohaghegh, Shahram; Hajain, Maryam; Moraadi, Behrooz
2016-09-01
Ramadan fasting is a major challenge for exercising Muslims, especially in warm seasons. There is some evidence that Ramadan fasting causes higher subjective ratings of perceived exertion (RPE) in fasting Muslims. The mechanisms of this phenomenon are not exactly known, and the role of respiratory muscle strength in this regard has not yet been studied. The aim of this study was to investigate the effects of Ramadan fasting on respiratory muscle strength. In a before-after study of 35 fasting, apparently healthy male adults who had fasted from the beginning of Ramadan, maximal inspiratory muscle pressure (MIP) and peak inspiratory flow (PIF) were measured in the last week of the Ramadan month in summer. At the time of testing, participants reported no sleep problems and all cooperated well. Three months later, after exclusion of incompatible persons, mainly because of changes in physical activity level, smoking behavior, or drug consumption, the measurements were repeated in 12 individuals. Weight, MIP, and PIF data had normal distributions (Kolmogorov-Smirnov test). There was a significant increase in MIP (mean 8.3 cmH2O, 95% confidence interval 2.2-14.3), PIF (mean 0.55 L/s, 95% confidence interval 0.02-1.07), and weight (mean 3.4 kg, 95% confidence interval 2.2-4.5) after Ramadan (paired t test, P < 0.05). When weight difference was used as a covariate in a repeated-measures ANOVA, there was no further significant difference between the MIP and PIF measurements. Ramadan fasting may cause a reduction of respiratory muscle strength through reduction of body weight.
ARYANA: Aligning Reads by Yet Another Approach
2014-01-01
Motivation Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. Contribution We introduce ARYANA, a fast gapped read aligner, developed on the basis of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases, ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. Availability ARYANA with complete source code can be obtained from http://github.com/aryana-aligner PMID:25252881
ARYANA: Aligning Reads by Yet Another Approach.
Gholami, Milad; Arbabi, Aryan; Sharifi-Zarchi, Ali; Chitsaz, Hamidreza; Sadeghi, Mehdi
2014-01-01
Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. We introduce ARYANA, a fast gapped read aligner, developed on the basis of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases, ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. ARYANA with complete source code can be obtained from http://github.com/aryana-aligner.
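The seed-and-extend framework the abstract names can be sketched in a few lines: exact k-mer seeds are located through a hash table over the reference, then each candidate placement is extended and scored. This toy version (hypothetical `K`, ungapped extension, no dynamic seed selection or gap-filling dynamic programming) only illustrates the framework, not ARYANA's engine:

```python
from collections import defaultdict

K = 4  # illustrative seed length

def index_reference(ref):
    """Hash table mapping each k-mer of the reference to its positions."""
    table = defaultdict(list)
    for i in range(len(ref) - K + 1):
        table[ref[i:i + K]].append(i)
    return table

def align(read, ref, table, max_mismatches=1):
    """Seed-and-extend: look up each k-mer of the read, extend each hit
    ungapped over the full read, and keep the lowest-mismatch placement.
    Returns (mismatches, reference_offset) or None."""
    best = None
    for s in range(len(read) - K + 1):
        for pos in table.get(read[s:s + K], []):
            start = pos - s
            if start < 0 or start + len(read) > len(ref):
                continue
            mm = sum(a != b for a, b in zip(read, ref[start:start + len(read)]))
            if mm <= max_mismatches and (best is None or mm < best[0]):
                best = (mm, start)
    return best

ref = "ACGTACGTGGAACCTT"
table = index_reference(ref)
```

For example, `align("ACGTGGAA", ref, table)` returns `(0, 4)` here: a perfect match at reference offset 4. Real aligners replace the full-scan extension with banded dynamic programming and a compressed index, but the seed-lookup-then-extend control flow is the same.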
STELLAR: fast and exact local alignments
2011-01-01
Background Large-scale comparison of genomic sequences requires reliable tools for the search of local alignments. Practical local aligners are in general fast, but heuristic, and hence sometimes miss significant matches. Results We present here the local pairwise aligner STELLAR that has full sensitivity for ε-alignments, i.e. guarantees to report all local alignments of a given minimal length and maximal error rate. The aligner is composed of two steps, filtering and verification. We apply the SWIFT algorithm for lossless filtering, and have developed a new verification strategy that we prove to be exact. Our results on simulated and real genomic data confirm and quantify the conjecture that heuristic tools like BLAST or BLAT miss a large percentage of significant local alignments. Conclusions STELLAR is very practical and fast on very long sequences which makes it a suitable new tool for finding local alignments between genomic sequences under the edit distance model. Binaries are freely available for Linux, Windows, and Mac OS X at http://www.seqan.de/projects/stellar. The source code is freely distributed with the SeqAn C++ library version 1.3 and later at http://www.seqan.de. PMID:22151882
Cortex Matures Faster in Youths With Highest IQ
... NIH Cortex Matures Faster in Youths With Highest IQ Past Issues / Summer 2006 Table of Contents ... Youths with superior IQ are distinguished by how fast the thinking part ...
Diamond anvils with a round table designed for high pressure experiments in DAC
NASA Astrophysics Data System (ADS)
Dubrovinsky, Leonid; Koemets, Egor; Bykov, Maxim; Bykova, Elena; Aprilis, Georgios; Pakhomova, Anna; Glazyrin, Konstantin; Laskin, Alexander; Prakapenka, Vitali B.; Greenberg, Eran; Dubrovinskaia, Natalia
2017-10-01
Here, we present new Diamond Anvils with a Round Table (DART-anvils) designed for applications in the diamond anvil cell (DAC) technique. The main features of the new DART-anvil design are a spherical shape of both the crown and the table of a diamond and the position of the centre of the culet exactly in the centre of the sphere. The performance of DART-anvils was tested in a number of high-pressure, high-temperature experiments at different synchrotron beamlines. These experiments demonstrated a number of advantages unavailable with any of the hitherto known anvil designs. Use of DART-anvils enables in situ single-crystal X-ray diffraction experiments with laser heating using stationary laser-heating setups; by eliminating the flat-plate design of conventional anvils, DART-anvils make cell alignment easier; and, working as solid immersion lenses, they provide additional magnification of the sample in a DAC and improve the image resolution.
Wind shear modeling for aircraft hazard definition
NASA Technical Reports Server (NTRS)
Frost, W.; Camp, D. W.; Wang, S. T.
1978-01-01
Mathematical models of wind profiles were developed for use in fast-time and manned flight simulation studies aimed at defining and eliminating wind shear hazards. A set of wind profiles and associated wind shear characteristics for stable and neutral boundary layers, thunderstorms, and frontal winds potentially encounterable by aircraft in the terminal area is given. Engineering models of wind shear for direct hazard analysis are presented as mathematical formulae, graphs, tables, and computer lookup routines. The wind profile data utilized to establish the models are described in terms of location, acquisition method, time of observation, and number of data points up to 500 m. Recommendations, engineering interpretations, and guidelines for use of the data are given, and the range of applicability of the wind shear models is described.
Fast, adaptive summation of point forces in the two-dimensional Poisson equation
NASA Technical Reports Server (NTRS)
Van Dommelen, Leon; Rundensteiner, Elke A.
1989-01-01
A comparatively simple procedure is presented for the direct summation of the velocity field induced by point vortices which significantly reduces the required number of operations by replacing selected partial sums with asymptotic series. Tables are presented which demonstrate the speed of this algorithm: computational time merely doubles when the number of vortices doubles, whereas current methods extend computational time by a factor of 4. This procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.
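The core idea, replacing the influence of a distant group of vortices with a truncated asymptotic (multipole-style) expansion about its centroid, can be sketched as follows. The single-term truncation and all constants are illustrative, not the paper's algorithm:

```python
import numpy as np

# A cluster of point vortices near the origin, represented as complex
# positions z_v with circulations gamma (values are arbitrary test data).
rng = np.random.default_rng(0)
z_v = rng.normal(size=50) * 0.1 + 1j * rng.normal(size=50) * 0.1
gamma = rng.uniform(0.5, 1.5, size=50)

def velocity_direct(z, z_v, gamma):
    """Exact conjugate velocity at z: sum of Gamma_j / (2*pi*i*(z - z_j))."""
    return np.sum(gamma / (2j * np.pi * (z - z_v)))

def velocity_monopole(z, z_v, gamma):
    """Leading term of the far-field expansion: total circulation placed at
    the circulation-weighted centroid (which also cancels the dipole term)."""
    centroid = np.average(z_v, weights=gamma)
    return gamma.sum() / (2j * np.pi * (z - centroid))

# Far from the cluster, the one-term expansion is already very accurate,
# and evaluating it costs O(1) instead of O(N) per target point.
z_far = 10.0 + 0.0j
exact = velocity_direct(z_far, z_v, gamma)
approx = velocity_monopole(z_far, z_v, gamma)
rel_err = abs(approx - exact) / abs(exact)
```

The O(N log N)-style savings the abstract describes come from applying this substitution group-by-group whenever the separation between groups is large compared with their size; nearby interactions are still summed directly.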
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimson, J.
1985-02-01
Field surveys at Enewetak Atoll, Marshall Islands, show that coral density and diversity are much lower beneath Acropora table corals than in adjacent unshaded areas. Additionally, the understory community is predominantly composed of massive and encrusting species, while branching Acropora and Pocillopora predominate in unshaded areas. Results of experiments in which coral fragments were transferred to the shade of table Acropora and to adjacent unshaded areas show that shading slows the growth and leads to higher mortality of branching species, while massive and encrusting species are unaffected. Light measurements made beneath table Acropora show that illumination and irradiance values fall to levels at which most hermatypic corals do not occur. The fast-growing but fragile table Acropora are abundant in a wide variety of atoll habitats and grow rapidly to form a canopy approximately 50 cm above the substrate. However, table Acropora also have high mortality rates, so that there is continuous production of unshaded areas. The growth and death of tables thus create local disturbances, and the resulting patchwork of recently shaded and unshaded areas may enhance coral diversity in areas of high coral cover.
Exact and Heuristic Minimization of the Average Path Length in Decision Diagrams
2005-01-01
34$&%’ (*) &+#-,./&%1023 ’+/4%! 5637& 158+#&9 1 SHINOBU NAGAYAMA∗ , ALAN ...reviewers for constructive comments. REFERENCES [1] Ashar , P. and Malik, S. (1995). Fast functional simulation using branching programs, ICCAD’95, 408–412. [2
Efficient generation of 3D hologram for American Sign Language using look-up table
NASA Astrophysics Data System (ADS)
Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo
2010-02-01
American Sign Language (ASL) is one of the languages that most helps hearing-impaired people communicate. Current 2-D broadcasting and 2-D movies use ASL to convey information, aid understanding of the scene, and translate foreign languages. ASL will not disappear from future three-dimensional (3-D) broadcasting or 3-D movies because of its usefulness. On the other hand, several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method. However, these methods either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method largely consists of five steps: construction of the LUT for each ASL image, extraction of characters from scripts or the situation, retrieval of the fringe patterns for those characters from the ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.
FastStats: Health of Mexican American Population
... table I-4 [PDF – 2.7 MB] Infant deaths Infant deaths per 1,000 live births: 4.92 (2012- ... Related Links Birth Data Linked Birth and Infant Death Data Mortality Data National Health and Nutrition Examination ...
FastStats: Health of Asian or Pacific Islander Population
... 1 [PDF – 2.7 MB] Leading causes of death for Asian or Pacific Islander population Cancer Heart ... 2015, table 1 [PDF – 2.3 MB] Infant deaths for Asian or Pacific Islander population Infant deaths ...
FastStats: Health of Hispanic or Latino Population
... 1 [PDF – 2.7 MB] Leading causes of death Cancer Heart disease Accidents (unintentional injuries) Source: Deaths: ... 2015, table D [PDF – 2.3 MB] Infant deaths Infant deaths per 1,000 live births: 5. ...
NASA Astrophysics Data System (ADS)
Bailey, David H.; Frolov, Alexei M.
2003-12-01
Since the above paper was published we have received a suggestion from T K Rebane that our variational energy, -402.261 928 652 266 220 998 au, for the 3S(L = 0) state from table 4 (right-hand column) is wrong in the fourth and fifth decimal digits. Our original variational energies were E(2000) = -402.192 865 226 622 099 583 au and E(3000) = -402.192 865 226 622 099 838 au. Unfortunately, table 4 contains a simple typographic error. The first two digits after the decimal point (26) in the published energies must be removed. Then the results exactly coincide with the original energies. These digits (26) were left in table 4 from the original version, which also included the 2S(L = 0) states of the helium-muonic atoms. A similar typographic error was found in table 4 of another paper by A M Frolov (2001 J. Phys. B: At. Mol. Opt. Phys. 34 3813). The computed ground state energy for the ppµ muonic molecular ion was -0.494 386 820 248 934 546 94 mau. In table 4 of that paper the first figure '8' (fifth digit after the decimal point) was lost from the energy value presented in this table. We wish to thank T K Rebane of the Fock Physical Institute in St Petersburg for pointing out the misprint related to the helium(4)-muonic atom.
Reddi, Krishna; Elgowainy, Amgad; Rustagi, Neha; ...
2017-05-16
Hydrogen fuel cell electric vehicles (HFCEVs) are zero-emission vehicles (ZEVs) that can provide drivers a similar experience to conventional internal combustion engine vehicles (ICEVs) in terms of fueling time and performance (i.e., power and driving range). The Society of Automotive Engineers (SAE) developed fueling protocol J2601 for light-duty HFCEVs to ensure safe vehicle fills while maximizing fueling performance. This study employs a physical model that simulates and compares the fueling performance of two fueling methods, known as the “lookup table” method and the “MC formula” method, within the SAE J2601 protocol. Both fueling methods provide fast fueling of HFCEVs within minutes, but the MC formula method takes advantage of active measurement of precooling temperature to dynamically control the fueling process, and thereby provides faster vehicle fills. The MC formula method greatly reduces fueling time compared to the lookup table method at higher ambient temperatures, as well as when the precooling temperature falls on the colder side of the expected temperature window, for all station types. Although the SAE J2601 lookup table method is the currently implemented standard for refueling hydrogen fuel cell vehicles, the MC formula method provides significant fueling time advantages in certain conditions; these warrant its implementation in future hydrogen refueling stations for better customer satisfaction with the fueling experience of HFCEVs.
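The lookup-table mechanic described above can be sketched generically: interpolate a fueling limit from tabulated ambient conditions. The temperatures and ramp rates below are invented placeholders for illustration, not SAE J2601 values.

```python
import bisect

# Hypothetical table: ambient temperature (deg C) -> allowed average pressure
# ramp rate (MPa/min). These numbers are invented, not SAE J2601 data.
TEMPS = [-10, 0, 10, 20, 30, 40]
RAMP = [28.0, 26.0, 23.0, 19.5, 15.0, 10.5]

def ramp_rate(t_ambient):
    """Linearly interpolate the allowed ramp rate; clamp outside the table."""
    if t_ambient <= TEMPS[0]:
        return RAMP[0]
    if t_ambient >= TEMPS[-1]:
        return RAMP[-1]
    i = bisect.bisect_right(TEMPS, t_ambient) - 1
    f = (t_ambient - TEMPS[i]) / (TEMPS[i + 1] - TEMPS[i])
    return RAMP[i] + f * (RAMP[i + 1] - RAMP[i])

def fill_time_minutes(p_start, p_target, t_ambient):
    """Minutes to ramp from p_start to p_target (MPa) at the table rate."""
    return (p_target - p_start) / ramp_rate(t_ambient)
```

At 5 °C the interpolated limit is 24.5 MPa/min, so a 10 to 70 MPa fill takes about 2.4 minutes under this toy table. The MC formula method replaces such a static table with a control law driven by the actively measured precooling temperature.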
NASA Technical Reports Server (NTRS)
Weatherford, C. A.; Onda, K.; Temkin, A.
1985-01-01
The noniterative partial-differential-equation (PDE) approach to electron-molecule scattering of Onda and Temkin (1983) is modified to account for the effects of exchange explicitly. The exchange equation is reduced to a set of inhomogeneous equations containing no integral terms and solved noniteratively in a difference form; a method for propagating the solution to large values of r is described; the changes in the polarization potential of the original PDE method required by the inclusion of exact static exchange are indicated; and the results of computations for e-N2 scattering in the fixed-nuclei approximation are presented in tables and graphs and compared with previous calculations and experimental data. Better agreement is obtained using the modified PDE method.
Li, Zhilin; Xiao, Li; Cai, Qin; Zhao, Hongkai; Luo, Ray
2016-01-01
In this paper, a new Navier–Stokes solver based on a finite difference approximation is proposed to solve incompressible flows on irregular domains with open, traction, and free boundary conditions, which can be applied to simulations of fluid structure interaction, implicit solvent models for biomolecular applications, and other free boundary or interface problems. For some problems of this type, the projection method and the augmented immersed interface method (IIM) do not work well or do not work at all. The proposed new Navier–Stokes solver is based on the local pressure boundary method and a semi-implicit augmented IIM. A fast Poisson solver can be used in our algorithm, which gives us the potential for developing fast overall solvers in the future. The time discretization is based on a second-order multi-step method. Numerical tests with exact solutions are presented to validate the accuracy of the method. Application to fluid structure interaction between an incompressible fluid and a compressible gas bubble is also presented.
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 104 points within minutes with the use of an average notebook computer.
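As a point of reference for the quantities discussed above, a textbook sample-entropy computation can be sketched as follows. This is the standard definition, not the authors' optimized simultaneous algorithm, though like theirs it uses the full series as the source of templates.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Standard sample entropy: -ln(A/B), where B and A count pairs of
    templates of length m and m+1 that match within r*std(x) under the
    Chebyshev distance, excluding self-matches."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b)
```

A strictly periodic signal yields a value near zero, while white noise of the same length gives a value of roughly two, reflecting the regularity the statistic is designed to measure.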
FastStats: Health of American Indian or Alaska Native Population
Chronic Obstructive Pulmonary Disease (COPD) Includes: Chronic Bronchitis and Emphysema
FastStats: Health of Black or African American non-Hispanic Population
Lupus: When the Body Attacks Itself | NIH MedlinePlus the Magazine
Design of a Holonic Control Architecture for Distributed Sensor Management
2009-09-01
Tracking tasks require only intermittent access to the sensors to maintain a given track quality. The higher the specified quality, the more often... resolution of the sensor (i.e., sensor mode), which can be adjusted to compensate for fast-moving targets tracked over long ranges, or slower-moving... but provides higher data update rates that are beneficial when tracking fast, agile targets (i.e., a fighter). Table A.2 illustrates the dependence of
Evaluation of Visual Alerts in the Maritime Domain. Study 2. Program Modifications
2009-02-01
feedback that they were wrong, and without consulting the Status screen again enter the alternate answer ("qwe"). That is, the need to consult the... Table 3 ("First proposed target types and classification scheme") maps target type to size, speed, weapons, flag, and response: Neutral (large, slow, no weapons, response "QWE"); Hostile (small, fast, armed, response "ASD"); Friendly (large/small, slow...).
Multifractals embedded in short time series: An unbiased estimation of probability moment
NASA Astrophysics Data System (ADS)
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
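The unbiasedness that factorial moments buy can be seen already in the elementary binomial case: the naive plug-in estimate of p^q is biased for q > 1, while the falling-factorial estimator is exactly unbiased. The sketch below illustrates that generic statistical fact; it is not the paper's continuous-order estimator.

```python
import numpy as np

def falling_factorial(n, q):
    """n * (n-1) * ... * (n-q+1), elementwise."""
    out = np.ones_like(n, dtype=float)
    for k in range(q):
        out = out * (n - k)
    return out

def factorial_moment_estimate(counts, N, q):
    """Unbiased estimator of p**q from counts n ~ Binomial(N, p), using
    E[n(n-1)...(n-q+1)] = N(N-1)...(N-q+1) * p**q."""
    counts = np.asarray(counts, dtype=float)
    num = falling_factorial(counts, q).mean()
    den = float(np.prod([N - k for k in range(q)]))
    return num / den

rng = np.random.default_rng(1)
p, N, q = 0.3, 50, 3
counts = rng.binomial(N, p, size=200_000)
naive = np.mean((counts / N) ** q)                  # biased upward for q > 1
unbiased = factorial_moment_estimate(counts, N, q)  # close to p**3 = 0.027
```

The naive estimator picks up positive lower-order contributions (via the Stirling expansion of n^q in falling factorials), which is exactly what the factorial moment removes.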
Bennett, Bradley C; Husby, Chad E
2008-03-28
Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) to from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
Performance of a 100V Half-Bridge MOSFET Driver, Type MIC4103, Over a Wide Temperature Range
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad
2011-01-01
The operation of a high-frequency, high-voltage MOSFET (metal-oxide semiconductor field-effect transistor) driver was investigated over a wide temperature regime that extended beyond its specified range. The Micrel MIC4103 is a 100 V, non-inverting, dual driver that is designed to independently drive both high-side and low-side N-channel MOSFETs. It features fast propagation delay times and can drive a 1000 pF load with 10 ns rise times and 6 ns fall times [1]. The device consumes very little power, has supply under-voltage protection, and is rated for a -40 °C to +125 °C junction temperature range. The floating high-side driver of the chip can sustain boost voltages up to 100 V. Table I shows some of the device manufacturer's specifications.
Natural convection heat transfer in an oscillating vertical cylinder
Khan, Ilyas; Ali Shah, Nehad; Tassaddiq, Asifa; Mustapha, Norzieha; Kechil, Seripah Awang
2018-01-01
This paper studies the heat transfer caused by free convection in a vertically oscillating cylinder. Exact solutions are determined by applying the Laplace and finite Hankel transforms. Expressions for the temperature distribution and velocity field corresponding to cosine and sine oscillations are obtained. The solutions obtained for velocity are presented in the forms of transient and post-transient solutions. Moreover, these solutions satisfy both the governing differential equation and all imposed initial and boundary conditions. Numerical computations and graphical illustrations are used in order to study the effects of the Prandtl and Grashof numbers on velocity and temperature for various times. The transient solutions for both cosine and sine oscillations are also computed in tables. It is found that the transient solutions are of considerable interest up to the times t = 15 for cosine oscillations and t = 1.75 for sine oscillations. After these moments, the transient solutions can be neglected and the fluid moves according to the post-transient solutions.
1986-09-01
analysis methods in environmental samples. The hepatotoxins from laboratory cultures of M. aeruginosa Strain 7820, Anabaena flos-aquae (A. flos-... flos-aquae S-23-g-1 (8 µg)... The results from the amino acid analysis using the Liqui-Mat Analyzer are listed in Table 2. The elution times of the... Runnegar, M.T.C., and Huynh, V.L. Effectiveness of Activated Carbon in the Removal of Algal Toxin from Potable Water Supplies: A Pilot Plant
Methods for performing fast discrete curvelet transforms of data
Candes, Emmanuel; Donoho, David; Demanet, Laurent
2010-11-23
Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n² log n) flops for n by n Cartesian arrays or about O(N log N) flops for Cartesian arrays of size N = n³; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.
Isaac Newton and the astronomical refraction.
Lehn, Waldemar H
2008-12-01
In a short interval toward the end of 1694, Isaac Newton developed two mathematical models for the theory of the astronomical refraction and calculated two refraction tables, but did not publish his theory. Much effort has been expended, starting with Biot in 1836, in the attempt to identify the methods and equations that Newton used. In contrast to previous work, a closed form solution is identified for the refraction integral that reproduces the table for his first model (in which density decays linearly with elevation). The parameters of his second model, which includes the exponential variation of pressure in an isothermal atmosphere, have also been identified by reproducing his results. The implication is clear that in each case Newton had derived exactly the correct equations for the astronomical refraction; furthermore, he was the first to do so.
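A small numerical aside makes clear why Newton's density model matters at all: in the plane-parallel limit, Snell's invariant telescopes, and the total bending depends only on the ground index, not on the profile. The sketch below uses toy layer profiles, not Newton's tables.

```python
import math

def bending_through_layers(n_profile, z_apparent):
    """Trace a ray upward through plane-parallel layers with refractive
    indices n_profile (ground first, last entry 1.0 for vacuum), applying
    Snell's law at each interface; return the total bending in radians."""
    z = z_apparent
    for n1, n2 in zip(n_profile, n_profile[1:]):
        z = math.asin(n1 * math.sin(z) / n2)
    return z - z_apparent

n0 = 1.000293                      # index of air at the ground (illustrative)
# Two different vertical profiles between the same endpoints n0 and 1.0:
linear = [n0 - (n0 - 1.0) * k / 10 for k in range(11)]
expo = [1.0 + (n0 - 1.0) * math.exp(-k / 3.0) for k in range(10)] + [1.0]
z45 = math.radians(45.0)
```

Both profiles return exactly asin(n0 sin z) - z, which is approximately (n0 - 1) tan z. Linear and exponential density models only produce different refraction tables once the sphericity of the atmosphere is included, which is precisely the harder problem Newton's two models addressed.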
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambroise, J.; Salerno, M.; Kevrekidis, P. G.
2015-11-19
The existence of multidimensional lattice compactons in the discrete nonlinear Schrödinger equation in the presence of fast periodic time modulations of the nonlinearity is demonstrated. By averaging over the period of the fast modulations, an effective averaged dynamical equation arises with coupling constants involving Bessel functions of the first and zeroth kinds. We show that these terms allow one to solve, at this averaged level, for exact discrete compacton solution configurations in the corresponding stationary equation. We focus on seven types of compacton solutions. Single-site and vortex solutions are found to be always stable in the parametric regimes we examined. Other solutions, such as double-site in- and out-of-phase, four-site symmetric and antisymmetric, and five-site compacton solutions, have regions of stability and instability in two-dimensional parametric planes, involving variations of the strength of the coupling and of the nonlinearity. We also explore the time evolution of the solutions and compare the dynamics according to the averaged equations with those of the original dynamical system. Finally, the possible observation of compactons in Bose-Einstein condensates loaded in a deep two-dimensional optical lattice with interactions modulated periodically in time is also discussed.
Radiocarbon constraints on the glacial ocean circulation and its impact on atmospheric CO2
Skinner, L. C.; Primeau, F.; Freeman, E.; de la Fuente, M.; Goodwin, P. A.; Gottschalk, J.; Huang, E.; McCave, I. N.; Noble, T. L.; Scrivner, A. E.
2017-01-01
While the ocean’s large-scale overturning circulation is thought to have been significantly different under the climatic conditions of the Last Glacial Maximum (LGM), the exact nature of the glacial circulation and its implications for global carbon cycling continue to be debated. Here we use a global array of ocean–atmosphere radiocarbon disequilibrium estimates to demonstrate a ∼689 ± 53 ¹⁴C-yr increase in the average residence time of carbon in the deep ocean at the LGM. A predominantly southern-sourced abyssal overturning limb that was more isolated from its shallower northern counterparts is interpreted to have extended from the Southern Ocean, producing a widespread radiocarbon age maximum at mid-depths and depriving the deep ocean of a fast escape route for accumulating respired carbon. While the exact magnitude of the resulting carbon cycle impacts remains to be confirmed, the radiocarbon data suggest an increase in the efficiency of the biological carbon pump that could have accounted for as much as half of the glacial–interglacial CO2 change.
Strategic Insights. Volume 10, Issue 1, Spring 2011
2011-01-01
stated, Chinn and Frankel put the exact year as 1872. Table 1 shows that the United States had the largest economy and the fastest rate of growth... The system attempted to lower trade barriers by reconciling exchange rate stability and domestic economic autonomy by creating an explicit code of...
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
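The PCA acceleration mentioned above exploits the low effective dimensionality of spectral calculations. The sketch below illustrates the generic idea on synthetic data (not EPIC radiances): many correlated spectra compress to a handful of principal components with negligible reconstruction error, so the expensive downstream calculation can be run per component rather than line by line.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for line-by-line spectra: many wavelengths but only a
# few underlying degrees of freedom, plus a little noise.
basis = rng.standard_normal((4, 600))
coeffs = rng.standard_normal((500, 4))
spectra = coeffs @ basis + 1e-3 * rng.standard_normal((500, 600))

mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 4
scores = U[:, :k] * s[:k]          # compact representation (500 x 4)
approx = scores @ Vt[:k] + mean    # reconstruction from k components

rel_err = np.linalg.norm(approx - spectra) / np.linalg.norm(spectra)
```

Here 500 spectra of 600 points each are reproduced to well under 1% relative error from only 4 scores apiece, which is the kind of compression that makes the reported 360x speedup plausible.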
A fast rebinning algorithm for 3D positron emission tomography using John's equation
NASA Astrophysics Data System (ADS)
Defrise, Michel; Liu, Xuan
1999-08-01
Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.
Local Equilibrium and Retardation Revisited.
Hansen, Scott K; Vesselinov, Velimir V
2018-01-01
In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger, nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be, even when mobile and immobile plumes are continually near local equilibrium. We note that classic "LEA" (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion relative to hydrodynamic dispersion, rather than for local equilibrium.
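The particle-tracking picture in the abstract is easy to reproduce in miniature. In the sketch below (rates and velocity are invented for illustration), the plume mean tracks the retarded prediction v*T/R, yet mass transfer alone spreads the plume, which is exactly the dispersion the retarded ADE discards.

```python
import numpy as np

rng = np.random.default_rng(0)
v = 1.0                  # mobile-phase velocity (arbitrary units)
k_mi, k_im = 0.5, 0.5    # mobile->immobile and immobile->mobile rates
R = 1.0 + k_mi / k_im    # retardation factor (here R = 2)
T, dt, n = 20.0, 0.01, 5000

mobile = np.ones(n, dtype=bool)  # every particle starts mobile
x = np.zeros(n)
for _ in range(int(T / dt)):
    x[mobile] += v * dt                             # only mobile particles advect
    to_immobile = mobile & (rng.random(n) < k_mi * dt)
    to_mobile = ~mobile & (rng.random(n) < k_im * dt)
    mobile ^= to_immobile | to_mobile               # toggle switching particles

mean_x = x.mean()   # near the retarded-ADE prediction v*T/R = 10
spread = x.std()    # nonzero despite zero hydrodynamic dispersion
```

Because every particle starts mobile there is a short transient, so the mean lands slightly ahead of v*T/R = 10; the standard deviation of roughly 3 arises purely from MIMT, with no hydrodynamic dispersion in the model at all.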
Thermographic measurements of high-speed metal cutting
NASA Astrophysics Data System (ADS)
Mueller, Bernhard; Renz, Ulrich
2002-03-01
Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up has been realized which enables small cutting lengths. Only single images have been recorded, because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera that was used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns has been realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained by a special close-up lens allowing a resolution of approximately 45 µm. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.
[Fast identification of constituents of Lagotis brevituba by using UPLC-Q-TOF-MS/MS method].
Xie, Jing; Zhang, Li; Zeng, Jin-Xiang; Li, Min; Wang, Juan; Xie, Xiong-Xiong; Zhong, Guo-Yue; Luo, Guang-Ming; Yuan, Jin-Bin; Liang, Jian
2017-06-01
The chemical constituents of Lagotis brevituba were rapidly determined and analyzed by using an ultra-performance liquid chromatography tandem quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS/MS) method, providing a material basis for the clinical application of L. brevituba. The separation was performed on a UPLC YMC-Triart C₁₈ (2.1 mm×100 mm, 1.9 μm) column, with acetonitrile-water containing 0.2% formic acid as the mobile phase for gradient elution. The flow rate was 0.4 mL·min⁻¹, the column temperature was 40 °C, and the injection volume was 2 μL. An ESI ion source was used and data were collected in negative ion mode. The chemical components of L. brevituba were identified through retention time, exact relative molecular mass, MS/MS cleavage fragments, and reported data. The results showed that a total of 22 compounds were identified, including 11 flavones, 6 phenylethanoid glycosides, 1 iridoid glucoside, and 4 organic acids. The UPLC-Q-TOF-MS/MS method can rapidly identify the chemical components of L. brevituba, providing valuable information for its clinical application.
Numerical Solution of the Electron Transport Equation in the Upper Atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Mark Christopher; Holmes, Mark; Sailor, William C
A new approach for solving the electron transport equation in the upper atmosphere is derived. The problem is a very stiff boundary value problem, and to obtain an accurate numerical solution, matrix factorizations are used to decouple the fast and slow modes. A stable finite difference method is applied to each mode. This solver is applied to a simplified problem for which an exact solution exists, using various versions of the boundary conditions that might arise in a natural auroral display. The numerical and exact solutions are found to agree with each other to at least two significant digits.
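The core decoupling idea — diagonalize once, then advance each mode on its own — can be sketched for a diagonalizable constant-coefficient test problem u′ = Au. This is an illustration of the principle only; the paper's solver applies matrix factorizations and a stable finite difference method to a stiff boundary value problem, not the exact exponential used here.

```python
import numpy as np

def decoupled_modes(A, u0, t):
    """Diagonalize A to decouple the fast and slow modes of u' = A u,
    then advance each scalar mode exactly in the eigenbasis.
    Works for any diagonalizable constant matrix A (sketch only)."""
    lam, V = np.linalg.eig(A)          # modal rates and mode shapes
    c = np.linalg.solve(V, u0)         # coordinates of u0 in the eigenbasis
    return (V @ (c * np.exp(lam * t))).real   # recombine the evolved modes
```

For a stiff matrix such as diag(-1000, -1), the fast mode decays on a 1/1000 time scale while the slow mode is advanced without the tiny time step an explicit coupled scheme would need.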
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
Modeling high-order synchronization epochs and transitions in the cardiovascular system
NASA Astrophysics Data System (ADS)
García-Álvarez, David; Bahraminasab, Alireza; Stefanovska, Aneta; McClintock, Peter V. E.
2007-12-01
We study a system consisting of two coupled phase oscillators in the presence of noise. This system is used as a model for the cardiorespiratory interaction in wakefulness and anaesthesia. We show that long-range correlated noise produces transitions between epochs with different n:m synchronisation ratios, as observed in the cardiovascular system. We also see that the smaller the noise (especially that acting on the slower oscillator), the longer the synchronisation time, exactly as happens in anaesthesia compared with wakefulness. The dependence of the synchronisation time on the couplings, in the presence of noise, is studied; such dependence is softened by low-frequency noise. We show that the coupling from the slow oscillator to the fast one (respiration to heart) plays the more important role in synchronisation. Finally, we find that the isolines of equal synchronisation time appear to be a linear combination of the two couplings.
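A minimal sketch of this kind of model (with illustrative parameters and a generic n:m coupling term, not the paper's exact equations or noise spectrum): two noisy phase oscillators integrated by the Euler-Maruyama method, where n:m synchronisation shows up as a bounded generalized phase difference ψ = nφ₁ − mφ₂.

```python
import numpy as np

def simulate_nm_sync(omega1=1.0, omega2=3.1, eps1=0.3, eps2=0.3, D=0.01,
                     n=3, m=1, dt=1e-3, steps=50_000, seed=0):
    """Euler-Maruyama integration of two coupled noisy phase oscillators.
    Returns the generalized phase difference psi = n*phi1 - m*phi2;
    psi staying bounded indicates n:m phase synchronisation."""
    rng = np.random.default_rng(seed)
    phi1, phi2 = 0.0, 0.0
    psi = np.empty(steps)
    s = np.sqrt(2.0 * D * dt)                  # white-noise increment scale
    for k in range(steps):
        coupling = np.sin(n * phi1 - m * phi2)
        phi1 += (omega1 - eps1 * coupling) * dt + s * rng.standard_normal()
        phi2 += (omega2 + eps2 * coupling) * dt + s * rng.standard_normal()
        psi[k] = n * phi1 - m * phi2
    return psi
```

With ω₂ ≈ 3ω₁ and n:m = 3:1, the detuning |nω₁ − mω₂| is smaller than the effective coupling strength, so ψ settles into a noisy plateau instead of drifting.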
Plasticity of adipose tissue in response to fasting and refeeding in male mice.
Tang, Hao-Neng; Tang, Chen-Yi; Man, Xiao-Fei; Tan, Shu-Wen; Guo, Yue; Tang, Jun; Zhou, Ci-La; Zhou, Hou-De
2017-01-01
Fasting is the most widely prescribed and self-imposed strategy for treating excessive weight gain and obesity, and has been shown to exert a number of beneficial effects. The aim of the present study was to determine the exact role of fasting and subsequent refeeding in fat distribution in mice. C57/BL6 mice were fasted for 24 to 72 h and then refed for 72 h. At 24, 48 and 72 h of fasting, and 12, 24, 48 and 72 h of refeeding, the mice were sacrificed, and serum and various adipose tissues were collected. Serum biochemical parameters and adipose tissue masses were determined, and histomorphological analysis of the different depots was performed. mRNA was isolated from various adipose tissues, and the expression of thermogenesis-, visceral signature- and lipid metabolism-related genes was examined. The phenotypes of adipose tissues of juvenile and adult mice subjected to fasting and refeeding were also compared. Fasting preferentially consumed mesenteric fat mass and decreased the cell size of mesenteric depots; on refeeding, however, the mass and morphology of inguinal adipose tissues recovered preferentially compared with visceral depots. Thermogenesis-related gene expression in the inguinal WAT and interscapular BAT was suppressed. Mitochondrial biogenesis was affected by fasting in a depot-specific manner. Furthermore, a short period of fasting led to an increase in visceral signature genes (Wt1, Tcf21) in subcutaneous adipose tissue, while the expression of these genes decreased sharply as the fasting time increased. Additionally, lipogenesis-related markers were enhanced to a greater extent in subcutaneous depots than in visceral adipose tissues by refeeding. Although similar phenotypic changes in adipose tissue were observed in juvenile and adult mice subjected to fasting and refeeding, the alterations appeared earlier and more sensitively in juvenile mice.
Fasting preferentially consumes lipids in visceral adipose tissues, whereas refeeding recovers lipids predominantly in subcutaneous adipose tissues, which indicated the significance of plasticity of adipose organs for fat distribution when subject to food deprivation or refeeding.
Fast, Exact Bootstrap Principal Component Analysis for p > 1 million
Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim
2015-01-01
Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
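The central trick — every bootstrap sample occupies the original n-dimensional column space, so only n×n SVDs are needed per resample — can be sketched as follows. This is a simplified illustration that ignores re-centering of each resample; variable names are ours, not the authors'.

```python
import numpy as np

def fast_bootstrap_pca(X, n_boot=200, n_comp=3, seed=0):
    """X is p x n (measurements x subjects), assumed column-centered.
    A bootstrap sample X[:, idx] lies in the column space of X, so its
    principal components are U times the left singular vectors of an
    n x n coordinate matrix -- no p-dimensional SVD per resample."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    U, d, Vt = np.linalg.svd(X, full_matrices=False)   # one p x n SVD
    DVt = d[:, None] * Vt                              # n x n coordinates
    comps = np.empty((n_boot, p, n_comp))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample subjects
        Ub, db, _ = np.linalg.svd(DVt[:, idx], full_matrices=False)
        comps[b] = U @ Ub[:, :n_comp]                  # lift back to p dims
    return comps
```

The per-resample cost is O(n³) regardless of p; only the final lift back to p dimensions (which can be deferred, as the abstract notes) touches the large dimension.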
Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha
2012-11-01
Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.
Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa
2012-11-01
Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with the radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using Open Computing Language (OpenCL) in a parallel computing ecosystem consisting of CPU and general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n), and the x-ray fluence is the weighted sum of the exponentials of the line integrals for all energy bins with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray/white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent DRRs.
A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, in comparison to 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10⁻⁶, i.e., virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and the subsequent digitally reconstructed radiographs (DRRs), respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with a parallel computing, GPGPU/OpenCL-based implementation. The computation time was fast enough (0.3 s per projection image) for real-time IGRT (image-guided radiotherapy) applications.
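The polyenergetic projection model described above — a weighted sum over energy bins of the exponential of the per-bin line integral, with optional Poisson noise — can be sketched for a single ray through segmented voxels. The attenuation values below are placeholders, not the NIST data, and the real implementation traces 3-D rays with the Siddon method rather than stepping a tissue list.

```python
import numpy as np

# Placeholder mass-attenuation coefficients (cm^2/g) for two energy bins;
# the actual lookup table holds NIST values from 1 keV to 20 MeV.
MU_RHO = {"air":  [0.15, 0.05],
          "soft": [0.20, 0.07],
          "bone": [0.35, 0.09]}
DENSITY = {"air": 0.0012, "soft": 1.0, "bone": 1.85}   # g/cm^3, illustrative

def polyenergetic_fluence(ray_tissues, step_cm, weights, i0=1e4, rng=None):
    """Detector counts for one ray: sum over energy bins n of
    w(n) * exp(-line integral at E(n)), with optional Poisson noise."""
    fluence = 0.0
    for n, w in enumerate(weights):
        integral = sum(MU_RHO[t][n] * DENSITY[t] * step_cm
                       for t in ray_tissues)            # Siddon stands in here
        fluence += w * np.exp(-integral)
    counts = i0 * fluence
    if rng is not None:
        counts = rng.poisson(counts)                    # photon-counting noise
    return counts
```

A ray through bone attenuates more than one through soft tissue, so its detector count is lower, which is the contrast the DRR encodes.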
NASA Astrophysics Data System (ADS)
Anisimov, D. N.; Dang, Thai Son; Banerjee, Santo; Mai, The Anh
2017-07-01
In this paper, an intelligent system using a fuzzy-PD controller based on relation models is developed for a two-wheeled self-balancing robot. The scaling factors of the fuzzy-PD controller are optimized by a cross-entropy optimization method. A Linear Quadratic Regulator (LQR) is designed for comparison with the fuzzy-PD controller in terms of control quality parameters. The controllers are ported to and run on an STM32F4 Discovery Kit under a real-time operating system. The experimental results indicate that the proposed fuzzy-PD controller runs accurately on the embedded system and achieves the desired performance in terms of fast response, good balance, and stability.
An Epoch of Reionization simulation pipeline based on BEARS
NASA Astrophysics Data System (ADS)
Krause, Fabian; Thomas, Rajat M.; Zaroubi, Saleem; Abdalla, Filipe B.
2018-10-01
The quest to unlock the mysteries of the Epoch of Reionization (EoR) is well poised, with many experiments at diverse wavelengths beginning to gather data. Despite these efforts, we remain uncertain about the various factors that influence the EoR, including the nature of the sources, their spectral characteristics (blackbody temperatures, power-law indices), clustering properties, efficiency, duty cycle, etc. Given these physical uncertainties that define the EoR, we need fast and efficient computational methods to model and analyze the data in order to provide confidence bounds on the parameters that influence the brightness temperature at 21 cm. Towards this goal we developed a pipeline that combines dark matter-only N-body simulations with exact 1-dimensional radiative transfer computations to approximate exact 3-dimensional radiative transfer. Because these simulations are about two to three orders of magnitude faster than the exact 3-dimensional methods, they can be used to explore the parameter space of the EoR systematically. A fast scheme like this pipeline could be incorporated into a Bayesian framework for parameter estimation. In this paper we detail the construction of the pipeline and describe how to use the software, which is being made publicly available. We show the results of running the pipeline for four test cases of sources with various spectral energy distributions and compare their outputs using various statistics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gus’kov, S. Yu., E-mail: guskov@sci.lebedev.ru; Nicolai, Ph.; Ribeyre, X.
2015-09-15
An exact analytic solution is found for the steady-state distribution function of fast electrons with an arbitrary initial spectrum irradiating a planar low-Z plasma with an arbitrary density distribution. The solution is applied to study the heating of a material by fast electrons of different spectra such as a monoenergetic spectrum, a step-like distribution in a given energy range, and a Maxwellian spectrum, which is inherent in laser-produced fast electrons. The heating of shock- and fast-ignited precompressed inertial confinement fusion (ICF) targets, as well as the heating of a target designed to generate a Gbar shock wave for equation of state (EOS) experiments by laser-produced fast electrons with a Maxwellian spectrum, is investigated. A relation is established between the energies of two groups of Maxwellian fast electrons, which are responsible for generation of a shock wave and for heating the upstream material (preheating). The minimum energy of the fast- and shock-igniting beams, as well as of the beam for Gbar shock wave generation, increases with the spectral width of the electron distribution.
Imbelloni, Luiz Eduardo; Pombo, Illova Anaya Nasiane; Filho, Geraldo Borges de Morais
2015-01-01
Patient satisfaction is a standard indicator of care quality. The aim of this study was to evaluate whether preoperative oral ingestion of 200 mL of a carbohydrate drink can improve comfort and satisfaction with anesthesia in elderly patients with hip fracture. Prospective randomized clinical trial conducted in a Brazilian public hospital with ASA I-III patients undergoing surgery for hip fracture. The control group (NPO) received nothing by mouth after 9:00 p.m. the night before, while patients in the experimental group (CHO) received 200 mL of a carbohydrate drink 2-4 hours before the operation. Patients' characteristics, subjective perceptions, thirst, hunger, and satisfaction were determined in four steps. The Mann-Whitney U-test and Fisher's exact test were used for comparison of the control and experimental groups. A p-value <0.05 was considered significant. A total of 100 patients were included in one of the two regimens of preoperative fasting. Fasting time decreased significantly in the study group. Patients drank 200 mL 2:59 h before surgery and reported no hunger (p <0.00) and no thirst on arrival in the OR (p <0.00), resulting in increased satisfaction with the perioperative anesthesia care (p <0.00). The satisfaction questionnaire for surgical patients could become a useful tool in assessing the quality of care. In conclusion, CHO significantly reduces preoperative discomfort and increases satisfaction with anesthesia care. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Publicado por Elsevier Editora Ltda. All rights reserved.
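For reference, the two-sided Fisher exact test used here for group comparisons can be computed exactly from the hypergeometric distribution; a minimal sketch for a 2×2 table (our own illustration, not the study's statistical software):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p(x):  # P(X = x) under the hypergeometric null with fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # small tolerance guards against float ties at exactly p_obs
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))
```

Because the null distribution is enumerated exactly, no large-sample approximation is involved, which matters for the small cell counts typical of clinical tables.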
NASA Astrophysics Data System (ADS)
Gupta, S. R. D.; Gupta, Santanu D.
1991-10-01
The flow of laser radiation in a plane-parallel cylindrical slab of active amplifying medium with axial symmetry is treated as a problem in radiative transfer. The appropriate one-dimensional transfer equation describing the transfer of laser radiation has been derived by an appeal to Einstein's A, B coefficients (describing the processes of stimulated line absorption, spontaneous line emission, and stimulated line emission sustained by population inversion in the medium) and by considering the 'rate equations' to completely establish the rationale of the transfer equation obtained. The equation is then exactly solved and the angular distribution of the emergent laser beam intensity is obtained; its numerically computed values are given in tables and plotted in graphs showing the nature of the peaks of the emerging laser beam intensity about the axis of the laser cylinder.
Numerical simulation of KdV equation by finite difference method
NASA Astrophysics Data System (ADS)
Yokus, A.; Bulut, H.
2018-05-01
In this study, numerical solutions to the KdV equation with dual-power nonlinearity are obtained by using the finite difference method. The discretized equation is presented in the form of finite difference operators. The numerical solutions are validated against the analytical solution to the KdV equation with dual-power nonlinearity available in the literature. Through the Fourier-von Neumann technique and a linear stability analysis, we show that the FDM is stable. The accuracy of the method is analyzed via the L₂ and L∞ norm errors. The numerical and exact approximations and the absolute errors are presented in tables. We compare the numerical solutions with the exact solutions, and this comparison is supported with graphic plots. Under a suitable choice of parameter values, the 2D and 3D surfaces for the analytical solution used are plotted.
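As an illustration of the finite-difference approach (not the authors' scheme: this sketch uses the standard KdV form u_t + 6uu_x + u_xxx = 0 with forward-Euler time stepping and periodic boundaries, which is only usable for short runs with a very small time step):

```python
import numpy as np

def kdv_step(u, dx, dt):
    """One explicit finite-difference step for u_t + 6 u u_x + u_xxx = 0
    on a periodic grid, using central differences for u_x and u_xxx."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)             # central u_x
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)   # central u_xxx
    return u - dt * (6 * u * ux + uxxx)
```

A useful sanity check: with periodic boundaries these central stencils conserve the discrete mass sum(u) to round-off, mirroring the conservation law ∂t ∫u dx = 0 of the continuous equation.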
Alfven wave cyclotron resonance heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, R.B.; Yosikawa, S.; Oberman, C.
1981-02-01
The resonance absorption of fast Alfven waves at the proton cyclotron resonance of a predominantly deuterium plasma is investigated. An approximate dispersion relation is derived, valid in the vicinity of the resonance, which permits an exact calculation of transmission and reflection coefficients. For reasonable plasma parameters, significant linear resonance absorption is found.
aMCfast: automation of fast NLO computations for PDF fits
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Frederix, Rikkert; Frixione, Stefano; Rojo, Juan; Sutton, Mark
2014-08-01
We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading-order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is that it is fully automated as well, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix-element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.
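The look-up-table idea behind an APPLgrid-style fast interface can be caricatured in one dimension: the expensive weights are computed once, and each PDF-fit iteration then reduces to a cheap sum. This is a toy sketch of the design, not the APPLgrid format; real grids are multi-dimensional in the momentum fractions and the factorization/renormalization scales.

```python
import numpy as np

# Interpolation nodes in the momentum fraction x (illustrative grid).
X_NODES = np.linspace(0.01, 0.99, 50)

def fill_grid(weight_fn):
    """Done once: tabulate the expensive perturbative weights at the nodes.
    In reality this is the NLO computation; here weight_fn is a stand-in."""
    return weight_fn(X_NODES)

def convolve(grid, pdf):
    """Done per fit iteration: the cross section collapses to a fast
    weighted sum of PDF values at the stored nodes."""
    return float(np.sum(grid * pdf(X_NODES)))
```

The point of the design is that `fill_grid` runs once per process, while `convolve` runs thousands of times as the PDF parameters are varied during the fit.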
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
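The exact null distribution of the pairwise rank sum difference can be obtained by convolving the per-block difference distribution n times; the brute-force sketch below (our own illustration, not the authors' combinatorial algorithm, and practical only for small k) makes the structure explicit.

```python
from itertools import permutations
from collections import Counter
from fractions import Fraction

def exact_diff_pmf(k, n):
    """Exact null pmf of D = R_A - R_B, the rank-sum difference of two
    treatments in a Friedman layout with k treatments and n blocks.
    Blocks rank independently, so the pmf is an n-fold convolution of
    the single-block difference distribution."""
    # per-block distribution of r_A - r_B over all k! equally likely rankings
    per = Counter(p[0] - p[1] for p in permutations(range(1, k + 1)))
    total = sum(per.values())
    per = {d: Fraction(c, total) for d, c in per.items()}
    pmf = {0: Fraction(1)}
    for _ in range(n):                       # convolve once per block
        nxt = Counter()
        for d1, p1 in pmf.items():
            for d2, p2 in per.items():
                nxt[d1 + d2] += p1 * p2
        pmf = dict(nxt)
    return pmf
```

An exact two-sided p-value for an observed difference d is then the sum of pmf values at points with |D| ≥ |d|, with no normal or chi-squared approximation in the tail.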
The generation and propagation of internal gravity waves in a rotating fluid
NASA Technical Reports Server (NTRS)
Maxworthy, T.; Chabert Dhieres, G.; Didelle, H.
1984-01-01
The present investigation is concerned with an extension of a study conducted by Maxworthy (1979) on internal wave generation by barotropic tidal flow over bottom topography. A short series of experiments was carried out during a limited time period on a large (14-m diameter) rotating table. The aim was to obtain, in particular, information regarding the planform of the waves, the exact character of the flow over the obstacle, and the evolution of the waves. The main basin was a dammed section of a long free-surface water tunnel. The obstacle was towed back and forth by a wire harness connected to an electronically controlled hydraulic piston, the stroke and period of which could be independently varied. Attention is given to the evolution of the wave crests, the formation of solitary wave groups, the evolution of the three-dimensional wave field, wave shapes, wave amplitudes, and particle motion.
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
The results of three empirical experiments are presented and discussed: (1) the load balancer gives a 1.5-fold wall-time improvement compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with a cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
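The population summary-statistic model under MapReduce amounts to combining per-node partial sums rather than moving raw data; a minimal sketch of the pattern (our own illustration of the paradigm, not the HadoopBase-MIP code):

```python
def map_stats(chunk):
    """Map phase, run where the data lives: emit a small partial record
    (count, sum, sum of squares) instead of the raw values."""
    n = len(chunk)
    return n, sum(chunk), sum(v * v for v in chunk)

def reduce_stats(parts):
    """Reduce phase: combine the partial records into the population
    mean and (biased) variance without ever seeing the raw data."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    var = ss / n - mean * mean
    return mean, var
```

Because each mapper emits only three numbers per chunk, the shuffle traffic is independent of the image sizes, which is what makes the data-colocation design pay off.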
Design of a self-adaptive fuzzy PID controller for piezoelectric ceramics micro-displacement system
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Zhong, Yuning; Xu, Zhongbao
2008-12-01
In order to improve the control precision of a piezoelectric ceramics (PZT) micro-displacement system, a self-adaptive fuzzy Proportional-Integral-Derivative (PID) controller is designed by combining the traditional digital PID controller with fuzzy control. The algorithm builds a fuzzy control rule table from the fuzzy control rules and fuzzy reasoning; through this table, the PID parameters can be adjusted online in real-time control. Furthermore, the control mode is selected automatically according to the magnitude of the error. The controller combines the good dynamic capability of fuzzy control with the high steady-state precision of PID control by using fuzzy control and PID control in different segments of time. In the initial and middle stages of the transition process of the system, that is, when the error is larger than a threshold value, fuzzy control is used to adjust the control variable, making full use of the fast response of fuzzy control. When the error is smaller than the threshold and the system is approaching steady state, PID control is adopted to eliminate the static error. The problems of PZT in the field of precise positioning are thereby overcome. The experimental results show that the scheme is correct and practicable.
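The switching scheme described above can be sketched as follows, with a plain proportional action standing in for the fuzzy rule table in the large-error regime (all gains and the threshold below are illustrative, not the paper's values):

```python
class SegmentedController:
    """Sketch of the segmented scheme: a coarse, fast action for large
    errors (stand-in for the fuzzy rule table), PID near steady state
    to eliminate the static error."""

    def __init__(self, kp=2.0, ki=0.5, kd=0.1, threshold=0.2, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold, self.dt = threshold, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        if abs(err) > self.threshold:
            # large-error regime: fast coarse correction; integrator is
            # held to avoid windup during the transition
            u = 3.0 * err
        else:
            # near steady state: PID removes the residual static error
            self.integral += err * self.dt
            u = (self.kp * err + self.ki * self.integral
                 + self.kd * (err - self.prev_err) / self.dt)
        self.prev_err = err
        return u
```

Driving a simple integrator plant with this controller shows the intended behavior: a fast approach while the error is large, then a smooth PID settle with no residual offset.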
Measuring Distances to Remote Galaxies and Quasars.
ERIC Educational Resources Information Center
McCarthy, Patrick J.
1988-01-01
Describes the use of spectroscopy and the redshift to measure how far away an object is by measuring how fast it is receding from Earth. Lists the most distant quasars yet found. Tables include "Redshift vs. Distance" and "Distances to Celestial Objects for Various Cosmologies." (CW)
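The redshift-to-distance chain the article teaches can be illustrated with Hubble's law in the low-redshift approximation; the Hubble constant used here is an assumed round number, and the approximation breaks down for the distant quasars the article lists.

```python
# Measured redshift -> recession velocity (low-z approximation v = c*z)
# -> distance via Hubble's law d = v / H0.
C_KM_S = 299792.458          # speed of light, km/s
H0 = 70.0                    # Hubble constant, km/s per Mpc (assumed value)

def distance_mpc(z):
    """Low-redshift approximation; not valid for high-z quasars."""
    v = C_KM_S * z           # recession velocity in km/s
    return v / H0            # distance in megaparsecs

print(round(distance_mpc(0.01)))   # a nearby galaxy -> prints 43
```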
Gurdak, Jason J.; Walvoord, Michelle Ann; McMahon, Peter B.
2008-01-01
Aquifer susceptibility to contamination is controlled in part by the inherent hydrogeologic properties of the vadose zone, which includes preferential-flow pathways. The purpose of this study was to investigate the importance of seasonal ponding near leaky irrigation wells as a mechanism for depression-focused preferential flow and enhanced chemical migration through the vadose zone of the High Plains aquifer. Such a mechanism may help explain the widespread presence of agrichemicals in recently recharged groundwater despite estimates of advective chemical transit times through the vadose zone from diffuse recharge that exceed the historical period of agriculture. Using a combination of field observations, vadose zone flow and transport simulations, and probabilistic neural network modeling, we demonstrated that vadose zone transit times near irrigation wells range from 7 to 50 yr, which are one to two orders of magnitude faster than previous estimates based on diffuse recharge. These findings support the concept of fast and slow transport zones and help to explain the previous discordant findings of long vadose zone transit times and the presence of agrichemicals at the water table. Using predictions of aquifer susceptibility from probabilistic neural network models, we delineated approximately 20% of the areal extent of the aquifer to have conditions that may promote advective chemical transit times to the water table of <50 yr if seasonal ponding and depression-focused flow exist. This aquifer-susceptibility map may help managers prioritize areas for groundwater monitoring or implementation of best management practices.
Efficient algorithms for a class of partitioning problems
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.
1990-01-01
The problem of optimally partitioning the modules of chain- or tree-like tasks over chain-structured or host-satellite multiple computer systems is addressed. This important class of problems includes many signal processing and industrial control applications. Prior research has resulted in a succession of faster exact and approximate algorithms for these problems. Polynomial exact and approximate algorithms are described for this class that are better than any of the previously reported algorithms. The approach is based on a preprocessing step that condenses the given chain- or tree-structured task into a monotonic chain or tree. The partitioning of this monotonic task can then be carried out using fast search techniques.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
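The layer-wise idea above (classify points into depth grids, then one FFT-based diffraction calculation per grid) can be sketched as follows. The angular-spectrum propagator, wavelength, pixel pitch, and depths are generic assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` with one FFT/IFFT pair."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def cgh_from_points(points, n=256, wavelength=633e-9, pitch=8e-6):
    """points: (ix, iy, depth) triples; one propagation per depth layer."""
    layers = {}
    for ix, iy, depth in points:          # classify points into depth grids
        layers.setdefault(depth, np.zeros((n, n), complex))[iy, ix] = 1.0
    hologram = np.zeros((n, n), complex)
    for depth, layer in layers.items():   # one diffraction calc per grid
        hologram += angular_spectrum(layer, wavelength, pitch, depth)
    return hologram

# Three points but only two depth layers -> only two FFT propagations.
pts = [(100, 120, 0.05), (130, 140, 0.05), (60, 60, 0.10)]
h = cgh_from_points(pts)
```

The saving is exactly the one the abstract describes: the cost scales with the number of occupied depth layers rather than with the number of points.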
Exact geodesic distances in FLRW spacetimes
NASA Astrophysics Data System (ADS)
Cunningham, William J.; Rideout, David; Halverson, James; Krioukov, Dmitri
2017-11-01
Geodesics are used in a wide array of applications in cosmology and astrophysics. However, it is not a trivial task to efficiently calculate exact geodesic distances in an arbitrary spacetime. We show that in spatially flat (3 +1 )-dimensional Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes, it is possible to integrate the second-order geodesic differential equations, and derive a general method for finding both timelike and spacelike distances given initial-value or boundary-value constraints. In flat spacetimes with either dark energy or matter, whether dust, radiation, or a stiff fluid, we find an exact closed-form solution for geodesic distances. In spacetimes with a mixture of dark energy and matter, including spacetimes used to model our physical universe, there exists no closed-form solution, but we provide a fast numerical method to compute geodesics. A general method is also described for determining the geodesic connectedness of an FLRW manifold, provided only its scale factor.
Detailed noise statistics for an optically preamplified direct detection receiver
NASA Astrophysics Data System (ADS)
Danielsen, Soeren Lykke; Mikkelsen, Benny; Durhuus, Terji; Joergensen, Carsten; Stubkjaer, Kristian E.
We describe the exact statistics of an optically preamplified direct-detection receiver by means of the moment generating function (MGF). The theory allows an arbitrarily shaped electrical filter in the receiver circuit. The MGF allows a precise calculation of the error rate by using the inverse fast Fourier transform (FFT). The exact results are compared with the usual Gaussian approximation (GA), the saddlepoint approximation (SAP), and the modified Chernoff bound (MCB). This comparison shows that the noise is not Gaussian distributed for all values of the optical amplifier gain. In the region of 20-30 dB gain, calculations show that the GA underestimates the receiver sensitivity, while the SAP is very close to the results of our exact model. Using the MGF derived in the article, we then find the optimal bandwidth of the electrical filter in the receiver circuit and calculate the sensitivity degradation due to intersymbol interference (ISI).
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
NASA Astrophysics Data System (ADS)
Shaya, E.; Kargatis, V.; Blackwell, J.; Borne, K.; White, R. A.; Cheung, C.
1998-05-01
Several new web based services have been introduced this year by the Astrophysics Data Facility (ADF) at the NASA Goddard Space Flight Center. IMPReSS is a graphical interface to astrophysics databases that presents the user with the footprints of observations of space-based missions. It also aids astronomers in retrieving these data by sending requests to distributed data archives. The VIEWER is a reader of ADC astronomical catalogs and journal tables that allows subsetting of catalogs by column choices and range selection and provides database-like search capability within each table. With it, the user can easily find the table data most appropriate for their purposes and then download either the subset table or the original table. CATSEYE is a tool that plots output tables from the VIEWER (and soon AMASE), making exploring the datasets fast and easy. Having completed the basic functionality of these systems, we are enhancing the site to provide advanced functionality. These will include: market basket storage of tables and records of VIEWER output for IMPReSS and AstroBrowse queries, non-HTML table responses to AstroBrowse type queries, general column arithmetic, modularity to allow entrance into the sequence of web pages at any point, histogram plots, navigable maps, and overplotting of catalog objects on mission footprint maps. When completed, the ADF/ADC web facilities will provide astronomical tabled data and mission retrieval information in several hyperlinked environments geared for users at any level, from the school student to the typical astronomer to the expert datamining tools at state-of-the-art data centers.
Modelling rogue waves through exact dynamical lump soliton controlled by ocean currents.
Kundu, Anjan; Mukherjee, Abhik; Naskar, Tapan
2014-04-08
Rogue waves are extraordinarily high and steep isolated waves, which appear suddenly in a calm sea and disappear equally fast. However, though rogue waves are localized surface waves, their theoretical models and experimental observations are available mostly in one dimension, with the majority of them admitting only limited and fixed amplitude and modular inclination of the wave. We propose a two-dimensional, exactly solvable nonlinear Schrödinger (NLS) equation, derivable from the basic hydrodynamic equations and endowed with integrable structures. The proposed two-dimensional equation exhibits modulation instability and a frequency correction induced by the nonlinear effect, with a directional preference, all of which can be determined through precise analytic results. The two-dimensional NLS equation also admits an exact lump soliton which can model a full-grown surface rogue wave with adjustable height and modular inclination. The lump soliton under the influence of an ocean current appears and disappears preceded by a hole state, with its dynamics controlled by the current term. These desirable properties make our exact model promising for describing ocean rogue waves.
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables from any type of hashing algorithm. Meanwhile, multiple-table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve these problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With a graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius of the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complementarity for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast, reciprocal table lookup algorithm within the adaptive weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings.
Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.
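The basic multiple-table mechanics (index disjoint code substrings, probe every table, verify candidates by exact Hamming distance) can be sketched as follows, in the spirit of the substring-based multi-index hashing mentioned above. The code length, number of tables, and the exact-bucket (radius-0 substring) probing shortcut are illustrative simplifications.

```python
def hamming(a, b):
    """Exact Hamming distance between two integer-encoded binary codes."""
    return bin(a ^ b).count("1")

class MultiIndex:
    def __init__(self, code_bits=32, n_tables=4):
        self.chunk = code_bits // n_tables
        self.tables = [dict() for _ in range(n_tables)]
        self.codes = []

    def _substr(self, code, i):
        return (code >> (i * self.chunk)) & ((1 << self.chunk) - 1)

    def add(self, code):
        idx = len(self.codes)
        self.codes.append(code)
        for i, table in enumerate(self.tables):   # index each substring
            table.setdefault(self._substr(code, i), []).append(idx)

    def search(self, query, radius):
        # Pigeonhole: if hamming(query, code) <= radius < n_tables, at least
        # one substring matches exactly, so probing exact buckets suffices.
        cands = set()
        for i, table in enumerate(self.tables):
            cands.update(table.get(self._substr(query, i), []))
        return [self.codes[j] for j in cands
                if hamming(query, self.codes[j]) <= radius]

mi = MultiIndex(code_bits=32, n_tables=4)
for code in (0x12345678, 0x0F0F0F0F, 0xFFFF0000):
    mi.add(code)
neighbors = mi.search(0x12345679, radius=2)  # 1 bit away from the first code
print([hex(c) for c in neighbors])           # prints ['0x12345678']
```

Probing only exact substring buckets is valid here because the search radius is below the number of tables; larger radii require enumerating substring buckets within a small radius, as in the multi-index hashing literature.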
Nucleon and deuteron scattering cross sections from 25 MV/Nucleon to 22.5 GeV/Nucleon
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Bidasaria, H. B.
1983-01-01
Within the context of a double-folding optical potential approximation to the exact nucleus-nucleus multiple-scattering series, eikonal scattering theory is used to generate tables of nucleon and deuteron total and absorption cross sections at kinetic energies between 25 MeV/nucleon and 22.5 GeV/nucleon for use in cosmic-ray transport and shielding studies. Comparisons of predictions for nucleon-nucleus and deuteron-nucleus absorption and total cross sections with experimental data are also made.
Integrated software package STAMP for minor planets
NASA Technical Reports Server (NTRS)
Kochetova, O. M.; Shor, Viktor A.
1992-01-01
The integrated software package STAMP allows rapid and exact reproduction of the tables of the yearbook 'Ephemerides of Minor Planets' (EMP). Additionally, STAMP solves the typical problems connected with the use of the yearbook. STAMP is described. The EMP yearbook is a publication used in many astronomical institutions around the world. It contains all the necessary information on the orbits of the numbered minor planets. Also, astronomical coordinates are provided for each planet during its suitable observation period.
Davydov solitons in polypeptides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, A.
1984-10-01
The experimental evidence for self-trapping of amide-I (CO stretching) vibrational energy in crystalline acetanilide (a model protein) is reviewed and related to A. S. Davydov's theory of solitons as a mechanism for energy storage and transport in protein. Particular attention is paid to the construction of quantum states that contain N amide-I vibrational quanta. It is noted that the N = 2 state is almost exactly resonant with the free energy that is released upon hydrolysis of adenosine triphosphate. 30 references, 4 figures, 3 tables.
Balancing fast-rotating parts of hand-held machine drive
NASA Astrophysics Data System (ADS)
Korotkov, V. S.; Sicora, E. A.; Nadeina, L. V.; Yongzheng, Wang
2018-03-01
The article considers issues related to the balancing of fast-rotating parts of a hand-held machine drive that includes a wave transmission with intermediate rolling elements, constructed on the basis of a single-phase collector motor with a useful power of 1 kW and a nominal rotation frequency of 15000 rpm. The forms of the balancers and their locations are chosen, and the balancing method is described. The scheme for determining residual unbalance in two correction planes is presented. Measurement results are given in tables.
NASA Astrophysics Data System (ADS)
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio
2015-06-01
Lattice spin-fermion models are important for studying correlated systems in which quantum dynamics allows a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real-space variant of the ED + MC method that allows one to solve spin-fermion problems on lattices with up to 10^3 sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites and, with some effort, on 10^5 lattice sites, representing record lattice sizes for this family of models.
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components of the response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach, allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded in every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or its harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic transient evoked-potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication.
Here, we present a novel electrophysiological approach to capture in humans neural markers of contrasts in fast continuous tone sequences. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight on the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
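The frequency-tagging logic can be illustrated with a toy simulation: one response per sound at 8 Hz, an extra contrast-related response for every fifth sound, and an FFT of the summed signal showing energy at exactly 8/5 = 1.6 Hz. Signal shapes and amplitudes are illustrative, not EEG parameters.

```python
import numpy as np

fs, dur = 1000, 40.0                     # 1 kHz sampling, 40-s sequence
t = np.arange(0, dur, 1 / fs)
base = np.zeros_like(t)
for k in range(int(dur * 8)):            # one impulse-like response per sound
    onset = int(k / 8 * fs)
    base[onset] += 1.0
    if k % 5 == 4:                       # every 5th sound (AAAAB) adds a
        base[onset] += 0.5               # contrast-related response
signal = np.convolve(base, np.hanning(100), mode="same")  # smooth "ERP" shape

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
band = (freqs > 0.5) & (freqs < 2.5)     # look below the 8 Hz base rate
f_tag = freqs[np.argmax(np.where(band, spectrum, 0.0))]
print(round(f_tag, 1))                   # prints 1.6
```

Because the AAAAB pattern repeats an exact integer number of times in the 40-s window, the contrast signature falls on a single FFT bin at 1.6 Hz, which is precisely the property the frequency-tagging design exploits.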
Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers
NASA Astrophysics Data System (ADS)
Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi
2018-03-01
Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
Motsa, S. S.; Magagula, V. M.; Sibanda, P.
2014-01-01
This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
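One ingredient of the method, Chebyshev spectral collocation, can be sketched by building the standard differentiation matrix on Chebyshev points (Trefethen's well-known construction) and checking its spectral accuracy on a smooth function. The test function is arbitrary; the paper's quasilinearisation and bivariate Lagrange interpolation steps are not shown.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal by row sums
    return D, x

D, x = cheb(24)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
err = np.max(np.abs(D @ u - du_exact))
print(err < 1e-6)        # spectral accuracy with only 25 points
```

In the paper's setting, powers of such matrices discretize the spatial derivatives of the quasilinearised evolution equations, which is what yields the high-order accuracy reported in the tables.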
A Class of Exact Solutions of the Boussinesq Equation for Horizontal and Sloping Aquifers
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Porporato, A.
2018-02-01
The nonlinear equation of Boussinesq (1877) is a foundational approach for studying groundwater flow through an unconfined aquifer, but solving the full nonlinear version of the Boussinesq equation remains a challenge. Here, we present an exact solution to the full nonlinear Boussinesq equation that not only applies to sloping aquifers but also accounts for source and sink terms such as bedrock seepage, an often significant flux in headwater catchments. This new solution captures the hysteretic relationship (a loop rating curve) between the groundwater flow rate and the water table height, which may be used to provide a more realistic representation of streamflow and groundwater dynamics in hillslopes. In addition, the solution provides an expression where the flow recession varies based on hillslope parameters such as bedrock slope, bedrock seepage, aquifer recharge, plant transpiration, and other factors that vary across landscape types.
NASA Astrophysics Data System (ADS)
Das Gupta, Santanu; Das Gupta, S. R.
1991-10-01
The flow of laser radiation in a plane-parallel cylindrical slab of active amplifying medium with axial symmetry is treated as a problem in radiative transfer. The appropriate one-dimensional transfer equation describing the transfer of laser radiation has been derived by an appeal to Einstein's A, B coefficients (describing the processes of stimulated line absorption, spontaneous line emission, and stimulated line emission sustained by population inversion in the medium) and by considering the 'rate equations' to completely establish the rationale of the transfer equation obtained. The equation is then exactly solved and the angular distribution of the emergent laser beam intensity is obtained; its numerically computed values are given in tables and plotted in graphs showing the nature of the peaks of the emerging laser beam intensity about the axis of the laser cylinder.
Efficient Exact Inference With Loss Augmented Objective in Structured Learning.
Bauer, Alexander; Nakajima, Shinichi; Muller, Klaus-Robert
2016-08-19
Structural support vector machines (SVMs) are an elegant approach for building complex and accurate models with structured outputs. However, their applicability relies on the availability of efficient inference algorithms: the state-of-the-art training algorithms repeatedly perform inference to compute a subgradient or to find the most violating configuration. In this paper, we propose an exact inference algorithm for maximizing nondecomposable objectives due to a special type of high-order potential having a decomposable internal structure. As an important application, our method covers loss augmented inference, which enables the slack and margin scaling formulations of structural SVMs with a variety of dissimilarity measures, e.g., Hamming loss, precision and recall, Fβ-loss, intersection over union, and many other functions that can be efficiently computed from the contingency table. We demonstrate the advantages of our approach in natural language parsing and sequence segmentation applications.
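The dissimilarity measures listed above are all simple functions of the contingency table (TP, FP, FN, TN) between a predicted and a ground-truth labeling; a minimal sketch of that reduction for binary label sequences:

```python
def contingency(y_true, y_pred):
    """Return (TP, FP, FN, TN) for two 0/1 label sequences."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def hamming_loss(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def f_beta_loss(y_true, y_pred, beta=1.0):
    tp, fp, fn, _ = contingency(y_true, y_pred)
    if tp == 0:
        return 1.0
    f = (1 + beta**2) * tp / ((1 + beta**2) * tp + beta**2 * fn + fp)
    return 1.0 - f

y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
print(hamming_loss(y_true, y_pred))   # prints 0.4
print(f_beta_loss(y_true, y_pred))    # F1 = 4/(4+1+1) = 2/3, loss = 1/3
```

The point exploited by the paper is that such losses depend on the labeling only through these four counts, which is what makes exact loss augmented inference over a high-order potential tractable.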
User-friendly InSAR Data Products: Fast and Simple Timeseries (FAST) Processing
NASA Astrophysics Data System (ADS)
Zebker, H. A.
2017-12-01
Interferometric Synthetic Aperture Radar (InSAR) methods provide high resolution maps of surface deformation applicable to many scientific, engineering and management studies. Despite its utility, the specialized skills and computer resources required for InSAR analysis remain as barriers to truly widespread use of the technique. Reduction of radar scenes to maps of temporal deformation evolution requires not only detailed metadata describing the exact radar and surface acquisition geometries, but also a software package that can combine these for the specific scenes of interest. Furthermore, the range-Doppler radar coordinate system is itself confusing, so that many users find it hard to incorporate even useful products into their customary analyses. And finally, the sheer data volume needed to represent interferogram time series makes InSAR analysis challenging for many analysis systems. We show here that it is possible to deliver radar data products to users that address all of these difficulties, so that the data acquired by large, modern satellite systems are ready to use in more natural coordinates, without requiring further processing, and in as small a volume as possible.
Band Structure of the IV-VI Black Phosphorus Analog and Thermoelectric SnSe
NASA Astrophysics Data System (ADS)
Pletikosić, I.; von Rohr, F.; Pervan, P.; Das, P. K.; Vobornik, I.; Cava, R. J.; Valla, T.
2018-04-01
The success of black phosphorus in fast electronic and photonic devices is hindered by its rapid degradation in the presence of oxygen. Orthorhombic tin selenide is a representative of group IV-VI binary compounds that are robust and isoelectronic and share the same structure with black phosphorus. We measure the band structure of SnSe and find highly anisotropic valence bands that form several valleys having fast dispersion within the layers and negligible dispersion across. This is exactly the band structure desired for efficient thermoelectric generation where SnSe has shown great promise.
A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems
NASA Astrophysics Data System (ADS)
Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix
2018-03-01
We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.
Techniques of Australian forest planning
Australian Forestry Council
1978-01-01
Computer modeling has been extensively adopted for Australian forest planning over the last ten years. It has been confined almost entirely to the plantations of fast-growing species for which adequate inventory, growth, and experimental data are available. Stand simulation models have replaced conventional yield tables and enabled a wide range of alternative...
Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2016-06-01
Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and accuracy of this approach, and the problem worsens when such species participate in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection mechanism to select the next reaction firing. Each reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability; the selection becomes exact if that probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating reaction propensities.
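The select-then-accept step described in this abstract can be sketched in a few lines. The toy below is a simplification under stated assumptions: propensity upper bounds are taken as a fixed factor times the current propensities (the actual algorithm derives bounds from a fluctuation interval of the state and reuses them over many steps), and all names are hypothetical.

```python
import random

def rssa_step(state, reactions, bound_factor=1.1):
    """One step of a rejection-based SSA sketch (toy version).

    `reactions`: list of (propensity_fn, update_fn) pairs.
    Upper bounds are taken as bound_factor * current propensity, so the
    acceptance probability is bounded below by 1/bound_factor.
    """
    ubs = [bound_factor * p(state) for p, _ in reactions]
    total_ub = sum(ubs)
    if total_ub == 0.0:
        return state, float("inf")
    t = 0.0
    while True:
        # Candidate firing time from the upper-bound total rate (thinning).
        t += random.expovariate(total_ub)
        # Select a candidate reaction proportionally to its upper bound.
        r = random.uniform(0.0, total_ub)
        idx, acc = 0, ubs[0]
        while acc < r:
            idx += 1
            acc += ubs[idx]
        prop_fn, update_fn = reactions[idx]
        # Accept with probability (exact propensity) / (upper bound).
        if random.random() * ubs[idx] <= prop_fn(state):
            return update_fn(state), t

# Toy network: A -> B with rate 0.5 * A
state = {"A": 100, "B": 0}
reactions = [
    (lambda s: 0.5 * s["A"],
     lambda s: {"A": s["A"] - 1, "B": s["B"] + 1}),
]
for _ in range(100):
    state, dt = rssa_step(state, reactions)
print(state["A"] + state["B"])  # → 100 (population is conserved)
```

Setting `bound_factor` close to one raises the acceptance rate at the cost of recomputing bounds more often, which is exactly the trade-off the abstract describes.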
NASA Astrophysics Data System (ADS)
Belkić, Dževad; Belkić, Karen
2018-01-01
This paper on molecular imaging emphasizes improving specificity of magnetic resonance spectroscopy (MRS) for early cancer diagnostics by high-resolution data analysis. Sensitivity of magnetic resonance imaging (MRI) is excellent, but specificity is insufficient. Specificity is improved with MRS by going beyond morphology to assess the biochemical content of tissue. This is contingent upon accurate data quantification of diagnostically relevant biomolecules. Quantification is spectral analysis which reconstructs chemical shifts, amplitudes and relaxation times of metabolites. Chemical shifts inform on electronic shielding of resonating nuclei bound to different molecular compounds. Oscillation amplitudes in time signals retrieve the abundance of MR sensitive nuclei whose number is proportional to metabolite concentrations. Transverse relaxation times, the reciprocal of decay probabilities of resonances, arise from spin-spin coupling and reflect local field inhomogeneities. In MRS single voxels are used. For volumetric coverage, multi-voxels are employed within a hybrid of MRS and MRI called magnetic resonance spectroscopic imaging (MRSI). Common to MRS and MRSI is encoding of time signals and subsequent spectral analysis. Encoded data do not provide direct clinical information. Spectral analysis of time signals can yield the quantitative information, of which metabolite concentrations are the most clinically important. This information is equivocal with standard data analysis through the non-parametric, low-resolution fast Fourier transform and post-processing via fitting. By applying the fast Padé transform (FPT) with high-resolution, noise suppression and exact quantification via quantum mechanical signal processing, advances are made, presented herein, focusing on four areas of critical public health importance: brain, prostate, breast and ovarian cancers.
NASA Astrophysics Data System (ADS)
Vidybida, Alexander; Shchur, Olha
We consider a class of spiking neuronal models defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also for some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay Δ; this impulse acts as if delivered through a fast Cl-type inhibitory synapse. We derive a general relation that allows calculating exactly the probability density function (pdf) p(t) of output interspike intervals of a neuron with feedback, based on the known pdf p0(t) for the same neuron without feedback and on the properties of the feedback line (the value of Δ). Similar relations between the corresponding moments are derived. Furthermore, we prove that the initial segment of pdf p0(t) for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of pdf p(t) for a neuron with feedback. That is, the initial segment of p(t) is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of p(t) has a pronounced peculiarity which makes it impossible to approximate p(t) by a Poisson or another simple stochastic process.
FAST TRACK COMMUNICATION: The unusual asymptotics of three-sided prudent polygons
NASA Astrophysics Data System (ADS)
Beaton, Nicholas R.; Flajolet, Philippe; Guttmann, Anthony J.
2010-08-01
We have studied the area-generating function of prudent polygons on the square lattice. Exact solutions are obtained for the generating function of two-sided and three-sided prudent polygons, and a functional equation is found for four-sided prudent polygons. This is used to generate series coefficients in polynomial time, and these are analysed to determine the asymptotics numerically. A careful asymptotic analysis of the three-sided polygons produces a most surprising result. A transcendental critical exponent is found, and the leading amplitude is not quite a constant, but is a constant plus a small oscillatory component with an amplitude approximately 10^-8 times that of the leading amplitude. This effect cannot be seen by any standard numerical analysis, but it may be present in other models. If so, it changes our whole view of the asymptotic behaviour of lattice models.
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, allows solving even large-size problems approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species populations, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and in population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests of the success of multiscale MC methods and illustrate the limitations of some literature methods.
Nilsson, Maria A; Härlid, Anna; Kullberg, Morgan; Janke, Axel
2010-05-01
The native rodents are the most species-rich placental mammal group on the Australian continent. Fossils of native Australian rodents belonging to the group Conilurini are known from Northern Australia at 4.5 Ma. These fossil assemblages already display a rich diversity of rodents, but the exact timing of their arrival on the Australian continent is not yet established. The complete mitochondrial genomes of two native Australian rodents, Leggadina lakedownensis (Lakeland Downs mouse) and Pseudomys chapmani (Western Pebble-mound mouse), were sequenced to investigate their evolutionary history. The molecular data were used to study the phylogenetic position and divergence times of the Australian rodents, using 12 calibration points and various methods. Phylogenetic analyses place the native Australian rodents as the sister group to the genus Mus. The Mus-Conilurini calibration point (7.3-11.0 Ma) is highly critical for estimating rodent divergence times, while the influence of the different algorithms on estimating divergence times is negligible. The influence of the data type was investigated, indicating that amino acid data are more likely than nucleotide sequences to reflect the correct divergence times. The study of the problems related to estimating divergence times in fast-evolving lineages such as rodents emphasizes the choice of data and calibration points as critical. Furthermore, it is essential to include accurate calibration points for fast-evolving groups, because the divergence times can otherwise be estimated to be significantly older. The divergence times of the Australian rodents are highly congruent and are estimated at 6.5-7.2 Ma, a date that is compatible with their fossil record.
Asynchronously Coupled Models of Ice Loss from Airless Planetary Bodies
NASA Astrophysics Data System (ADS)
Schorghofer, N.
2016-12-01
Ice is found near the surface of dwarf planet Ceres, in some main belt asteroids, and perhaps in NEOs that will be explored or even mined in the future. The simple but important question of how fast ice is lost from airless bodies can present computational challenges. The thermal cycle on the surface repeats on much shorter time-scales than those over which ice retreats; one process acts on the time-scale of hours, the other over billions of years. This multi-scale situation is addressed with asynchronous coupling, where models with different time steps are woven together. The sharp contrast at the retreating ice table is handled with explicit interface tracking. For Ceres, which is covered with a thermally insulating dust mantle, desiccation rates are orders of magnitude slower than had been calculated with simpler models. More model challenges remain: the role of impact devolatilization and the time-scale for complete desiccation of an asteroid. I will also share my experience with code distribution using GitHub and Zenodo.
Algorithms for System Identification and Source Location.
NASA Astrophysics Data System (ADS)
Nehorai, Arye
This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOA's) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOA's.
Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for enhancement of sinusoids with additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.
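The fast least-squares family named in this abstract (block Levinson, fast RLS, ladder algorithms) exploits Toeplitz structure in the normal equations. As a minimal illustration, not the thesis's own multivariable algorithms, here is the classical scalar Levinson-Durbin recursion, which solves the AR prediction problem in O(p^2) operations instead of O(p^3):

```python
def levinson_durbin(r):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations of an
    AR model from autocorrelations r[0..p].

    Returns the prediction-error filter coefficients a (with a[0] = 1) and
    the final prediction-error power."""
    a = [1.0]
    err = r[0]
    for m in range(1, len(r)):
        # Reflection coefficient from the current prediction residual.
        k = -sum(a[i] * r[m - i] for i in range(m)) / err
        # Order update: combine the filter with its time-reversed copy.
        a = [a[i] + k * a[m - i] if 0 < i < m else a[i]
             for i in range(m)] + [k]
        err *= 1.0 - k * k
    return a, err

# An AR(1) process x[n] = 0.5 x[n-1] + e[n] has autocorrelation r[k] ∝ 0.5^k,
# so the recursion should recover the filter [1, -0.5] and stop there.
a, err = levinson_durbin([1.0, 0.5, 0.25])
print([round(c, 6) for c in a])  # → [1.0, -0.5, 0.0]
```

The vanishing third coefficient shows how the reflection coefficients diagnose model order, the same structural fact the lattice (ladder) forms exploit.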
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles
2014-07-01
Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
Verkest, K R; Fleeman, L M; Morton, J M; Groen, S J; Suchodolski, J S; Steiner, J M; Rand, J S
2012-01-01
Hypertriglyceridemia has been proposed to contribute to the risk of developing pancreatitis in dogs. To determine associations between postprandial serum triglyceride concentrations and canine pancreatic lipase immunoreactivity (cPLI) concentrations or pancreatic disease. Thirty-five client-owned overweight (n = 25) or obese (n = 10) dogs weighing >10 kg. Healthy dogs were prospectively recruited for a cross-sectional study. Serum triglyceride concentrations were measured before and hourly for 12 hours after a meal. Fasting cPLI and canine trypsin-like immunoreactivity (cTLI) concentrations were assayed. Cut-off values for hypertriglyceridemia were set a priori for fasting (≥ 88, ≥ 177, ≥ 354, ≥ 885 mg/dL) and peak postprandial (≥ 133, ≥ 442, ≥ 885 mg/dL) triglyceride concentrations. The association between hypertriglyceridemia and high cPLI concentrations was assessed by exact logistic regression. Follow-up was performed 4 years later to determine the incidence of pancreatic disease. Eight dogs had peak postprandial triglycerides >442 mg/dL and 3 dogs had fasting serum cPLI concentrations ≥ 400 μg/L. Odds of high cPLI concentrations were 16.7 times higher in dogs with peak postprandial triglyceride concentrations ≥ 442 mg/dL relative to other dogs (P < .001). Fasting triglyceride concentration was not significantly associated with cPLI concentrations. None of the dogs with high triglyceride concentrations and one of the dogs with low fasting and peak postprandial triglyceride concentrations developed clinically important pancreatic disease. Overweight and obese dogs with peak serum postprandial triglyceride concentrations ≥ 442 mg/dL after a standard meal are more likely to have serum cPLI concentrations ≥ 400 μg/L, but did not develop clinically important pancreatic disease. Copyright © 2011 by the American College of Veterinary Internal Medicine.
Donnadieu, Anne Claire; Deffieux, Xavier; Le Ray, Camille; Mordefroid, Marie; Frydman, René; Fernandez, Hervé
2006-12-01
A large ovarian cyst was diagnosed at 22 weeks of gestation in a 32-year-old woman. The ultrasonographic appearance of the ovarian cyst was unusual, with multiple mobile, spherical echogenic structures floating in the cystic mass, called intracystic "fat balls." Right adnexectomy was performed by laparotomy at 28 weeks of gestation because of rapid growth and an overall size exceeding 20 cm. Pathological examination confirmed ovarian cystic teratoma. Usually, dermoid cysts are slow-growing, even in premenopausal women. The exact mechanism of the fast growth during pregnancy is unknown; it could be related to an unusual pattern of estrogen (E)/progesterone (P) receptor expression in the cystic teratoma. This case shows that a fast-growing, mature ovarian cystic teratoma may occur during pregnancy.
A tractable prescription for large-scale free flight expansion of wavefunctions
NASA Astrophysics Data System (ADS)
Deuar, P.
2016-11-01
A numerical recipe is given for obtaining the density image of an initially compact quantum mechanical wavefunction that has expanded by a large but finite factor under free flight. The recipe given avoids the memory storage problems that plague this type of calculation by reducing the problem to the sum of a number of fast Fourier transforms carried out on the relatively small initial lattice. The final expanded state is given exactly on a coarser magnified grid with the same number of points as the initial state. An important application of this technique is the simulation of measured time-of-flight images in ultracold atom experiments, especially when the initial clouds contain superfluid defects. It is shown that such a finite-time expansion, rather than a far-field approximation is essential to correctly predict images of defect-laden clouds, even for long flight times. Examples shown are: an expanding quasicondensate with soliton defects and a matter-wave interferometer in 3D.
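Free-flight expansion of a wavefunction is conventionally computed in momentum space with FFTs. The sketch below shows plain fixed-grid free propagation for orientation only; the paper's contribution is the further step of evaluating the expanded state on a magnified coarse grid, which this sketch does not implement.

```python
import numpy as np

def free_flight(psi0, dx, t, hbar=1.0, m=1.0):
    """Evolve a 1D wavefunction under free flight via one FFT round trip.

    Each momentum component k just acquires the free-particle phase
    exp(-i * hbar * k^2 * t / (2 m)); the evolution is exactly unitary."""
    n = psi0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # momentum grid
    phase = np.exp(-1j * hbar * k**2 * t / (2.0 * m))
    return np.fft.ifft(np.fft.fft(psi0) * phase)

# Gaussian wave packet spreading under free flight; the grid must remain
# wide enough to contain the expanded cloud (the limitation the paper's
# magnified-grid recipe removes).
x = np.linspace(-50, 50, 1024)
dx = x[1] - x[0]
psi0 = np.exp(-x**2) / (np.pi / 2) ** 0.25  # unit-norm Gaussian
psi_t = free_flight(psi0, dx, t=5.0)
# Norm is conserved by the unitary evolution.
print(round(float(np.sum(np.abs(psi_t) ** 2) * dx), 6))  # → 1.0
```

For a large expansion factor this fixed grid would have to grow with the cloud, which is exactly the memory problem the prescription in the abstract avoids.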
Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.
Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco
2017-08-25
We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
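The surrogate idea, replacing repeated exact time-history analyses with a cheap RBF predictor trained on a handful of exact runs, can be illustrated with a one-dimensional Gaussian-kernel RBF fit. This is a generic sketch, not the paper's network or its BPSO coupling; the response function and sample points are hypothetical.

```python
import math

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_surrogate(xs, ys, eps=0.5):
    """Train a Gaussian-kernel RBF interpolant on (design, response) samples."""
    K = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    w = solve(K, ys)
    return lambda x: sum(wi * math.exp(-(eps * (x - xi)) ** 2)
                         for wi, xi in zip(w, xs))

# Stand-in for an expensive response (e.g. peak drift vs. a design variable),
# trained on 7 "exact analysis" samples and queried between them.
f = math.sin
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
model = rbf_surrogate(xs, [f(x) for x in xs])
print(abs(model(2.5) - f(2.5)) < 0.1)  # → True
```

An optimizer such as BPSO can then evaluate `model` thousands of times at negligible cost, reserving exact analyses for verifying the final candidate.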
Research on Attribute Reduction in Hoisting Motor State Recognition of Quayside Container Crane
NASA Astrophysics Data System (ADS)
Li, F.; Tang, G.; Hu, X.
2017-07-01
Because state recognition of the hoisting motor of a quayside container crane involves too many attributes, an attribute reduction method based on the discernibility matrix is introduced to reduce the attributes of the hoisting motor state information table. A method of attribute reduction based on the combination of rough sets and a genetic algorithm is proposed to deal with the hoisting motor state decision table. Under the condition that the decision-making ability of the information system is unchanged, redundant attributes are deleted, which reduces the complexity and computational cost of the recognition process and makes fast state recognition of the hoisting motor possible.
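Discernibility-matrix reduction of a small decision table can be sketched as follows; the greedy covering heuristic shown here returns a superreduct (not necessarily an exact reduct), and the toy table is hypothetical.

```python
from itertools import combinations

def discernibility_matrix(table, cond, dec):
    """For each pair of objects with different decisions, record the set of
    condition attributes on which they differ (empty sets are dropped)."""
    entries = []
    objs = list(table)
    for i, j in combinations(range(len(objs)), 2):
        a, b = objs[i], objs[j]
        if a[dec] != b[dec]:
            diff = frozenset(c for c in cond if a[c] != b[c])
            if diff:
                entries.append(diff)
    return entries

def greedy_reduct(table, cond, dec):
    """Greedily pick attributes until every discernibility entry is hit.

    This is a set-cover heuristic, so the result is a superreduct; an exact
    reduct would additionally drop attributes that turn out superfluous."""
    entries = discernibility_matrix(table, cond, dec)
    reduct = set()
    while any(not (e & reduct) for e in entries):
        uncovered = [e for e in entries if not (e & reduct)]
        # Choose the attribute hitting the most uncovered entries.
        best = max(cond, key=lambda c: sum(c in e for e in uncovered))
        reduct.add(best)
    return reduct

# Hypothetical decision table: conditions a, b, c and decision d.
table = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 1, "b": 0, "c": 0, "d": "yes"},
]
print(sorted(greedy_reduct(table, ["a", "b", "c"], "d")))  # → ['a', 'b']
```

A genetic algorithm, as combined with rough sets in the abstract, replaces this greedy choice with a population search over attribute subsets scored by the same covering criterion.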
NASA Astrophysics Data System (ADS)
Jernsletten, J. A.
2005-11-01
A TEM survey was carried out in Pima County, Arizona, in January 2003. Data was collected using 100 m Tx loops and a ferrite-cored magnetic coil Rx antenna, using a 16 Hz sounding frequency, which is sensitive to slightly salty groundwater. Prominent features in the data are the ~500 m depth of investigation and the ~120 m depth to the water table, confirmed by data from four USGS test wells surrounding the field area. Note also the conductive (~20-40 Ωm) clay-rich soil above the water table. During May and June of 2003, a Fast-Turnoff (early time) TEM survey was carried out at the Peña de Hierro field area of the MARTE project, near the town of Nerva, Spain. Data was collected using 20 m and 40 m Tx loop antennae and 10 m loop Rx antennae, with a 32 Hz sounding frequency. Data from Line 4 (of 16) from this survey, collected using 40 m Tx loops, show ~200 m depth of investigation and a conductive high at ~90 m depth below Station 20 (second station of 10 along this line). This is the water table, matching the 431 m MSL elevation of the nearby pit lake. Data from Line 15 and Line 14 of the Rio Tinto survey, collected using 20 m Tx loops, achieve ~50 m depth of investigation and show conductive highs at ~15 m depth below Station 50 (Line 15) and Station 30 (Line 14), interpreted as subsurface water flow under mine tailings matching surface flows seen coming out from under the tailings, and shown on maps. Both of the interpretations from Rio Tinto data (Line 4, and Lines 15 & 14) were confirmed by preliminary results from the MARTE ground truth drilling campaign carried out in September and October 2003. Drill Site 1 was moved ~50 m based on recommendations built on data from Line 15 and Line 14 of the Fast-Turnoff TEM survey.
ProbCD: enrichment analysis accounting for categorization uncertainty.
Vêncio, Ricardo Z N; Shmulevich, Ilya
2007-10-12
As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high-throughput datasets, current enrichment methods largely ignore this probabilistic information since they are mainly based on variants of the Fisher Exact Test. We developed open-source R-based software to deal with probabilistic categorical data analysis, ProbCD, that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli Scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/. We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning the enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of the high-throughput experimental techniques and (ii) probabilistic gene annotation.
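For contrast with ProbCD's probabilistic tables, the classical Fisher-exact-style enrichment computation the abstract refers to reduces to a hypergeometric upper tail over a static 2x2 contingency table. A minimal sketch, with made-up counts:

```python
from math import comb

def fisher_enrichment_p(k, n, K, N):
    """One-sided Fisher exact p-value for enrichment: the probability of
    drawing >= k annotated genes when n genes are selected from a universe
    of N genes of which K carry the annotation (hypergeometric upper tail)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Hypothetical counts: 7 of 10 selected genes carry a term that annotates
# only 20 of the 100 genes in the universe.
p = fisher_enrichment_p(7, 10, 20, 100)
print(p < 0.001)  # → True (strongly enriched)
```

ProbCD's departure is that the four cell counts of this table become expectations under per-gene categorization probabilities rather than fixed integers, so the test no longer assumes hard annotations.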
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yi; Zhang Jingtao; Xu Zhizhan
2010-07-15
The exact algebraic solution recently obtained by Guo, Wu, and Van Woerkom (Phys. Rev. A 73 (2006) 023419) made possible accurate calculations of the quasienergies of a driven two-level atom with an arbitrary original energy spacing and laser intensity. Due to the complication of the analytic solutions, which involve an infinite number of infinite determinants, many mathematical difficulties must be overcome to obtain precise values of the quasienergies. In this paper, with a further developed algebraic method, we show how to solve the computational problem completely, and results are presented in a data table. With this table, one can easily obtain all quasienergies of a driven two-level atom with an arbitrary original energy spacing and arbitrary intensity and frequency of the driving laser. The numerical solution technique developed here can be applied to the calculation of Freeman resonances in photoelectron energy spectra. As an example for applications, we show how to use the data table to calculate the peak laser intensity at which a Freeman resonance occurs in the transition between the ground Xe 5p P_{3/2} state and the Rydberg state Xe 8p P_{3/2}.
Fast simulation tool for ultraviolet radiation at the earth's surface
NASA Astrophysics Data System (ADS)
Engelsen, Ola; Kylling, Arve
2005-04-01
FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.
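The look-up-table strategy FastRT relies on, tabulating a smooth quantity once and interpolating at query time, can be sketched generically. The transmittance function below is a made-up smooth stand-in, not FastRT's actual atmospheric tables:

```python
import bisect
import math

def build_lut(f, grid):
    """Precompute f on a coarse grid (the expensive step, done once)."""
    return [f(x) for x in grid]

def lut_eval(grid, values, x):
    """Fast linear interpolation between precomputed entries."""
    i = bisect.bisect_right(grid, x) - 1
    i = max(0, min(i, len(grid) - 2))      # clamp to the table's range
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - t) * values[i] + t * values[i + 1]

# Hypothetical smooth transmittance vs. ozone column (Dobson units); the
# smoothness in the input parameter is what makes a coarse table accurate.
def transmittance(ozone):
    return 0.9 * math.exp(-ozone / 1000.0)

grid = [200 + 25 * i for i in range(13)]   # 200..500 DU, 25 DU spacing
lut = build_lut(transmittance, grid)
approx = lut_eval(grid, lut, 337.5)
exact = transmittance(337.5)
print(abs(approx - exact) < 1e-3)  # → True
```

Interpolating the table costs a few arithmetic operations per query, which is how a full UV spectrum can be assembled in milliseconds once the expensive radiative-transfer entries exist.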
Daudin, L; Carrière, M; Gouget, B; Hoarau, J; Khodja, H
2006-01-01
A single-ion hit facility has been under development at the Pierre Süe Laboratory (LPS) since 2004. This set-up will be dedicated to the study of ionising radiation effects on living cells, complementing current research on uranium chemical toxicity in renal and osteoblastic cells. The study of the response to an exposure to alpha particles will allow us to distinguish the radiological and chemical toxicities of uranium, with a special emphasis on the bystander effect at low doses. Designed and installed on the LPS nuclear microprobe, until now dedicated to ion beam microanalysis, this set-up will enable us to deliver an exact number of light ions accelerated by a 3.75 MV electrostatic accelerator. An 'in air' vertical beam permits the irradiation of cells under conditions compatible with cell culture techniques. Furthermore, the cellular monolayer will be kept under controlled conditions of temperature and atmosphere in order to diminish stress. The beam is collimated with a fused silica capillary tube to target pre-selected cells. Motorisation of the collimator with piezo-electric actuators should enable fast irradiation without moving the sample, thus avoiding mechanical stress. An automated epifluorescence microscope, mounted on an anti-vibration table, allows pre- and post-irradiation cell observation. An ultra-thin silicon surface barrier detector has been developed and tested to make it possible to hit a cell with a single alpha particle.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail D.; Cairns, Brian; Mishchenko, Michael I.
2012-01-01
We present a novel technique for remote sensing of cloud droplet size distributions. Polarized reflectances in the scattering angle range between 135° and 165° exhibit a sharply defined rainbow structure, the shape of which is determined mostly by single scattering properties of cloud particles, and therefore, can be modeled using the Mie theory. Fitting the observed rainbow with such a model (computed for a parameterized family of particle size distributions) has been used for cloud droplet size retrievals. We discovered that the relationship between the rainbow structures and the corresponding particle size distributions is deeper than it had been commonly understood. In fact, the Mie theory-derived polarized reflectance as a function of reduced scattering angle (in the rainbow angular range) and the (monodisperse) particle radius appears to be a proxy to a kernel of an integral transform (similar to the sine Fourier transform on the positive semi-axis). This approach, called the rainbow Fourier transform (RFT), allows us to accurately retrieve the shape of the droplet size distribution by the application of the corresponding inverse transform to the observed polarized rainbow. Because the basis functions of the proxy-transform are not exactly orthogonal in the finite angular range, this procedure needs to be complemented by a simple regression technique, which removes the retrieval artifacts. This non-parametric approach does not require any a priori knowledge of the droplet size distribution functional shape and is computationally fast (no look-up tables, no fitting, computations are the same as for the forward modeling).
Exact-Differential Large-Scale Traffic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios
2015-01-01
Analyzing large-scale traffic by simulation requires repeated execution with many patterns of scenarios or parameters. Such repeated execution entails considerable redundancy, because the change from one scenario to the next is minor in most cases, for example, blocking only one road or changing the speed limit on several roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed parts of a scenario in later executions while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments with a Tokyo traffic simulation, exact-differential simulation achieves a 7.26-times elapsed-time improvement on average, and a 2.26-times improvement even in the worst case, relative to whole simulation.
All ceramic table tops analyzed using swept source optical coherence tomography
NASA Astrophysics Data System (ADS)
Stoica, Eniko Tunde; Marcauteanu, Corina; Sinescu, Cosmin; Negrutiu, Meda Lavinia; Topala, Florin; Duma, Virgil Florin; Bradu, Adrian; Podoleanu, Adrian Gh.
2016-03-01
Erosion is the progressive loss of tooth substance by chemical processes that do not involve bacterial action. The affected teeth can be restored using IPS e.max Press "table tops", which replace the occlusal surfaces. In this study we applied a fast in-house Swept Source Optical Coherence Tomography (SS OCT) system to analyze IPS e.max Press "table tops". Twelve maxillary first premolars were extracted and prepared for "table tops". These restorations were subjected to 3000 alternating cycles of thermo-cycling in a range from -10°C to +50°C; mechanical occlusal loads of 200 N were also applied. Using SS OCT, we analyzed the marginal seal of these restorations before and after applying the mechanical and thermal strain. The characteristics of the SS OCT system utilized are presented. Its depth resolution, measured in air, is 10 μm. The system is able to acquire entire volumetric reconstructions in 2.5 s. From the acquired dataset, high-resolution en-face projections were also produced. Thus, the interfaces between all-ceramic "table tops" and natural teeth were analyzed on the cross-sections (i.e., the B-scans) produced and on the volumetric (three-dimensional (3D)) reconstructions, with several open interfaces detected. The study therefore demonstrates the utility of SS OCT for the analysis of lithium disilicate glass ceramic "table tops".
2017-06-01
[Front-matter fragment of a report on acoustic wave travel time estimation: Tables 8-10 list the average horizontal distance from the UUV to the reference points and the average UUV depth when a travel time measurement is taken.]
Evaluation of Calorie Requirements for Ranger Training at Fort Benning, Georgia
1976-07-01
Florida (Table 1). Body weights were obtained in the nude fasting state after voiding. Skinfold thicknesses were measured on the right and left triceps...between the three phases which provided them with opportunities to purchase and consume other foods. Fifteen of the men were questioned for recall
Kim, Seung-Cheol; Kim, Eun-Soo
2009-02-20
In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and the novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of the adjacent object points having the same 3D value. Based on this redundancy map, N-point principle fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, object points to be involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
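The grouping of adjacent object points with the same value into an N-point redundancy map can be sketched as a run-length encoding pass. The snippet below is an illustrative one-dimensional sketch under assumed data structures, not the authors' implementation:

```python
def redundancy_map(row):
    """Run-length encode adjacent object points sharing the same value.

    Returns a dict mapping run length N -> list of (start_index, value)
    runs, i.e. a simple N-point redundancy map: each N-point run can then
    be rendered with a single precomputed N-point principal fringe pattern
    instead of N separate 1-point patterns."""
    runs = {}
    i = 0
    while i < len(row):
        # Extend the run while the next point has the same value.
        j = i
        while j + 1 < len(row) and row[j + 1] == row[i]:
            j += 1
        runs.setdefault(j - i + 1, []).append((i, row[i]))
        i = j + 1
    return runs
```

The reduction in CGH work comes from the fact that the number of runs is usually far smaller than the number of object points.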
Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam
2016-01-01
Operational faults and behavioural anomalies associated with PLC control processes take place often in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT) that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash table based indexing and searching scheme to satisfy those purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
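A hash-table based nominal model of this general kind can be sketched as follows; the log record layout and the anomaly criterion (any signal transition never observed in the nominal logs) are illustrative assumptions, not PLAT's actual design:

```python
from collections import defaultdict

def build_nominal_index(nominal_log):
    """Hash-index every (signal, prev_state, next_state) transition
    observed in nominal (fault-free) PLC log data."""
    index = defaultdict(int)
    prev = {}
    for signal, state in nominal_log:
        if signal in prev:
            index[(signal, prev[signal], state)] += 1
        prev[signal] = state
    return index

def find_anomalies(index, live_log):
    """Flag transitions in a live log that never occur in the nominal
    model; the hash look-up makes each check O(1)."""
    anomalies = []
    prev = {}
    for t, (signal, state) in enumerate(live_log):
        if signal in prev and (signal, prev[signal], state) not in index:
            anomalies.append((t, signal, prev[signal], state))
        prev[signal] = state
    return anomalies
```

The constant-time hash look-up per log record is what would keep such a checker fast enough for real-time use within a small memory footprint.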
High-throughput accurate-wavelength lens-based visible spectrometer.
Bell, Ronald E; Scotti, Filippo
2010-10-01
A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm(-1) grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arc sec, corresponding to a wavelength error ≤0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all the relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
Monitoring of Engineering Buildings Behaviour Within the Disaster Management System
NASA Astrophysics Data System (ADS)
Oku Topal, G.; Gülal, E.
2017-11-01
Disaster management aims to prevent events that result in disaster or to reduce their losses. Monitoring engineering structures, identifying unusual movements, and taking the necessary precautions are crucial for determining disaster risk, so that preventive measures can be taken to avoid major losses. Advancing technology and the growth in population and construction have driven the development of damage detection strategies, of which Structural Health Monitoring (SHM) is the most effective. SHM research is essential for keeping such structures safe: the purpose of structural monitoring is to anticipate possible failures and take the necessary precautions. In this paper, determining the behaviour of structures using the Global Positioning System (GPS) is investigated. For this purpose, shaking table tests were performed: the shaking table was moved at different amplitudes and frequencies, with the aim of detecting these movements with a GPS measuring system. The obtained data were evaluated by time series analysis and Fast Fourier Transform techniques, and the frequency and amplitude values were calculated. By examining the test results, it is determined whether the GPS measurement method can accurately detect the movements of engineering structures.
Exact Doppler broadening of tabulated cross sections. [SIGMA 1 kernel broadening method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullen, D.E.; Weisbin, C.R.
1976-07-01
The SIGMA1 kernel broadening method is presented for Doppler broadening, to any required accuracy, of a cross section described by a table of values with linear-linear interpolation in energy and cross section between tabulated points. The method is demonstrated to have no temperature or energy limitations and to be equally applicable to neutron or charged-particle cross sections. The method is qualitatively and quantitatively compared to contemporary approximate methods of Doppler broadening, with particular emphasis on the effect of each approximation introduced.
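The method's starting point, a cross section defined by tabulated values with linear-linear interpolation, can be sketched as below (the Doppler broadening kernel integral itself is omitted; the function and variable names are illustrative, not from the SIGMA1 code):

```python
import bisect

def cross_section(energies, values, e):
    """Evaluate a tabulated cross section at energy e using linear-linear
    interpolation between tabulated (energy, cross-section) points, the
    representation assumed by the kernel broadening method."""
    if not energies[0] <= e <= energies[-1]:
        raise ValueError("energy outside tabulated range")
    i = bisect.bisect_right(energies, e)
    if i == len(energies):      # e equals the last tabulated energy
        return values[-1]
    e0, e1 = energies[i - 1], energies[i]
    s0, s1 = values[i - 1], values[i]
    # Linear in energy, linear in cross section between the two points.
    return s0 + (s1 - s0) * (e - e0) / (e1 - e0)
```

Because the representation is exactly piecewise linear, the broadening integral over each panel can be evaluated analytically to any required accuracy, which is the essence of the kernel approach.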
Exact consideration of data redundancies for spiral cone-beam CT
NASA Astrophysics Data System (ADS)
Lauritsch, Guenter; Katsevich, Alexander; Hirsch, Michael
2004-05-01
In multi-slice spiral computed tomography (CT) there is an obvious trend of adding more and more detector rows. The goals are numerous: volume coverage, isotropic spatial resolution, and speed. Consequently, there will be a variety of scan protocols optimized for clinical applications. Flexibility in table feed requires consideration of data redundancies to ensure efficient detector usage. Until recently this was achieved by approximate reconstruction algorithms only. However, due to the increasing cone angles there is a need for exact treatment of the cone-beam geometry. A new, exact and efficient 3-PI algorithm considering three-fold data redundancies was derived from a general theoretical framework based on 3D Radon inversion using Grangeat's formula. The 3-PI algorithm possesses a structure as simple and efficient as that of the previously proposed 1-PI method for non-redundant data. Filtering is one-dimensional, performed along lines with variable tilt on the detector. This talk deals with a thorough evaluation of the performance of the 3-PI algorithm in comparison with the 1-PI method. Image quality of the 3-PI algorithm is superior: the prominent spiral artifacts and other discretization artifacts are significantly reduced due to averaging effects when redundant data are taken into account, and the signal-to-noise ratio is increased. The computational expense is comparable even to that of approximate algorithms. The 3-PI algorithm proves its practicability for applications in medical imaging. Other exact n-PI methods for n-fold data redundancies (n odd) can be deduced from the same general theoretical framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villone, F.; Mastrostefano, S.; Calabrò, G.
2014-08-15
One of the main FAST (Fusion Advanced Studies Torus) goals is to have a flexible experiment capable of testing tools and scenarios for safe and reliable tokamak operation, in order to support ITER and help the final DEMO design. In particular, in this paper, we focus on operation close to a possible stability border related to low-q operation. To this purpose, a new FAST scenario has been designed at I_p = 10 MA, B_T = 8.5 T, q_95 ≈ 2.3. Transport simulations, carried out using the code JETTO and the first-principles transport model GLF23, indicate that, under these conditions, FAST could achieve an equivalent Q ≈ 3.5. FAST will be equipped with a set of internal active coils for feedback control, which will produce magnetic perturbations with toroidal number n = 1 or n = 2. Magnetohydrodynamic (MHD) mode analysis and feedback control simulations performed with the codes MARS, MARS-F, and CarMa (both assuming the presence of a perfectly conducting wall and using the exact 3D resistive wall structure) show the ability of the FAST conductive structures to stabilize n = 1 ideal modes. This leaves room for active mitigation of the resistive mode (down to a characteristic time of 1 ms) for safety purposes, i.e., to avoid dangerous MHD-driven plasma disruption when working close to the machine limits, with magnetic and kinetic energy densities not far from reactor values.
Hansmann, Jan; Michaely, Henrik J; Morelli, John N; Diehl, Steffen J; Meyer, Mathias; Schoenberg, Stefan O; Attenberger, Ulrike I
2013-12-01
The purpose of this article is to evaluate the added diagnostic accuracy of time-resolved MR angiography (MRA) of the calves compared with continuous-table-movement MRA in patients with symptomatic lower extremity peripheral artery disease (PAD) using digital subtraction angiography (DSA) correlation. Eighty-four consecutive patients with symptomatic PAD underwent a low-dose 3-T MRA protocol, consisting of continuous-table-movement MRA, acquired from the diaphragm to the calves, and an additional time-resolved MRA of the calves; 0.1 mmol/kg body weight (bw) of contrast material was used (0.07 mmol/kg bw for continuous-table-movement MRA and 0.03 mmol/kg bw for time-resolved MRA). Two radiologists rated image quality on a 4-point scale and stenosis degree on a 3-point scale. An additional assessment determined the degree of venous contamination and whether time-resolved MRA improved diagnostic confidence. The accuracy of stenosis gradation with continuous-table-movement and time-resolved MRA was compared with that of DSA as a correlation. Overall diagnostic accuracy was calculated for continuous-table-movement and time-resolved MRA. Median image quality was rated as good for 578 vessel segments with continuous-table-movement MRA and as excellent for 565 vessel segments with time-resolved MRA. Interreader agreement was excellent (κ = 0.80-0.84). Venous contamination interfered with diagnosis in more than 60% of continuous-table-movement MRA examinations. The degree of stenosis was assessed for 340 vessel segments. The diagnostic accuracies (continuous-table-movement MRA/time-resolved MRA) combined for the readers were obtained for the tibioperoneal trunk (84%/93%), anterior tibial (69%/87%), posterior tibial (85%/91%), and peroneal (67%/81%) arteries. The addition of time-resolved MRA improved diagnostic confidence in 69% of examinations. 
The addition of time-resolved MRA at the calf station improves diagnostic accuracy over continuous-table-movement MRA alone in symptomatic patients with PAD.
A comparative study of visual reaction time in table tennis players and healthy controls.
Bhabhor, Mahesh K; Vidja, Kalpesh; Bhanderi, Priti; Dodhia, Shital; Kathrotia, Rajesh; Joshi, Varsha
2013-01-01
Visual reaction time is the time required to respond to visual stimuli. The present study was conducted to measure visual reaction time in 209 subjects: 50 table tennis (TT) players and 159 healthy controls. Visual reaction time was measured with the direct RT computerized software in healthy controls and table tennis players. Simple visual reaction time was measured. During reaction time testing, the visual stimulus was presented eighteen times and the average reaction time was taken as the final reaction time. The study shows that table tennis players had faster reaction times than healthy controls. On multivariate analysis, it was found that TT players had a 74.121 msec (95% CI 49.4 to 98.8 msec) faster reaction time compared with non-TT players of the same age and BMI; playing TT thus has a more profound influence on visual reaction time than BMI. Our study concluded that persons involved in sports have better reaction times than controls. These results support the view that playing table tennis is beneficial to eye-hand reaction time and improves concentration and alertness.
NASA Astrophysics Data System (ADS)
Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.
2018-04-01
Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. Use of an index strategy in which the partitions are densely sorted by source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence the pipeline achieves when processing IPHAS data is 25 s.
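The effect of one-degree declination partitions on positional look-ups can be sketched in miniature as follows. This is an in-memory toy, not the column-store implementation used in the pipeline, and the flat-sky distance test is a deliberate simplification of a proper angular-separation metric:

```python
import math
from collections import defaultdict

def build_partitions(sources):
    """Group (ra, dec) sources into one-degree declination bands, the
    horizontal partitioning strategy described above."""
    bands = defaultdict(list)
    for ra, dec in sources:
        bands[math.floor(dec)].append((ra, dec))
    return bands

def candidates(bands, ra, dec, radius_deg):
    """Positional look-up that scans only the declination bands the
    search circle touches, instead of the full source table."""
    lo = math.floor(dec - radius_deg)
    hi = math.floor(dec + radius_deg)
    out = []
    for band in range(lo, hi + 1):
        for sra, sdec in bands.get(band, ()):
            # Crude planar distance test; a real cross-match would use a
            # proper spherical separation with cos(dec) scaling in RA.
            if (sra - ra) ** 2 + (sdec - dec) ** 2 <= radius_deg ** 2:
                out.append((sra, sdec))
    return out
```

Because a cross-match radius is tiny compared with one degree, each look-up touches at most two bands, which is why the partition width controls the query run times.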
Determining linear vibration frequencies of a ferromagnetic shell
NASA Astrophysics Data System (ADS)
Bagdoev, A. G.; Vardanyan, A. V.; Vardanyan, S. V.; Kukudzhanov, V. N.
2007-10-01
The problems of determining the roots of dispersion equations for free bending vibrations of thin magnetoelastic plates and shells are of both theoretical and practical interest, in particular, in studying vibrations of metallic structures used in controlled thermonuclear reactors. These problems were solved on the basis of the Kirchhoff hypothesis in [1-5]. In [6], an exact spatial approach to determining the vibration frequencies of thin plates was suggested, and it was shown that it completely agrees with the solution obtained according to the Kirchhoff hypothesis. In [7-9], this exact approach was used to solve the problem on vibrations of thin magnetoelastic plates, and it was shown by cumbersome calculations that the solutions obtained according to the exact theory and the Kirchhoff hypothesis differ substantially except in a single case. In [10], the equations of the dynamic theory of elasticity in the axisymmetric problem are given. In [11], the equations for the vibration frequencies of thin ferromagnetic plates with arbitrary conductivity were obtained in the exact statement. In [12], the Kirchhoff hypothesis was used to obtain dispersion relations for a magnetoelastic thin shell. In [5, 13-16], the relations for the Maxwell tensor and the ponderomotive force for magnetics were presented. In [17], the dispersion relations for thin ferromagnetic plates in the transverse field in the spatial statement were studied analytically and numerically. In the present paper, on the basis of the exact approach, we study free bending vibrations of a thin ferromagnetic cylindrical shell. We obtain the exact dispersion equation in the form of a sixth-order determinant, which can be solved numerically in the case of a magnetoelastic thin shell. The numerical results are presented in tables and compared with the results obtained by the Kirchhoff hypothesis. We show a large number of differences in the results, even for the least frequency.
NASA Technical Reports Server (NTRS)
Pinckney, John
2010-01-01
With the advent of high-speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heating. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and spot-checking Monte Carlo results.
Spectral analysis method and sample generation for real time visualization of speech
NASA Astrophysics Data System (ADS)
Hobohm, Klaus
A method for translating speech signals into optical patterns, characterized by high sound discrimination and learnability and designed to provide deaf persons with feedback for controlling their way of speaking, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are recalled in order to define requirements for speech visualization. It is established that the spectral representation must reflect the time, frequency, and amplitude resolution of hearing, and that continuous variations of the acoustic parameters of the speech signal must be depicted by a continuous variation of images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
Preventing Serious Conduct Problems in School-Age Youths: The Fast Track Program
Slough, Nancy M.; McMahon, Robert J.; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Foster, E. Michael; Greenberg, Mark T.; Lochman, John E.; McMahon, Robert J.; Pinderhughes, Ellen E.
2009-01-01
Children with early-starting conduct problems have a very poor prognosis and exact a high cost to society. The Fast Track project is a multisite, collaborative research project investigating the efficacy of a comprehensive, long-term, multicomponent intervention designed to prevent the development of serious conduct problems in high-risk children. In this article, we (a) provide an overview of the developmental model that serves as the conceptual foundation for the Fast Track intervention and describe its integration into the intervention model; (b) outline the research design and intervention model, with an emphasis on the elementary school phase of the intervention; and (c) summarize findings to date concerning intervention outcomes. We then provide a case illustration, and conclude with a discussion of guidelines for practitioners who work with children with conduct problems. PMID:19890487
Fast, Inclusive Searches for Geographic Names Using Digraphs
Donato, David I.
2008-01-01
An algorithm specifies how to quickly identify names that approximately match any specified name when searching a list or database of geographic names. Based on comparisons of the digraphs (ordered letter pairs) contained in geographic names, this algorithmic technique identifies approximately matching names by applying an artificial but useful measure of name similarity. A digraph index enables computer name searches that are carried out using this technique to be fast enough for deployment in a Web application. This technique, which is a member of the class of n-gram algorithms, is related to, but distinct from, the soundex, PHONIX, and metaphone phonetic algorithms. Despite this technique's tendency to return some counterintuitive approximate matches, it is an effective aid for fast, inclusive searches for geographic names when the exact name sought, or its correct spelling, is unknown.
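A digraph-based similarity of this general kind can be sketched with the Dice coefficient over ordered letter pairs; the exact measure and digraph index used by the published algorithm may differ:

```python
def digraphs(name):
    """Return the set of ordered letter pairs (digraphs) in a name,
    after lowercasing and dropping non-letters."""
    s = "".join(ch for ch in name.lower() if ch.isalpha())
    return {s[i:i + 2] for i in range(len(s) - 1)}

def similarity(a, b):
    """Dice coefficient over digraph sets: 1.0 means identical digraph
    content; misspellings typically score high because most letter
    pairs survive the error."""
    da, db = digraphs(a), digraphs(b)
    if not da or not db:
        return 0.0
    return 2 * len(da & db) / (len(da) + len(db))
```

In a real deployment, an inverted index from each digraph to the names containing it lets the search score only names sharing at least one digraph with the query, which is what makes the technique fast enough for a Web application.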
The Big Mac and Teaching about Japan. Footnotes. Volume 8, Number 5
ERIC Educational Resources Information Center
Ellington, Lucien
2003-01-01
The Big Mac can be an effective tool in helping students achieve a better understanding of Japan. It can defeat Orientalist stereotypes about the Japanese--and also challenge young people who might have oversimplified notions of what exactly occurs when U.S. fast food chains take root in another culture. Many deride McDonald's as a villain…
S2LET: A code to perform fast wavelet analysis on the sphere
NASA Astrophysics Data System (ADS)
Leistedt, B.; McEwen, J. D.; Vandergheynst, P.; Wiaux, Y.
2013-10-01
We describe S2LET, a fast and robust implementation of the scale-discretised wavelet transform on the sphere. Wavelets are constructed through a tiling of the harmonic line and can be used to probe spatially localised, scale-dependent features of signals on the sphere. The reconstruction of a signal from its wavelet coefficients is made exact here through the use of a sampling theorem on the sphere. Moreover, a multiresolution algorithm is presented to capture all information of each wavelet scale in the minimal number of samples on the sphere. In addition S2LET supports the HEALPix pixelisation scheme, in which case the transform is not exact but nevertheless achieves good numerical accuracy. The core routines of S2LET are written in C and have interfaces in Matlab, IDL and Java. Real signals can be written to and read from FITS files and plotted as Mollweide projections. The S2LET code is made publicly available, is extensively documented, and ships with several examples in the four languages supported. At present the code is restricted to axisymmetric wavelets but will be extended to directional, steerable wavelets in a future release.
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; ...
2015-06-08
Lattice spin-fermion models are important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real-space variant of the ED + MC method that allows spin-fermion problems to be solved on lattices with up to 10^3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites and, with some effort, on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
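The round-off behavior discussed above is easy to reproduce. Below is a minimal sketch (not the authors' code; the function name, grid size, and step count are illustrative choices) of a spectral advection solver for u_t + c u_x = 0 on a periodic domain: each step is a pure phase shift in Fourier space, so the only error after many steps is accumulated round-off.

```python
import numpy as np

def fft_advect(u, c, dt, length):
    """One step of u_t + c u_x = 0 on a periodic domain: a pure phase
    shift in Fourier space, exact up to round-off for band-limited data."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=length / u.size)
    return np.fft.ifft(np.fft.fft(u) * np.exp(-1j * c * k * dt)).real

length, c, n = 2.0 * np.pi, 1.0, 256
x = np.linspace(0.0, length, n, endpoint=False)
u = np.sin(x)

steps, dt = 1000, 0.01
for _ in range(steps):
    u = fft_advect(u, c, dt, length)

# The exact solution is the initial profile shifted by c * t.
err = np.max(np.abs(u - np.sin(x - c * steps * dt)))
```

For this smooth initial condition the error after 1000 steps stays near machine precision, which is the "exact up to round-off" behavior the paper analyzes.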
Fitzgerald, Matthew; Sagi, Elad; Morbiwala, Tasnim A.; Tan, Chin-Tuan; Svirsky, Mario A.
2013-01-01
Objectives Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CI), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the proposed investigation is to test the feasibility of whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Design 34 normal-hearing adults listened to a noise-vocoded acoustic simulation of a cochlear implant and adjusted the frequency table in real time until they obtained a frequency table that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed us to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. 
After obtaining a self-selected table, we measured CNC word-recognition scores with that self-selected table and two other frequency tables: a “frequency-matched” table that matched the analysis filters with the noisebands of the noise-vocoder simulation, and a “right information” table that is similar to that used in most cochlear implant speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Results Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from the frequency-matched table. The real-time selection process took on average 2–3 minutes for each trial, and the between-trial variability was comparable to that previously observed with closely-related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Conclusions Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and to find a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable to that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure. PMID:23807089
A holographic model for black hole complementarity
Lowe, David A.; Thorlacius, Larus
2016-12-07
Here, we explore a version of black hole complementarity, where an approximate semiclassical effective field theory for interior infalling degrees of freedom emerges holographically from an exact evolution of exterior degrees of freedom. The infalling degrees of freedom have a complementary description in terms of outgoing Hawking radiation and must eventually decohere with respect to the exterior Hamiltonian, leading to a breakdown of the semiclassical description for an infaller. Trace distance is used to quantify the difference between the complementary time evolutions, and to define a decoherence time. We propose a dictionary where the evolution with respect to the bulk effective Hamiltonian corresponds to mean field evolution in the holographic theory. In a particular model for the holographic theory, which exhibits fast scrambling, the decoherence time coincides with the scrambling time. The results support the hypothesis that decoherence of the infalling holographic state and disruptive bulk effects near the curvature singularity are complementary descriptions of the same physics, which is an important step toward resolving the black hole information paradox.
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
2017-06-01
Excerpted table-of-contents entries: Table 2.1, training time statistics from Jones' [15] thesis; Table 2.2, evaluation runtime statistics from Camp's thesis for a single image; Table 2.3, training and evaluation runtime statistics from Sharpe's thesis; Table 2.4, Sharpe's screenshot detector results for combinations of training resources available and time required for each algorithm Jones [15] tested.
NASA Astrophysics Data System (ADS)
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows a time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK)-type models and, because the Lagrangian density is implicitly included in the calculation, also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power-law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to media with exact frequency power-law attenuation/dispersion and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite-difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g., axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
2012-06-15
Excerpted fragments (front matter and text): Airbus predicted that these trends would continue as emerging economies, especially in Asia, were creating fast-growing demand; the US economy, pay differential, and hiring by the major airlines contributed most to the decision to separate from the Air Force (Fullerton, 2003: 354).
Mahlzeit! (Enjoy Your Meal!) German Table Manners, Menus, and Eating Establishments.
ERIC Educational Resources Information Center
Singer, Debbie
A series of short texts focus on German eating habits and types of eating establishments. Vocabulary is glossed in the margin and each text is followed by comprehension questions. Menus from a restaurant, an inn, and a fast-food restaurant; vocabulary exercises; a word search puzzle and its solution; and directions in English for conversational…
Obesity Exposure Across the Lifespan on Ovarian Cancer Pathogenesis
2014-06-01
Excerpted fragments: … significantly greater than non-obese mice (p = 0.003, Table 1); there was no effect of HFD on non-fasted blood glucose levels or diabetes onset in KpB mice (Makowski et al., Gynecologic Oncology 133 (2014) 90–97).
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
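Bit-plane encoding is what makes the bit string "embedded": cutting the stream after any plane still yields a valid, coarser reconstruction. A toy illustration of that property (my own sketch, unrelated to the flight hardware; the coefficient values are made up):

```python
import numpy as np

# Toy bit-plane decomposition (not the flight algorithm; coefficient
# values are made up). Emitting planes from most to least significant
# gives an embedded stream that can be truncated at any plane.
coeffs = np.array([37, 5, 18, 2], dtype=np.uint8)
planes = [(coeffs >> b) & 1 for b in range(7, -1, -1)]  # MSB plane first

def reconstruct(n_planes):
    """Rebuild the coefficients from the first n_planes bit-planes;
    the remaining low-order bits are zeroed (coarser quantization)."""
    out = np.zeros_like(coeffs)
    for k in range(n_planes):
        out = (out << 1) | planes[k]
    return out << (8 - n_planes)
```

Decoding all eight planes recovers the coefficients exactly; stopping after four planes keeps only the top four bits of each coefficient, i.e., a lower-rate approximation.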
NASA Astrophysics Data System (ADS)
Liu, Junzi; Cheng, Lan
2018-04-01
An atomic mean-field (AMF) spin-orbit (SO) approach within exact two-component theory (X2C) is reported, thereby exploiting the exact decoupling scheme of X2C, the one-electron approximation for the scalar-relativistic contributions, the mean-field approximation for the treatment of the two-electron SO contribution, and the local nature of the SO interactions. The Hamiltonian of the proposed SOX2CAMF scheme comprises the one-electron X2C Hamiltonian, the instantaneous two-electron Coulomb interaction, and an AMF SO term derived from spherically averaged Dirac-Coulomb Hartree-Fock calculations of atoms; no molecular relativistic two-electron integrals are required. Benchmark calculations for bond lengths, harmonic frequencies, dipole moments, and electric-field gradients for a set of diatomic molecules containing elements across the periodic table show that the SOX2CAMF scheme offers a balanced treatment for SO and scalar-relativistic effects and appears to be a promising candidate for applications to heavy-element containing systems. SOX2CAMF coupled-cluster calculations of molecular properties for bismuth compounds (BiN, BiP, BiF, BiCl, and BiI) are also presented and compared with experimental results to further demonstrate the accuracy and applicability of the SOX2CAMF scheme.
Training Level Does Not Affect Auditory Perception of The Magnitude of Ball Spin in Table Tennis.
Santos, Daniel P R; Barbosa, Roberto N; Vieira, Luiz H P; Santiago, Paulo R P; Zagatto, Alessandro M; Gomes, Matheus M
2017-01-01
Identifying the trajectory and spin of the ball with speed and accuracy is critical for good performance in table tennis. The aim of this study was to analyze the ability of table tennis players presenting different levels of training/experience to identify the magnitude of the ball spin from the sound produced when the racket hit the ball. Four types of "forehand" contact sounds were collected in the laboratory, defined as: Fast Spin (spinning ball forward at 140 r/s); Medium Spin (105 r/s); Slow Spin (84 r/s); and Flat Hit (less than 60 r/s). Thirty-four table tennis players of both sexes (24 men and 10 women) aged 18-40 years listened to the sounds and tried to identify the magnitude of the ball spin. The results revealed that in 50.9% of the cases the table tennis players were able to identify the ball spin and the observed number of correct answers (10.2) was significantly higher (χ² = 270.4, p < 0.05) than the number of correct answers that could occur by chance. On the other hand, the results did not show any relationship between the level of training/experience and auditory perception of the ball spin. This indicates that auditory information contributes to identification of the magnitude of the ball spin; however, it also reveals that, in table tennis, the level of training does not interfere with the auditory perception of the ball spin.
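The chance-level comparison reported above is a standard chi-square goodness-of-fit computation. A sketch in Python, with made-up trial counts (the abstract reports 50.9% correct but not the number of trials; 20 sounds per player is purely an assumption for illustration):

```python
# Illustrative chi-square goodness-of-fit test against the chance level.
# Trial counts are made up: the abstract reports 50.9% correct but not
# the number of trials, so 20 sounds per player is an assumption.
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic over matched observed/expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n_trials = 34 * 20            # hypothetical: 34 players x 20 sounds each
p_chance = 0.25               # four sound categories -> 25% by guessing
hits = int(0.509 * n_trials)  # reported 50.9% correct identification rate
observed = [hits, n_trials - hits]
expected = [p_chance * n_trials, (1.0 - p_chance) * n_trials]
stat = chi_square_stat(observed, expected)
```

Comparing the statistic to the χ² critical value at one degree of freedom (3.84 at α = 0.05) rejects chance-level guessing, mirroring the paper's conclusion.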
NASA Astrophysics Data System (ADS)
DiLisi, Gregory A.; Rarick, Richard A.
2007-05-01
The 2006 Winter Meeting of the AAPT Was Over … and the flight home from Anchorage to Cleveland was just about to end—eight hours in the air, only two complimentary beverages, no meals, a jump across four time zones, a one-year-old baby daughter, and a wife whose motto for the week was, "Why did they choose to have a winter meeting in Alaska?" made for a mentally and physically taxing airborne ordeal. As we entered the last hour of flight, my small family was exhausted and the pilot's decision to dim the interior cabin lights mixed with the soothing hum of the Airbus® A320's engines quickly put us to sleep. Fading in and out of my delirium, I eventually heard the pilot's voice crackle over the intercom with a seemingly innocent comment: "We are going to begin our final descent into Cleveland … we should have you on the ground in exactly eight minutes." Something about the pilot's use of the word "exactly" must have triggered a reaction in my brain, because his remarks initiated a series of calculations: So how fast are we flying? How high are we flying? What's our angle of descent? With only eight minutes until touchdown, my curiosity to determine the descending airplane's motion led me to conduct a hastily constructed experiment.
NASA Astrophysics Data System (ADS)
Abd Elazem, Nader Y.; Ebaid, Abdelhalim
2017-12-01
In this paper, we study the effect of a partial slip boundary condition on the heat and mass transfer of Cu-water and Ag-water nanofluids over a stretching sheet in the presence of a magnetic field and radiation. Such a partial slip boundary condition has attracted much attention due to its wide applications in industry and chemical engineering. The flow is governed by a system of partial differential equations which is reduced to a system of ordinary differential equations. This system has been solved exactly: an exact analytical expression has been obtained for the fluid velocity in terms of an exponential function, while the temperature distribution and the nanoparticle concentration are expressed in terms of the generalized incomplete gamma function. In addition, explicit formulae are also derived for the rates of heat and mass transfer. The effects of the pertinent parameters on the skin friction, heat transfer coefficient, rate of mass transfer, velocity, temperature profile, and concentration profile are discussed through tables and graphs.
[Prediabetes as a risk marker for stress-induced hyperglycemia in critically ill adults].
García-Gallegos, Diego Jesús; Luis-López, Eliseo
2017-01-01
It is not known whether patients with prediabetes, a subgroup of non-diabetic patients that usually present hyperinsulinemia, have a higher risk of stress-induced hyperglycemia. The objective was to determine whether prediabetes is a risk marker for stress-induced hyperglycemia. Analytic, observational, prospective cohort study of non-diabetic critically ill patients of a third-level hospital. We determined plasma glucose and glycated hemoglobin (HbA1c) at admission to diagnose stress-induced hyperglycemia (glucose ≥ 140 mg/dL) and prediabetes (HbA1c between 5.7 and 6.4%), respectively. We examined the proportion of non-prediabetic and prediabetic patients that developed stress hyperglycemia with contingency tables and Fisher's exact test for nominal scales. Of the 73 patients studied, the proportion with stress-induced hyperglycemia was 6.6% in those without prediabetes and 61.1% in those with prediabetes. The Fisher's exact test value was 22.46 (p < 0.05). Prediabetes is a risk marker for stress-induced hyperglycemia in critically ill adults.
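Fisher's exact test used above evaluates a 2×2 contingency table against the hypergeometric null. A self-contained one-sided sketch (the cell counts below are hypothetical, chosen only to roughly match the reported 61.1% vs. 6.6% proportions; they are not the study's actual table):

```python
from math import comb

def fisher_exact_right(a, b, c, d):
    """One-sided (right-tail) Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]]: probability of observing a or more cases in the
    first row under the hypergeometric null of no association."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Hypothetical 2x2 table, chosen only to roughly match the reported
# proportions (11/18 = 61.1% prediabetic vs. ~7% non-prediabetic cases);
# it is not the study's actual table.
p = fisher_exact_right(11, 7, 4, 51)
```

With counts this lopsided the right-tail p-value is far below 0.05, consistent with the association the study reports.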
Kopský, Vojtech
2006-03-01
This article is a roadmap to a systematic calculation and tabulation of tensorial covariants for the point groups of material physics. The following are the essential steps in the described approach to tensor calculus. (i) An exact specification of the considered point groups by their embellished Hermann-Mauguin and Schoenflies symbols. (ii) Introduction of oriented Laue classes of magnetic point groups. (iii) An exact specification of matrix ireps (irreducible representations). (iv) Introduction of so-called typical (standard) bases and variables -- typical invariants, relative invariants or components of the typical covariants. (v) Introduction of Clebsch-Gordan products of the typical variables. (vi) Calculation of tensorial covariants of ascending ranks with consecutive use of tables of Clebsch-Gordan products. (vii) Opechowski's magic relations between tensorial decompositions. These steps are illustrated for groups of the tetragonal oriented Laue class D(4z) -- 4(z)2(x)2(xy) of magnetic point groups and for tensors up to fourth rank.
An efficient blocking M2L translation for low-frequency fast multipole method in three dimensions
NASA Astrophysics Data System (ADS)
Takahashi, Toru; Shimba, Yuta; Isakari, Hiroshi; Matsumoto, Toshiro
2016-05-01
We propose an efficient scheme to perform the multipole-to-local (M2L) translation in the three-dimensional low-frequency fast multipole method (LFFMM). Our strategy is to combine a group of matrix-vector products associated with M2L translation into a matrix-matrix product in order to diminish the memory traffic. For this purpose, we first developed a grouping method (termed internal blocking) based on the congruent transformations (rotational and reflectional symmetries) of M2L-translators for each target box in the FMM hierarchy (adaptive octree). Next, we considered another method of grouping (termed external blocking) that handles M2L translations for multiple target boxes collectively by using the translational invariance of the M2L translation. By combining these internal and external blockings, the M2L translation can be performed efficiently whilst preserving the numerical accuracy exactly. We assessed the proposed blocking scheme numerically and applied it to the boundary integral equation method to solve electromagnetic scattering problems for a perfect electric conductor. From the numerical results, it was found that the proposed M2L scheme achieved a speedup of a few times compared to the non-blocking scheme.
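The core trick, grouping many matrix-vector products that share an M2L translator into one matrix-matrix product, can be sketched in a few lines of NumPy (an illustration of the memory-traffic argument only, not the authors' implementation; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((32, 32))         # one shared M2L translation matrix
sources = rng.standard_normal((32, 100))  # multipole coefficients of 100 boxes

# Non-blocking: one matrix-vector product per source box, so the
# translator T is streamed from memory 100 times.
locals_loop = np.stack([T @ sources[:, j] for j in range(100)], axis=1)

# Blocking: the same arithmetic as a single matrix-matrix product,
# which reads T once and maps onto cache-friendly BLAS-3 kernels.
locals_blocked = T @ sources
```

Both forms produce the same local expansions; the blocked form simply does the work with far less memory traffic, which is where the reported speedup comes from.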
Barbieri, Dechristian França; Srinivasan, Divya; Mathiassen, Svend Erik; Oliveira, Ana Beatriz
2017-08-01
We compared usage patterns of two different electronically controlled sit-stand tables during a 2-month intervention period among office workers. Office workers spend most of their working time sitting, which is likely detrimental to health. Although the introduction of sit-stand tables has been suggested as an effective intervention to decrease sitting time, limited evidence is available on usage patterns of sit-stand tables and whether patterns are influenced by table configuration. Twelve workers were provided with standard sit-stand tables (nonautomated table group) and 12 with semiautomated sit-stand tables programmed to change table position according to a preset pattern, if the user agreed to the system-generated prompt (semiautomated table group). Table position was monitored continuously for 2 months after introducing the tables, as a proxy for sit-stand behavior. On average, the table was in a "sit" position for 85% of the workday in both groups; this percentage did not change significantly during the 2-month period. Switches in table position from sit to stand were, however, more frequent in the semiautomated table group than in the nonautomated table group (0.65 vs. 0.29 hr⁻¹; p = .001). Introducing a semiautomated sit-stand table appeared to be an attractive alternative to a standard sit-stand table, because it led to more posture variation. A semiautomated sit-stand table may effectively contribute to making postures more variable among office workers and thus aid in alleviating negative health effects of extensive sitting.
NASA Astrophysics Data System (ADS)
Batchelor, Murray T.; Wille, Luc T.
The Table of Contents for the book is as follows: * Preface * Modelling the Immune System - An Example of the Simulation of Complex Biological Systems * Brief Overview of Quantum Computation * Quantal Information in Statistical Physics * Modeling Economic Randomness: Statistical Mechanics of Market Phenomena * Essentially Singular Solutions of Feigenbaum-Type Functional Equations * Spatiotemporal Chaotic Dynamics in Coupled Map Lattices * Approach to Equilibrium of Chaotic Systems * From Level to Level in Brain and Behavior * Linear and Entropic Transformations of the Hydrophobic Free Energy Sequence Help Characterize a Novel Brain Polyprotein: CART's Protein * Dynamical Systems Response to Pulsed High-Frequency Fields * Bose-Einstein Condensates in the Light of Nonlinear Physics * Markov Superposition Expansion for the Entropy and Correlation Functions in Two and Three Dimensions * Calculation of Wave Center Deflection and Multifractal Analysis of Directed Waves Through the Study of su(1,1) Ferromagnets * Spectral Properties and Phases in Hierarchical Master Equations * Universality of the Distribution Functions of Random Matrix Theory * The Universal Chiral Partition Function for Exclusion Statistics * Continuous Space-Time Symmetries in a Lattice Field Theory * Some Limiting Cases of the One-Dimensional N-Body Problem (Quelques Cas Limites du Problème à N Corps Unidimensionnel) * Integrable Models of Correlated Electrons * On the Riemann Surface of the Three-State Chiral Potts Model * Two Exactly Soluble Lattice Models in Three Dimensions * Competition of Ferromagnetic and Antiferromagnetic Order in the Spin-1/2 XXZ Chain at Finite Temperature * Extended Vertex Operator Algebras and Monomial Bases * Parity and Charge Conjugation Symmetries and S Matrix of the XXZ Chain * An Exactly Solvable Constrained XXZ Chain * Integrable Mixed Vertex Models From the Braid-Monoid Algebra * From Yang-Baxter Equations to Dynamical Zeta Functions for Birational Transformations * Hexagonal Lattice Directed Site Animals * Direction in the Star-Triangle Relations * A Self-Avoiding Walk Through Exactly Solved Lattice Models in Statistical Mechanics
A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres
NASA Technical Reports Server (NTRS)
Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo
2015-01-01
A fast instrument simulator is developed to simulate observations of cloudy atmospheres made by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified in comparison with a combination of the rigorous line-by-line radiative transfer model (LBLRTM) and the discrete ordinate radiative transfer (DISORT) model. Relative errors are less than 2% for simulated TOA reflectances in the solar channels, and the brightness temperature differences for the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy-atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marney, Luke C.; Siegler, William C.; Parsons, Brendon A.
Two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC – TOFMS) is a highly capable instrumental platform that produces complex and information-rich multi-dimensional chemical data. The complex data can be overwhelming, especially when many samples (of various sample classes) are analyzed with multiple injections for each sample. Thus, the data must be analyzed in such a way as to extract the most meaningful information. Pixel-based and peak-table-based Fisher-ratio methods have been used successfully in the past to reduce the multi-dimensional data down to those chemical compounds that are changing between classes relative to those that are not (i.e., chemical feature selection). We report on the initial development of a computationally fast novel tile-based Fisher-ratio software that addresses challenges due to 2D retention time misalignment without explicitly aligning the data, which is a problem for both pixel-based and peak-table-based methods. Concurrently, the tile-based Fisher-ratio software maximizes the sensitivity contrast of true positives against a background of potential false positives and noise. To study this software, eight compounds, plus one internal standard, were spiked into diesel at various concentrations. The tile-based F-ratio software was able to discover all spiked analytes, within the complex diesel sample matrix with thousands of potential false positives, in each possible concentration comparison, even at the lowest absolute spiked analyte concentration ratio of 1.06.
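The Fisher ratio underlying the pixel-, peak-table-, and tile-based variants is the between-class variance of a feature divided by its within-class variance. A minimal two-class sketch on synthetic data (my own toy example; the "spiked" feature stands in for a spiked analyte):

```python
import numpy as np

def fisher_ratio(class_a, class_b):
    """Per-feature Fisher ratio: variance of the two class means
    (between-class) divided by the pooled within-class variance."""
    means = np.vstack([class_a.mean(axis=0), class_b.mean(axis=0)])
    between = means.var(axis=0)
    within = 0.5 * (class_a.var(axis=0, ddof=1) + class_b.var(axis=0, ddof=1))
    return between / within

rng = np.random.default_rng(1)
a = rng.normal(size=(20, 5))  # 20 "injections" of class A, 5 features
b = rng.normal(size=(20, 5))  # 20 "injections" of class B
b[:, 2] += 5.0                # feature 2 mimics a spiked analyte
ratios = fisher_ratio(a, b)
```

Features that differ between classes stand out with large F-ratios against a background of near-zero ratios for unchanged features, which is the feature-selection contrast the software maximizes.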
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curchod, Basile F. E.; Agostini, Federica, E-mail: agostini@mpi-halle.mpg.de; Gross, E. K. U.
Nonadiabatic quantum interferences emerge whenever nuclear wavefunctions in different electronic states meet and interact in a nonadiabatic region. In this work, we analyze how nonadiabatic quantum interferences translate in the context of the exact factorization of the molecular wavefunction. In particular, we focus our attention on the shape of the time-dependent potential energy surface—the exact surface on which the nuclear dynamics takes place. We use a one-dimensional exactly solvable model to reproduce different conditions for quantum interferences, whose characteristic features already appear in one dimension. The time-dependent potential energy surface develops complex features when strong interferences are present, in clear contrast to the observed behavior in simple nonadiabatic crossing cases. Nevertheless, independent classical trajectories propagated on the exact time-dependent potential energy surface reasonably conserve a distribution in configuration space that mimics one of the exact nuclear probability densities.
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
Diagnostic value of the fast-FLAIR sequence in MR imaging of intracranial tumors.
Husstedt, H W; Sickert, M; Köstler, H; Haubitz, B; Becker, H
2000-01-01
The aim of this study was to quantify the imaging characteristics of the fast fluid-attenuated inversion recovery (FLAIR) sequence in brain tumors compared with postcontrast T1- and T2-weighted sequences. Fast-FLAIR, T2 fast spin echo (FSE), and postcontrast T1 SE images of 74 patients with intracranial neoplasms were analyzed. Four neuroradiologists rated signal intensity and inhomogeneity of the tumor, rendering of cystic parts, demarcation of the tumor vs brain, of the tumor vs edema, and of brain vs edema, as well as the presence of motion and of other artifacts. Data analysis was performed for histologically proven astrocytomas, glioblastomas, and meningiomas, for tumors with poor contrast enhancement, and for all patients pooled. Only for tumors with poor contrast enhancement (n = 12) did fast FLAIR provide additional information about the lesion. In these cases, signal intensity, demarcation of the tumor vs brain, and differentiation of the tumor vs edema were best using fast FLAIR. In all cases, rendering of the tumor's inner structure was poor. For all other tumor types, fast FLAIR did not give clinically relevant information, the only exception being a better demarcation of the edema from brain tissue. Artifacts rarely interfered with evaluation of fast-FLAIR images. Thus, fast FLAIR cannot replace T2-weighted series. It provides additional information only in tumors with poor contrast enhancement. It is helpful for defining the exact extent of the edema of any tumor but gives little information about their inner structure.
A time series approach to inferring groundwater recharge using the water table fluctuation method
NASA Astrophysics Data System (ADS)
Crosbie, Russell S.; Binning, Philip; Kalma, Jetse D.
2005-01-01
The water table fluctuation method for determining recharge from precipitation and water table measurements was originally developed on an event basis. Here a new multievent time series approach is presented for inferring groundwater recharge from long-term water table and precipitation records. Additional new features are the incorporation of a variable specific yield based upon the soil moisture retention curve, proper accounting for the Lisse effect on the water table, and the incorporation of aquifer drainage so that recharge can be detected even if the water table does not rise. A methodology for filtering noise and non-rainfall-related water table fluctuations is also presented. The model has been applied to 2 years of field data collected in the Tomago sand beds near Newcastle, Australia. It is shown that gross recharge estimates are very sensitive to time step size and specific yield. Properly accounting for the Lisse effect is also important to determining recharge.
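The multievent bookkeeping described above can be sketched in a few lines, assuming a constant specific yield and a linear drainage (recession) rate; the published method additionally uses a moisture-retention-based variable specific yield, the Lisse-effect correction, and noise filtering, all omitted here. The function name and parameters are illustrative, not from the paper.

```python
def recharge_wtf(heads, dt, sy, recession_rate):
    """Water table fluctuation method on a time series (sketch).

    heads: water-table elevations [m]; dt: time step; sy: specific yield
    (constant here, variable in the paper); recession_rate: assumed linear
    aquifer drainage rate [m per unit time]. Recharge in a step is
    Sy * (observed rise + drainage that would have lowered the table),
    so recharge can be detected even when the water table does not rise.
    """
    recharge = []
    for h0, h1 in zip(heads, heads[1:]):
        rise = h1 - h0
        r = sy * (rise + recession_rate * dt)
        recharge.append(max(r, 0.0))  # negative estimates treated as no recharge
    return recharge
```

A falling water table can still yield positive recharge whenever the observed decline is smaller than the expected drainage, which is the point of including the recession term.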
NASA Astrophysics Data System (ADS)
Nichols, Brandon S.; Rajaram, Narasimhan; Tunnell, James W.
2012-05-01
Diffuse optical spectroscopy (DOS) provides a powerful tool for fast and noninvasive disease diagnosis. The ability to leverage DOS to accurately quantify tissue optical parameters hinges on the model used to estimate light-tissue interaction. We describe the accuracy of a lookup table (LUT)-based inverse model for measuring optical properties under different conditions relevant to biological tissue. The LUT is a matrix of reflectance values acquired experimentally from calibration standards of varying scattering and absorption properties. Because it is based on experimental values, the LUT inherently accounts for system response and probe geometry. We tested our approach in tissue phantoms containing multiple absorbers, different sizes of scatterers, and varying oxygen saturation of hemoglobin. The LUT-based model was able to extract scattering and absorption properties under most conditions with errors of less than 5 percent. We demonstrate the validity of the lookup table over a range of source-detector separations from 0.25 to 1.48 mm. Finally, we describe the rapid fabrication of a lookup table using only six calibration standards. This optimized LUT was able to extract scattering and absorption properties with average RMS errors of 2.5 and 4 percent, respectively.
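A brute-force inversion of such an experimentally built LUT can be sketched as a least-squares search over the calibration grid. This is only a sketch: the published method interpolates between grid nodes rather than returning the nearest one, and all names here are illustrative.

```python
import numpy as np

def invert_lut(lut, mus_axis, mua_axis, measured):
    """Invert a reflectance lookup table by brute-force least squares.

    lut: array of shape (n_mus, n_mua, n_wavelengths) holding calibrated
    reflectance spectra; measured: spectrum of shape (n_wavelengths,).
    Returns the (mu_s', mu_a) grid pair whose stored spectrum minimizes
    the sum-of-squares residual against the measurement.
    """
    err = ((lut - measured) ** 2).sum(axis=-1)   # SSE at every grid node
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return mus_axis[i], mua_axis[j]
```

Because the table is measured rather than modeled, system response and probe geometry are baked into the stored spectra, which is the design choice the abstract highlights.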
Development of Ultra-Fine Multigroup Cross Section Library of the AMPX/SCALE Code Packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Byoung Kyu; Yang, Won Sik; Kim, Kang Seog
The Consortium for Advanced Simulation of Light Water Reactors Virtual Environment for Reactor Applications (VERA) neutronic simulator MPACT is being developed by Oak Ridge National Laboratory and the University of Michigan for various reactor applications. The MPACT and simplified MPACT 51- and 252-group cross section libraries have been developed for the MPACT neutron transport calculations by using the AMPX and Standardized Computer Analyses for Licensing Evaluations (SCALE) code packages developed at Oak Ridge National Laboratory. It has been noted that the conventional AMPX/SCALE procedure has limited applications for fast-spectrum systems such as boiling water reactor (BWR) fuels with very high void fractions and fast reactor fuels because of its poor accuracy in unresolved and fast energy regions. This lack of accuracy can introduce additional error sources to MPACT calculations, which are already limited by the Bondarenko approach for resolved resonance self-shielding calculation. To enhance the prediction accuracy of MPACT for fast-spectrum reactor analyses, the accuracy of the AMPX/SCALE code packages should be improved first. The purpose of this study is to identify the major problems of the AMPX/SCALE procedure in generating fast-spectrum cross sections and to devise ways to improve the accuracy. For this, various benchmark problems including a typical pressurized water reactor fuel, BWR fuels with various void fractions, and several fast reactor fuels were analyzed using the AMPX 252-group libraries. Isotopic reaction rates were determined by SCALE multigroup (MG) calculations and compared with continuous energy (CE) Monte Carlo calculation results.
This reaction rate analysis revealed three main contributors to the observed differences in reactivity and reaction rates: (1) the limitation of the Bondarenko approach in coarse energy group structure, (2) the normalization issue of probability tables, and (3) neglect of the self-shielding effect of resonance-like cross sections at high energy range such as the (n,p) cross section of Cl35. The first error source can be eliminated by an ultra-fine group (UFG) structure in which the broad scattering resonances of intermediate-weight nuclides can be represented accurately by a piecewise constant function. A UFG AMPX library was generated with modified probability tables and tested against various benchmark problems. The reactivity and reaction rates determined with the new UFG AMPX library agreed very well with Monte Carlo N-Particle (MCNP) results. To enhance the lattice calculation accuracy without significantly increasing the computational time, performing the UFG lattice calculation in two steps was proposed. In the first step, a UFG slowing-down calculation is performed for the corresponding homogenized composition, and UFG cross sections are collapsed into an intermediate group structure. In the second step, the lattice calculation is performed for the intermediate group level using the condensed group cross sections. A preliminary test showed that the condensed library reproduces the results obtained with the UFG cross section library. This result suggests that the proposed two-step lattice calculation approach is a promising option to enhance the applicability of the AMPX/SCALE system to fast system analysis.
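The group collapse used in the first step of such a two-step scheme is the standard flux-weighted condensation that preserves reaction rates. The sketch below assumes the ultra-fine-group scalar fluxes are already available from the slowing-down calculation; names are illustrative, not from the codes mentioned.

```python
def condense(sigma, flux, groups):
    """Flux-weighted collapse of fine-group cross sections (sketch).

    sigma, flux: per-fine-group cross sections and scalar fluxes;
    groups: list of (start, stop) index ranges defining the coarse
    (intermediate) group structure. Each coarse value is
        sigma_G = sum_g(phi_g * sigma_g) / sum_g(phi_g),
    which preserves the reaction rate phi * sigma within each group.
    """
    out = []
    for a, b in groups:
        phi = sum(flux[a:b])
        out.append(sum(f * s for f, s in zip(flux[a:b], sigma[a:b])) / phi)
    return out
```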
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
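As a concrete reference point for the k-grid cycle being analyzed, here is a minimal V-cycle for the 1D Poisson problem -u'' = f with zero Dirichlet ends. This is a sketch only: the paper treats the 2D Laplace solver on the unit square, and the smoother, transfer operators, and names here are illustrative.

```python
import numpy as np

def vcycle(u, f, nu=2):
    """One V(nu,nu)-cycle for -u'' = f on [0,1], zero Dirichlet ends."""
    n = len(u) - 1
    h2 = (1.0 / n) ** 2
    for _ in range(nu):                          # pre-smoothing (Gauss-Seidel)
        for i in range(1, n):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
    if n > 2:
        r = np.zeros(n + 1)                      # residual r = f - A u
        r[1:n] = f[1:n] - (2 * u[1:n] - u[0:n - 1] - u[2:n + 1]) / h2
        nc = n // 2
        rc = np.zeros(nc + 1)                    # full-weighting restriction
        for i in range(1, nc):
            rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
        ec = vcycle(np.zeros(nc + 1), rc, nu)    # recursive coarse-grid solve
        for i in range(1, nc):                   # linear-interpolation prolongation
            u[2 * i] += ec[i]
        for i in range(nc):
            u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
        for _ in range(nu):                      # post-smoothing
            for i in range(1, n):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
    return u
```

For -u'' = 1 the discrete solution coincides with x(1-x)/2, so a few cycles drive the error to solver tolerance, illustrating the grid-independent convergence rates the analysis bounds.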
NASA Astrophysics Data System (ADS)
Wen, Xiao-Gang
2017-05-01
We propose a generic construction of exactly soluble local bosonic models that realize various topological orders with gappable boundaries. In particular, we construct an exactly soluble bosonic model that realizes a (3+1)-dimensional [(3+1)D] Z2-gauge theory with emergent fermionic Kramers doublet. We show that the emergence of such a fermion will cause the nucleation of certain topological excitations in space-time without pin+ structure. The exactly soluble model also leads to a statistical transmutation in (3+1)D. In addition, we construct exactly soluble bosonic models that realize 2 types of time-reversal symmetry-enriched Z2 topological orders in 2+1 dimensions, and 20 types of simplest time-reversal symmetry-enriched topological (SET) orders which have only one nontrivial pointlike and stringlike topological excitation. Many physical properties of those topological states are calculated using the exactly soluble models. We find that some time-reversal SET orders have pointlike excitations that carry a Kramers doublet, a fractionalized time-reversal symmetry. We also find that some Z2 SET orders have stringlike excitations that carry anomalous (non-on-site) Z2 symmetry, which can be viewed as a fractionalization of Z2 symmetry on strings. Our construction is based on cochains and cocycles in algebraic topology, which is very versatile. In principle, it can also realize emergent topological field theory beyond the twisted gauge theory.
NASA Astrophysics Data System (ADS)
Baleanu, Dumitru; Inc, Mustafa; Yusuf, Abdullahi; Aliyu, Aliyu Isa
2018-06-01
In this work, we investigate the Lie symmetry analysis, exact solutions and conservation laws (Cls) to the time fractional Caudrey-Dodd-Gibbon-Sawada-Kotera (CDGDK) equation with Riemann-Liouville (RL) derivative. The time fractional CDGDK is reduced to nonlinear ordinary differential equation (ODE) of fractional order. New exact traveling wave solutions for the time fractional CDGDK are obtained by fractional sub-equation method. In the reduced equation, the derivative is in Erdelyi-Kober (EK) sense. Ibragimov's nonlocal conservation method is applied to construct Cls for time fractional CDGDK.
Decision Model for U.S.- Mexico Border Security Measures
2017-09-01
...and money assigned to border security investments. Subject terms: Department of Homeland Security (DHS), border security, U.S.–Mexico border.
Merchantable Volume and Weights of Mahoe in Puerto Rican Plantations
John K. Francis
1989-01-01
Mahoe (Hibiscus elatus Sw.), a fast-growing tree whose wood is considered valuable, is planted and managed primarily in the West Indies. Until now, volume and weight tables have not been available for the species. Data used in this paper were collected from 50 felled trees in a range of sizes from plantations across Puerto Rico. Using linear...
Effects from Unsaturated Zone Flow during Oscillatory Hydraulic Testing
NASA Astrophysics Data System (ADS)
Lim, D.; Zhou, Y.; Cardiff, M. A.; Barrash, W.
2014-12-01
In analyzing pumping tests on unconfined aquifers, the impact of the unsaturated zone is often neglected. Instead, desaturation at the water table is often treated as a free-surface boundary, which is simple and allows for relatively fast computation. Richards' equation models, which account for unsaturated flow, can be compared with saturated flow models to validate the use of Darcy's Law. In this presentation, we examine the appropriateness of using fast linear steady-periodic models based on linearized water table conditions in order to simulate oscillatory pumping tests in phreatic aquifers. We compare oscillatory pumping test models including: 1) a 2-D radially-symmetric phreatic aquifer model with a partially penetrating well, simulated using both Darcy's Law and Richards' Equation in COMSOL; and 2) a linear phase-domain numerical model developed in MATLAB. Both COMSOL and MATLAB models are calibrated to match oscillatory pumping test data collected in the summer of 2013 at the Boise Hydrogeophysical Research Site (BHRS), and we examine the effect of model type on the associated parameter estimates. The results of this research will aid unconfined aquifer characterization efforts and help to constrain the impact of the simplifying physical assumptions often employed during test analysis.
NASA Astrophysics Data System (ADS)
Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-11-01
Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
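The generator of such a Markov-modulated Markov chain has a standard Kronecker-product block structure on the (rate class x character state) product space. The sketch below builds it for an arbitrary rate-change model; this is the generic covarion-style construction, not necessarily the exact parameterization used in the paper.

```python
import numpy as np

def mmmc_generator(Q, R, rates):
    """Generator of a Markov-modulated Markov chain (covarion/SSRV sketch).

    Q: (s, s) generator of the character process (e.g. nucleotides);
    R: (r, r) generator of the hidden rate-switching process;
    rates: length-r multipliers applied to Q within each rate class.
    The joint generator on the r*s product space is
        G = diag(rates) (x) Q + R (x) I_s,
    where (x) denotes the Kronecker product: within a rate class the
    character evolves at the scaled rate, while rate-class switches
    leave the character state unchanged.
    """
    s = Q.shape[0]
    return np.kron(np.diag(rates), Q) + np.kron(R, np.eye(s))
```

Each row of G sums to zero, as any generator must, and the block in which the rate multiplier is zero (an "off" covarion class) contains only the switching rates.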
Alastuey, A; Ballenegger, V
2012-12-01
We compute thermodynamical properties of a low-density hydrogen gas within the physical picture, in which the system is described as a quantum electron-proton plasma interacting via the Coulomb potential. Our calculations are done using the exact scaled low-temperature (SLT) expansion, which provides a rigorous extension of the well-known virial expansion, valid in the fully ionized phase, into the Saha regime where the system is partially or fully recombined into hydrogen atoms. After recalling the SLT expansion of the pressure [A. Alastuey et al., J. Stat. Phys. 130, 1119 (2008)], we obtain the SLT expansions of the chemical potential and of the internal energy, up to order exp(-|E_{H}|/kT) included (E_{H}≃-13.6 eV). Those truncated expansions describe the first five nonideal corrections to the ideal Saha law. They account exactly, up to the considered order, for all effects of interactions and thermal excitations, including the formation of bound states (atom H, ions H^{-} and H_{2}^{+}, molecule H_{2},⋯) and atom-charge and atom-atom interactions. Among the five leading corrections, three are easy to evaluate, while the remaining ones involve well-defined internal partition functions for the molecule H_{2} and ions H^{-} and H_{2}^{+}, for which no closed-form analytical formulas currently exist. We provide accurate low-temperature approximations for those partition functions by using known values of rotational and vibrational energies. We then compare the predictions of the SLT expansion, for the pressure and the internal energy, with, on the one hand, the equation-of-state tables obtained within the opacity program at Livermore (OPAL) and, on the other hand, data of path integral quantum Monte Carlo (PIMC) simulations. In general, a good agreement is found.
At low densities, the simple analytical SLT formulas reproduce the values of the OPAL tables up to the last digit in a large range of temperatures, while at higher densities (ρ∼10^{-2} g/cm^{3}), some discrepancies among the SLT, OPAL, and PIMC results are observed.
Life-table methods for detecting age-risk factor interactions in long-term follow-up studies.
Logue, E E; Wing, S
1986-01-01
Methodological investigation has suggested that age-risk factor interactions should be more evident in age of experience life tables than in follow-up time tables due to the mixing of ages of experience over follow-up time in groups defined by age at initial examination. To illustrate the two approaches, age modification of the effect of total cholesterol on ischemic heart disease mortality in two long-term follow-up studies was investigated. Follow-up time life table analysis of 116 deaths over 20 years in one study was more consistent with a uniform relative risk due to cholesterol, while age of experience life table analysis was more consistent with a monotonic negative age interaction. In a second follow-up study (160 deaths over 24 years), there was no evidence of a monotonic negative age-cholesterol interaction by either method. It was concluded that age-specific life table analysis should be used when age-risk factor interactions are considered, but that both approaches yield almost identical results in absence of age interaction. The identification of the more appropriate life-table analysis should be ultimately guided by the nature of the age or time phenomena of scientific interest.
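The distinction between follow-up-time and age-of-experience tables comes down to how person-years of exposure are credited. A minimal sketch of the age-of-experience bookkeeping follows; the function, band width, and death-crediting convention are illustrative, not taken from the paper.

```python
def person_years(subjects, width=5):
    """Tabulate person-years by attained-age band (age of experience).

    subjects: iterable of (age_at_entry, years_followed, died) tuples.
    Each subject contributes exposure to every age band crossed during
    follow-up; a death is credited to the band of the age at exit.
    Returns (exposure, deaths) as dicts keyed by band lower bound.
    """
    exposure, deaths = {}, {}
    for entry, followed, died in subjects:
        age = entry
        while age < entry + followed:
            band = width * int(age // width)
            step = min(band + width, entry + followed) - age
            exposure[band] = exposure.get(band, 0.0) + step
            age += step
        if died:
            band = width * int((entry + followed) // width)
            deaths[band] = deaths.get(band, 0) + 1
    return exposure, deaths
```

A follow-up-time table would instead key both dicts by years since the initial examination, mixing attained ages within each stratum, which is exactly the mixing the abstract argues can mask age-risk factor interactions.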
NASA Astrophysics Data System (ADS)
Michael, C. A.; Tanaka, K.; Akiyama, T.; Ozaki, T.; Osakabe, M.; Sakakibara, S.; Yamaguchi, H.; Murakami, S.; Yokoyama, M.; Shoji, M.; Vyacheslavov, L. N.; LHD Experimental Group
2018-04-01
In the Large helical device, a change of energetic particle mode is observed as He concentration is varied in ion-ITB type experiments, having constant electron density and input heating power but with a clear increase of central ion temperature in He rich discharges. This activity consists of bursty, but damped energetic interchange modes (EICs, Du et al 2015 Phys. Rev. Lett. 114 155003), whose occurrence rate is dramatically lower in the He-rich discharges. Mechanisms are discussed for the changes in drive and damping of the modes with He concentration. These EIC bursts consist of marked changes in the radial electric field, which is derived from the phase velocity of turbulence measured with the 2D phase contrast imaging (PCI) system. Similar bursts are detected in edge fast ion diagnostics. Ion thermal transport by gyro-Bohm scaling is recognised as a contribution to the change in ion temperature, though fast ion losses by these EIC modes may also contribute to the ion temperature dependence on He concentration, most particularly controlling the height of an ‘edge-pedestal’ in the Ti profile. The steady-state level of fast ions is shown to be larger in helium rich discharges on the basis of a compact neutral particle analyser (CNPA), and the fast-ion component of the diamagnetic stored energy. These events also have an influence on turbulence and transport. The large velocity shear produced during these events transiently improves confinement and suppresses turbulence, and has a larger net effect when bursts are more frequent in hydrogen discharges. This exactly offsets the increased gyro-Bohm related turbulence drive in hydrogen which results in the same time-averaged turbulence level in hydrogen as in helium.
Landau-Zener extension of the Tavis-Cummings model: structure of the solution
NASA Astrophysics Data System (ADS)
Sun, Chen; Sinitsyn, Nikolai
We explore the recently discovered solution of the driven Tavis-Cummings model (DTCM). It describes the interaction of an arbitrary number of two-level systems with a bosonic mode that has a linearly time-dependent frequency. We derive compact and tractable expressions for transition probabilities in terms of well-known special functions. In the new form, our formulas are suitable for fast numerical calculations and analytical approximations. As an application, we obtain the semiclassical limit of the exact solution and compare it to prior approximations. We also reveal a connection between the DTCM and q-deformed binomial statistics. This work was performed under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. The authors also acknowledge support from the LDRD program at LANL.
Extreme mortality in nineteenth-century Africa: the case of Liberian immigrants.
McDaniel, A
1992-11-01
Several studies have examined the mortality of immigrants from Europe to Africa in the nineteenth century. This paper examines the level of mortality in Liberia of Africans who emigrated there from the United States. A life table is estimated from data collected by the American Colonization Society from 1820 to 1843. The analysis reflects the mortality experience of a population that is transplanted from one disease environment to another, more exacting, disease environment. The results of this analysis show that these Liberian immigrants experienced the highest mortality rates in accurately recorded human history.
Krawczyk, Joanna; Wojciechowski, Jarosław; Leszczyński, Ryszard; Błaszczyk, Jan
2010-01-01
More and more people worldwide contend with overweight or obesity, a phenomenon now recognized as one of the most important problems of modern civilization in many developed countries. The change of lifestyle from an active to a more sedentary one, together with bad eating habits, has led to the development of overweight and obesity at an alarmingly fast rate, accompanied by growing research interest in effective methods of fighting them. The aim of the study was to evaluate selected body weight parameters in people undergoing health training on the Slender-Life rehabilitation and keep-fit tables. A group of 50 patients treated in a sanatorium was included in the observation. Body weight and skinfold thickness were measured twice, on the first and last days of the fifteen-day training on the tables. A statistically significant decrease in the examined parameters, including body weight, fat mass, BMI, and skinfold thickness, was detected; thus, health training on the Slender-Life rehabilitation and keep-fit tables increases the proportion of fat-free body mass. The positive reception of rehabilitation on the Slender-Life tables argues for its wider application.
The Impact of Water Table Drawdown and Drying on Subterranean Aquatic Fauna in In-Vitro Experiments
Stumpp, Christine; Hose, Grant C.
2013-01-01
The abstraction of groundwater is a global phenomenon that directly threatens groundwater ecosystems. Despite the global significance of this issue, the impact of groundwater abstraction and the lowering of groundwater tables on biota is poorly known. The aim of this study is to determine the impacts of groundwater drawdown in unconfined aquifers on the distribution of fauna close to the water table, and the tolerance of groundwater fauna to sediment drying once water levels have declined. A series of column experiments were conducted to investigate the depth distribution of different stygofauna (Syncarida and Copepoda) under saturated conditions and after fast and slow water table declines. Further, the survival of stygofauna under conditions of reduced sediment water content was tested. The distribution and response of stygofauna to water drawdown were taxon specific, but with the common response of some fauna being stranded by water level decline. So too, the survival of stygofauna under different levels of sediment saturation was variable. Syncarida were better able to tolerate drying conditions than the Copepoda, but mortality of all groups increased with decreasing sediment water content. The results of this work provide new understanding of the response of fauna to water table drawdown. Such improved understanding is necessary for sustainable use of groundwater, and allows for targeted strategies to better manage groundwater abstraction and maintain groundwater biodiversity. PMID:24278111
Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew
2009-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
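The predictive core of such a compressor can be sketched as an adaptive filter whose integer residuals are what the entropy coder would see; because the decoder runs the identical predictor, reconstruction is exact. The one-tap LMS toy below is only a stand-in for the actual JPL adaptive filter, and all names and parameters are illustrative.

```python
def predict_residuals(samples, mu=0.01):
    """Adaptive prediction stage of a lossless compressor (sketch).

    A one-tap LMS predictor tracks the integer signal; the residuals
    returned here are what an entropy coder (omitted) would encode.
    """
    w, prev = 0.0, 0.0
    residuals = []
    for x in samples:
        p = round(w * prev)            # integer prediction
        residuals.append(x - p)        # residual to be entropy-coded
        w += mu * (x - p) * prev       # adaptive LMS update
        prev = x
    return residuals

def reconstruct(residuals, mu=0.01):
    """Inverse of predict_residuals: the decoder mirrors the predictor
    state exactly, so the round trip is lossless by construction."""
    w, prev = 0.0, 0.0
    out = []
    for r in residuals:
        p = round(w * prev)
        x = p + r
        out.append(x)
        w += mu * (x - p) * prev
        prev = x
    return out
```

Losslessness does not depend on the predictor being good, only on encoder and decoder updating identical state; a better predictor just makes the residuals smaller and cheaper to code.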
eFAST for Pneumothorax: Real-Life Application in an Urban Level 1 Center by Trauma Team Members.
Maximus, Steven; Figueroa, Cesar; Whealon, Matthew; Pham, Jacqueline; Kuncir, Eric; Barrios, Cristobal
2018-02-01
The focused assessment with sonography for trauma (FAST) examination has become the standard of care for rapid evaluation of trauma patients. Extended FAST (eFAST) is the use of ultrasonography for the detection of pneumothorax (PTX). The exact sensitivity and specificity of eFAST detecting traumatic PTX during practical "real-life" application is yet to be investigated. This is a retrospective review of all trauma patients with a diagnosis of PTX, who were treated at a large level 1 urban trauma center from March 2013 through July 2014. Charts were reviewed for results of imaging, which included eFAST, chest X-ray, and CT scan. The requirement of tube thoracostomy and mechanism of injury were also analyzed. A total of 369 patients with a diagnosis of PTX were identified. A total of 69 patients were excluded, as eFAST was either not performed or not documented, leaving 300 patients identified with PTX. A total of 113 patients had clinically significant PTX (37.6%), requiring immediate tube thoracostomy placement. eFAST yielded a positive diagnosis of PTX in 19 patients (16.8%), and all were clinically significant, requiring tube thoracostomy. Chest X-ray detected clinically significant PTX in 105 patients (92.9%). The literature on the utility of eFAST for PTX in trauma is variable. Our data show that although specific for clinically significant traumatic PTX, it has poor sensitivity when performed by clinicians with variable levels of ultrasound training. We conclude that CT is still the gold standard in detecting PTX, and clinicians performing eFAST should have adequate training.
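The sensitivities quoted above follow directly from the counts in the study (sensitivity is true positives over all patients who truly have the condition, here the 113 clinically significant pneumothoraces):

```python
def sensitivity(true_pos, total_with_condition):
    """Sensitivity = TP / (TP + FN), i.e. detections among patients
    who truly have the condition."""
    return true_pos / total_with_condition

# Figures from the study: 113 clinically significant pneumothoraces.
efast = sensitivity(19, 113)   # eFAST detected 19 -> about 16.8%
cxr = sensitivity(105, 113)    # chest X-ray detected 105 -> about 92.9%
```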
Inhomogeneous quasistationary state of dense fluids of inelastic hard spheres
NASA Astrophysics Data System (ADS)
Fouxon, Itzhak
2014-05-01
We study closed dense collections of freely cooling hard spheres that collide inelastically with constant coefficient of normal restitution. We find inhomogeneous states (ISs) where the density profile is spatially nonuniform but constant in time. The states are exact solutions of nonlinear partial differential equations that describe the coupled distributions of density and temperature valid when inelastic losses of energy per collision are small. The derivation is performed without modeling the equations' coefficients that are unknown in the dense limit (such as the equation of state) using only their scaling form specific for hard spheres. Thus the IS is the exact state of this dense many-body system. It captures a fundamental property of inelastic collections of particles: the possibility of preserving nonuniform temperature via the interplay of inelastic cooling and heat conduction that generalizes previous results. We perform numerical simulations to demonstrate that an arbitrary initial state evolves to the IS in the limit of long times where the container has the geometry of the channel. The evolution is like a gas-liquid transition. The liquid condenses in a vanishing part of the total volume but takes most of the mass of the system. However, the gaseous phase, whose mass grows only logarithmically with the system size, is relevant because its fast particles carry most of the energy of the system. Remarkably, the system self-organizes to dissipate no energy: The inelastic decay of energy is a power law [1+t/tc]^-2, where tc diverges in the thermodynamic limit. This is reinforced by observing that for supercritical systems the IS coincides in most of the space with the steady states of granular systems heated at one of the walls. We discuss the relation of our results to the recently proposed finite-time singularity in other container geometries.
Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Abuasad, Salah; Hashim, Ishak
2018-04-01
In this paper, we present the homotopy decomposition method with a modified definition of beta fractional derivative for the first time to find exact solution of one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained by using fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
Accuracy of the adiabatic-impulse approximation for closed and open quantum systems
NASA Astrophysics Data System (ADS)
Tomka, Michael; Campos Venuti, Lorenzo; Zanardi, Paolo
2018-03-01
We study the adiabatic-impulse approximation (AIA) as a tool to approximate the time evolution of quantum states when driven through a region of small gap. Such small-gap regions are a common situation in adiabatic quantum computing and having reliable approximations is important in this context. The AIA originates from the Kibble-Zurek theory applied to continuous quantum phase transitions. The Kibble-Zurek mechanism was developed to predict the power-law scaling of the defect density across a continuous quantum phase transition. Instead, here we quantify the accuracy of the AIA via the trace norm distance with respect to the exact evolved state. As expected, we find that for short times or fast protocols, the AIA outperforms the simple adiabatic approximation. However, for large times or slow protocols, the situation is actually reversed and the AIA provides a worse approximation. Nevertheless, we found a variation of the AIA that can perform better than the adiabatic one. This counterintuitive modification consists in crossing the region of small gap twice. Our findings are illustrated by several examples of driven closed and open quantum systems.
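The accuracy measure used above, the trace-norm distance between an approximate state and the exactly evolved one, is straightforward to reproduce numerically. Below is a minimal sketch: an RK4 integrator for the Schrödinger equation plus the pure-state trace distance; the Landau-Zener Hamiltonian in the test is the standard two-level model, and all function names are illustrative.

```python
import numpy as np

def evolve(H, psi0, t0, t1, steps=6000):
    """RK4 integration of the Schrödinger equation i dpsi/dt = H(t) psi (hbar = 1)."""
    psi = psi0.astype(complex)
    dt = (t1 - t0) / steps
    f = lambda t, y: -1j * (H(t) @ y)
    t = t0
    for _ in range(steps):
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return psi / np.linalg.norm(psi)

def trace_distance(psi, phi):
    """Trace-norm distance between the pure states |psi><psi| and |phi><phi|;
    for pure states it reduces to sqrt(1 - |<psi|phi>|^2)."""
    return np.sqrt(max(0.0, 1.0 - abs(np.vdot(psi, phi)) ** 2))
```

For a slow Landau-Zener sweep H(t) = [[vt, d], [d, -vt]], the state evolved from the initial ground state stays close in trace distance to the final instantaneous ground state, which is the regime where, per the abstract, the plain adiabatic approximation outperforms the AIA.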
Principles for circadian orchestration of metabolic pathways.
Thurley, Kevin; Herbst, Christopher; Wesener, Felix; Koller, Barbara; Wallach, Thomas; Maier, Bert; Kramer, Achim; Westermark, Pål O
2017-02-14
Circadian rhythms govern multiple aspects of animal metabolism. Transcriptome-, proteome- and metabolome-wide measurements have revealed widespread circadian rhythms in metabolism governed by a cellular genetic oscillator, the circadian core clock. However, it remains unclear if and under which conditions transcriptional rhythms cause rhythms in particular metabolites and metabolic fluxes. Here, we analyzed the circadian orchestration of metabolic pathways by direct measurement of enzyme activities, analysis of transcriptome data, and developing a theoretical method called circadian response analysis. Contrary to a common assumption, we found that pronounced rhythms in metabolic pathways are often favored by separation rather than alignment in the times of peak activity of key enzymes. This property holds true for a set of metabolic pathway motifs (e.g., linear chains and branching points) and also under the conditions of fast kinetics typical for metabolic reactions. By circadian response analysis of pathway motifs, we determined exact timing separation constraints on rhythmic enzyme activities that allow for substantial rhythms in pathway flux and metabolite concentrations. Direct measurements of circadian enzyme activities in mouse skeletal muscle confirmed that such timing separation occurs in vivo.
Principles for circadian orchestration of metabolic pathways
Thurley, Kevin; Herbst, Christopher; Wesener, Felix; Koller, Barbara; Wallach, Thomas; Maier, Bert; Kramer, Achim
2017-01-01
Circadian rhythms govern multiple aspects of animal metabolism. Transcriptome-, proteome- and metabolome-wide measurements have revealed widespread circadian rhythms in metabolism governed by a cellular genetic oscillator, the circadian core clock. However, it remains unclear if and under which conditions transcriptional rhythms cause rhythms in particular metabolites and metabolic fluxes. Here, we analyzed the circadian orchestration of metabolic pathways by direct measurement of enzyme activities, analysis of transcriptome data, and developing a theoretical method called circadian response analysis. Contrary to a common assumption, we found that pronounced rhythms in metabolic pathways are often favored by separation rather than alignment in the times of peak activity of key enzymes. This property holds true for a set of metabolic pathway motifs (e.g., linear chains and branching points) and also under the conditions of fast kinetics typical for metabolic reactions. By circadian response analysis of pathway motifs, we determined exact timing separation constraints on rhythmic enzyme activities that allow for substantial rhythms in pathway flux and metabolite concentrations. Direct measurements of circadian enzyme activities in mouse skeletal muscle confirmed that such timing separation occurs in vivo. PMID:28159888
Fast associative memory + slow neural circuitry = the computational model of the brain.
NASA Astrophysics Data System (ADS)
Berkovich, Simon; Berkovich, Efraim; Lapir, Gennady
1997-08-01
We propose a computational model of the brain based on a fast associative memory and relatively slow neural processors. In this model, processing time is expensive but memory access is not, so most algorithmic tasks would be accomplished by using large look-up tables rather than by calculating. The essential feature of an associative memory in this context (characteristic of a holographic-type memory) is that it works without an explicit mechanism for the resolution of multiple responses. As a result, the slow neuronal processing elements, overwhelmed by the flow of information, operate as a set of templates for ranking the retrieved information. This structure addresses the primary controversy in brain architecture: distributed organization of memory vs. localization of processing centers. This computational model offers an intriguing explanation of many of the paradoxical features of the brain's architecture, such as integration of sensors (through a DMA mechanism), subliminal perception, universality of software, interrupts, fault tolerance, certain bizarre possibilities for rapid arithmetic, etc. In conventional computer science this type of computational model has not attracted attention, as it goes against the technological grain by using a working memory faster than the processing elements.
NASA Astrophysics Data System (ADS)
Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel; Chu, Shih-I.
2017-04-01
Solving and analyzing the exact time-dependent optimized effective potential (TDOEP) integral equation has been a longstanding challenge due to its highly nonlinear and nonlocal nature. To meet the challenge, we derive an exact time-local TDOEP equation that admits a unique real-time solution in terms of time-dependent Kohn-Sham orbitals and effective memory orbitals. For illustration, the dipole evolution dynamics of a one-dimensional model chain of hydrogen atoms is numerically evaluated and examined to demonstrate the utility of the proposed time-local formulation. Importantly, it is shown that the zero-force theorem, violated by the time-dependent Krieger-Li-Iafrate approximation, is fulfilled in the current TDOEP framework. This work was partially supported by DOE.
Principles of protein folding--a perspective from simple exact models.
Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.
1995-01-01
General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
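The "simple exact lattice models" described above are exemplified by the two-dimensional HP model, in which a binary hydrophobic/polar (H/P) sequence is folded on a square lattice and every conformation can be enumerated exactly. A minimal sketch of such an exhaustive search (the HP model is the standard instance of this model class; the function name and test sequences are illustrative):

```python
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def min_hp_energy(seq):
    """Exhaustively enumerate 2D self-avoiding walks for an HP sequence.

    Energy = -(number of non-bonded H-H lattice contacts), the standard
    HP-model energy. Feasible only for short chains, which is exactly the
    regime where complete conformational exploration is possible.
    """
    n = len(seq)
    best = 0

    def energy(path):
        e = 0
        for i in range(n):
            for j in range(i + 2, n):  # skip chain-bonded neighbors
                if seq[i] == 'H' == seq[j]:
                    xi, yi = path[i]
                    xj, yj = path[j]
                    if abs(xi - xj) + abs(yi - yj) == 1:  # lattice contact
                        e -= 1
        return e

    def extend(path):
        nonlocal best
        if len(path) == n:
            best = min(best, energy(path))
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:  # self-avoidance
                extend(path + [nxt])

    extend([(0, 0), (1, 0)])  # fix the first bond to remove symmetry
    return best

print(min_hp_energy("HPPH"))  # -> -1 (U-shaped fold brings the two H's into contact)
```

Even this toy search illustrates the binary folding code discussed above: the lowest-energy conformation is determined by the H/P pattern alone.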
Assessing the Value of Regulation Resources Based on Their Time Response Characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Lu, Shuai; Ma, Jian
2008-06-01
Fast responsive regulation resources are potentially more valuable (more efficient) as a power system regulation resource because they allow applying controls at the exact moment and in the exact amount needed. Faster control is desirable because it facilitates more reliable compliance with the NERC Control Performance Standards at relatively lower regulation capacity procurements. The current California ISO practices and markets do not differentiate among regulation resources based on their speed of response (with the exception of some minimum ramping capabilities). Some demand response technologies, including some generation and energy storage resources, can provide quicker control actions. California ISO practices and markets could be updated to welcome more fast regulation resources into the California ISO service area. The project work reported here pursued the following objectives: • Develop a methodology to assess the relative value of generation resources used for the California ISO regulation and load following functions • Base this assessment on physical characteristics, including the ability to quickly change output following California ISO signals • Evaluate what power is worth on different time scales • Analyze the benefits of new regulation resources in providing effective compliance with the mandatory NERC Control Performance Standards • Evaluate impacts of the newly proposed BAAL and FRR standards on the potential value of fast regulation and distributed regulation resources • Develop a scope for follow-up projects to pave the road for new, efficient types of balancing resources in California.
The work included the following studies: • Analysis of California ISO regulating units characteristics • California ISO automatic generation system (AGC) analysis • California ISO regulation procurement and market analysis • Fast regulation efficiency analysis • Projection of the California ISO load following and regulation requirements into the future • Value of fast responsive resources depending on their ramping capability • Potential impacts of the balancing authority area control error limit (BAAL), which is a part of the newly proposed NERC standard “Balancing Resources and Demand” • Potential impacts of the Western Electricity Coordinating Council (WECC) frequency responsive reserve (FRR) standard • Recommendations for the next phase of the project. The following main conclusions and suggestions for the future have been made: • The analysis of regulation ramping requirements shows that the regulation system should be able to provide ramps of at least 40-60 MW per minute for a period up to 6 minutes. • Evaluate if changes are needed in the California ISO AGC system to effectively accommodate new types of fast regulation resources and minimize the California ISO regulation procurement. • California ISO may consider creating better market opportunities for and incentives for fast responsive resources. • An additional study of low probability high ramp events can be recommended to the California ISO. • The California ISO may be willing to consider establishing a more relaxed target CPS2 compliance level. • A BAAL-related study can be recommended for the California ISO as soon as more clarity is achieved concerning the actual enforcement of the BAAL standard and its numerical values for the California ISO. The study may involve an assessment of advantages of the distributed frequency-based control for the California ISO system. The market-related issues that arise in this connection can be also investigated. 
• A FRR-related study can be recommended for the California ISO as soon as more clarity is achieved concerning the actual enforcement of the FRR standard and its numerical values for the California ISO.
NASA Astrophysics Data System (ADS)
Bouaamlat, I.; Larabi, A.; Faouzi, M.
2013-12-01
The geographical location of the Tafilalet oasis system (TOS) in the south of the Ziz valley (Morocco) gives it a particular advantage in terms of water potential. Surface water originating in the humid regions of the High Atlas is intercepted by a dam and then conveyed through the Ziz watercourse towards the plain of the TOS, creating the conditions for a water table that is relatively rich given the local climatic conditions (an arid climate with recurrent drought). As a result, the region hosts one of the largest palm groves in North Africa, with agricultural activity practiced in 21 irrigation areas whose size rarely exceeds 2,000 hectares. Given the role of the water table in the economic development of the region, a hydrogeological study was conducted to understand the impact of artificial recharge and recurrent droughts on the groundwater reserves of the TOS. In this study, a three-dimensional groundwater flow model was developed for the Tafilalet oasis system aquifer to assist decision makers as a "management tool" for assessing alternative schemes for the development and exploitation of groundwater resources under varying artificial recharge and drought, using the Modflow code for the first time in this setting. The study accounts for the most realistic hydrogeological conditions possible, uses a geographical information system (GIS) for the organisation and treatment of data, and applies a multidisciplinary approach combining geostatistics and hydrogeological modeling. The results of this numerical investigation of the TOS aquifer show that commissioning the dam to control extreme flood flows, together with good management of water releases, has avoided losses of irrigation water and consequently prevented overexploitation of the groundwater.
Thus, with one or two water releases per year from the dam, at a flow of more than 14 million m3/year, it is possible to restore the volume of water abstracted by wells. The idea that pumping wells lower the water table is not exactly true: the development of groundwater abstraction has not driven the decline of the water table in recent years. Pumping wells accompany, rather than trigger, the lowering of the water table; it is mainly the succession of dry periods that causes the decreases in piezometric level. This situation confirms the important role that groundwater plays as a 'buffer' during drought periods.
2015-04-21
seismic sensors, acoustic sensors, electromagnetic sensors, and infrared (IR) detectors are among the sensors needed for multimodal sensing of vehicles, personnel, weapons... sensors and detectors, largely because the nature of piezoelectricity renders both active and passive sensing with fast response, low profile... and low power consumption. Acoustic and seismic sensors are used to ascertain the exact target location, speed, direction of motion, and
Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas
2008-04-01
A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the inversion of the impedance matrix by block (in which two impedance matrices of each interface and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method for an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci.33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.
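The block-inversion idea behind PILE can be illustrated on a toy linear system: with self-impedance blocks Z11 and Z22 for the two interfaces and coupling blocks Z12 and Z21, eliminating the lower unknowns gives x1 = (I - M)^(-1) Z11^(-1) b1, where M = Z11^(-1) Z12 Z22^(-1) Z21 is the characteristic operator, and expanding (I - M)^(-1) as a geometric series yields one multiple-scattering order per power of M. A sketch with random stand-in matrices (not an actual MoM discretization, and without the FB-SA acceleration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Self-impedance blocks, made diagonally dominant so the multiple-
# scattering series converges; off-diagonal blocks are weak coupling.
# These are random stand-ins for real MoM impedance matrices.
Z11 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Z22 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Z12 = 0.05 * rng.standard_normal((n, n))
Z21 = 0.05 * rng.standard_normal((n, n))
b1 = rng.standard_normal(n)  # excitation on the upper interface only

# Direct solve of the full coupled system (zero excitation on the object).
Z = np.block([[Z11, Z12], [Z21, Z22]])
x_direct = np.linalg.solve(Z, np.concatenate([b1, np.zeros(n)]))[:n]

# PILE-style series: each power of M adds one interface -> object ->
# interface "bounce" to the current on the upper interface.
M = np.linalg.solve(Z11, Z12 @ np.linalg.solve(Z22, Z21))
term = np.linalg.solve(Z11, b1)  # zeroth-order (no interaction) term
x_pile = term.copy()
for _ in range(20):
    term = M @ term
    x_pile += term

print(np.allclose(x_pile, x_direct))
```

The appeal of the decomposition is that each scattering order is obtained with inversions of the two interface blocks only, which is where fast solvers such as FB-SA are plugged in.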
Benning, C; Huang, Z H; Gage, D A
1995-02-20
Cells of the photosynthetic bacterium Rhodobacter sphaeroides grown under phosphate-limiting conditions accumulated nonphosphorous glycolipids and lipids carrying head groups derived from amino acids. Concomitantly, the relative amount of phosphoglycerolipids decreased from 90 to 22 mol% of total polar lipids in the membranes. Two lipids, not detectable in cells grown under standard conditions, were synthesized during phosphate-limited growth. Fast atom bombardment mass spectroscopy, exact mass measurements, 1H NMR spectroscopy, sugar composition analysis, and methylation analysis of the predominant glycolipid led to the identification of the novel compound 1,2-di-O-acyl-3-O-[alpha-D-glucopyranosyl-(1-->4)-O-beta-D-galactopyr anosyl]glycerol. The second lipid was identified as the betaine lipid 1,2-di-O-acyl-[4'-(N,N,N-trimethyl)-homoserine]glycerol by cochromatography employing an authentic standard from Chlamydomonas reinhardtii, fast atom bombardment mass spectroscopy, exact mass measurements, and 1H NMR spectroscopy. Prior to this observation, the occurrence of this lipid was thought to be restricted to lower plants and algae. Apparently, these newly synthesized nonphosphorous lipids, in addition to the sulfo- and the ornithine lipid also found in R. sphaeroides grown under optimal conditions, take over the role of phosphoglycerolipids in phosphate-deprived cells.
46 CFR 502.221 - Briefs; requests for findings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... presiding officer shall fix the time and manner of filing briefs and any enlargement of time. The period of... subject index or table of contents with page references and a list of authorities cited. (f) All briefs... pages containing the table of contents, table of authorities, and certificate of service, unless the...
Exact solutions to the time-fractional differential equations via local fractional derivatives
NASA Astrophysics Data System (ADS)
Guner, Ozkan; Bekir, Ahmet
2018-01-01
This article utilizes the local fractional derivative and the exp-function method to construct the exact solutions of nonlinear time-fractional differential equations (FDEs). For illustrating the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for the equations formed by different parameter values related to the time-fractional generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs, and it can be applied to other nonlinear FDEs.
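Schematically, the exp-function method combines a fractional traveling-wave transform with a rational exp-function ansatz; generic forms are sketched below (the exponent bounds p, q, r, s are balanced per equation, and this notation is illustrative rather than quoted from the article):

```latex
% Fractional complex transform reducing the FDE to an ODE in \xi:
\xi = k\,x + \frac{c\,t^{\alpha}}{\Gamma(1+\alpha)},
\qquad 0 < \alpha \le 1.

% Exp-function ansatz for the resulting ODE in u(\xi); the constants
% a_n, b_m are fixed by substituting into the ODE and balancing terms:
u(\xi) = \frac{\sum_{n=-p}^{q} a_{n}\, e^{n\xi}}
              {\sum_{m=-r}^{s} b_{m}\, e^{m\xi}}.
```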
End-growth/evaporation living polymerization kinetics revisited
NASA Astrophysics Data System (ADS)
Semenov, A. N.; Nyrkova, I. A.
2011-03-01
End-growth/evaporation kinetics in living polymer systems with "association-ready" free unimers (no initiator) is considered theoretically. The study is focused on systems with long chains (typical aggregation number N ≫ 1) at long times. A closed system of continuous equations is derived and applied to study the kinetics of the chain length distribution (CLD) following a jump of a parameter (T-jump) inducing a change of the equilibrium mean chain length from N0 to N. The continuous approach is asymptotically exact for t ≫ t1, where t1 is the dimer dissociation time. It yields a number of essentially new analytical results concerning the CLD kinetics in some representative regimes. In particular, we obtained the asymptotically exact CLD response (for N ≫ 1) to a weak T-jump (ε = N0/N − 1 ≪ 1). For arbitrary T-jumps we found that the longest relaxation time tmax = 1/γ is always quadratic in N (γ is the relaxation rate of the slowest normal mode). More precisely, tmax ∝ 4N² for N0 < 2N and tmax ∝ N·N0/(1 − N/N0) for N0 > 2N. The mean chain length Nn is shown to change significantly during the intermediate slow relaxation stage t1 ≪ t ≪ tmax. We predict that Nn(t) − Nn(0) ∝ √t in the intermediate regime for weak (or moderate) T-jumps. For a deep T-quench inducing a strong increase of the equilibrium Nn (N ≫ N0 ≫ 1), the mean chain length follows a similar law, Nn(t) ∝ √t, while an opposite T-jump (inducing chain shortening, N0 ≫ N ≫ 1) leads to a power-law decrease of Nn: Nn(t) ∝ t^(−1/3). It is also shown that a living polymer system becomes strongly polydisperse in the latter regime, the maximum polydispersity index r = Nw/Nn being r* ≈ 0.77·N0/N ≫ 1. The concentration of free unimers relaxes mainly during the fast process with the characteristic time tf ∼ t1·N0/N². A nonexponential CLD dominated by short chains develops as a result of the fast stage in the case of N0 = 1 and N ≫ 1.
The obtained analytical results are supported, in part, by comparison with numerical results found both previously and in the present paper.
A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Capolino, F; Wilton, D
2005-02-02
A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here, which belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered-material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
Hydrological connectivity in the karst critical zone: an integrated approach
NASA Astrophysics Data System (ADS)
Chen, X.; Zhang, Z.; Soulsby, C.; Cheng, Q.; Binley, A. M.; Tao, M.
2017-12-01
Spatial heterogeneity in the subsurface is high, evidenced by specific landform features (sinkholes, caves etc.) and resulting in high variability of hydrological processes in space and time. This includes complex exchange of various flow sources (e.g. hillslope springs and depression aquifers) and fast conduit flow and slow fracture flow. In this paper we integrate various "state-of-the-art" methods to understand the structure and function of this understudied critical zone environment. Geophysical, hydrometric and hydrogeochemical tools are used to characterize the hydrological connectivity of the cockpit karst critical zone in a small catchment of Chenqi, Guizhou province, China. Geophysical surveys, using electrical resistivity tomography (ERT), identified the complex conduit networks that link flows between hillslopes and depressions. Statistical time series analysis of water tables and discharge responses at hillslope springs and in depression wells and underground channels showed different threshold responses of hillslope and depression flows. This reflected the differing relative contribution of fast and slow flow paths during rainfall events of varying magnitude in the hillslope epikarst and depression aquifer in dry and wet periods. This showed that the hillslope epikarst receives a high proportion of rainfall recharge and is thus a main water resource in the catchment during the drought period. In contrast, the depression aquifer receives fast, concentrated hillslope flows during large rainfall events during the wet period, resulting in the filling of depression conduits and frequent flooding. Hydrological tracer studies using water temperatures and stable water isotopes (δD and δ18O) corroborated this and provided quantitative information of the mixing proportions of various flow sources and insights into water travel times. 
This revealed how higher contributions of event "new" water (from hillslope springs and depression conduits displaces "old" pre-event water primarily from low permeability fissures and fractures), particularly during heavy rainfall. As the various water sources have contrasting water quality characteristics, these mixing and exchange processes have important implications for understanding and managing water quality in karst waters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fairchild, R.G.; Bond, V.P.
The characteristics of dose distribution, beam alignment, and the radiobiological advantages accorded to high-LET radiation were reviewed and compared for various particle-beam radiotherapeutic modalities (neutrons, Auger electrons, p, π−, He, C, Ne, and Ar ions). Merit factors were evaluated on the basis of effective dose to tumor relative to normal tissue, linear energy transfer (LET), and dose localization, at depths of 1, 4, and 10 cm. In general, it was found that neutron capture therapy using an epithermal neutron beam provided the best merit factors available for depths up to 8 cm. The position of fast neutron therapy on the merit factor tables was consistently lower than that of other particle modalities, and above only that of 60Co. The largest body of clinical data exists for fast neutron therapy; results are considered by some to be encouraging. It then follows that if the benefits of fast neutron therapy are real, additional gains are within reach with other modalities.
Rotary fast tool servo system and methods
Montesanti, Richard C.; Trumper, David L.
2007-10-02
A high bandwidth rotary fast tool servo provides tool motion in a direction nominally parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. Three or more flexure blades having all ends fixed are used to form an axis of rotation for a swing arm that carries a cutting tool at a set radius from the axis of rotation. An actuator rotates a swing arm assembly such that a cutting tool is moved in and away from the lathe-mounted, rotating workpiece in a rapid and controlled manner in order to machine the workpiece. A pair of position sensors provides rotation and position information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
Rotary fast tool servo system and methods
Montesanti, Richard C [Cambridge, MA; Trumper, David L [Plaistow, NH; Kirtley, Jr., James L.
2009-08-18
A high bandwidth rotary fast tool servo provides tool motion in a direction nominally parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. Three or more flexure blades having all ends fixed are used to form an axis of rotation for a swing arm that carries a cutting tool at a set radius from the axis of rotation. An actuator rotates a swing arm assembly such that a cutting tool is moved in and away from the lathe-mounted, rotating workpiece in a rapid and controlled manner in order to machine the workpiece. One or more position sensors provides rotation and position information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
Exact Magnetic Diffusion Solutions for Magnetohydrodynamic Code Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D S
In this paper, the authors present several new exact analytic space and time dependent solutions to the problem of magnetic diffusion in R-Z geometry. These problems serve to verify several different elements of an MHD implementation: magnetic diffusion, external circuit time integration, current and voltage energy sources, spatially dependent conductivities, and ohmic heating. The exact solutions are shown in comparison with 2D simulation results from the Ares code.
New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods
NASA Astrophysics Data System (ADS)
S Saha, Ray
2016-04-01
In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.
Discovery of Possible Nova in M31 : P60-M31-080723
NASA Astrophysics Data System (ADS)
Kasliwal, M. M.; Cenko, S. B.; Rau, A.; Ofek, E. O.; Quimby, R.; Kulkarni, S. R.
2008-07-01
On UT 2008 July 23.33, P60-FasTING (Palomar 60-inch Fast Transients In Nearby Galaxies) discovered a faint optical transient in M31 at RA(J2000)=00:43:27.28, DEC(J2000) = +41:10:03.3. P60-M31-080723 has a brightness of g = 19.3 +/- 0.2 (photometric calibration wrt USNO-B) and it is offset by 8.1'E,6.1'S from the center of M31. It was also marginally detected on July 22.32 at g = 19.8 +/- 0.2. It was not detected on the previous several nights with 3-sigma upper limits in table below.
NASA Astrophysics Data System (ADS)
Sankarapandi, S.; Chandramouli, G. V. R.; Daul, C.; Manoharan, P. T.
2018-01-01
The authors regret an error in the units of the rate constants K12 and K21 given in Tables 1, 3 and 4. The rate constants K12 and K21 are in gauss, not in s-1 as stated in the article. The authors apologise for any inconvenience caused.
Efficient Computer Implementations of Fast Fourier Transforms.
1980-12-01
fit in computer? Yes, continue. (9) Determine the fastest algorithm between the WFTA and the PFA from Table 4.6. For N = 420: WFTA, 1296 multiplies and 11352 adds; PFA, 2528 multiplies and 10956 adds. ... real adds = 24tN/4 + 2(3tN/4) = 15tN/2 (G.8). All odd prime factors equal to or greater than 5 use the general transform section. Based on the
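The algorithm-selection step above (step 9) amounts to comparing weighted operation counts from a table. A minimal sketch using the N = 420 counts quoted in the text (the relative weighting of multiplies versus adds is an assumption that depends on the target hardware):

```python
# Operation counts for N = 420 taken from the text above; the relative
# cost of a multiply vs. an add is hardware-dependent (assumed here).
COUNTS = {
    "WFTA": {"mults": 1296, "adds": 11352},
    "PFA":  {"mults": 2528, "adds": 10956},
}

def fastest(counts, mult_weight=1.0, add_weight=1.0):
    """Pick the algorithm with the lowest weighted operation count."""
    def cost(ops):
        return mult_weight * ops["mults"] + add_weight * ops["adds"]
    return min(counts, key=lambda name: cost(counts[name]))

print(fastest(COUNTS))                    # equal weights: WFTA (12648 vs 13484 ops)
print(fastest(COUNTS, mult_weight=0.0))   # adds-only comparison: PFA
```

The example makes the trade-off in the table concrete: the WFTA saves multiplies at the price of extra adds, so which algorithm is "fastest" depends on the machine's multiply cost.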
NASA Astrophysics Data System (ADS)
United States/Russian Federation Calibration Working Group
A Joint Research Program of Seismic Calibration of the International Monitoring System (IMS) in Northern Eurasia and North America has been signed by the Nuclear Treaty Programs Office (NTPO), Department of Defense, USA, and the Special Monitoring Service (SMS) of the Ministry of Defense, Russian Federation (RF). Under the Program, historical data from nuclear and large chemical explosions of known location and shot time, together with appropriate geological and geophysical data, have been used to derive regional Pn/P travel-time tables for seismic event location within the lower 48 states of the USA and the European part of the RF. These travel-time tables are up to 5 seconds faster in shields than the IASPEI91 tables, and up to 5 seconds slower in the western USA. Relocation experiments using the regional Pn travel-time curves and surrogate networks for the IMS network generally improved locations for regional seismic events. The distance between true and estimated location (mislocation) decreased from an average of 18.8 km for the IASPEI91 tables to 10.1 km for the regional Pn travel-time tables. However, the regional travel-time table approach has limitations caused by travel-time variations inside major tectonic provinces and by paths crossing several tectonic provinces with substantially different crustal and upper-mantle velocity structure. The RF members of the Calibration Working Group (WG): Colonel Vyacheslav Gordon (chairman), Dr. Prof. Marat Mamsurov, and Dr. Nikolai Vasiliev. The US members of the WG: Dr. Anton Dainty (chairman), Dr. Douglas Baumgardt, Mr. John Murphy, Dr. Robert North, and Dr. Vladislav Ryaboy.
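The mislocation statistic quoted above is the great-circle distance between a true and an estimated epicenter. A minimal haversine sketch (the example coordinates are hypothetical, and a spherical Earth of radius 6371 km is assumed):

```python
import math

def mislocation_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance (km) between a true and an estimated
    epicenter, via the haversine formula on a spherical Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical event: estimate displaced 0.1 deg in latitude (~11 km).
print(round(mislocation_km(40.0, -110.0, 40.1, -110.0), 1))
```

Averaging this quantity over a set of ground-truth events gives the kind of mean mislocation (18.8 km vs. 10.1 km) used to compare travel-time tables in the study.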
A fast, programmable hardware architecture for the processing of spaceborne SAR data
NASA Technical Reports Server (NTRS)
Bennett, J. R.; Cumming, I. G.; Lim, J.; Wedding, R. M.
1984-01-01
The development of high-throughput SAR processors (HTSPs) for the spaceborne SARs being planned by NASA, ESA, DFVLR, NASDA, and the Canadian Radarsat Project is discussed. The basic parameters and data-processing requirements of the SARs are listed in tables, and the principal problems are identified as real-operation rates in excess of 2 × 10^9 per second, I/O rates in excess of 8 × 10^6 samples per second, and control computation loads (as for range cell migration correction) as high as 1.4 × 10^6 instructions per second. A number of possible HTSP architectures are reviewed; host/array-processor (H/AP) and distributed-control/data-path (DCDP) architectures are examined in detail and illustrated with block diagrams; and a cost/speed comparison of these two architectures is presented. The H/AP approach is found to be adequate and economical for speeds below 1/200 of real time, while DCDP is more cost-effective above 1/50 of real time.
Consequences of recent loophole-free experiments on a relaxation of measurement independence
NASA Astrophysics Data System (ADS)
Hnilo, Alejandro A.
2017-02-01
Recent experiments using innovative optical detectors and techniques have strongly increased the capacity to test the violation of Bell's inequalities in Nature. Most of them have used the Eberhard inequality (EI) to close the "detection" loophole. Closing the "locality" loophole has been attempted through spacelike-separated detections and fast changes of the bases of observation, driven by random number generators of new design. Pulsed pumping and time-resolved data recording have also been used to close the "time-coincidence" loophole, and sophisticated statistical methods to close the "memory" loophole. In this paper, the meaning of the EI is reviewed. A simple hidden-variables theory based on a relaxation of the condition of "measurement independence," which was devised long ago for the Clauser-Horne-Shimony-Holt inequality, is adapted to the EI case. It is used here to evaluate the significance of the results of the recent experiments, which are briefly described. A table summarizes the main results.
The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger
NASA Astrophysics Data System (ADS)
Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.
2009-05-01
Future large arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array technology allows the use of lookup tables at the array trigger level to form a real-time pattern recognition trigger that capitalizes on the multiple view points of the shower at different shower core distances. A proof-of-principle system is currently under construction. It is based on 400 MHz FPGAs, and the goal is camera trigger rates of up to 10 MHz and tunable cosmic-ray background suppression at the array level.
Exact solution of a quantum forced time-dependent harmonic oscillator
NASA Technical Reports Server (NTRS)
Yeon, Kyu Hwang; George, Thomas F.; Um, Chung In
1992-01-01
The Schrodinger equation is used to exactly evaluate the propagator, wave function, energy expectation values, uncertainty values, and coherent state for a harmonic oscillator with a time dependent frequency and an external driving time dependent force. These quantities represent the solution of the classical equation of motion for the time dependent harmonic oscillator.
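The classical equation of motion these quantities solve is the standard driven oscillator with time-dependent frequency; written here in its textbook form (not the paper's notation):

```latex
% Driven harmonic oscillator with time-dependent frequency omega(t)
% and external driving force F(t):
m\,\ddot{x}(t) + m\,\omega^{2}(t)\,x(t) = F(t),
\qquad
H(t) = \frac{p^{2}}{2m} + \tfrac{1}{2}\,m\,\omega^{2}(t)\,x^{2} - F(t)\,x .
```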
NASA Astrophysics Data System (ADS)
Arason, Þórður; Bjornsson, Halldór; Petersen, Guðrún Nína
2013-04-01
Eruption of subglacial volcanoes may lead to catastrophic floods, and thus early determination of the exact eruption site may be critical to civil protection evacuation plans. A system is being developed that automatically monitors and analyses volcanic lightning in Iceland. The system predicts the eruption site location from mean lightning locations, taking upper-level wind into account. In estimating mean lightning locations, outliers are automatically omitted. A simple wind correction is performed based on the vector wind at the 500 hPa pressure level from the latest radiosonde at Keflavík airport. The system automatically creates a web page with maps and tables showing individual lightning locations and mean locations, with and without wind corrections, along with estimates of uncertainty. A dormant automatic monitoring system, waiting for a rare event, potentially for several years, is quite susceptible to degeneration during the waiting period, e.g. due to computer or other IT-system upgrades. However, ordinary weather thunderstorms in Iceland should initiate special monitoring and automatic analysis by this system in the same fashion as during a volcanic eruption. Such ordinary weather thunderstorm events will be used to observe anomalies and malfunctions in the system. The essential elements of this system will be described. An example is presented of how the system would have worked during the first hours of the Grímsvötn 2011 eruption. In that case the exact eruption site, within the Grímsvötn caldera, was first known about 15 hours into the eruption.
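The mean-location-with-wind-correction step can be sketched as follows. This is a simplified scheme with hypothetical names and a crude n-sigma outlier test; the operational system's outlier rejection and drift model are surely more elaborate:

```python
from statistics import mean, stdev

def estimate_eruption_site(strikes, wind_u, wind_v, n_sigma=2.0):
    """strikes: list of (x_km, y_km, t_hours since eruption onset).
    wind_u, wind_v: 500 hPa wind components in km/h, assumed to describe
    the downwind drift of the lightning-producing plume.
    Returns the wind-corrected mean strike location."""
    xs = [s[0] for s in strikes]
    ys = [s[1] for s in strikes]
    mx, my = mean(xs), mean(ys)
    sx = stdev(xs) if len(xs) > 1 else 0.0
    sy = stdev(ys) if len(ys) > 1 else 0.0
    # drop strikes far from the raw mean (simplified outlier rejection)
    kept = [s for s in strikes
            if abs(s[0] - mx) <= n_sigma * sx + 1e-9
            and abs(s[1] - my) <= n_sigma * sy + 1e-9]
    # back-correct each strike for plume drift since onset
    cx = mean(s[0] - wind_u * s[2] for s in kept)
    cy = mean(s[1] - wind_v * s[2] for s in kept)
    return cx, cy
```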
Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C
2011-01-01
The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
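The basic agreement check discussed in the charges reduces to a percent difference compared against an action level; a trivial sketch (the 5% default is purely illustrative — actual action levels must be set following the report's recommendations):

```python
def mu_agreement(mu_primary, mu_verify, action_level_pct=5.0):
    """Percent difference between primary and verification MU calculations.
    Returns (pct_diff, within_action_level). The 5% default is illustrative
    only; clinical action levels depend on geometry and algorithm."""
    pct = 100.0 * abs(mu_primary - mu_verify) / mu_primary
    return pct, pct <= action_level_pct
```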
The Ml Magnitude Scale In Italy
NASA Astrophysics Data System (ADS)
Gasperini, P.; Lolli, B.; Filippucci, M.; de Simoni, B.
To improve the reliability of Ml magnitude estimates in Italy, we have updated the database of real Wood-Anderson (WA) and simulated Wood-Anderson (SWA) amplitudes recently revised by Gasperini (2002). This was done by re-reading original WA seismograms, made available by the SISMOS Project of the Istituto Nazionale di Geofisica e Vulcanologia (INGV), as well as by analyzing further Very Broad Band (VBB) recordings of the MEDNET network of INGV for the period from 1996 to 1998. The full operability, in the last five years, of a VBB station located exactly at the same site (TRI) as a former WA instrument allowed us to reliably infer a new attenuation function from the joint WA and SWA dataset. We found a significant deviation of the attenuation law from the standard Richter table at distances larger than 400 km, where the latter overestimates the magnitude by up to about 0.3 units. We also computed regionalized attenuation functions accounting for the differences in the propagation properties of seismic waves between the Adriatic (less attenuating) and Tyrrhenian (more attenuating) sides of the Italian peninsula. Using this improved Ml magnitude database we were also able to further improve the computation of duration (Md) and amplitude (Ma) magnitudes from short-period vertical seismometers of the INGV, as well as to analyze the time variation of the station calibrations. We found that the absolute amplification of INGV stations is underestimated almost exactly by a factor of 2, starting from the entry into operation of the digital acquisition system at INGV in mid-1984.
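For context, the local magnitude being calibrated follows Richter's standard definition, in which the attenuation term is exactly what the new function replaces (standard form, not the paper's notation):

```latex
% A: maximum (real or synthetic) Wood-Anderson amplitude,
% \Delta: epicentral distance; -\log_{10} A_0(\Delta) is the
% empirical attenuation function being re-derived here.
M_L = \log_{10} A - \log_{10} A_0(\Delta)
```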
NASA Astrophysics Data System (ADS)
Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu
2005-10-01
The numerical model of the vector radiative transfer of the coupled ocean-atmosphere system is developed based on the matrix-operator method and is named PCOART. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is decomposed into a set of independent equations with zenith angle as the only angular coordinate. Using the Gaussian-quadrature method, the VRTE is finally transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of ocean and atmosphere are coupled in PCOART. By comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-Resolution Imaging Spectroradiometer), it is shown that PCOART is an exact numerical calculation model and that its treatment of multiple scattering and polarization is correct. Also, by validation against the standard problems of radiative transfer in water, it is shown that PCOART can be used to calculate underwater radiative transfer problems. Therefore, PCOART is a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of the radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
NASA Astrophysics Data System (ADS)
Almazmumy, Mariam; Ebaid, Abdelhalim
2017-08-01
In this article, the flow and heat transfer of a non-Newtonian nanofluid between two coaxial cylinders through a porous medium have been investigated. The velocity, temperature, and nanoparticle concentration of the present mathematical model are governed by a system of nonlinear ordinary differential equations. The objective of this article is to obtain new exact solutions for the temperature and the nanoparticle concentration and, thereby, compare them with the previous approximate results in the literature. Moreover, the velocity equation has been solved numerically. The effects of the pressure gradient, thermophoresis, third-grade, Brownian motion, and porosity parameters on the included phenomena have been discussed through several tables and plots. It is found that the velocity profile increases with the pressure gradient parameter, thermophoresis parameter (slightly), third-grade parameter, and Brownian motion parameter (slightly); however, it decreases with an increase in the porosity parameter and viscosity power index. In addition, the temperature and the nanoparticle concentration decrease as the Brownian motion parameter strengthens, while they increase with the thermophoresis parameter. Furthermore, the numerical solution and the physical interpretation in the literature for the same problem have been checked against the current exact analysis, and several notable differences and errors were identified. Therefore, the suggested analysis can be recommended with confidence for similar problems.
Dissociation between exact and approximate addition in developmental dyslexia.
Yang, Xiujie; Meng, Xiangzhi
2016-09-01
Previous research has suggested that number sense and language are involved in number representation and calculation: number sense supports approximate arithmetic, while language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we hypothesized that children with dyslexia may exhibit impaired exact calculation while doing mental arithmetic. Reaction time and accuracy during exact and approximate addition, with symbolic Arabic digits and with non-symbolic visual arrays of dots, were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups; the accuracies, interestingly, revealed a dissociation between approximate and exact addition. Specifically, the two groups did not differ in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition, in both the symbolic and the non-symbolic tasks, than typically developing children. Moreover, linguistic performance was selectively associated with exact calculation across individuals. These results suggest that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Forma, Ray
2008-01-01
Go to the end of this article to find more detailed information about the Star Chart, how to use the chart, and a table of Moon phases. Use this table of phases to help you with the timing of successful astronomy evenings for students. The best time for an astronomy evening is usually six days after New Moon. (Contains 1 table and 3 figures.)
Fast Quaternion Attitude Estimation from Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
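The TRIAD algorithm referred to above has a compact closed form; a minimal plain-Python sketch following the standard textbook construction (not the paper's optimized quaternion formulation):

```python
import math

def _norm(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def triad(b1, b2, r1, r2):
    """TRIAD attitude estimate: rotation matrix A mapping reference-frame
    vectors to body-frame vectors (A r ~ b). b1, b2 are the observations
    in the body frame; r1, r2 the same directions in the reference frame.
    The (b1, r1) pair is satisfied exactly, so pass the more accurate
    sensor first."""
    s1 = _norm(b1); s2 = _norm(_cross(b1, b2)); s3 = _cross(s1, s2)
    t1 = _norm(r1); t2 = _norm(_cross(r1, r2)); t3 = _cross(t1, t2)
    # A = s1 t1^T + s2 t2^T + s3 t3^T
    S, T = [s1, s2, s3], [t1, t2, t3]
    return [[sum(S[k][i] * T[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

For example, observations rotated 90 degrees about z recover exactly that rotation matrix.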
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
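The "direct stochastic simulation algorithm" used as the baseline is Gillespie's direct method; a minimal birth/death sketch (illustrative rates and population, not the talk's models):

```python
import random

def gillespie_birth_death(n0, birth_rate, death_rate, t_end, seed=0):
    """Gillespie direct method for a simple birth/death process with
    propensities a_birth = birth_rate * n and a_death = death_rate * n.
    Returns the trajectory as a list of (time, population) pairs."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    traj = [(0.0, n0)]
    while t < t_end and n > 0:
        a_birth, a_death = birth_rate * n, death_rate * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)                        # waiting time
        n += 1 if rng.random() * a_total < a_birth else -1   # pick reaction
        traj.append((t, n))
    return traj
```

The cost per event is what the talk's algorithm amortizes when mutation rates are far smaller than birth/death rates.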
A time motion study in the immunization clinic of a tertiary care hospital of kolkata, west bengal.
Chattopadhyay, Amitabha; Ghosh, Ritu; Maji, Sucharita; Ray, Tapobroto Guha; Lahiri, Saibendu Kumar
2012-01-01
A time and motion study is used to determine the amount of time required for a specific activity, work function, or mechanical process. Few such studies have been reported from outpatient departments of institutions, and studies based exclusively on the immunization clinic of an institution are a rarity. This was an observational cross-sectional study done in the immunization clinic of R.G. Kar Medical College, Kolkata, over a period of 1 month (September 2010). The study population included mothers/caregivers attending the immunization clinic with their children. The total sample was 482. Pre-synchronized stopwatches were used to record service delivery time at the different activity points. Median time was the same for both the initial registration table and the nutrition and health education table (120 seconds), but the vaccination and post-vaccination advice table took the highest percentage of overall time (46.3%). The maximum time spent at the vaccination and post-vaccination advice table was on Monday (538.1 s), and the nutritional assessment and health assessment table took the most time on Friday (217.1 s). Time taken in the first half of the immunization session was greater at most of the tables. The goal of achieving universal immunization against vaccine-preventable diseases requires a multifaceted, coordinated response from many stakeholders. Efficient functioning of immunization clinics is therefore required to achieve the prescribed goals. This study is an initial effort to examine the utilization of time at a health care unit, inviting much more in-depth analysis in the future.
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environmental conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to continuously ensure the safe functioning of reactor inner components by determining their exact position: it consists of measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. In this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities similar to those provided by a deterministic method, such as the ray method.
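The ranging relation behind the telemetry is simple pulse-echo timing, which makes clear how a local sound-speed fluctuation shifts the inferred position. Numbers below are illustrative only; the sound speed in liquid sodium is roughly 2400-2500 m/s at SFR operating temperatures:

```python
def telemetry_range(time_of_flight_us, sound_speed_m_s):
    """Pulse-echo range: the pulse travels to the target and back,
    so the one-way distance is half the speed times the time of flight."""
    return 0.5 * sound_speed_m_s * time_of_flight_us * 1e-6

# Same measured time of flight, two assumed local sound speeds:
d_nominal = telemetry_range(400.0, 2450.0)  # nominal temperature
d_hot = telemetry_range(400.0, 2480.0)      # local temperature fluctuation
# The few-per-mil speed change shifts the inferred position by millimetres.
```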
Time and Frequency Synchronization on the Virac Radio Telescope RT-32
NASA Astrophysics Data System (ADS)
Bezrukovs, V.
2016-04-01
One of the main research directions of Ventspils International Radio Astronomy Centre (VIRAC) is radio astronomy and astrophysics. The instrumental base of the centre comprises two fully steerable parabolic antennas, RT-16 and RT-32 (i.e., with mirror diameters of 16 m and 32 m). After a long reconstruction, radio telescope RT-32 is currently equipped with receiving and data acquisition systems that allow observing in a wide frequency range from 327 MHz to 9 GHz. A new Antenna Control Unit (ACU) allows stable, fast and precise pointing of the antenna. The time and frequency distribution service provides 5, 10 and 100 MHz reference frequencies, 1PPS signals, and precise time stamps via the NTP protocol and in the IRIG-B format over coaxial cable. For radio astronomical observations, and especially for Very Long Baseline Interferometry (VLBI), the main requirement for the observatory is precise synchronization of the received and sampled data and linking to exact time stamps. During October 2015, radio telescope RT-32 performance was tested in several successful VLBI experiments. The obtained results confirm the efficiency of the chosen methods of synchronization and the ability to reproduce them on similar antennas.
Quantum dynamics of nuclear spins and spin relaxation in organic semiconductors
NASA Astrophysics Data System (ADS)
Mkhitaryan, V. V.; Dobrovitski, V. V.
2017-06-01
We investigate the role of the nuclear-spin quantum dynamics in hyperfine-induced spin relaxation of hopping carriers in organic semiconductors. The fast-hopping regime, when the carrier spin does not rotate much between subsequent hops, is typical for organic semiconductors possessing long spin coherence times. We consider this regime and focus on a carrier random-walk diffusion in one dimension, where the effect of the nuclear-spin dynamics is expected to be the strongest. Exact numerical simulations of spin systems with up to 25 nuclear spins are performed using the Suzuki-Trotter decomposition of the evolution operator. Larger nuclear-spin systems are modeled utilizing the spin-coherent-state P-representation approach developed earlier. We find that the nuclear-spin dynamics strongly influences the carrier spin relaxation at long times. If the random walk is restricted to a small area, it leads to the quenching of carrier spin polarization at a nonzero value at long times. If the random walk is unrestricted, the carrier spin polarization acquires a long-time tail, decaying as 1/√t. Based on the numerical results, we devise a simple formula describing the effect quantitatively.
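The Suzuki-Trotter decomposition used for the evolution operator is, in its standard first- and second-order forms (textbook expressions, not the paper's specific splitting of the hyperfine Hamiltonian):

```latex
% First-order (Lie-Trotter) and symmetric second-order (Strang) splittings
% of the short-time evolution operator for H = H_A + H_B:
e^{-i(H_A + H_B)\Delta t}
  = e^{-iH_A \Delta t}\, e^{-iH_B \Delta t} + O(\Delta t^{2}),
\qquad
e^{-i(H_A + H_B)\Delta t}
  = e^{-iH_A \Delta t/2}\, e^{-iH_B \Delta t}\, e^{-iH_A \Delta t/2}
    + O(\Delta t^{3}).
```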
Application of adaptive gridding to magnetohydrodynamic flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnack, D.D.; Lotatti, I.; Satyanarayana, P.
1996-12-31
The numerical simulation of the primitive, three-dimensional, time-dependent, resistive MHD equations on an unstructured, adaptive poloidal mesh using the TRIM code has been reported previously. The toroidal coordinate is approximated pseudo-spectrally with finite Fourier series and Fast Fourier Transforms. The finite-volume algorithm preserves the magnetic field as solenoidal to round-off error, and also conserves mass, energy, and magnetic flux exactly. A semi-implicit method is used to allow large time steps on the unstructured mesh. This is important for tokamak calculations, where the relevant time scale is determined by the poloidal Alfven time. This also allows the viscosity to be treated implicitly. A conjugate-gradient method with pre-conditioning is used for matrix inversion. Application to the growth and saturation of ideal instabilities in several toroidal fusion systems has been demonstrated. Recently we have concentrated on the details of the mesh adaption algorithm used in TRIM. We present several two-dimensional results relating to the use of grid adaptivity to track the evolution of hydrodynamic and MHD structures. Examples of plasma guns, opening switches, and supersonic flow over a magnetized sphere are presented. Issues relating to mesh adaption criteria are discussed.
NASA Astrophysics Data System (ADS)
Jernsletten, J. A.; Heggy, E.
2004-05-01
INTRODUCTION: This study compares the use of (diffusive) Transient Electromagnetics (TEM) for sounding of subsurface water in conductive Mars analog environments to the use of (propagative) Ground-Penetrating Radar (GPR) for the same purpose. We show data from three field studies: 1) Radar sounding data (GPR) from the Nubian aquifer, Bahria Oasis, Egypt; 2) Diffusive sounding data (TEM) from Pima County, Arizona; and 3) Shallower sounding data using the Fast-Turnoff TEM method from Peña de Hierro in the Rio Tinto area, Spain. The latter is data from work conducted under the auspices of the Mars Analog Research and Technology Experiment (MARTE). POTENTIAL OF TEM: A TEM survey was carried out in Pima County, Arizona, in January 2003. Data was collected using 100 m Tx loops, a ferrite-cored magnetic coil Rx antenna, and a sounding frequency of 16 Hz. The dataset has ~500 m depth of investigation, shows a ~120 m depth to the water table (confirmed by several USGS test wells in the area), and a conductive (~20-40 Ω m) clay-rich soil above the water table. The Rio Tinto Fast-Turnoff TEM data was collected using 40 m Tx loops, 10 m Rx loops, and a 32 Hz sounding frequency. Note ~200 m depth of investigation and a conductive high at ~80 m depth (interpreted as water table). Data was also collected using 20 m Tx loops (10 m Rx loops) in other parts of the area. Note ~50 m depth of investigation and a conductive high at ~15 m depth (interpreted as subsurface water flow under mine tailings matching surface flows seen coming out from under the tailings, and shown on maps). Both of these interpretations were roughly confirmed by preliminary results from the MARTE ground truth drilling campaign carried out in September and October 2003. 
POTENTIAL OF GPR: A GPR experiment was carried out in February 2003 in the Bahria Oasis in the western Egyptian desert, using a 2 MHz monostatic GPR, mapping the Nubian Aquifer at depths of 100-900 m beneath a thick layer of homogeneous marine sedimentary Quaternary and Tertiary structures constituted mainly of highly resistive dry porous dolomite, illite, limestone and sandstone, given a reasonable knowledge of the local geoelectrical properties of the crust. The GPR was able to map the first interface between the dolomitic limestone and the gravel, while the detection of the deep subsurface water table remains uncertain due to uncertainties arising from instrumental and geoelectrical problems. In locations where the water table was at shallower depths (less than 200 m), but with the presence of very thin layers (less than 0.5 m) of reddish dry clays, the technique failed to probe the moist interface and to map any significant stratigraphy. CONCLUSIONS: GPR excels in resolution and productivity (logistical efficiency) and is well suited for shallower applications, but it is more sensitive to highly conductive layers (a result of wave propagation and higher frequencies) and achieves considerably smaller depths of investigation than TEM. The (diffusive) TEM method uses roughly two orders of magnitude lower sounding frequencies than GPR, is less sensitive to highly conductive layers, achieves considerably greater depths of investigation, and is more suitable for sounding very deep subsurface water. Compared with GPR, TEM suffers in very shallow applications in terms of resolution and logistical efficiency. Fast-Turnoff TEM, with its very early measured time windows, achieves higher resolution than conventional TEM in shallow applications, and somewhat bridges the gap between GPR and TEM in terms of depths of investigation and suitable applications.
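The frequency contrast between the two methods can be made concrete with the standard electromagnetic skin-depth relation. This is a plane-wave, diffusive approximation, so it is only a rough guide for the propagative GPR case; the resistivity value is taken from the ~20-40 Ω·m soils described above:

```python
import math

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Standard EM skin depth: delta = sqrt(2*rho / (mu0 * omega)),
    approximately 503 * sqrt(rho / f) metres."""
    mu0 = 4e-7 * math.pi
    return math.sqrt(2.0 * resistivity_ohm_m / (mu0 * 2.0 * math.pi * freq_hz))

# ~30 ohm-m conductive soil, as at the Arizona site:
tem_depth = skin_depth_m(30.0, 16.0)    # 16 Hz TEM -> hundreds of metres
gpr_depth = skin_depth_m(30.0, 2.0e6)   # 2 MHz GPR -> a few metres
```

The roughly two-orders-of-magnitude frequency gap translates into about one order of magnitude in attainable depth, consistent with the survey depths reported above.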
Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process
NASA Astrophysics Data System (ADS)
Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.
2017-05-01
Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10^12 G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines, making it possible for the first time to investigate sophisticated geometries. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can later be used to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to simpler, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online (see link in footnote, page 1).
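The kernel of such an MC photon-propagation scheme is drawing exponential free paths from an interpolated mean free path. A generic sketch follows; the table values and the linear interpolation are stand-ins, not the authors' tabulation:

```python
import math
import random

def sample_path_length(mean_free_path, rng):
    """Draw a free path l from the exponential distribution exp(-l/lambda)."""
    return -mean_free_path * math.log(1.0 - rng.random())

def interpolate_mfp(energy, table):
    """Linear interpolation in a sorted (energy, mean_free_path) table,
    standing in for the precomputed mean-free-path tables of paper I
    (table values here are hypothetical)."""
    pts = sorted(table)
    for (e0, l0), (e1, l1) in zip(pts, pts[1:]):
        if e0 <= energy <= e1:
            w = (energy - e0) / (e1 - e0)
            return l0 + w * (l1 - l0)
    raise ValueError("energy outside table range")
```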
Numerical analysis of groundwater recharge through stony soils using limited data
NASA Astrophysics Data System (ADS)
Hendrickx, J. M. H.; Khan, A. S.; Bannink, M. H.; Birch, D.; Kidd, C.
1991-10-01
This study evaluates groundwater recharge on an alluvial fan in Quetta Valley (Baluchistan, Pakistan), through deep stony soils, with limited data on soil texture, soil profile descriptions, water-table depths and meteorological variables. From the soil profile descriptions, a representative profile was constructed with typical soil layers. Next, the texture of each layer was compared with textures of soils with known soil physical characteristics; it is assumed that soils from the same textural class have similar water retention and hydraulic conductivity curves. Finally, the water retention and hydraulic conductivity curves were transformed to account for the volume of stones in each layer; this varied between 0 and 60 vol.%. These data were used in a transient finite-difference model and in a steady-state analytical solution to evaluate the travel time of the recharge water and the maximum annual recharge volume. Travel times proved to be less sensitive to differences in soil physical characteristics than to differences in annual infiltration rates. Therefore, estimation of soil physical characteristics from soil texture data alone appears justified for this study. Estimated travel times on the alluvial fan in the Quetta Valley vary between 1.6 years, through a soil profile of 25 m with an infiltration rate of 120 cm/year, and 18.3 years, through a soil profile of 100 m with an infiltration rate of 40 cm/year. When the infiltration rate of the soil exceeds 40 cm/day, the infiltration process proceeds so fast that evaporation losses are small. If the depth of ponding at the start of infiltration is more than 1 m, at least 90% of the applied recharge water will reach the water table, provided that the ponding area is bare of vegetation.
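The quoted travel times are consistent with a simple plug-flow estimate, t = depth × θ / flux, where θ is an effective volumetric moisture content. The θ = 0.075 below is an assumed value, chosen only so the sketch reproduces the order of magnitude of the study's numbers; it is not taken from the paper:

```python
def travel_time_years(depth_m, infiltration_cm_per_year, theta=0.075):
    """Plug-flow travel time of recharge water to the water table:
    t = depth * theta / flux. theta is an assumed effective moisture
    content of the stony profile (illustrative value)."""
    flux_m_per_year = infiltration_cm_per_year / 100.0
    return depth_m * theta / flux_m_per_year

# 25 m profile at 120 cm/year -> about 1.6 years;
# 100 m profile at 40 cm/year -> about 18-19 years.
```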
Endurance exercise training in orthostatic intolerance: a randomized, controlled trial.
Winker, Robert; Barth, Alfred; Bidmon, Daniela; Ponocny, Ivo; Weber, Michael; Mayr, Otmar; Robertson, David; Diedrich, André; Maier, Richard; Pilger, Alex; Haber, Paul; Rüdiger, Hugo W
2005-03-01
Orthostatic intolerance is a syndrome characterized by chronic orthostatic symptoms of light-headedness, fatigue, nausea, orthostatic tachycardia, and elevated norepinephrine levels while standing. The aim of this study was to assess the protective effect of endurance exercise training on orthostatic symptoms and to examine its usefulness in the treatment of orthostatic intolerance. 2768 military recruits were screened for orthostatic intolerance by questionnaire. Tilt-table testing identified 36 cases of orthostatic intolerance among the 2768 soldiers. Subsequently, 31 of these subjects with orthostatic intolerance entered a randomized, controlled trial. The patients were allocated randomly to either a "training" (3 months of jogging) or a "control" group. The influence of exercise training on orthostatic intolerance was assessed by determination of questionnaire scores and tilt-table testing before and after the intervention. After training, only 6 of 16 individuals still had orthostatic intolerance, compared with 10 of 11 in the control group. The Fisher exact test showed a highly significant difference in diagnosis between the 2 groups (P=0.008) at the end of the study. Analysis of the questionnaire score showed a significant interaction between time and group (P=0.001). The trained subjects showed an improvement in the average symptom score from 1.79+/-0.4 to 1.04+/-0.4, whereas the control subjects showed no significant change in average symptom score (2.09+/-0.6 and 2.14+/-0.5, respectively). Our data demonstrate that endurance exercise training leads to an improvement of symptoms in the majority of patients with orthostatic intolerance. Therefore, we suggest that endurance training should be considered in the treatment of orthostatic intolerance.
Exact synchronization bound for coupled time-delay systems.
Senthilkumar, D V; Pesquera, Luis; Banerjee, Santo; Ortín, Silvia; Kurths, J
2013-04-01
We obtain an exact bound for synchronization in coupled time-delay systems using the generalized Halanay inequality for the general case of time-dependent delay, coupling, and coefficients. Furthermore, we show that the same analysis is applicable to both uni- and bidirectionally coupled time-delay systems with an appropriate evolution equation for their synchronization manifold, which can also be defined for different types of synchronization. The exact synchronization bound assures an exponential stabilization of the synchronization manifold which is crucial for applications. The analytical synchronization bound is independent of the nature of the modulation and can be applied to any time-delay system satisfying a Lipschitz condition. The analytical results are corroborated numerically using the Ikeda system.
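For reference, the classical constant-coefficient form of the Halanay inequality can be stated as follows; this is the textbook version, not the generalized time-dependent form developed in the paper above:

```latex
% Classical (constant-coefficient) Halanay inequality; the work above
% generalizes this to time-dependent delays, couplings, and coefficients.
\[
  \dot{v}(t) \le -a\, v(t) + b \sup_{t-\tau \le s \le t} v(s),
  \qquad a > b > 0,
\]
implies exponential decay,
\[
  v(t) \le \Big( \sup_{-\tau \le s \le 0} v(s) \Big)\, e^{-\gamma t},
\]
where $\gamma > 0$ is the unique positive root of
$\gamma = a - b\, e^{\gamma \tau}$.
```

The exponential rate $\gamma$ is what yields the explicit synchronization bound once the error dynamics of the coupled systems are cast in this form.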
Iino, Yoichi; Kojima, Takeji
2016-01-01
The purpose of this study was to investigate the effect of the racket mass and the rate of strokes on the kinematics and kinetics of the trunk and the racket arm in the table tennis topspin backhand. Eight male Division I collegiate table tennis players hit topspin backhands against topspin balls projected at 75 balls·min^-1 and 35 balls·min^-1 using three rackets varying in mass of 153.5, 176 and 201.5 g. A motion capture system was used to obtain trunk and racket arm motion data. The joint torques of the racket arm were determined using inverse dynamics. The racket mass did not significantly affect any of the trunk and racket arm kinematic and kinetic variables examined, except for the wrist dorsiflexion torque, which was significantly larger for the large-mass racket than for the small-mass racket. The racket speed at impact was significantly lower for the high ball frequency than for the low ball frequency. This was probably because pelvis and upper trunk axial rotations tended to be more restricted for the high ball frequency. The result highlights one of the advantages of playing close to the table and making the rally speed fast.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
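A hedged sketch of why a state-transition-matrix step is exact for a linear time-invariant system x' = Ax: the matrix Phi(h) = exp(Ah) advances the state exactly over any step size h. This is an illustration of the general principle only, not the paper's variable-order method; the series-based matrix exponential and all names below are our own.

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, h, terms=30):
    """exp(A*h) by a truncated Taylor series (adequate for small, well-scaled A)."""
    n = len(A)
    Ah = [[A[i][j] * h for j in range(n)] for i in range(n)]
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # identity = term k=0
    T = [row[:] for row in S]
    for k in range(1, terms):
        T = mat_mul(T, Ah)                       # T now holds (Ah)^k / (k-1)!
        T = [[t / k for t in row] for row in T]  # divide by k -> (Ah)^k / k!
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

# Harmonic oscillator x'' = -x written as a first-order system x' = A x.
A = [[0.0, 1.0], [-1.0, 0.0]]
h = 1.0                        # a step size far too large for accurate explicit Euler
Phi = expm(A, h)               # state transition matrix over the step
x = [1.0, 0.0]
x1 = [sum(Phi[i][j] * x[j] for j in range(2)) for i in range(2)]
print(x1[0] - math.cos(h))     # ~0: the transition-matrix step matches the exact solution
```

Here the exact solution is x(h) = (cos h, -sin h), and the transition-matrix step reproduces it to round-off regardless of h, which is the property the abstract exploits for polynomially time-varying systems.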
Fast generation of computer-generated hologram by graphics processing unit
NASA Astrophysics Data System (ADS)
Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
A cylindrical hologram is well known to be viewable over 360 deg. Such a hologram requires very high pixel resolution, so a computer-generated cylindrical hologram (CGCH) demands a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz; it took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the reconstructed image, the fringe pattern requires higher spatial frequency and resolution; therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ a graphics processing unit (GPU); the same calculation took 4,406 hours on a Xeon at 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor, and it delivers maximum performance on two-dimensional and streaming data. Recently, GPUs have become usable for general-purpose computation (GPGPU): NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language, and the subsequent GeForce 8 series supports CUDA, NVIDIA's software development kit. The theoretical peak performance of the GPU is quoted as 500 GFLOPS. Experimentally, we achieved a calculation 47 times faster than our previous CPU-based work; the CGCH can therefore be generated in 95 hours, for a total of 110 hours to calculate and print the CGCH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de
2015-11-14
Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.
Exact solutions for species tree inference from discordant gene trees.
Chang, Wen-Chieh; Górecki, Paweł; Eulenstein, Oliver
2013-10-01
Phylogenetic analysis has to overcome the grand challenge of inferring accurate species trees from evolutionary histories of gene families (gene trees) that are discordant with the species tree along whose branches they have evolved. Two well-studied approaches to cope with this challenge are to solve either biologically informed gene tree parsimony (GTP) problems under gene duplication, gene loss, and deep coalescence, or the classic RF supertree problem that does not rely on any biological model. Despite the potential of these problems to infer credible species trees, they are NP-hard. Therefore, these problems are addressed by heuristics that typically lack any provable accuracy and precision. We describe fast dynamic programming algorithms that solve the GTP problems and the RF supertree problem exactly, and demonstrate that our algorithms can solve instances with data sets consisting of as many as 22 taxa. Extensions of our algorithms can also report the number of all optimal species trees, as well as the trees themselves. To better assess the quality of the resulting species trees that best fit the given gene trees, we also compute the worst-case species trees, their numbers, and the optimization score for each of the computational problems. Finally, we demonstrate the performance of our exact algorithms using empirical and simulated data sets, and analyze the quality of heuristic solutions for the studied problems by contrasting them with our exact solutions.
A simple and fast heuristic for protein structure comparison.
Pelta, David A; González, Juan R; Moreno Vega, Marcos
2008-03-25
Protein structure comparison is a key problem in bioinformatics. Several methods exist for protein comparison, the solution of the Maximum Contact Map Overlap problem (MAX-CMO) being one of the available alternatives. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good-quality solutions using fewer computational resources than the former. We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two respects: 1) from an optimization point of view, the strategy is tested on two different datasets, obtaining errors of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations, and the role of normalization, are outlined for classification at SCOP's fold level. We designed, implemented and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure has been tested at SCOP's fold and CATH's architecture levels, with encouraging results.
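The variable neighborhood search strategy described above can be sketched generically. The toy objective below (matching a hidden bit string) merely stands in for MAX-CMO; every name, the neighborhood structure (flip k bits), and the parameters are our own illustrative assumptions, not the authors' implementation.

```python
import random

def vns_maximize(score, x0, n_neighborhoods=3, iters=500, seed=1):
    """Basic VNS: shake in neighborhood N_k, accept improvements, widen k otherwise."""
    rng = random.Random(seed)
    best = list(x0)

    def shake(x, k):
        """Flip k random bit positions: larger k = a larger neighborhood."""
        y = list(x)
        for i in rng.sample(range(len(y)), k):
            y[i] ^= 1
        return y

    k = 1
    for _ in range(iters):
        cand = shake(best, k)
        if score(cand) > score(best):
            best, k = cand, 1              # improvement: restart from N_1
        else:
            k = k % n_neighborhoods + 1    # no improvement: move to a wider neighborhood
    return best

# Toy instance: recover a hidden target bit string by maximizing matches.
target = [1, 0, 1, 1, 0, 0, 1, 0]
score = lambda x: sum(a == b for a, b in zip(x, target))
sol = vns_maximize(score, [0] * 8)
print(score(sol))  # 8: the easy toy instance is solved to optimality
```

The systematic widening of the neighborhood on failure, and the reset to the smallest neighborhood on success, is what distinguishes VNS from plain local search.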
NASA Astrophysics Data System (ADS)
Lacasa, Lucas
2014-09-01
Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) performing empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As test cases we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
Bin-Hash Indexing: A Parallel Method for Fast Query Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng
2008-06-27
This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for emerging commodity parallel processors, such as multi-cores, cell processors, and general-purpose graphics processing units (GPUs). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records are able to be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
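A minimal sequential sketch of the Bin-Hash query idea described above: bins are accepted or rejected wholesale from their boundaries, and only boundary bins require probing the per-bin table. A plain dictionary stands in for the perfect spatial hash, and all names are ours; this is not the authors' GPU code.

```python
class BinHashIndex:
    """Sketch: equal-width bins over the value range, one lookup table per bin."""

    def __init__(self, values, n_bins=16):
        lo, hi = min(values), max(values)
        width = (hi - lo) / n_bins or 1.0
        self.edges = [lo + i * width for i in range(n_bins + 1)]
        self.edges[-1] = hi
        # per-bin table: record id -> value (stands in for the perfect spatial hash)
        self.tables = [dict() for _ in range(n_bins)]
        for rid, v in enumerate(values):
            b = min(int((v - lo) / width), n_bins - 1)
            self.tables[b][rid] = v

    def range_query(self, qlo, qhi):
        hits = []
        for b, table in enumerate(self.tables):
            blo, bhi = self.edges[b], self.edges[b + 1]
            if bhi < qlo or blo > qhi:
                continue                       # bin fully outside: reject without probing
            if qlo <= blo and bhi <= qhi:
                hits.extend(table)             # bin fully inside: accept all record ids
            else:                              # boundary bin: probe the per-bin table
                hits.extend(r for r, v in table.items() if qlo <= v <= qhi)
        return sorted(hits)

idx = BinHashIndex([0.5, 1.5, 2.5, 3.5, 9.5], n_bins=4)
print(idx.range_query(1.0, 4.0))  # [1, 2, 3]: ids of the values lying in [1, 4]
```

On a GPU, the per-record membership tests in both branches are independent, which is the source of the concurrency the abstract emphasizes.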
Binary counting with chemical reactions.
Kharam, Aleksandra; Jiang, Hua; Riedel, Marc; Parhi, Keshab
2011-01-01
This paper describes a scheme for implementing a binary counter with chemical reactions. The value of the counter is encoded by logical values of "0" and "1" that correspond to the absence and presence of specific molecular types, respectively. It is incremented when molecules of a trigger type are injected. Synchronization is achieved with reactions that produce a sustained three-phase oscillation. This oscillation plays a role analogous to a clock signal in digital electronics. Quantities are transferred between molecular types in different phases of the oscillation. Unlike all previous schemes for chemical computation, this scheme is dependent only on coarse rate categories for the reactions ("fast" and "slow"). Given such categories, the computation is exact and independent of the specific reaction rates. Although conceptual for the time being, the methodology has potential applications in domains of synthetic biology such as biochemical sensing and drug delivery. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
An annular superposition integral for axisymmetric radiators.
Kelly, James F; McGough, Robert J
2007-02-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.
Compressed sensing of hyperspectral images based on scrambled block Hadamard ensemble
NASA Astrophysics Data System (ADS)
Wang, Li; Feng, Yan
2016-11-01
A fast measurement matrix based on scrambled block Hadamard ensemble for compressed sensing (CS) of hyperspectral images (HSI) is investigated. The proposed measurement matrix offers several attractive features. First, the proposed measurement matrix possesses Gaussian behavior, which illustrates that the matrix is universal and requires a near-optimal number of samples for exact reconstruction. In addition, it could be easily implemented in the optical domain due to its integer-valued elements. More importantly, the measurement matrix only needs small memory for storage in the sampling process. Experimental results on HSIs reveal that the reconstruction performance of the proposed measurement matrix is comparable or better than Gaussian matrix and Bernoulli matrix using different reconstruction algorithms while consuming less computational time. The proposed matrix could be used in CS of HSI, which would save the storage memory on board, improve the sampling efficiency, and ameliorate the reconstruction quality.
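The scrambled block Hadamard construction can be sketched as follows: randomly permute the signal, apply a block-diagonal Hadamard transform, then keep a random subset of outputs. This is our own illustrative code under the assumption of a power-of-two block size dividing the signal length; it is not the paper's implementation, and all names are ours.

```python
import random

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def sbhe_measure(x, m, block=4, seed=0):
    """Take m scrambled-block-Hadamard measurements of signal x."""
    rng = random.Random(seed)
    n = len(x)
    perm = list(range(n))
    rng.shuffle(perm)                              # scrambling (random permutation)
    xp = [x[p] for p in perm]
    H = hadamard(block)
    y = []
    for b0 in range(0, n, block):                  # block-diagonal Hadamard transform
        seg = xp[b0:b0 + block]
        y.extend(sum(H[i][j] * seg[j] for j in range(block)) for i in range(block))
    rows = rng.sample(range(n), m)                 # random row (output) selection
    return [y[r] for r in rows]

x = [float(i) for i in range(8)]
y = sbhe_measure(x, m=4, block=4)
print(len(y))  # 4 measurements taken from an 8-sample signal
```

The integer-valued (+1/-1) entries and the block structure are what make the operator cheap to store and apply, matching the storage and optical-implementation advantages claimed above.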
A Biomimetic-Computational Approach to Optimizing the Quantum Efficiency of Photovoltaics
NASA Astrophysics Data System (ADS)
Perez, Lisa M.; Holzenburg, Andreas
The most advanced low-cost organic photovoltaic cells have a quantum efficiency of 10%. This is in stark contrast to plant/bacterial light-harvesting systems which offer quantum efficiencies close to unity. Of particular interest is the highly effective quantum coherence-enabled energy transfer (Fig. 1). Noting that quantum coherence is promoted by charged residues and local dielectrics, classical atomistic simulations and time-dependent density functional theory (DFT) are used to identify charge/dielectric patterns and electronic coupling at exactly defined energy transfer interfaces. The calculations make use of structural information obtained on photosynthetic protein-pigment complexes while still in the native membrane making it possible to establish a link between supramolecular organization and quantum coherence in terms of what length scales enable fast energy transport and prevent quenching. Calculating energy transfer efficiencies between components based on different proximities will permit the search for patterns that enable defining material properties suitable for advanced photovoltaics.
NASA Astrophysics Data System (ADS)
Johnson, T.; Hammond, G. E.; Versteeg, R. J.; Zachara, J. M.
2013-12-01
The Hanford 300 Area, located adjacent to the Columbia River in south-central Washington, USA, is the site of former research and uranium fuel rod fabrication facilities. Waste disposal practices at the site included discharging between 33 and 59 metric tons of uranium over a 40 year period into shallow infiltration galleries, resulting in persistent uranium contamination within the vadose and saturated zones. Uranium transport from the vadose zone to the saturated zone is intimately linked with water table fluctuations and river water intrusion driven by upstream dam operations. As river stage increases, the water table rises into the vadose zone and mobilizes contaminated pore water. At the same time, river water moves inland into the aquifer, and river water chemistry facilitates further mobilization by enabling uranium desorption from contaminated sediments. As river stage decreases, flow moves toward the river, ultimately discharging contaminated water at the river bed. River water specific conductance at the 300 Area varies around 0.018 S/m whereas groundwater specific conductance varies around 0.043 S/m. This contrast provides the opportunity to monitor groundwater/river water interaction by imaging changes in bulk conductivity within the saturated zone using time-lapse electrical resistivity tomography. Previous efforts have demonstrated this capability, but have also shown that disconnecting regularization constraints at the water table is critical for obtaining meaningful time-lapse images. Because the water table moves with time, the regularization constraints must also be transient to accommodate the water table boundary. This was previously accomplished with 2D time-lapse ERT imaging by using a finely discretized computational mesh within the water table interval, enabling a relatively smooth water table to be defined without modifying the mesh. However, in 3D this approach requires a computational mesh with an untenable number of elements.
In order to accommodate the water table boundary in 3D, we propose a time-lapse warping mesh inversion, whereby mesh elements that traverse the water table are modified to generate a smooth boundary at the known water table position, enabling regularization constraints to be accurately disconnected across the water table boundary at a given time. We demonstrate the approach using a surface ERT array installed adjacent to the Columbia River at the 300 Area, consisting of 352 electrodes and covering an area of approximately 350 m x 350 m. Using autonomous data collection, transmission, and filtering tools coupled with high performance computing resources, the 4D imaging process is automated and executed in real time. Each time-lapse survey consists of approximately 40,000 measurements, and 4 surveys are collected and processed per day from April 1, 2013 to September 30, 2013. The data are inverted on an unstructured tetrahedral mesh that honors LiDAR-based surface topography and comprises approximately 905,000 elements. Imaging results show the dynamic 4D extent of river water intrusion, and are validated with well-based fluid conductivity measurements at each monitoring well within the imaging domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, C.-L.; Lee, C.-C., E-mail: chieh.no27@gmail.com
2016-01-15
We consider solvability of the generalized reaction–diffusion equation with both space- and time-dependent diffusion and reaction terms by means of the similarity method. By introducing the similarity variable, the reaction–diffusion equation is reduced to an ordinary differential equation. Matching the resulting ordinary differential equation with known exactly solvable equations, one can obtain corresponding exactly solvable reaction–diffusion systems. Several representative examples of exactly solvable reaction–diffusion equations are presented.
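A textbook example of the similarity method the abstract describes, shown here for the constant-coefficient diffusion equation; this is our own illustration, not one of the paper's generalized space- and time-dependent systems:

```latex
% Similarity reduction of the constant-coefficient diffusion equation.
% For
\[
  u_t = D\, u_{xx},
\]
% the similarity variable and ansatz
\[
  z = \frac{x}{\sqrt{4Dt}}, \qquad u(x,t) = f(z),
\]
% reduce the PDE to the ordinary differential equation
\[
  f''(z) + 2z\, f'(z) = 0,
\]
% whose general solution $f(z) = c_1 \operatorname{erf}(z) + c_2$ recovers the
% classical error-function profile.
```

Matching the reduced ODE against a known exactly solvable equation, as in this elementary case, is precisely the strategy the paper applies to the generalized reaction–diffusion setting.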
Nsoesie, Elaine O; Buckeridge, David L; Brownstein, John S
2014-01-22
Alternative data sources are used increasingly to augment traditional public health surveillance systems. Examples include over-the-counter medication sales and school absenteeism. We sought to determine if an increase in restaurant table availabilities was associated with an increase in disease incidence, specifically influenza-like illness (ILI). Restaurant table availability was monitored using OpenTable, an online restaurant table reservation site. A daily search was performed for restaurants with available tables for 2 at the hour and at half past the hour for 22 distinct times: between 11:00 AM and 3:30 PM for lunch and between 6:00 and 11:30 PM for dinner. In the United States, we examined table availability for restaurants in Boston, Atlanta, Baltimore, and Miami. For Mexico, we studied table availabilities in Cancun, Mexico City, Puebla, Monterrey, and Guadalajara. Time series of restaurant use were compared with Google Flu Trends and ILI at the state and national levels for the United States and Mexico using the cross-correlation function. Differences in restaurant use were observed across sampling times and regions. We also noted similarities in time series trends between data on influenza activity and restaurant use. In some settings, significant correlations greater than 70% were noted between data on restaurant use and ILI trends. This study introduces and demonstrates the potential value of restaurant use data for event surveillance.
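The lagged cross-correlation analysis described above can be sketched as follows. The two series below are synthetic stand-ins (one is simply a delayed copy of the other), not the study's data, and all names are our own.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    sa = sum((p - ma) ** 2 for p in a) ** 0.5
    sb = sum((q - mb) ** 2 for q in b) ** 0.5
    return cov / (sa * sb)

def cross_correlation(x, y, max_lag=3):
    """Correlation of x against y for every shift in [-max_lag, max_lag]."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = x[lag:], y[:len(y) - lag]
        else:
            xs, ys = x[:lag], y[-lag:]
        out[lag] = pearson(xs, ys)
    return out

ili = [1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 2.0, 1.0, 1.0, 2.0]  # toy ILI counts
avail = [0.0, 0.0] + ili[:-2]   # toy availability series: echoes ILI two steps later
cc = cross_correlation(ili, avail)
print(max(cc, key=cc.get))      # lag of the strongest correlation (-2 for this toy pair)
```

Scanning the lag of peak correlation is what lets such a comparison suggest whether one indicator leads or trails the other.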
Summary of Fast Pyrolysis and Upgrading GHG Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snowden-Swan, Lesley J.; Male, Jonathan L.
2012-12-07
The Energy Independence and Security Act (EISA) of 2007 established new renewable fuel categories and eligibility requirements (EPA 2010). A significant aspect of the National Renewable Fuel Standard 2 (RFS2) program is the requirement that the life cycle greenhouse gas (GHG) emissions of a qualifying renewable fuel be less than the life cycle GHG emissions of the 2005 baseline average gasoline or diesel fuel that it replaces. Four levels of reduction are required for the four renewable fuel standards; Table 1 lists these life cycle performance improvement thresholds.

Table 1. Life Cycle GHG Thresholds Specified in EISA
  Fuel Type               Percent Reduction from 2005 Baseline
  Renewable fuel          20%
  Advanced biofuel        50%
  Biomass-based diesel    50%
  Cellulosic biofuel      60%

Notably, there is a specialized subset of advanced biofuels that are the cellulosic biofuels. The cellulosic biofuels are incentivized by the Cellulosic Biofuel Producer Tax Credit (26 USC 40) to stimulate market adoption of these fuels. EISA defines a cellulosic biofuel as follows (42 USC 7545(o)(1)(E)): The term "cellulosic biofuel" means renewable fuel derived from any cellulose, hemicellulose, or lignin that is derived from renewable biomass and that has lifecycle greenhouse gas emissions, as determined by the Administrator, that are at least 60 percent less than the baseline lifecycle greenhouse gas emissions. As indicated, the Environmental Protection Agency (EPA) has sole responsibility for conducting the life cycle analysis (LCA) and making the final determination of whether a given fuel qualifies under these biofuel definitions. However, there appears to be a need within the LCA community to discuss and eventually reach consensus on discerning a 50–59% GHG reduction from a ≥ 60% GHG reduction for policy, market, and technology development.
The level of specificity and agreement will require additional development of capabilities and time for the sustainability and analysis community, as illustrated by the rich dialogue and convergence around the energy content and GHG reduction of cellulosic ethanol (an example of these discussions can be found in Wang 2011). GHG analyses of fast pyrolysis technology routes are being developed and will require significant work to reach the levels of development and maturity of cellulosic ethanol models. This summary provides some of the first fast pyrolysis analyses and clarifies some of the reasons for differing results in an effort to begin the convergence on assumptions, discussion of quality of models, and harmonization.
NASA Astrophysics Data System (ADS)
Bouaamlat, I.; Larabi, A.; Faouzi, M.
2014-12-01
The geographical location of the Tafilalet oasis system (TOS) in the south of the Ziz valley (Morocco) gives it a particular advantage in terms of water potential. Surface water originating in the humid regions of the High Atlas is intercepted by a dam and conveyed through the Ziz watercourse to the plain of the TOS, creating the conditions for a water table that is relatively rich given the local climatic conditions (an arid climate with recurrent drought). Given the role of the water table in the economic development of the region, a hydrogeological study was conducted to understand the impact of artificial recharge and recurrent droughts on the groundwater reserves of the TOS. In this study, a three-dimensional model of groundwater flow was developed for the TOS to assist decision makers, as a "management tool", in assessing alternative schemes for the development and exploitation of groundwater resources under varying artificial recharge and drought. The results of this numerical investigation of the TOS aquifer show that commissioning the dam to control extreme flood flows, together with good management of water releases, has avoided losses of irrigation water and consequently prevented overexploitation of the groundwater: with one or two water releases per year from the dam, at a flow rate of more than 28 million m3/year, it is possible to replenish the volume of water abstracted by wells. The idea that pumping wells alone lower the water table is not exactly true; although the growth of groundwater abstraction has not prevented the decline of the water table in recent years, pumping accompanies rather than triggers that lowering, and it is mainly the succession of dry periods that causes the decreases in piezometric level. This situation confirms the important role that groundwater plays as a "buffer" during drought periods.
NASA Technical Reports Server (NTRS)
Owen, W. M., Jr.
1993-01-01
In order for photons emitted by the GOPEX lasers to be detected by Galileo's camera, the telescopes at Table Mountain Observatory and Starfire Optical Range had to be pointed in the right direction within a tolerance less than the beam divergence. At both sites nearby stars were used as pointing references. The technical challenge was to ensure that the transmission direction and the star positions were specified in exactly the same coordinate system; given this assurance, neither the uncertainty in the star catalog positions nor the difficulty in offset pointing was expected to exceed the pointing error budget. The correctness of the pointing scheme was verified by the success of GOPEX.
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1984-01-01
A detailed description of the machine-readable revised catalog as it is currently being distributed from the Astronomical Data Center is given. This catalog of star images was compiled from imagery obtained by the Naval Research Laboratory (NRL) Far-Ultraviolet Camera/Spectrograph (Experiments S201) operated from 21 to 23 April 1972 on the lunar surface during the Apollo 16 mission. The documentation includes a detailed data format description, a table of indigenous characteristics of the magnetic tape file, and a sample listing of data records exactly as they are presented in the machine-readable version.
Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands
Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.
2000-01-01
The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in JPEG format (IMAGES). The folder LOCATIONS contains TIFF images showing the exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC, in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.
On-the-fly Doppler broadening of unresolved resonance region cross sections
Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; ...
2017-07-29
In this paper, two methods for computing temperature-dependent unresolved resonance region (URR) cross sections on-the-fly within continuous-energy Monte Carlo neutron transport simulations are presented. The first method calculates Doppler broadened cross sections directly from zero-temperature average resonance parameters. In a simulation, at each event that requires cross section values, a realization of unresolved resonance parameters is generated about the desired energy, and temperature-dependent single-level Breit-Wigner resonance cross sections are computed directly via the analytical ψ-χ Doppler integrals. The second method relies on the generation of equiprobable cross section magnitude bands on an energy-temperature mesh. Within a simulation, the bands are sampled and interpolated in energy and temperature to obtain cross section values on-the-fly. Both of the methods, as well as their underlying calculation procedures, are verified numerically in extensive code-to-code comparisons. Energy-dependent pointwise cross sections calculated with the newly-implemented procedures are shown to be in excellent agreement with those calculated by a widely-used nuclear data processing code. Relative differences at or below 0.1% are observed. Integral criticality benchmark results computed with the proposed methods are shown to reproduce those computed with a state-of-the-art processed nuclear data library very well. In simulations of fast spectrum systems that are highly sensitive to the representation of cross section data in the unresolved region, k-eigenvalue and neutron flux spectra differences of <10 pcm and <1.0% are observed, respectively. The direct method is demonstrated to be well-suited to the calculation of reference solutions (against which results obtained with a discretized representation may be assessed) as a result of its treatment of the energy, temperature, and cross section magnitude variables as continuous.
Also, because there is no pre-processed data to store (only temperature-independent average resonance parameters), the direct method is very memory-efficient. Typically, only a few kB of memory are needed to store all required unresolved region data for a single nuclide. However, depending on the details of a particular simulation, performing URR cross section calculations on-the-fly can significantly increase simulation times. Alternatively, the method of interpolating equiprobable probability bands is demonstrated to produce results that are as accurate as the direct reference solutions, to within arbitrary precision, with high computational efficiency in terms of memory requirements and simulation time. Analyses of a fast spectrum system show that interpolation on a coarse energy-temperature mesh can be used to reproduce reference k-eigenvalue results obtained with cross sections calculated continuously in energy and directly at an exact temperature to within <10 pcm. Probability band data on a mesh encompassing the range of temperatures relevant to reactor analysis usually require around 100 kB of memory per nuclide. Finally, relative to the case in which probability table data generated at a single, desired temperature are used, minor increases in simulation times are observed when probability band interpolation is employed.
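The band-interpolation idea described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the mesh layout, the single shared band index across mesh corners, and the function names are hypothetical. One equiprobable band index is sampled uniformly, and that band's cross-section magnitude is bilinearly interpolated between the surrounding energy-temperature mesh points:

```python
import bisect
import random

def sample_band_xs(energy, temperature, e_grid, t_grid, bands, rng=random):
    """Sketch of on-the-fly probability-band lookup.

    bands[i][j] is a list of equiprobable cross-section band values
    tabulated at mesh point (e_grid[i], t_grid[j]); all mesh points
    carry the same number of bands.
    """
    # Locate the enclosing mesh cell (clamped to the grid interior).
    i = max(0, min(bisect.bisect_right(e_grid, energy) - 1, len(e_grid) - 2))
    j = max(0, min(bisect.bisect_right(t_grid, temperature) - 1, len(t_grid) - 2))
    # Sample one band index uniformly; use it at all four cell corners.
    k = rng.randrange(len(bands[i][j]))
    # Bilinear interpolation weights in energy and temperature.
    fe = (energy - e_grid[i]) / (e_grid[i + 1] - e_grid[i])
    ft = (temperature - t_grid[j]) / (t_grid[j + 1] - t_grid[j])
    c00, c10 = bands[i][j][k], bands[i + 1][j][k]
    c01, c11 = bands[i][j + 1][k], bands[i + 1][j + 1][k]
    return ((1 - fe) * (1 - ft) * c00 + fe * (1 - ft) * c10
            + (1 - fe) * ft * c01 + fe * ft * c11)
```

Using the same band index at all four corners keeps the interpolated value consistent with a single random realization of the cross-section magnitude, rather than averaging over independent draws.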
Numerical Simulation of Ultra-Fast Pulse Propagation in Two-Photon Absorbing Medium
2011-08-01
physical problems including coherent and incoherent regimes of optical power limiting, saturation, CEP effects, electromagnetically induced transparency, soliton formation, etc. [Figure caption residue: experimental data (dark blue); upper panel: 1PA spectrum; lower panel: 2PA cross-section spectrum; parameter values used are shown in Table 1.]
Neck Injury in Advanced Military Aircraft Environments
1990-02-01
injury within the last 2 months as stratified by type of aircraft. This table demonstrates a statistically significant trend in frequency (P < .05) and... it appears that transitional vertebrae are relatively common and equally distributed between the thoracico-lumbar (9.0%) and the lumbo-sacral area... unilateral contact of asymmetrical lumbar sacralization, which increases torque forces with consequent strain on the spine and risk of disc herniation above
Scheduling Independent Partitions in Integrated Modular Avionics Systems
Du, Chenglie; Han, Pengcheng
2016-01-01
Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under worst-case situations. To help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We first present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then, with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the proposed approach in terms of time consumption and acceptance ratio. PMID:27942013
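As a rough illustration of the maximum-scaling-factor question this abstract poses (the paper's exact formulation is not given here, so the packing heuristic and function names below are assumptions), one can binary-search the largest uniform factor α such that the α-scaled partition utilizations still fit onto m processors:

```python
def first_fit_feasible(utils, m):
    """First-fit-decreasing packing of partition utilizations onto m processors."""
    loads = [0.0] * m
    for u in sorted(utils, reverse=True):
        for p in range(m):
            if loads[p] + u <= 1.0 + 1e-12:  # processor p still has capacity
                loads[p] += u
                break
        else:
            return False  # no processor could host this partition
    return True

def max_scaling_factor(partitions, m, iters=60):
    """Binary-search the largest alpha keeping alpha-scaled partitions packable.

    partitions: list of (budget, period) pairs; utilization = budget / period.
    """
    utils = [budget / period for budget, period in partitions]
    lo, hi = 0.0, m / sum(utils)  # hi: total-capacity upper bound
    for _ in range(iters):
        mid = (lo + hi) / 2
        if first_fit_feasible([mid * u for u in utils], m):
            lo = mid  # mid is feasible; try a larger factor
        else:
            hi = mid
    return lo
```

A scaling factor of 1.0 or more indicates the partition set is schedulable under this packing; the binary search converges geometrically, so 60 iterations are far more than enough for double precision.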
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
FEDIX is an on-line information service that links the higher education community and the federal government to facilitate research, education, and services. The system provides accurate and timely federal agency information to colleges, universities, and other research organizations. There are no registration fees and no access charges for using FEDIX. Agencies participating in the FEDIX system include: Department of Energy (DOE), Federal Aviation Administration (FAA), National Aeronautics and Space Administration (NASA), Office of Naval Research (ONR), Air Force Office of Scientific Research (AFOSR), National Science Foundation (NSF), National Security Agency (NSA), Department of Commerce (DOC), Department of Education (DOEd), Department of Housing and Urban Development (HUD), and Agency for International Development (AID). Additional government agencies are expected to join FEDIX in the near future. This guide is intended to help users access and utilize the FEDIX system. Because the system is frequently updated, however, some menus and tables used as examples in this text may not exactly match those displayed on the live system.
Effects of volume change on the unsaturated hydraulic conductivity of Sphagnum moss
NASA Astrophysics Data System (ADS)
Golubev, V.; Whittington, P.
2018-04-01
Due to the non-vascular nature of Sphagnum mosses, the capitula (growing surface) of the moss must rely solely on capillary action to receive water from beneath. Moss subsides and swells in accordance with water table levels, an effect called "mire-breathing", which has been thought to be a self-preservation mechanism, although no systematic studies have demonstrated exactly how volume change affects the hydrophysical properties of moss. In this study, the unsaturated hydraulic conductivity (Kunsat) and water content of two different species of Sphagnum moss were measured at different compression rates, up to a maximum of 77%. The findings show that Kunsat increases by up to an order of magnitude (10×) with compression up to a certain bulk density of the moss, after which higher levels of compression result in lowered unsaturated hydraulic conductivity. This was coupled with an increase in soil water retention with increased compression. The increase of Kunsat with compression suggests that the mire-breathing effect should be considered a self-preservation mechanism that provides a sufficient amount of water to growing moss in times of low water availability.
33 CFR 165.1191 - Northern California and Lake Tahoe Area Annual Fireworks Events.
Code of Federal Regulations, 2012 CFR
2012-07-01
... exact dates, times, and other details concerning the exact geographical description of the areas are... zone during all applicable effective dates and times unless cleared to do so by or through an official... a safety zone during all applicable effective dates and times shall come to an immediate stop. (3...
33 CFR 165.1191 - Northern California and Lake Tahoe Area Annual Fireworks Events.
Code of Federal Regulations, 2014 CFR
2014-07-01
... exact dates, times, and other details concerning the exact geographical description of the areas are... zone during all applicable effective dates and times unless cleared to do so by or through an official... a safety zone during all applicable effective dates and times shall come to an immediate stop. (3...
33 CFR 165.1191 - Northern California and Lake Tahoe Area Annual Fireworks Events.
Code of Federal Regulations, 2013 CFR
2013-07-01
... exact dates, times, and other details concerning the exact geographical description of the areas are... zone during all applicable effective dates and times unless cleared to do so by or through an official... a safety zone during all applicable effective dates and times shall come to an immediate stop. (3...