Sample records for double precision variables

  1. Solving Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1987-01-01

SIVA/DIVA is a package of subroutines for the solution of initial-value problems for nonstiff ordinary differential equations by a variable-order Adams method. Versions are provided for single-precision and double-precision arithmetic. It requires fewer derivative evaluations than other variable-order Adams predictor/corrector methods, and an option for direct integration of second-order equations makes the integration of trajectory problems significantly more efficient. Written in FORTRAN 77.

  2. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
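The 'reduced precision' idea in this abstract can be emulated in ordinary software by truncating the mantissa of double-precision numbers. The sketch below is a generic emulation (the function name and the choice of rounding via bit manipulation are mine, not from the paper); IEEE double has 52 explicit mantissa bits, half precision has 10:

```python
import math
import struct

def reduce_precision(x, mantissa_bits):
    """Round x to `mantissa_bits` explicit mantissa bits by manipulating
    the IEEE-754 bit pattern (round-to-nearest via an added offset).
    Assumes 0 < mantissa_bits < 52."""
    if x == 0.0 or not math.isfinite(x):
        return x
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    drop = 52 - mantissa_bits      # low-order mantissa bits to discard
    bits = ((bits + (1 << (drop - 1))) >> drop) << drop
    return struct.unpack(">d", struct.pack(">Q", bits))[0]

pi64 = math.pi
pi16 = reduce_precision(math.pi, 10)   # emulated half-precision mantissa
```

In a scale-selective scheme of the kind described, such rounding would be applied only to the small-scale tiers of variables, leaving the large scales in full double precision.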

  3. Accurate computation of gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. Then, it evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching the central and the single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method is extended to the case of a general tesseroid with the variable density profile, the variable surface height functions, and/or the variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently on the location of the evaluation point, namely whether outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The correct digits are roughly doubled if employing the quadruple precision computation. The new method provides a reliable procedure to compute the topographic gravitational field, especially that near, on, and below the surface. Also, it could potentially serve as a sure reference to complement and elaborate the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
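The switching between central and one-sided second-order difference formulas described above can be sketched as follows; the test function and step size are placeholders for illustration, not the paper's tesseroid potential or its tuned displacement:

```python
import math

def grad_central(f, x, h):
    """Second-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def grad_one_sided(f, x, h):
    """Second-order one-sided (forward) difference, for evaluation
    points where f cannot be sampled on one side (e.g. across the
    tesseroid surface)."""
    return (-3.0 * f(x) + 4.0 * f(x + h) - f(x + 2.0 * h)) / (2.0 * h)

V = math.sin            # stand-in for the numerically integrated potential
g_c = grad_central(V, 1.0, 1e-5)
g_s = grad_one_sided(V, 1.0, 1e-5)
```

Both formulas are second-order accurate, which is why the method can switch between them near the body's surface without losing an order of convergence.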

  4. [Relations between biomedical variables: mathematical analysis or linear algebra?].

    PubMed

    Hucher, M; Berlie, J; Brunet, M

    1977-01-01

After a brief reminder of the structure of a model, the authors emphasize two possible approaches to the relations linking its variables: the use of functions, which belongs to mathematical analysis, and the use of linear algebra, which benefits from developments in matrix computation and automation. They specify the respective advantages of these methods, their limits, and the requirements for their use, according to the kind of variables and data and the aim of the work, whether understanding phenomena or supporting decisions.

  5. Variable-Delay Polarization Modulators for Cryogenic Millimeter-Wave Applications

    NASA Technical Reports Server (NTRS)

Chuss, D. T.; Eimer, J. R.; Fixsen, D. J.; Hinderks, J.; Kogut, A. J.; Lazear, J.; Mirel, P.; Switzer, E.; Voellmer, G. M.; Wollack, E. J.

    2014-01-01

    We describe the design, construction, and initial validation of the variable-delay polarization modulator (VPM) designed for the PIPER cosmic microwave background polarimeter. The VPM modulates between linear and circular polarization by introducing a variable phase delay between orthogonal linear polarizations. Each VPM has a diameter of 39 cm and is engineered to operate in a cryogenic environment (1.5 K). We describe the mechanical design and performance of the kinematic double-blade flexure and drive mechanism along with the construction of the high precision wire grid polarizers.

  6. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  7. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826

  8. Double-survey estimates of bald eagle populations in Oregon

    USGS Publications Warehouse

    Anthony, R.G.; Garrett, Monte G.; Isaacs, F.B.

    1999-01-01

    The literature on abundance of birds of prey is almost devoid of population estimates with statistical rigor. Therefore, we surveyed bald eagle (Haliaeetus leucocephalus) populations on the Crooked and lower Columbia rivers of Oregon and used the double-survey method to estimate populations and sighting probabilities for different survey methods (aerial, boat, vehicle) and bald eagle ages (adults vs. subadults). Sighting probabilities were consistently 20%. The results revealed variable and negative bias (percent relative bias = -9 to -70%) of direct counts and emphasized the importance of estimating populations where some measure of precision and ability to conduct inference tests are available. We recommend use of the double-survey method to estimate abundance of bald eagle populations and other raptors in open habitats.
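For context, the logic of a double-survey (two-method) abundance estimate can be sketched with a minimal Lincoln-Petersen-style estimator; this is a generic form, not necessarily the authors' exact estimator, and the counts below are made up:

```python
def double_survey_estimate(n1, n2, n_both):
    """Lincoln-Petersen-style abundance estimate from two surveys:
    n1, n2 = eagles detected by each method, n_both = detected by both.
    Sighting probabilities follow from the overlap."""
    p1 = n_both / n2          # P(method 1 detects a bird method 2 saw)
    p2 = n_both / n1
    n_hat = n1 * n2 / n_both  # estimated population size
    return n_hat, p1, p2

# hypothetical counts for illustration
n_hat, p1, p2 = double_survey_estimate(n1=40, n2=35, n_both=20)
```

Because the overlap gives an estimate of each method's sighting probability, the method also quantifies the negative bias of raw direct counts, which is the point the abstract emphasizes.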

  9. Mixed Single/Double Precision in OpenIFS: A Detailed Study of Energy Savings, Scaling Effects, Architectural Effects, and Compilation Effects

    NASA Astrophysics Data System (ADS)

    Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy

    2017-04-01

It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model. The mixed precision variation of OpenIFS is analogous to the IFS variation used in Vana et al. We (1) present results of energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single-core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.

  10. Influence of sulfur-bearing polyatomic species on high precision measurements of Cu isotopic composition

    USGS Publications Warehouse

    Pribil, M.J.; Wanty, R.B.; Ridley, W.I.; Borrok, D.M.

    2010-01-01

An increased interest in high-precision Cu isotope ratio measurements using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) has developed recently for various natural geologic systems and environmental applications; these samples typically contain high concentrations of sulfur, particularly in the form of sulfate (SO4(2-)) and sulfide (S). For example, Cu, Fe, and Zn concentrations in acid mine drainage (AMD) can range from 100 μg/L to greater than 50 mg/L, with sulfur species concentrations reaching greater than 1000 mg/L. Routine separation of Cu, Fe, and Zn from AMD, Cu-sulfide minerals, and other geological matrices usually incorporates single anion exchange resin column chromatography for metal separation. During chromatographic separation, variable breakthrough of SO4(2-) into the Cu fractions was observed as a function of the initial sulfur-to-Cu ratio, column properties, and the sample matrix. SO4(2-) present in the Cu fraction can form a polyatomic 32S-14N-16O-1H species, causing a direct mass interference with 63Cu and producing artificially light δ65Cu values. Here we report the extent of the mass interference caused by SO4(2-) breakthrough when measuring δ65Cu on natural samples and NIST SRM 976 Cu isotope standard spiked with SO4(2-) after both single anion column chromatography and double anion column chromatography. A set of five 100 μg/L Cu SRM 976 samples spiked with 500 mg/L SO4(2-) resulted in an average δ65Cu of -3.50 ± 5.42‰ following single anion column separation with variable SO4(2-) breakthrough, but an average concentration of 770 μg/L. Following double anion column separation, the average SO4(2-) concentration of 13 μg/L resulted in better precision and accuracy for the measured δ65Cu value of 0.01 ± 0.02‰ relative to the expected 0‰ for SRM 976. We conclude that attention to SO4(2-) breakthrough in sulfur-rich samples is necessary for accurate and precise measurements of δ65Cu and may require the use of a double ion exchange column procedure. © 2010.
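The stated interference arises because the polyatomic ion and 63Cu share the same nominal mass of 63 u; a quick check with rounded standard-table isotope masses (the numeric values are mine, not from the abstract):

```python
# Atomic masses in u (standard table values, rounded to 5 decimals)
masses = {"32S": 31.97207, "14N": 14.00307, "16O": 15.99491, "1H": 1.00783}
m_polyatomic = sum(masses.values())   # the 32S-14N-16O-1H ion
m_63Cu = 62.92960
delta_u = m_polyatomic - m_63Cu       # ~0.05 u apart, same nominal mass 63
```

The ~0.05 u separation is too small to resolve at the mass resolution typically used for isotope ratio work, which is why chemical removal of sulfate (the double column) rather than spectral resolution is the remedy.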

  11. The dynamical mass of a classical Cepheid variable star in an eclipsing binary system.

    PubMed

    Pietrzyński, G; Thompson, I B; Gieren, W; Graczyk, D; Bono, G; Udalski, A; Soszyński, I; Minniti, D; Pilecki, B

    2010-11-25

Stellar pulsation theory provides a means of determining the masses of pulsating classical Cepheid supergiants: it is the pulsation that causes their luminosity to vary. Such pulsational masses are found to be smaller than the masses derived from stellar evolution theory: this is the Cepheid mass discrepancy problem, for which a solution is missing. An independent, accurate dynamical mass determination for a classical Cepheid variable star (as opposed to type-II Cepheids, low-mass stars with a very different evolutionary history) in a binary system is needed in order to determine which is correct. The accuracy of previous efforts to establish a dynamical Cepheid mass from Galactic single-lined non-eclipsing binaries was typically about 15-30% (refs 6, 7), which is not good enough to resolve the mass discrepancy problem. In spite of many observational efforts, no firm detection of a classical Cepheid in an eclipsing double-lined binary has hitherto been reported. Here we report the discovery of a classical Cepheid in a well detached, double-lined eclipsing binary in the Large Magellanic Cloud. We determine the mass to a precision of 1% and show that it agrees with its pulsation mass, providing strong evidence that pulsation theory correctly and precisely predicts the masses of classical Cepheids.
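The dynamical mass in a double-lined binary ultimately rests on Kepler's third law applied to the measured orbit. A generic sketch with illustrative orbital values (these numbers are invented, not the system's actual parameters):

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def total_mass(P_seconds, a_meters):
    """Kepler's third law for a binary: M1 + M2 = 4 pi^2 a^3 / (G P^2),
    with P the orbital period and a the relative semi-major axis."""
    return 4 * math.pi ** 2 * a_meters ** 3 / (G * P_seconds ** 2)

# illustrative values only (not the system in the abstract)
M_total = total_mass(P_seconds=310 * 86400, a_meters=2.6e11)
M_in_suns = M_total / 1.989e30
```

In a double-lined eclipsing system both radial-velocity amplitudes and the inclination are measured, so the total mass can be split into individual component masses, which is what enables the 1% precision quoted.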

  12. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection it is a random variable and is probably the longest such incubation period. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article (i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time, and stresses the need for its precise estimation; (ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism; and (iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
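One plausible reading of a 'double hit' mechanism in case (a) is that onset requires both marker failure times to have occurred, so the onset time is the maximum of the two. The sketch below uses independent exponential hit times with invented rates; this is an illustration of the construction, not the author's exact model:

```python
import math
import random

def double_hit_cdf(t, lam1, lam2):
    """P(onset by time t) when onset requires both independent
    exponential 'hits' (rates lam1, lam2) to have occurred:
    P(max(T1, T2) <= t) = P(T1 <= t) * P(T2 <= t)."""
    return (1 - math.exp(-lam1 * t)) * (1 - math.exp(-lam2 * t))

# Monte Carlo check of the closed form: onset time T = max(T1, T2)
random.seed(1)
lam1, lam2, t = 0.2, 0.1, 8.0
sims = [max(random.expovariate(lam1), random.expovariate(lam2))
        for _ in range(20000)]
frac = sum(s <= t for s in sims) / len(sims)
```

Case (b), with dependent marker times, would replace the product of marginals by a joint distribution.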

  13. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.

  14. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
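Both the non-associativity problem and the integer-tally remedy can be demonstrated in a few lines; the fixed-point scale below is an arbitrary choice for illustration, not the one used in [1]:

```python
# Double precision addition is not associative, so summation order matters:
a, b, c = 1.0, 1e16, -1e16
left = (a + b) + c    # the 1.0 is absorbed: 1e16 + 1.0 rounds to 1e16
right = a + (b + c)   # here the large terms cancel first, keeping the 1.0

# Integer-tally workaround in the spirit of [1]: round each contribution
# onto a fixed-point grid so accumulation is exact integer arithmetic and
# therefore order-independent (at the cost of the rounding step).
SCALE = 1 << 32

def to_tally(x):
    return round(x * SCALE)

values = [0.1, 0.2, 0.3, -0.25]
fwd = sum(to_tally(v) for v in values)
rev = sum(to_tally(v) for v in reversed(values))
total = fwd / SCALE
```

Because the integer sums are exact, any summation order (and hence any domain decomposition) yields bit-identical tallies; the accuracy loss is confined to the initial rounding onto the grid.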

  15. A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.

    PubMed

    Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian

    2018-01-19

This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. The air leakage caused by the high pressure in the headspace vial during the headspace sampling process has a great impact on measurement precision in conventional headspace analysis (i.e., the single sealing technique). The results (using an ethanol solution as the model sample) show that the present technique effectively minimizes this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that of earlier HS-GC work that uses the conventional single sealing technique. The present double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
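The RSD figure quoted above is simply the relative standard deviation of replicate measurements; a minimal computation with hypothetical peak areas (the numbers are invented for illustration):

```python
import statistics

# hypothetical replicate GC peak areas for one ethanol standard
areas = [10021, 10018, 10025, 10019, 10023]
rsd_percent = 100 * statistics.stdev(areas) / statistics.mean(areas)
```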

  16. Effects of room airflow on accurate determination of PUF-PAS sampling rates in the indoor environment.

    PubMed

    Herkert, Nicholas J; Hornbuckle, Keri C

    2018-05-23

Accurate and precise interpretation of concentrations from polyurethane passive samplers (PUF-PAS) is important as more studies show elevated concentrations of PCBs and other semivolatile air toxics in indoor air of schools and homes. If sufficiently reliable, these samplers may be used to identify local sources and human health risks. Here we report indoor air sampling rates (Rs) for polychlorinated biphenyl congeners (PCBs) predicted for a frequently used double-dome and a half-dome PUF-PAS design. Both our experimentally calibrated (1.10 ± 0.23 m3 d-1) and modeled (1.08 ± 0.04 m3 d-1) Rs for the double-dome samplers compare well with literature reports for similar rooms. We determined that variability of wind speeds throughout the room significantly (P < 0.001) affected uptake rates. We examined this effect using computational fluid dynamics modeling and 3-D sonic anemometer measurements and found the airflow dynamics to have a significant but small impact on the precision of calculated airborne concentrations. The PUF-PAS concentration measurements were within 27% and 10% of the active sampling concentration measurements for the double-dome and half-dome designs, respectively. While the half-dome samplers produced more consistent concentration measurements, we find both designs to perform well indoors.

  17. Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Finkel, Hal; Yoshii, Kazutomo

Compared to central processing units (CPUs) and graphics processing units (GPUs), field programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The traditional FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to a hardware description language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development could allow software developers, who have little FPGA knowledge, to take advantage of FPGA-based application acceleration. This improves developer productivity and makes FPGA-based acceleration accessible to hardware and software developers. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++, and SystemC specifications to be directly targeted to Xilinx FPGAs without the need to create RTL manually. A white paper [1] published recently by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features of the Vivado HLS compiler and the resource and power benefits of converting a design from floating point to fixed point. To get a better understanding of the variable-precision features in terms of resource usage and performance, this report presents the experimental results of evaluating the FIR example using Vivado HLS 2017.1 and a Kintex UltraScale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
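The floating- to fixed-point conversion that the white paper demonstrates for a FIR filter can be sketched in plain software; the Q15 format and the three-tap filter below are illustrative choices of mine, not the white paper's design:

```python
# Sketch: quantising a FIR filter from floating point to Q15 fixed point.
def fir_float(x, h):
    """Direct-form FIR in double precision."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

Q = 15  # Q15: 15 fractional bits, common on FPGA/DSP datapaths

def fir_fixed(x, h):
    """Same filter with coefficients and samples quantised to Q15
    integers; all arithmetic is integer, then the result is rescaled."""
    hq = [round(c * (1 << Q)) for c in h]
    xq = [round(v * (1 << Q)) for v in x]
    yq = [sum(hq[k] * xq[n - k] for k in range(len(h)) if n - k >= 0)
          for n in range(len(x))]
    return [v / float(1 << (2 * Q)) for v in yq]

h = [0.25, 0.5, 0.25]       # illustrative 3-tap low-pass
x = [0.0, 1.0, 0.0, 0.0]    # delayed unit impulse
y_float = fir_float(x, h)
y_fixed = fir_fixed(x, h)
```

Because these taps are exactly representable in Q15, both versions agree exactly here; arbitrary coefficients would incur quantisation error, which is precisely the resource-versus-accuracy trade-off the report studies on hardware.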

  18. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born

    PubMed Central

    2012-01-01

We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
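The SPDP/SPSP/DPDP distinction can be mimicked in plain Python by rounding through IEEE single precision; the harmonic-series terms below are a stand-in for force contributions, not AMBER's actual arithmetic:

```python
import struct

def as_float32(x):
    """Round a Python double to the nearest IEEE single precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Stand-in 'force contributions' (harmonic series terms)
terms = [1.0 / k for k in range(1, 100001)]

# SPDP: each term computed in single precision, accumulated in double
spdp = 0.0
for t in terms:
    spdp += as_float32(t)

# SPSP: the accumulator is also rounded to single at every step,
# so rounding errors compound over the whole sum
spsp = 0.0
for t in terms:
    spsp = as_float32(spsp + as_float32(t))

dpdp = sum(terms)  # double precision throughout (reference)
```

The SPDP error is bounded by the single-precision rounding of the individual terms, while SPSP additionally accumulates a rounding error at every addition, mirroring the error growth the abstract reports for long trajectories.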

  19. A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map

    NASA Astrophysics Data System (ADS)

    Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng

    2017-06-01

    The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map which is realized on the finite precision device (e.g. computer) will suffer from dynamical degradation, which refers to short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, which is competitive to the recently proposed image encryption algorithms.
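A minimal fixed-point sketch of a logistic-perturbed digital Baker map follows; the bit width, seeds, and XOR coupling of the perturbation are assumptions of mine, and the paper's scheme of perturbing both state variables and system parameters is not reproduced in full:

```python
N_BITS = 32            # assumed finite precision: N-bit fixed point state
MOD = 1 << N_BITS

def baker(x, y):
    """One step of the digitised Baker map on an N_BITS-bit grid:
    (x, y) -> (2x mod 1, y/2 [+ 1/2 if x >= 1/2]) in fixed point."""
    if x < MOD // 2:
        return (2 * x) % MOD, y // 2
    return (2 * x) % MOD, y // 2 + MOD // 2

def logistic(z):
    """Digital logistic map z -> 4 z (1 - z) on the same grid."""
    return (4 * z * (MOD - z) // MOD) % MOD

def perturbed_baker(x, y, z):
    # perturb the low bits of one state variable with the logistic
    # sequence before each Baker step (illustrative coupling only)
    z = logistic(z)
    x, y = baker(x ^ (z & 0xFF), y)
    return x, y, z

x, y, z = 123456789, 987654321, 362436069
for _ in range(1000):
    x, y, z = perturbed_baker(x, y, z)
```

The perturbation injects fresh low-order bits each step, which is what lengthens the short cycles that the unperturbed digital map suffers from.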

  20. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed Central

    Palmer, T. N.

    2014-01-01

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038

  1. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed

    Palmer, T N

    2014-06-28

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.

  2. Double resonance calibration of g factor standards: Carbon fibers as a high precision standard

    NASA Astrophysics Data System (ADS)

    Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar

    2018-04-01

The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high-precision g factor standard for general purposes. The double resonance experiment for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with very short correlation time.
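The quoted numbers are self-consistent, as a one-line check shows (the g_free value is a truncated standard-table constant, not taken from the abstract):

```python
g_free = 2.00231930436          # free-electron g factor (truncated)
g_fiber = g_free * (1 + 162e-6) # 162 ppm above g_free, per the abstract
abs_unc = g_fiber * 15e-6       # 15 ppm relative uncertainty, absolute
```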

  3. The Los Alamos National Laboratory precision double crystal spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, D.V.; Stevens, C.J.; Liefield, R.J.

    1994-03-01

This report discusses the following topics on the LANL precision double crystal X-ray spectrometer: motivation for construction of the instrument; a brief history of the instrument; mechanical systems; motion control systems; computer control system; vacuum system; alignment program; scan programs; observations of the copper Kα lines; and characteristics and specifications.

  4. Precision Tests of a Quantum Hall Effect Device DC Equivalent Circuit Using Double-Series and Triple-Series Connections

    PubMed Central

    Jeffery, A.; Elmquist, R. E.; Cage, M. E.

    1995-01-01

    Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ the use of cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768

  5. High precision calcium isotope analysis using 42Ca-48Ca double-spike TIMS technique

    NASA Astrophysics Data System (ADS)

    Feng, L.; Zhou, L.; Gao, S.; Tong, S. Y.; Zhou, M. L.

    2014-12-01

    Double-spike techniques are widely used for determining the calcium isotopic compositions of natural samples. The most important factors controlling the precision of the double-spike technique are the choice of spike isotope pair, the composition of the double spike, and the ratio of spike to sample (CSp/CN). We propose an optimal 42Ca-48Ca double-spike protocol which yields the best internal precision for calcium isotopic composition determinations among all spike pairs and the various spike compositions and spike-to-sample ratios, as predicted by the linear error-propagation method. We suggest a spike composition of 42Ca/(42Ca+48Ca) = 0.44 mol/mol and CSp/(CN+CSp) = 0.12 mol/mol, because it combines the advantages of the largest mass dispersion between 42Ca and 48Ca (14%) and the lowest spike cost. Spiked samples were purified by passing them through a homemade micro-column filled with Ca-specific resin. K, Ti and other interfering elements were completely separated, while essentially 100% of the calcium was recovered with negligible blank. Data-collection parameters, including integration time, idle time, and focus and peak-center frequency, were all carefully chosen for the highest internal precision and lowest analysis time. All beams were measured automatically in sequence by the Triton TIMS, so as to eliminate differences in analytical conditions between samples and standards and to increase analytical throughput. The typical internal precision of 100 duty cycles for one beam is 0.012‒0.015 ‰ (2δSEM), which agrees well with the predicted internal precision of 0.0124 ‰ (2δSEM). Our method improves internal precision by a factor of 2‒10 compared with previous double-spike TIMS determinations of calcium isotopic compositions. We analyzed NIST SRM 915a, NIST SRM 915b and Pacific seawater, as well as interspersed geological samples, over two months.
    The obtained average δ44/40Ca (all relative to NIST SRM 915a) is 0.02 ± 0.02 ‰ (n=28), 0.72 ± 0.04 ‰ (n=10) and 1.93 ± 0.03 ‰ (n=21) for NIST SRM 915a, NIST SRM 915b and Pacific seawater, respectively. The long-term reproducibility is 0.10 ‰ (2δSD), which is comparable to the best external precision of 0.04 ‰ (2δSD) of previous methods, while our sample throughput is doubled, with a significant reduction in the amount of spike used per sample.
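    The optimal proportions quoted above reduce to simple mixing arithmetic. A minimal sketch of that arithmetic (the function name and the one-micromole sample amount are ours, for illustration, not from the authors' procedure):

```python
# Sketch of the spike-sample mixing arithmetic for the 42Ca-48Ca double spike
# described above. Numerical values are the stated optima; helper names are
# illustrative, not from the authors' code.

def spike_mix(n_sample_mol, spike_42_fraction=0.44, spike_fraction=0.12):
    """Return moles of 42Ca and 48Ca spike to add to n_sample_mol of sample Ca.

    spike_42_fraction: 42Ca/(42Ca+48Ca) in the double spike (mol/mol).
    spike_fraction:    CSp/(CN+CSp), the spike share of total Ca (mol/mol).
    """
    # Choose total spike so that spike/(sample+spike) equals spike_fraction.
    n_spike = n_sample_mol * spike_fraction / (1.0 - spike_fraction)
    return spike_42_fraction * n_spike, (1.0 - spike_42_fraction) * n_spike

n42, n48 = spike_mix(1.0e-6)  # e.g. one micromole of sample calcium
```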

  6. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, at no extra computational cost. This translates into roughly an extra half-day of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
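    The kind of experiment described above can be sketched in a few lines: integrate the one-tier Lorenz '96 model in double precision and in emulated half precision, and measure how the two forecasts diverge. This is our own minimal sketch, not the authors' code; NumPy's float16 stands in for the emulation tool:

```python
# Minimal sketch: Lorenz '96 in float64 vs. emulated half precision (float16).
import numpy as np

def l96_tendency(x, forcing=8.0):
    """Lorenz '96 tendency dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt, dtype):
    """One RK4 step, rounding every intermediate result to the target precision."""
    r = lambda v: v.astype(dtype)
    x = r(x)
    k1 = r(l96_tendency(x))
    k2 = r(l96_tendency(r(x + 0.5 * dt * k1)))
    k3 = r(l96_tendency(r(x + 0.5 * dt * k2)))
    k4 = r(l96_tendency(r(x + dt * k3)))
    return r(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))

rng = np.random.default_rng(0)
x64 = 8.0 + rng.standard_normal(40)   # 40-variable state near the attractor
x16 = x64.copy()
for _ in range(200):                  # 200 steps of dt = 0.05
    x64 = step_rk4(x64, 0.05, np.float64)
    x16 = step_rk4(x16, 0.05, np.float16)
rmse = float(np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2)))
```

In this chaotic system the half-precision rounding errors grow with time, so `rmse` quantifies the forecast divergence attributable to reduced precision.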

  7. Precision improving of double beam shadow moiré interferometer by phase shifting interferometry for the stress of flexible substrate

    NASA Astrophysics Data System (ADS)

    Huang, Kuo-Ting; Chen, Hsi-Chao; Lin, Ssu-Fan; Lin, Ke-Ming; Syue, Hong-Ye

    2012-09-01

    While tin-doped indium oxide (ITO) has been extensively applied in flexible electronics, the problem of residual stress still presents many obstacles. This study investigated the residual stress of flexible electronics with a double beam shadow moiré interferometer, focusing on precision improvement through phase shifting interferometry (PSI). According to the out-of-plane displacement equation, the theoretical error depends on the grating pitch and the angle between the incident light and the CCD. Because the double beam interferometer is a symmetrical system, the angle error could be reduced to 0.03% for an angle shift of 10°. However, the experimental error of the double beam moiré interferometer still reached 2.2% owing to vibration noise and interferogram noise. To improve the measurement precision, PSI was introduced into the double beam shadow moiré interferometer. The wavefront phase was reconstructed from five interferograms with the Hariharan algorithm. Measurement results for a standard cylinder indicate that the error could be reduced from 2.2% to less than 1% with PSI. The deformation of the flexible electronics could be reconstructed rapidly and the residual stress calculated with the Stoney correction formula. This shadow moiré interferometer with PSI can thus improve the precision of residual stress measurement for flexible electronics.

  8. Double resonance calibration of g factor standards: Carbon fibers as a high precision standard.

    PubMed

    Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar

    2018-04-01

    The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high-precision g factor standard for general purposes. The double resonance experiment for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with very short correlation time. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
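    The precision-scaling claim above is easy to state numerically: a digital register gains one bit of precision per extra hardware bit, whereas an analogue encoding needs 2^n distinguishable levels of a single physical quantity, so each extra bit of precision doubles the required resolution. A toy illustration (our own, purely schematic):

```python
# Toy illustration of digital vs. analogue precision scaling.
def digital_hardware(bits):
    return bits          # hardware grows linearly with precision (one bit each)

def analogue_resolution(bits):
    return 2 ** bits     # distinguishable levels grow exponentially

growth_digital = digital_hardware(11) - digital_hardware(10)         # +1 unit
growth_analogue = analogue_resolution(11) / analogue_resolution(10)  # doubles
```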

  10. Microfluidic approach for encapsulation via double emulsions.

    PubMed

    Wang, Wei; Zhang, Mao-Jie; Chu, Liang-Yin

    2014-10-01

    Double emulsions, with inner drops well protected by the outer shells, show great potential as compartmentalized systems to encapsulate multiple components for protecting actives, masking flavor, and targetedly delivering and controllably releasing drugs. Precise control of the encapsulation characteristics of each component is critical to achieve an optimal therapeutic efficacy for pharmaceutical applications. Such controllable encapsulation can be realized by using microfluidic approaches for producing monodisperse double emulsions with versatile and controllable structures as the encapsulation system. The size, number and composition of the emulsion drops can be accurately manipulated for optimizing the encapsulation of each component for pharmaceutical applications. In this review, we highlight the outstanding advantages of controllable microfluidic double emulsions for highly efficient and precisely controllable encapsulation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1994-01-01

    The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user-supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K (8-bit) bytes. This program was developed in 1983 and last updated in 1987.
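    SIVA/DIVA itself is FORTRAN 77 and far more capable than what fits here; as a hedged illustration of the underlying predictor-corrector idea only, here is a fixed-order Adams scheme (AB2 predictor, trapezoidal AM2 corrector) in Python, applied to y' = -y. The function name and test problem are ours:

```python
# Fixed-order Adams predictor-corrector sketch (PECE mode), order 2.
def adams_pc(f, y0, t0, h, n_steps):
    """Integrate y' = f(t, y) with an AB2 predictor / AM2 (trapezoid) corrector."""
    t, y = t0, y0
    f_prev = f(t, y)
    # Bootstrap the multistep method with one Euler-predict/trapezoid-correct step.
    y_next = y + h * f_prev
    y = y + 0.5 * h * (f_prev + f(t + h, y_next))
    t += h
    ys = [y0, y]
    for _ in range(n_steps - 1):
        f_curr = f(t, y)
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 predictor
        y = y + 0.5 * h * (f_curr + f(t + h, y_pred))    # AM2 corrector
        f_prev = f_curr
        t += h
        ys.append(y)
    return ys

ys = adams_pc(lambda t, y: -y, 1.0, 0.0, 0.01, 100)
# After 100 steps of h = 0.01, ys[-1] ≈ exp(-1) ≈ 0.3679
```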

  12. Double-trap measurement of the proton magnetic moment at 0.3 parts per billion precision.

    PubMed

    Schneider, Georg; Mooser, Andreas; Bohman, Matthew; Schön, Natalie; Harrington, James; Higuchi, Takashi; Nagahama, Hiroki; Sellner, Stefan; Smorra, Christian; Blaum, Klaus; Matsuda, Yasuyuki; Quint, Wolfgang; Walz, Jochen; Ulmer, Stefan

    2017-11-24

    Precise knowledge of the fundamental properties of the proton is essential for our understanding of atomic structure as well as for precise tests of fundamental symmetries. We report on a direct high-precision measurement of the magnetic moment μp of the proton in units of the nuclear magneton μN. The result, μp = 2.79284734462 (±0.00000000082) μN, has a fractional precision of 0.3 parts per billion, improves the previous best measurement by a factor of 11, and is consistent with the currently accepted value. This was achieved with the use of an optimized double-Penning-trap technique. Provided a similar measurement of the antiproton magnetic moment can be performed, this result will enable a test of the fundamental symmetry between matter and antimatter in the baryonic sector at the 10^-10 level. Copyright © 2017, American Association for the Advancement of Science.
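    The quoted fractional precision can be checked directly from the stated value and uncertainty:

```python
# Quick check of the quoted fractional precision of the proton magnetic moment.
mu_p = 2.79284734462         # in units of the nuclear magneton
sigma = 0.00000000082        # stated uncertainty, same units
fractional = sigma / mu_p    # ≈ 2.9e-10, i.e. about 0.3 parts per billion
```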

  13. Solving lattice QCD systems of equations using mixed precision solvers on GPUs

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; Babich, R.; Barros, K.; Brower, R. C.; Rebbi, C.

    2010-09-01

    Modern graphics hardware is designed for highly parallel numerical tasks and promises significant cost and performance benefits for many scientific applications. One such application is lattice quantum chromodynamics (lattice QCD), where the main computational challenge is to efficiently solve the discretized Dirac equation in the presence of an SU(3) gauge field. Using NVIDIA's CUDA platform we have implemented a Wilson-Dirac sparse matrix-vector product that performs at up to 40, 135 and 212 Gflops for double, single and half precision respectively on NVIDIA's GeForce GTX 280 GPU. We have developed a new mixed precision approach for Krylov solvers using reliable updates which allows for full double precision accuracy while using only single or half precision arithmetic for the bulk of the computation. The resulting BiCGstab and CG solvers run in excess of 100 Gflops and, in terms of iterations until convergence, perform better than the usual defect-correction approach for mixed precision.
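    The paper's reliable-update solvers are compared against the usual defect-correction (iterative refinement) approach; below is a hedged NumPy sketch of that baseline, with the inner solve in single precision and residuals accumulated in double. Function names and the test matrix are ours, not from the paper:

```python
# Mixed-precision defect correction: cheap low-precision inner solves,
# double-precision residuals, full double-precision final accuracy.
import numpy as np

def mixed_precision_solve(A, b, inner_solve, tol=1e-12, max_outer=50):
    """Refine x until ||b - A x|| <= tol * ||b||; the inner solve runs in
    single precision while the residual is computed in double."""
    x = np.zeros_like(b)
    for _ in range(max_outer):
        r = b - A @ x                       # double-precision residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Inner solve in float32 (a stand-in for a single/half-precision Krylov solver).
        dx = inner_solve(A.astype(np.float32), r.astype(np.float32))
        x = x + dx.astype(np.float64)       # correction applied in double
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)  # well-conditioned test matrix
b = rng.standard_normal(50)
x = mixed_precision_solve(A, b, lambda A32, r32: np.linalg.solve(A32, r32))
```

Each outer iteration shrinks the residual by roughly the single-precision accuracy of the inner solve, so a handful of cheap low-precision solves reach full double-precision accuracy.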

  14. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DOE PAGES

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...

    2017-01-18

    Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
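    The multiscale difficulty described above can be seen in miniature: accumulating values that span many orders of magnitude in plain double precision silently loses information, which is one motivation for the higher-precision Quad stages of DQQ. A toy example (ours, not from the paper):

```python
# Plain double-precision accumulation vs. exactly-rounded compensated summation.
import math

values = [1.0e16, 1.0, -1.0e16]   # magnitudes spread over 16 orders
naive = sum(values)               # 1e16 + 1.0 rounds back to 1e16, so the 1.0 is lost
exact = math.fsum(values)         # compensated summation recovers it
```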

  15. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.

    Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.

  16. An Application of Response Surface Methodology to a Macroeconomic Model.

    DTIC Science & Technology

    1985-12-01

    [The abstract available for this report is an OCR-garbled fragment of the model's FORTRAN source listing: DOUBLE PRECISION declarations (TE, TW, TC, TN, TR, G, W2, R2, T, H, NP, NL, N8, NE, FR, PF, LB, ...) and model equations, together with the comment "The monetary sector is omitted (See Chapter IV)".]

  17. How GNSS Enables Precision Farming

    DOT National Transportation Integrated Search

    2014-12-01

    Precision farming: Feeding a Growing Population Enables Those Who Feed the World. Immediate and Ongoing Needs - population growth (more to feed) - urbanization (decrease in arable land) Double food production by 2050 to meet world demand. To meet thi...

  18. A dual-core double emulsion platform for osmolarity-controlled microreactor triggered by coalescence of encapsulated droplets.

    PubMed

    Guan, Xuewei; Hou, Likai; Ren, Yukun; Deng, Xiaokang; Lang, Qi; Jia, Yankai; Hu, Qingming; Tao, Ye; Liu, Jiangwei; Jiang, Hongyuan

    2016-05-01

    Droplet-based microfluidics has provided a means to generate multi-core double emulsions, which are versatile platforms for microreactors in materials science, synthetic biology, and chemical engineering. To provide new opportunities for double emulsion platforms, here, we report a glass capillary microfluidic approach to first fabricate osmolarity-responsive Water-in-Oil-in-Water (W/O/W) double emulsion containing two different inner droplets/cores and to then trigger the coalescence between the encapsulated droplets precisely. To achieve this, we independently control the swelling speed and size of each droplet in the dual-core double emulsion by controlling the osmotic pressure between the inner droplets and the collection solutions. When the inner two droplets in one W/O/W double emulsion swell to the same size and reach the instability of the oil film interface between the inner droplets, core-coalescence happens and this coalescence process can be controlled precisely. This microfluidic methodology enables the generation of highly monodisperse dual-core double emulsions and the osmolarity-controlled swelling behavior provides new stimuli to trigger the coalescence between the encapsulated droplets. Such swelling-caused core-coalescence behavior in dual-core double emulsion establishes a novel microreactor for nanoliter-scale reactions, which can protect reaction materials and products from being contaminated or released.

  19. A dual-core double emulsion platform for osmolarity-controlled microreactor triggered by coalescence of encapsulated droplets

    PubMed Central

    Guan, Xuewei; Hou, Likai; Ren, Yukun; Deng, Xiaokang; Lang, Qi; Jia, Yankai; Hu, Qingming; Tao, Ye; Liu, Jiangwei; Jiang, Hongyuan

    2016-01-01

    Droplet-based microfluidics has provided a means to generate multi-core double emulsions, which are versatile platforms for microreactors in materials science, synthetic biology, and chemical engineering. To provide new opportunities for double emulsion platforms, here, we report a glass capillary microfluidic approach to first fabricate osmolarity-responsive Water-in-Oil-in-Water (W/O/W) double emulsion containing two different inner droplets/cores and to then trigger the coalescence between the encapsulated droplets precisely. To achieve this, we independently control the swelling speed and size of each droplet in the dual-core double emulsion by controlling the osmotic pressure between the inner droplets and the collection solutions. When the inner two droplets in one W/O/W double emulsion swell to the same size and reach the instability of the oil film interface between the inner droplets, core-coalescence happens and this coalescence process can be controlled precisely. This microfluidic methodology enables the generation of highly monodisperse dual-core double emulsions and the osmolarity-controlled swelling behavior provides new stimuli to trigger the coalescence between the encapsulated droplets. Such swelling-caused core-coalescence behavior in dual-core double emulsion establishes a novel microreactor for nanoliter-scale reactions, which can protect reaction materials and products from being contaminated or released. PMID:27279935

  20. Optimization of A 2-Micron Laser Frequency Stabilization System for a Double-Pulse CO2 Differential Absorption Lidar

    NASA Technical Reports Server (NTRS)

    Chen, Songsheng; Yu, Jirong; Bai, Yingsin; Koch, Grady; Petros, Mulugeta; Trieu, Bo; Petzar, Paul; Singh, Upendra N.; Kavaya, Michael J.; Beyon, Jeffrey

    2010-01-01

    A carbon dioxide (CO2) Differential Absorption Lidar (DIAL) for accurate CO2 concentration measurement requires a frequency-locking system that achieves high locking precision and stability. We describe a frequency-locking system based on Frequency Modulation (FM), Phase Sensitive Detection (PSD), and a Proportional-Integral-Derivative (PID) feedback servo loop, and report the optimization of the loop sensitivity based on the characteristics of a variable-path-length CO2 gas cell. The CO2 gas cell is characterized with the HITRAN (2004) database. The method can be applied to any other frequency-locking system that references a gas absorption line.
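    As a generic illustration of the servo described above (not the instrument's actual control code), here is a discrete PID loop driving a toy frequency offset toward zero; the gains, time step, and plant model are all invented:

```python
# A discrete PID servo nudging a toy "frequency offset" toward lock.
# Gains, time step, and the plant model are invented for illustration.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the control signal for the current error sample."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01)
offset = 1.0                             # initial detuning (arbitrary units)
for _ in range(2000):
    offset -= 0.01 * pid.update(offset)  # control pushes the offset toward zero
```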

  1. The precision of wet atmospheric deposition data from national atmospheric deposition program/national trends network sites determined with collocated samplers

    USGS Publications Warehouse

    Nilles, M.A.; Gordon, J.D.; Schroder, L.J.

    1994-01-01

    A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits were greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. Median difference for analyte concentration and deposition was typically 1.5-2-times greater for samples collected during the winter than during other seasons at two northern sites. 
Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
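    The precision statistic used above reduces to medians of paired differences. A minimal sketch with invented concentrations (the pair-mean denominator used here is one common convention for relative difference, not necessarily NADP/NTN's exact formula):

```python
# Median absolute and median relative difference of collocated sampler pairs.
import statistics

def median_differences(primary, collocated):
    """Return (median absolute difference, median relative difference)."""
    abs_diffs = [abs(a - b) for a, b in zip(primary, collocated)]
    rel_diffs = [2.0 * abs(a - b) / (a + b)       # relative to the pair mean
                 for a, b in zip(primary, collocated) if (a + b) > 0]
    return statistics.median(abs_diffs), statistics.median(rel_diffs)

# Hypothetical weekly sulfate concentrations (mg/L) from a collocated pair.
primary    = [1.10, 0.85, 2.40, 0.55, 1.75]
collocated = [1.05, 0.88, 2.30, 0.57, 1.80]
med_abs, med_rel = median_differences(primary, collocated)
```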

  2. Airborne Double Pulsed 2-Micron IPDA Lidar for Atmospheric CO2 Measurement

    NASA Technical Reports Server (NTRS)

    Yu, Jirong; Petros, Mulugeta; Refaat, Tamer; Singh, Upendra

    2015-01-01

    We have developed an airborne 2-micron Integrated Path Differential Absorption (IPDA) lidar for atmospheric CO2 measurements. The double pulsed, high pulse energy lidar instrument can provide high-precision CO2 column density measurements.

  3. Simple and Double Alfven Waves: Hamiltonian Aspects

    NASA Astrophysics Data System (ADS)

    Webb, G. M.; Zank, G. P.; Hu, Q.; le Roux, J. A.; Dasgupta, B.

    2011-12-01

    We discuss the nature of simple and double Alfvén waves. Simple waves depend on a single phase variable φ, but double waves depend on two independent phase variables φ1 and φ2. The phase variables depend on the space and time coordinates x and t. Simple and double Alfvén waves have the same integrals, namely, the entropy, density, magnetic pressure, and group velocity (the sum of the Alfvén and fluid velocities) are constant throughout the flow. We present examples of both simple and double Alfvén waves, and discuss Hamiltonian formulations of the waves.

  4. Double-slit experiment with single wave-driven particles and its relation to quantum mechanics.

    PubMed

    Andersen, Anders; Madsen, Jacob; Reichelt, Christian; Rosenlund Ahl, Sonja; Lautrup, Benny; Ellegaard, Clive; Levinsen, Mogens T; Bohr, Tomas

    2015-07-01

    In a thought-provoking paper, Couder and Fort [Phys. Rev. Lett. 97, 154101 (2006)] describe a version of the famous double-slit experiment performed with droplets bouncing on a vertically vibrated fluid surface. In the experiment, an interference pattern in the single-particle statistics is found even though it is possible to determine unambiguously which slit the walking droplet passes. Here we argue, however, that the single-particle statistics in such an experiment will be fundamentally different from the single-particle statistics of quantum mechanics. Quantum mechanical interference takes place between different classical paths with precise amplitude and phase relations. In the double-slit experiment with walking droplets, these relations are lost since one of the paths is singled out by the droplet. To support our conclusions, we have carried out our own double-slit experiment, and our results, in particular the long and variable slit passage times of the droplets, cast strong doubt on the feasibility of the interference claimed by Couder and Fort. To understand theoretically the limitations of wave-driven particle systems as analogs to quantum mechanics, we introduce a Schrödinger equation with a source term originating from a localized particle that generates a wave while being simultaneously guided by it. We show that the ensuing particle-wave dynamics can capture some characteristics of quantum mechanics such as orbital quantization. However, the particle-wave dynamics cannot reproduce quantum mechanics in general, and we show that the single-particle statistics for our model in a double-slit experiment with an additional splitter plate differs qualitatively from that of quantum mechanics.

  5. An efficient mixed-precision, hybrid CPU-GPU implementation of a nonlinearly implicit one-dimensional particle-in-cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Barnes, Daniel C

    2012-01-01

    Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility, and dramatically improves the solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations are employed with the aid of an analytical performance model, the roofline model. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with a double-precision CPU JFNK field solver, overall performance gains of about 100x over the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.
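    The precision split described above can be sketched schematically: the particle mover works in single precision while diagnostics (and, in the real code, the JFNK field solver) stay in double. This is our own toy leapfrog illustration, not the paper's CUDA implementation:

```python
# Toy precision split: single-precision particle push, double-precision diagnostics.
import numpy as np

def push_particles(x32, v32, e_field64, dt, length):
    """One leapfrog step in single precision; returns updated (x, v)."""
    e32 = np.float32(e_field64)              # field handed down to the fast mover
    v32 = v32 + np.float32(dt) * e32         # kick (charge/mass = 1, toy units)
    x32 = (x32 + np.float32(dt) * v32) % np.float32(length)   # drift, periodic box
    return x32, v32

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 10_000).astype(np.float32)
v = rng.standard_normal(10_000).astype(np.float32)
for _ in range(100):
    x, v = push_particles(x, v, e_field64=0.0, dt=1e-3, length=1.0)
# Diagnostics (here, kinetic energy) are accumulated in double precision.
energy64 = float(0.5 * np.sum(v.astype(np.float64) ** 2))
```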

  6. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957

  7. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Superior Intraparietal Sulcus Controls the Variability of Visual Working Memory Precision.

    PubMed

    Galeano Weber, Elena M; Peters, Benjamin; Hahn, Tim; Bledowski, Christoph; Fiebach, Christian J

    2016-05-18

    Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM was shown to reduce the precision of WM and to increase the variability of WM precision over time. Although WM precision has recently been associated with neural codes, particularly in early sensory cortex, we so far have no understanding of the neural bases underlying the variability of WM precision, or of how WM precision is preserved under high load. To fill this gap, we combined human fMRI with computational modeling of behavioral performance in a delayed color-estimation WM task. Behavioral results replicate a reduction of WM precision and an increase of precision variability under high loads (5 > 3 > 1 colors). Load-dependent BOLD signals in primary visual cortex (V1) and superior intraparietal sulcus (IPS), measured during the WM task at 2-4 s after sample onset, were modulated by individual differences in load-related changes in the variability of WM precision. Whereas a stronger load-related BOLD increase in superior IPS was related to smaller increases in precision variability, thus stabilizing WM performance, the reverse was observed for V1. Finally, the detrimental effect of load on behavioral precision and precision variability was accompanied by a load-related decline in the accuracy of decoding the memory stimuli (colors) from left superior IPS. We suggest that the superior IPS may contribute to stabilizing visual WM performance by reducing the variability of memory precision in the face of higher load. This study investigates the neural bases of capacity limitations in visual working memory by combining fMRI with cognitive modeling of behavioral performance in human participants.
It provides evidence that the superior intraparietal sulcus (IPS) is a critical brain region that influences the variability of visual working memory precision between and within individuals (Fougnie et al., 2012; van den Berg et al., 2012) under increased memory load, possibly in cooperation with perceptual systems of the occipital cortex. These findings substantially extend our understanding of the nature of capacity limitations in visual working memory and their neural bases. Our work underlines the importance of integrating cognitive modeling with univariate and multivariate methods in fMRI research, thus improving our knowledge of brain-behavior relationships. Copyright © 2016 the authors 0270-6474/16/365623-13$15.00/0.

  9. Parameter estimation by decoherence in the double-slit experiment

    NASA Astrophysics Data System (ADS)

    Matsumura, Akira; Ikeda, Taishi; Kukita, Shingo

    2018-06-01

    We discuss a parameter estimation problem using quantum decoherence in the double-slit interferometer. We consider a particle coupled to a massive scalar field after the particle passes through the double slit, and solve the dynamics non-perturbatively in the coupling using the WKB approximation. This allows us to analyze an estimation problem that cannot be treated by the master-equation approach used in quantum-probe research. In this model, the scalar field suppresses the interference fringes of the particle, and the fringe pattern depends on the field mass and coupling. To evaluate the contrast and the estimation precision obtained from the pattern, we introduce the interferometric visibility and the Fisher information matrix of the field mass and coupling. For the fringe pattern observed on a distant screen, we derive a simple relation between the visibility and the Fisher matrix. Focusing on the estimation precision of the mass, we also find that the Fisher information characterizes the wave-particle duality in the double-slit interferometer.
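
The link between fringe contrast and attainable estimation precision can be illustrated numerically. The sketch below is a simplified scalar analogue, not the paper's model: it treats the screen pattern as the density p(x; V) = (1 + V cos x)/(2π), where V is the interferometric visibility, and checks the numerically integrated Fisher information for V against the closed form F(V) = (1/sqrt(1 - V^2) - 1)/V^2.

```python
import numpy as np

def fisher_info_numeric(V, n=200001):
    """Fisher information for the visibility V of the toy fringe density
    p(x; V) = (1 + V*cos(x)) / (2*pi) on [-pi, pi], via F = Int (dp/dV)^2 / p dx."""
    x = np.linspace(-np.pi, np.pi, n)
    p = (1 + V * np.cos(x)) / (2 * np.pi)
    dp = np.cos(x) / (2 * np.pi)          # derivative of p with respect to V
    f = dp**2 / p
    return np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0])  # trapezoid rule

V = 0.6
analytic = (1 / np.sqrt(1 - V**2) - 1) / V**2
```

As expected, the information diverges as V approaches 1: a higher-contrast pattern pins down the parameter more tightly.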

  10. Variability of methacholine bronchoprovocation and the effect of inhaled corticosteroids in mild asthma

    PubMed Central

    Sumino, Kaharu; Sugar, Elizabeth A.; Irvin, Charles G.; Kaminsky, David A.; Shade, Dave; Wei, Christine Y.; Holbrook, Janet T.; Wise, Robert A.; Castro, Mario

    2014-01-01

    Background The methacholine challenge test quantifies airway hyper-responsiveness, which is measured by the provocative concentration of methacholine causing a 20% decrease in forced expiratory volume in 1 second (PC20). The dose–response effect of inhaled corticosteroids (ICS) on PC20 has been inconsistent, and within-patient variability of PC20 is not well established. Objectives To determine the effect of high- vs low-dose ICS on PC20, and the within-patient variability in those with repeated measurements of PC20. Methods A randomized, double-masked, crossover trial was conducted in patients with asthma on controller medications with PC20 of 8 mg/mL or lower (n = 64) to evaluate the effect of high-dose (1,000 μg/d) vs low-dose (250 μg/d) fluticasone for 4 weeks on PC20. In addition, the variability of PC20 was assessed in participants who underwent 2 or 3 PC20 measurements on the same dose of ICS (n = 27) over a 4-week interval. Results Because there was a significant period effect, the dose comparison of the change in PC20 was assessed in the first treatment period. There was no significant difference in the change in PC20 for high- vs low-dose ICS (39% vs 30% increase, respectively; P = .87). The within- and between-participant variances for log PC20 were 0.84 and 0.96, respectively, with an intra-class correlation of 0.53, and, among those with repeated measurements, 37% of participants had a change in PC20 of more than 2 doubling doses. Conclusion The effect of ICS on PC20 is not dose dependent at fluticasone doses of 250 and 1,000 μg/d. Interpersonal variability in PC20 is large. This lack of measurement precision should be taken into account when interpreting any change in PC20. PMID:24507830
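
The reported intra-class correlation follows directly from the quoted variance components; a one-line check:

```python
# Reproducing the reported intra-class correlation from the variance
# components given in the abstract: ICC = between / (between + within).
within_var = 0.84   # within-participant variance of log PC20
between_var = 0.96  # between-participant variance of log PC20
icc = between_var / (between_var + within_var)
```

With these numbers, icc = 0.96 / 1.80, matching the reported 0.53: nearly half the observed variance in log PC20 is within-patient noise.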

  11. Precision of coherence analysis to detect cerebral autoregulation by near-infrared spectroscopy in preterm infants

    NASA Astrophysics Data System (ADS)

    Hahn, Gitte Holst; Christensen, Karl Bang; Leung, Terence S.; Greisen, Gorm

    2010-05-01

    Coherence between spontaneous fluctuations in arterial blood pressure (ABP) and the cerebral near-infrared spectroscopy signal can detect cerebral autoregulation. Because reliable measurement depends on signals with high signal-to-noise ratio, we hypothesized that coherence is more precisely determined when fluctuations in ABP are large rather than small. Therefore, we investigated whether adjusting for variability in ABP (variabilityABP) improves precision. We examined the impact of variabilityABP within the power spectrum in each measurement and between repeated measurements in preterm infants. We also examined total monitoring time required to discriminate among infants with a simulation study. We studied 22 preterm infants (GA<30) yielding 215 10-min measurements. Surprisingly, adjusting for variabilityABP within the power spectrum did not improve the precision. However, adjusting for the variabilityABP among repeated measurements (i.e., weighting measurements with high variabilityABP in favor of those with low) improved the precision. The evidence of drift in individual infants was weak. Minimum monitoring time needed to discriminate among infants was 1.3-3.7 h. Coherence analysis in low frequencies (0.04-0.1 Hz) had higher precision and statistically more power than in very low frequencies (0.003-0.04 Hz). In conclusion, a reliable detection of cerebral autoregulation takes hours and the precision is improved by adjusting for variabilityABP between repeated measurements.
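
The core quantity here, magnitude-squared coherence between ABP and the NIRS signal averaged over a frequency band, can be sketched as follows. The signals, sampling rate, coupling strength, and segment length below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical sketch: coherence between spontaneous ABP fluctuations and a
# NIRS signal, averaged over the low-frequency band (0.04-0.1 Hz) used in
# the study. All signal parameters are illustrative assumptions.
fs = 1.0                      # assumed 1 Hz sampling
rng = np.random.default_rng(1)
abp = rng.standard_normal(6000)
nirs = 0.7 * abp + 0.3 * rng.standard_normal(6000)  # passive coupling + noise

f, cxy = coherence(abp, nirs, fs=fs, nperseg=600)
band = (f >= 0.04) & (f <= 0.1)
mean_coherence = cxy[band].mean()
```

High coherence in this band is taken as evidence of pressure-passive (impaired) autoregulation; the paper's point is that the precision of this estimate depends on the ABP fluctuation power, motivating the weighting of repeated measurements.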

  12. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, denoted Tc) of a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, using the proposed method, we obtain reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc of the VPLM is generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us assess the robustness of applying nonlinear time-series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments with the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
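
The notion of a reliable computation time can be illustrated by running the fixed-parameter logistic map (r = 4, chaotic regime) in double precision alongside a 50-digit reference and counting the iterations until the two trajectories diverge. This is a hedged sketch of the idea, not the authors' method; the divergence threshold is an arbitrary choice.

```python
from decimal import Decimal, getcontext

# Estimate a "reliable computation time" Tc: iterate x -> r*x*(1-x) in
# double precision and in 50-digit decimal arithmetic, and count the steps
# until the trajectories differ by more than an (arbitrary) threshold.
getcontext().prec = 50
r, x64 = 4.0, 0.1            # double-precision trajectory
rhi, xhi = Decimal(4), Decimal("0.1")  # high-precision reference

Tc = None
for n in range(1, 200):
    x64 = r * x64 * (1.0 - x64)
    xhi = rhi * xhi * (1 - xhi)
    if abs(Decimal(x64) - xhi) > Decimal("1e-3"):
        Tc = n
        break
```

With a per-step rounding error of order 1e-16 and a Lyapunov exponent of ln 2, the error reaches 1e-3 after roughly 40-50 iterations, which is the scale of Tc this sketch recovers.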

  13. MMEJ-assisted gene knock-in using TALENs and CRISPR-Cas9 with the PITCh systems.

    PubMed

    Sakuma, Tetsushi; Nakade, Shota; Sakane, Yuto; Suzuki, Ken-Ichi T; Yamamoto, Takashi

    2016-01-01

    Programmable nucleases enable engineering of the genome by utilizing endogenous DNA double-strand break (DSB) repair pathways. Although homologous recombination (HR)-mediated gene knock-in is well established, it cannot necessarily be applied in every cell type and organism because of variable HR frequencies. We recently reported an alternative method of gene knock-in, named the PITCh (Precise Integration into Target Chromosome) system, assisted by microhomology-mediated end-joining (MMEJ). MMEJ harnesses independent machinery from HR, and it requires an extremely short homologous sequence (5-25 bp) for DSB repair, resulting in precise gene knock-in with a more easily constructed donor vector. Here we describe a streamlined protocol for PITCh knock-in, including the design and construction of the PITCh vectors, and their delivery to either human cell lines by transfection or to frog embryos by microinjection. The construction of the PITCh vectors requires only a few days, and the entire process takes ∼ 1.5 months to establish knocked-in cells or ∼ 1 week from injection to early genotyping in frog embryos.

  14. The Environmental Heat Flux Routine, Version 4 (EHFR-4) and Multiple Reflections Routine (MRR). Volume 2: Programmers reference manual

    NASA Technical Reports Server (NTRS)

    Dietz, J. B.

    1973-01-01

    The EHFR program reference information which is presented consists of the following subprogram detailed data: purpose and description of the routine, a list of the calling programs, an argument list description, nomenclature definitions, flow charts, and a compilation listing of each subprogram. Each of the EHFR subprograms was developed specifically for this routine; none is of a general-purpose nature. The single-precision accuracy available on the Univac 1108 is used exclusively in all but two of the 31 EHFR subprograms. The double precision variables required are identified in the nomenclature definitions of the two subprograms that require them. A concise definition of the purpose, function, and capabilities is given in each subprogram description. The description references the appropriate sections of Volume 1 of the report, which contain the applicable detailed definitions, governing equations, and assumptions used. The compilation listing of each subprogram defines the program/data storage requirements, identifies the labeled block common data required, and identifies other subprograms called during execution. For Vol. 1, see N73-31842.

  15. Pulse intensity characterization of the LCLS nanosecond double-bunch mode of operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yanwen; Decker, Franz-Josef; Turner, James

    The recent demonstration of the 'nanosecond double-bunch' operation mode, i.e. two X-ray pulses separated in time by between 0.35 and hundreds of nanoseconds, in increments of 0.35 ns, offers new opportunities to investigate ultrafast dynamics in diverse systems of interest. However, in order to reach its full potential, this mode of operation requires the precise characterization of the intensity of each X-ray pulse within each pulse pair for any time separation. Here, a transmissive single-shot diagnostic that achieves this goal for time separations larger than 0.7 ns with a precision better than 5% is presented. It also provides real-time monitoring feedback to help tune the accelerator parameters to deliver double-pulse intensity distributions optimized for specific experimental goals.

  16. Pulse intensity characterization of the LCLS nanosecond double-bunch mode of operation

    DOE PAGES

    Sun, Yanwen; Decker, Franz-Josef; Turner, James; ...

    2018-03-27

    The recent demonstration of the 'nanosecond double-bunch' operation mode, i.e. two X-ray pulses separated in time by between 0.35 and hundreds of nanoseconds, in increments of 0.35 ns, offers new opportunities to investigate ultrafast dynamics in diverse systems of interest. However, in order to reach its full potential, this mode of operation requires the precise characterization of the intensity of each X-ray pulse within each pulse pair for any time separation. Here, a transmissive single-shot diagnostic that achieves this goal for time separations larger than 0.7 ns with a precision better than 5% is presented. It also provides real-time monitoring feedback to help tune the accelerator parameters to deliver double-pulse intensity distributions optimized for specific experimental goals.

  17. Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results

    NASA Astrophysics Data System (ADS)

    Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono

    2017-04-01

    Sulawesi is located in a complex tectonic setting. The high seismicity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method, investigating the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked P- and S-wave arrival times of PKF events to determine initial hypocenter locations with the Hypoellipse method, using an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events cluster around the PKF and have smaller residual times than the initial locations. We will further refine the hypocenter locations by updating the arrival times with waveform cross-correlation as input for the double-difference relocation.

  18. Double Arm Linkage precision Linear motion (DALL) Carriage, a simplified, rugged, high performance linear motion stage for the moving mirror of a Fourier Transform Spectrometer or other system requiring precision linear motion

    NASA Astrophysics Data System (ADS)

    Johnson, Kendall B.; Hopkins, Greg

    2017-08-01

    The Double Arm Linkage precision Linear motion (DALL) carriage has been developed as a simplified, rugged, high-performance linear motion stage. Initially conceived as the moving-mirror stage of a Fourier Transform Spectrometer (FTS), it is applicable to any system requiring high-performance linear motion. It is based on rigid double-arm linkages connecting a base to a moving carriage through flexures. The design is monolithic: the system, including the flexural elements, is fabricated from one piece of material using high-precision machining. The monolithic design has many advantages. There are no joints to slip or creep, and there are no CTE (coefficient of thermal expansion) issues. This provides a design that is stable and robust both mechanically and thermally, and is expected to provide a wide operating temperature range, including cryogenic temperatures, and high tolerance to vibration and shock. Furthermore, it provides simplicity and ease of implementation, as there is no assembly or alignment of the mechanism: it comes out of the machining operation aligned, and there are no adjustments. A prototype has been fabricated and tested, showing superb shear performance and very promising tilt performance, making it applicable to corner-cube and flat-mirror FTS systems, respectively.

  19. Using hyperspectral data in precision farming applications

    USDA-ARS?s Scientific Manuscript database

    Precision farming practices such as variable rate applications of fertilizer and agricultural chemicals require accurate field variability mapping. This chapter investigated the value of hyperspectral remote sensing in providing useful information for five applications of precision farming: (a) Soil...

  20. In trans paired nicking triggers seamless genome editing without double-stranded DNA cutting.

    PubMed

    Chen, Xiaoyu; Janssen, Josephine M; Liu, Jin; Maggio, Ignazio; 't Jong, Anke E J; Mikkers, Harald M M; Gonçalves, Manuel A F V

    2017-09-22

    Precise genome editing involves homologous recombination between donor DNA and chromosomal sequences subjected to double-stranded DNA breaks made by programmable nucleases. Ideally, genome editing should be efficient, specific, and accurate. However, besides constituting potential translocation-initiating lesions, double-stranded DNA breaks (targeted or otherwise) are mostly repaired through unpredictable and mutagenic non-homologous recombination processes. Here, we report that the coordinated formation of paired single-stranded DNA breaks, or nicks, at donor plasmids and chromosomal target sites by RNA-guided nucleases based on CRISPR-Cas9 components, triggers seamless homology-directed gene targeting of large genetic payloads in human cells, including pluripotent stem cells. Importantly, in addition to significantly reducing the mutagenicity of the genome modification procedure, this in trans paired nicking strategy achieves multiplexed, single-step, gene targeting, and yields higher frequencies of accurately edited cells when compared to the standard double-stranded DNA break-dependent approach. CRISPR-Cas9-based gene editing involves double-strand breaks at target sequences, which are often repaired by mutagenic non-homologous end-joining. Here the authors use Cas9 nickases to generate coordinated single-strand breaks in donor and target DNA for precise homology-directed gene editing.

  1. Measurement of Sulfur Isotopic Composition (δ34S) by Multiple-Collector Thermal Ionization Mass Spectrometry (MC-TIMS) Using a 33S/36S Double Spike

    NASA Astrophysics Data System (ADS)

    Mann, J. L.; Kelly, W. R.

    2006-05-01

    A new analytical technique for the determination of δ34S will be described. The technique is based on the production of singly charged arsenic sulfide molecular ions (AsS+) by thermal ionization using silica gel as an emitter and combines multiple-collector thermal ionization mass spectrometry (MC-TIMS) with a 33S/36S double spike to correct instrumental fractionation. Because the double spike is added to the sample before chemical processing, both the isotopic composition and sulfur concentration are measured simultaneously. The accuracy and precision of the double spike technique are comparable to or better than those of modern gas source mass spectrometry, while requiring about a factor of 10 less sample. Δ33S effects can be determined directly in an unspiked sample without any assumptions about the value of k (mass dependent fractionation factor), which is currently required by gas source mass spectrometry. Three international sulfur standards (IAEA-S-1, IAEA-S-2, and IAEA-S-3) were measured to evaluate the precision and accuracy of the new technique and to evaluate the consensus values for these standards. Two different double spike preparations were used. The δ34S values (reported relative to Vienna Canyon Diablo Troilite (VCDT), δ34S (‰) = [((34S/32S)sample/(34S/32S)VCDT − 1) × 1000], with (34S/32S)VCDT = 0.0441626) determined were -0.32‰ ± 0.04‰ (1σ, n=4) and -0.31‰ ± 0.13‰ (1σ, n=8) for IAEA-S-1, 22.65‰ ± 0.04‰ (1σ, n=7) and 22.60‰ ± 0.06‰ (1σ, n=5) for IAEA-S-2, and -32.47‰ ± 0.07‰ (1σ, n=8) for IAEA-S-3. The amount of natural sample used for these analyses ranged from 0.40 μmoles to 2.35 μmoles. Each standard showed less than 0.5‰ variability (IAEA-S-1 < 0.4‰, IAEA-S-2 < 0.2‰, and IAEA-S-3 < 0.2‰). Our values for S-1 and S-2 are in excellent agreement with the consensus values and the values reported by other laboratories using both SF6 and SO2.
Our value for S-3 differs statistically from the Institute for Reference Materials and Measurement (IRMM) value and is slightly lower than the currently accepted consensus value (-32.3). Because the technique is based on thermal ionization of AsS+, and As is mononuclidic, corrections for interferences or for scale contraction/expansion are not required. The availability of MC-TIMS instruments in laboratories around the world makes this technique immediately available to a much larger scientific community who require highly accurate and precise measurements of sulfur.
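
The delta notation quoted above can be checked with a small worked example; the sample ratio below is hypothetical, chosen to land near the IAEA-S-2 value.

```python
# Worked example of the delta notation used in this record:
# delta34S (per mil) = ((34S/32S)_sample / (34S/32S)_VCDT - 1) * 1000,
# with the reference ratio (34S/32S)_VCDT = 0.0441626 from the abstract.
R_VCDT = 0.0441626
R_sample = 0.0451627            # hypothetical measured ratio (illustrative)
delta34S = (R_sample / R_VCDT - 1.0) * 1000.0   # per mil
```

A sample ratio of 0.0451627 yields about +22.6‰, i.e. roughly 2.3% more 34S-enriched than the VCDT reference.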

  2. A 1D radiative transfer benchmark with polarization via doubling and adding

    NASA Astrophysics Data System (ADS)

    Ganapol, B. D.

    2017-11-01

    Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes-vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding, enhanced through convergence acceleration. Benchmark solutions found in the literature are updated to seven places for reflectance and transmittance, as well as for angular flux. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.

  3. Elliptic Solvers for Mediterranean Sea Ocean Modeling,

    DTIC Science & Technology

    1984-05-01

    (The indexed text for this record consists of FORTRAN source fragments rather than an abstract. The recoverable content comprises declarations such as DOUBLE PRECISION AX, AY, AC(KH), ACKL and DOUBLE PRECISION WQ; DIMENSION and COMMON statements for the work arrays; and comments on workspace sizing, e.g. KWSP = 21*(112 + 2*21 + 6) and MXKC1 .GE. MXKC2 .GE. MXKC3, with MXKC3 = MXKP*3 as the maximum size of CV for restart.)

  4. Accuracy of the lattice-Boltzmann method using the Cell processor

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; de Fabritiis, G.; Giupponi, G.

    2008-11-01

    Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) than standard central processing units. However, they currently lack double-precision support and some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony Playstation3 and the QS20/QS21 IBM blade, obtaining a speed-up factor of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
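
The single- versus double-precision conservation issue the abstract reports can be demonstrated in miniature: accumulate many zero-sum "collision"-style updates to a conserved total and watch the float32 mass drift while float64 stays essentially exact. This illustrates the rounding mechanism only; it is not the authors' LB code.

```python
import numpy as np

def drift(dtype, steps=100):
    """Total-'mass' drift after many updates that conserve the sum exactly
    in real arithmetic but not in finite-precision arithmetic."""
    rho = np.ones(1000, dtype=dtype)          # densities, total mass = 1000
    total0 = rho.sum(dtype=np.float64)
    rng = np.random.default_rng(2)
    for _ in range(steps):
        delta = rng.standard_normal(1000).astype(dtype) * dtype(1e-3)
        delta -= delta.mean(dtype=dtype)      # zero-sum redistribution
        rho += delta                          # rounding error accumulates here
    return float(abs(rho.sum(dtype=np.float64) - total0))

d32 = drift(np.float32)
d64 = drift(np.float64)
```

The float32 drift is many orders of magnitude larger, which is the kind of reduced mass conservation the paper observed on hardware without double-precision support.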

  5. A high-finesse Fabry-Perot cavity with a frequency-doubled green laser for precision Compton polarimetry at Jefferson Lab

    DOE PAGES

    Rakhman, A.; Hafez, Mohamed A.; Nanda, Sirish K.; ...

    2016-03-31

    Here, a high-finesse Fabry-Perot cavity with a frequency-doubled continuous wave green laser (532 nm) has been built and installed in Hall A of Jefferson Lab for high precision Compton polarimetry. The infrared (1064 nm) beam from a ytterbium-doped fiber amplifier seeded by a Nd:YAG nonplanar ring oscillator laser is frequency doubled in a single-pass periodically poled MgO:LiNbO3 crystal. The maximum achieved green power at 5 W infrared pump power is 1.74 W, with a total conversion efficiency of 34.8%. The green beam is injected into the optical resonant cavity and enhanced up to 3.7 kW, with a corresponding enhancement factor of 3800. The polarization transfer function has been measured in order to determine the intra-cavity circular laser polarization within a measurement uncertainty of 0.7%. The PREx experiment at Jefferson Lab used this system for the first time and achieved 1.0% precision in polarization measurements of an electron beam with energy and current of 1.0 GeV and 50 μA.

  6. Aperture Fever and the Quality of AAVSO Visual Estimates: mu Cephei as an Example

    NASA Astrophysics Data System (ADS)

    Turner, D. G.

    2014-06-01

    (Abstract only) At the limits of human vision the eye can reach precisions of 10% or better in brightness estimates for stars. So why did the quality of AAVSO visual estimates suddenly drop to 50% or worse for many stars following World War II? Possibly it is a consequence of viewing variable stars through ever-larger aperture instruments than was the case previously, a time when many variables were observed without optical aid. An example is provided by the bright red supergiant variable mu Cephei, a star that has the potential to be a calibrating object for the extragalactic distance scale if its low-amplitude brightness variations are better defined. It appears to be a member of the open cluster Trumpler 37, so its distance and luminosity can be established provided one can pinpoint the amount of interstellar extinction between us and it. mu Cep appears to be a double-mode pulsator, as suggested previously in the literature, but with periods of roughly 700 and 1,000 days it is unexciting to observe and its red color presents a variety of calibration problems. Improving quality control for such variable stars is an issue important not only to the AAVSO, but also to science in general.

  7. In vivo short-term precision of hip structure analysis variables in comparison with bone mineral density using paired dual-energy X-ray absorptiometry scans from multi-center clinical trials.

    PubMed

    Khoo, Benjamin C C; Beck, Thomas J; Qiao, Qi-Hong; Parakh, Pallav; Semanick, Lisa; Prince, Richard L; Singer, Kevin P; Price, Roger I

    2005-07-01

    Hip structural analysis (HSA) is a technique for extracting strength-related structural dimensions of bone cross-sections from two-dimensional hip scan images acquired by dual energy X-ray absorptiometry (DXA) scanners. Heretofore the precision of the method has not been thoroughly tested in the clinical setting. Using paired scans from two large clinical trials involving a range of different DXA machines, this study reports the first precision analysis of HSA variables, in comparison with that of conventional bone mineral density (BMD) on the same scans. A key HSA variable, section modulus (Z), biomechanically indicative of bone strength during bending, had a short-term precision percentage coefficient of variation (CV%) in the femoral neck of 3.4-10.1%, depending on the manufacturer or model of the DXA equipment. Cross-sectional area (CSA), a determinant of bone strength during axial loading and closely aligned with conventional DXA bone mineral content, had a range of CV% from 2.8% to 7.9%. Poorer precision was associated with inadequate inclusion of the femoral shaft or femoral head in the DXA-scanned hip region. Precision of HSA-derived BMD varied between 2.4% and 6.4%. Precision of DXA manufacturer-derived BMD varied between 1.9% and 3.4%, arising from the larger analysis region of interest (ROI). The precision of HSA variables was not generally dependent on magnitude, subject height, weight, or conventional femoral neck densitometric variables. The generally poorer precision of key HSA variables in comparison with conventional DXA-derived BMD highlights the critical roles played by correct limb repositioning and choice of an adequate and appropriately positioned ROI.
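
The short-term precision (CV%) reported here is conventionally computed from paired scans with the duplicate-measurement formula SD = sqrt(sum(d_i^2) / (2n)); the sketch below uses made-up numbers for illustration.

```python
import numpy as np

# Hypothetical paired-scan precision calculation (data are illustrative):
# SD = sqrt(sum(d_i^2) / (2n)) over n duplicate pairs, CV% = 100 * SD / mean.
scan1 = np.array([2.10, 1.95, 2.30, 2.05, 2.20])   # e.g. section modulus Z
scan2 = np.array([2.18, 1.90, 2.21, 2.12, 2.30])   # repeat scan, same subjects
d = scan1 - scan2
sd = np.sqrt(np.sum(d**2) / (2 * len(d)))
cv_percent = 100 * sd / np.mean(np.concatenate([scan1, scan2]))
```

The factor of 2 in the denominator reflects that each difference pools the error of two measurements; with these illustrative values the CV% comes out near the lower end of the range the paper reports for Z.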

  8. Can APEX Represent In-Field Spatial Variability and Simulate Its Effects On Crop Yields?

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture, from variable rate nitrogen application to precision irrigation, promises improved management of resources by considering the spatial variability of topography and soil properties. Hydrologic models need to simulate the effects of this variability if they are to inform about t...

  9. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed; a local ontology is produced by building a variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology of heterogeneous databases is then proposed, drawing on the close relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from an existing heterogeneous database as the standard, a case study is carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis shows that the algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.

  10. A precision device needs precise simulation: Software description of the CBM Silicon Tracking System

    NASA Astrophysics Data System (ADS)

    Malygina, Hanna; Friese, Volker; CBM Collaboration

    2017-10-01

    Precise modelling of detectors in simulations is the key to the understanding of their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report, we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS makes use of double-sided silicon micro-strip sensors with double metal layers. We present a description of transport and detector response simulation, including all relevant physical effects like charge creation and drift, charge collection, cross-talk and digitization. Of particular importance and novelty is the description of the time behaviour of the detector, since its readout will not be externally triggered but continuous. We also cover some aspects of local reconstruction, which in the CBM case has to be performed in real-time and thus requires high-speed algorithms.

  11. LSI (Large Scale Integrated) Design for Testability. Final Report of Design, Demonstration, and Testability Analysis.

    DTIC Science & Technology

    1983-11-01

    compound operations, with status. (h) Pre-programmed CRC and double-precision multiply/divide algorithms. (i) Double-length accumulator with full...

  12. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby transforming the optimization problem into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  13. Sparse Matrix Software Catalog, Sparse Matrix Symposium 1982, Fairfield Glade, Tennessee, October 24-27, 1982,

    DTIC Science & Technology

    1982-10-27

    are buried within a much larger, special-purpose package. We regret such omissions, but to have reached the practitioners in each of the diverse... sparse matrix (form PAQ). 4. Method of solution: Distribution count sort. 5. Programming language: FORTRAN. 6. Precision: Single and double precision. 7. ...

  14. Measurement of theta13 in the double Chooz experiment

    NASA Astrophysics Data System (ADS)

    Yang, Guang

    Neutrino oscillation has been established for over a decade. The mixing angle theta13 is one of the parameters that is most difficult to measure due to its small value. Currently, reactor antineutrino experiments provide the best knowledge of theta13, using the electron antineutrino disappearance phenomenon. The most compelling advantage is the high intensity of the reactor antineutrino rate. The Double Chooz experiment, located near the border of France and Belgium, is such an experiment, which aims to make one of the most precise theta13 measurements in the world. Double Chooz has a single-detector phase and a double-detector phase. For the single-detector phase, the limit of the theta13 sensitivity comes mostly from the reactor flux. However, the uncertainty on the reactor flux is highly suppressed in the double-detector phase. Oscillation analyses for the two phases have different strategies but need similar inputs, including background estimation, detection systematics evaluation, energy reconstruction and so on. The Double Chooz detectors are filled with gadolinium (Gd) doped liquid scintillator and use the inverse beta decay (IBD) signal, so that for each phase there are two independent theta13 measurements based on different neutron capture agents (Gd or hydrogen). Multiple oscillation analyses are performed to provide the best theta13 results. In addition to the theta13 measurement, Double Chooz is also an excellent "playground" for diverse physics research. For example, a 252Cf calibration source study has been done to understand the spontaneous decay of this radioactive source. Further, Double Chooz also has the ability to perform a sterile neutrino search in a certain mass region. Moreover, some new physics ideas can be tested in Double Chooz. In this thesis, the detailed methods used to provide a precise theta13 measurement will be described and the other physics topics will be introduced.

  15. New high-precision orbital and physical parameters of the double-lined low-mass spectroscopic binary BY Draconis

    NASA Astrophysics Data System (ADS)

    Hełminiak, K. G.; Konacki, M.; Muterspaugh, M. W.; Browne, S. E.; Howard, A. W.; Kulkarni, S. R.

    2012-01-01

    We present the most precise orbital and physical parameters to date of the well-known short-period (P = 5.975 d), eccentric (e = 0.3) double-lined spectroscopic binary BY Draconis (BY Dra), a prototype of a class of late-type, active, spotted flare stars. We calculate the full spectroscopic/astrometric orbital solution by combining our precise radial velocities (RVs) and the archival astrometric measurements from the Palomar Testbed Interferometer (PTI). The RVs were derived from high-resolution echelle spectra taken between 2004 and 2008 with the Keck I/high-resolution echelle spectrograph, Shane/CAT/HamSpec and TNG/SARG telescopes/spectrographs, using our novel iodine-cell technique for double-lined binary stars. The RVs and available PTI astrometric data spanning over eight years allow us to reach a 0.2-0.5 per cent level of precision in M sin^3 i and the parallax, but the geometry of the orbit (i ≃ 154°) limits the absolute mass precision to 3.3 per cent, which is still an order of magnitude better than in previous studies. We compare our results with a set of Yonsei-Yale theoretical stellar isochrones and conclude that BY Dra is probably a main-sequence system more metal-rich than the Sun. Using the orbital inclination and the available rotational velocities of the components, we also conclude that the rotational axes of the components are likely misaligned with the orbital angular momentum. Given BY Dra's main-sequence status, late spectral type and relatively short orbital period, its high orbital eccentricity and probable spin-orbit misalignment are not in agreement with tidal theory. This disagreement may possibly be explained by smaller rotational velocities of the components and the presence of a substellar-mass companion to BY Dra AB.

  16. Precision measurements of the RSA method using a phantom model of hip prosthesis.

    PubMed

    Mäkinen, Tatu J; Koort, Jyri K; Mattila, Kimmo T; Aro, Hannu T

    2004-04-01

    Radiostereometric analysis (RSA) has become one of the recommended techniques for pre-market evaluation of new joint implant designs. In this study we evaluated the effect of repositioning of the X-ray tubes and the phantom model on the precision of the RSA method. In the precision measurements, we utilized mean error of rigid body fitting (ME) values as an internal control for the examinations. The ME value characterizes relative motion among the markers within each rigid body and is conventionally used to detect loosening of a bone marker. Three experiments, each consisting of 10 double examinations, were performed. In the first experiment, the X-ray tubes and the phantom model were not repositioned between the two exposures of a double examination. In experiments two and three, the X-ray tubes were repositioned between the two exposures. In addition, the position of the phantom model was changed in experiment three. Results showed that significant differences could be found in 2 of 12 comparisons when evaluating the translation and rotation of the prosthetic components. Repositioning procedures increased ME values, mimicking deformation of rigid body segments. Thus, the ME value seemed to be a more sensitive parameter than migration values in this study design. These results confirm the importance of a standardized radiographic technique and accurate patient positioning for RSA measurements. Standardization and calibration procedures should be performed with phantom models in order to avoid an unnecessary radiation dose to the patients. The present model provides the means to establish and follow the intra-laboratory precision of the RSA method. The model is easily applicable in any research unit and allows the comparison of precision values between the different laboratories of multi-center trials.

  17. Operator Variability in Scan Positioning is a Major Component of HR-pQCT Precision Error and is Reduced by Standardized Training

    PubMed Central

    Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.

    2017-01-01

    Introduction: HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested whether custom training software led to superior reproducibility in new operators compared to experienced operators. Methods: To evaluate the operator contribution to in vivo measurement precision, we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner’s software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results: In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). Newly trained operators achieved intra-operator reproducibility comparable to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion: Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets. 
Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931

  18. Aerodynamic/acoustic performance of YJ101/double bypass VCE with coannular plug nozzle

    NASA Technical Reports Server (NTRS)

    Vdoviak, J. W.; Knott, P. R.; Ebacker, J. J.

    1981-01-01

    Results of a forward Variable Area Bypass Injector (VABI) test and a coannular nozzle test performed on a YJ101 Double Bypass Variable Cycle Engine are reported. These components are intended for use on a Variable Cycle Engine. The forward Variable Area Bypass Injector test demonstrated the mode-shifting capability between single and double bypass operation with lower than predicted aerodynamic losses in the bypass duct. The acoustic nozzle test demonstrated that coannular noise suppression was between 4 and 6 PNdB in the aft quadrant. The YJ101 VCE equipped with the forward VABI and the coannular exhaust nozzle performed as predicted, with exhaust system aerodynamic losses lower than predicted in both single and double bypass modes. Extensive acoustic data were collected, including far-field, near-field, and sound separation/internal probe measurements, as well as Laser Velocimeter traverses.

  19. STT Doubles with Large Delta M - Part VII: Andromeda, Pisces, Auriga

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Nanson, John

    2017-01-01

    The results of visual double star observing sessions suggested a pattern for STT doubles with large delta M of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. As with the other objects covered so far, several of the components show parameters quite different from the current WDS data.

  20. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  1. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial NVIDIA GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
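
    The precision issue the abstract identifies — forming the mean anomaly from a Julian-date-scale observation time and a short orbital period — is easy to reproduce on the CPU. The sketch below (NumPy, with invented orbital values; the paper's CUDA kernels are not reproduced) solves Kepler's equation by Newton-Raphson and contrasts the float32 and float64 phase reductions.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric
    anomaly E by Newton-Raphson iteration."""
    M = np.asarray(M, dtype=np.float64)
    E = M.copy()                       # initial guess E0 = M
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

# The precision-sensitive step: reducing a Julian-date-scale time t
# modulo a short period P before forming the mean anomaly
# M = 2*pi*(t mod P)/P.  A float32 ulp at t ~ 2.45e6 days is 0.25 days,
# and the rounding of P is amplified by ~700,000 elapsed orbits, so the
# orbital phase is essentially lost in single precision.
P, t = 3.52474859, 2454321.75          # hypothetical period and epoch
M64 = 2 * np.pi * (np.float64(t) % np.float64(P)) / np.float64(P)
M32 = 2 * np.pi * (np.float32(t) % np.float32(P)) / np.float32(P)
print("mean anomaly (float64 reduction):", float(M64))
print("mean anomaly (float32 reduction):", float(M32))
print("eccentric anomaly:", float(solve_kepler(M64, e=0.3)))
```

    This is why the paper keeps the phase reduction in double precision (or compensated arithmetic) while running the bulk of the solver in single precision.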

  2. The Double Star Orbit Initial Value Problem

    NASA Astrophysics Data System (ADS)

    Hensley, Hagan

    2018-04-01

    Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.

  3. Overcoming the Power Wall by Exploiting Application Inexactness and Emerging COTS Architectural Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagan, Mike; Schlachter, Jeremy; Yoshii, Kazutomo

    Abstract—Energy and power consumption are major limitations to the continued scaling of computing systems. Inexactness, whereby the quality of the solution can be traded for energy savings, has been proposed as a counterintuitive approach to overcoming those limitations. In the past, however, inexactness has necessitated highly customized or specialized hardware. In order to move away from customization, earlier work [4] showed that by interpreting the precision of the computation as the parameter to trade to achieve inexactness, weather prediction and PageRank could both yield energy savings through reduced precision while preserving the quality of the application. However, this required representations of numbers that were not readily available on commercial off-the-shelf (COTS) processors. In this paper, we explore opportunities for extending the notion of trading precision for energy savings into the COTS world. We provide a model and analyze the opportunities and behavior of all three IEEE-compliant precision values available on COTS processors: (i) double, (ii) single, and (iii) half. Through a limit study based on measurements, we show that the energy savings in going from double to half precision can potentially exceed a factor of four, largely due to memory and cache effects.
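
    As a rough illustration of the double/single/half trade-off (not the paper's measurement setup — energy is not measured here, and the array size and seed are invented), the NumPy sketch below sums the same data at the three IEEE precisions; bytes stored stand in for the memory and cache traffic the study identifies as the dominant source of savings.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000) * 1e-3      # reference data held in float64
true = x.sum()

errors = {}
for dtype in (np.float64, np.float32, np.float16):
    name = np.dtype(dtype).name
    xr = x.astype(dtype)              # storage at reduced precision
    s = float(xr.sum(dtype=dtype))    # accumulate at the same precision
    errors[name] = abs(s - true) / true
    print(f"{name:8s} {xr.nbytes:9d} bytes  rel. error {errors[name]:.1e}")
```

    Halving the width halves the bytes moved at each step, while the relative error grows by several orders of magnitude per step — the precision-for-resources trade the abstract quantifies in energy terms.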

  4. Doubling down on phosphorylation as a variable peptide modification.

    PubMed

    Cooper, Bret

    2016-09-01

    Some mass spectrometrists believe that searching for variable PTMs like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false-positive matching. The basis for this is the premise that the algorithm compares a spectrum to both a nonphosphorylated peptide candidate and a phosphorylated candidate, which is double the number of candidates compared to a search with no possible phosphorylation. Hence, if the search space doubles, false-positive matching could increase accordingly as the algorithm considers more candidates to which false matches could be made. In this study, it is shown that the search for variable phosphoserine and phosphothreonine modifications does not always double the search space or unduly impinge upon the FDR. A breakdown of how one popular database-search algorithm deals with variable phosphorylation is presented. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
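
    The combinatorics at issue can be made concrete. With k variable S/T sites a peptide has up to 2^k phospho-forms, but search engines cap the number of simultaneous modifications, so the candidate count grows as a sum of binomial coefficients rather than doubling per site. A minimal counting sketch (hypothetical function and peptide, not the database-search algorithm examined in the paper):

```python
from math import comb

def phospho_candidates(peptide, max_mods=3):
    """Count the peptide candidates a database search would enumerate
    when S/T phosphorylation is a variable modification and at most
    max_mods sites may be modified at once (the unmodified form counts)."""
    k = sum(1 for aa in peptide if aa in "ST")
    return sum(comb(k, m) for m in range(min(max_mods, k) + 1))

# Three S/T sites: an uncapped search considers 2^3 = 8 forms, but
# capping at one simultaneous modification leaves only 4 candidates.
print(phospho_candidates("SAMPLESTK", max_mods=3))   # prints 8
print(phospho_candidates("SAMPLESTK", max_mods=1))   # prints 4
```

    Whether any of these candidates is actually scored also depends on precursor mass filtering, which is part of why the effective search space does not simply double, as the study shows.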

  5. High Precision Material Study at Near Millimeter Wavelengths.

    DTIC Science & Technology

    1983-08-30

    propagating through these tubes, the beams are allowed to expand for a short distance in free space before they are combined by a mylar-film beamsplitter... (Laser Precision Rkp-5200). The attenuation of the low-loss EH mode in circular plexiglass tubes of I.D. 0.95 cm, and of various lengths... pyroelectric detectors (Laser Precision Rkp-545): L1, L2, and L3, TPX lens; BS1, wire-mesh beam splitter; BS2, mylar-film beam splitter; DPC, double-prism coupler

  6. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  7. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  8. Is Perruchet's dissociation between eyeblink conditioned responding and outcome expectancy evidence for two learning systems?

    PubMed

    Weidemann, Gabrielle; Tangen, Jason M; Lovibond, Peter F; Mitchell, Christopher J

    2009-04-01

    P. Perruchet (1985b) showed a double dissociation of conditioned responses (CRs) and expectancy for an airpuff unconditioned stimulus (US) in a 50% partial reinforcement schedule in human eyeblink conditioning. In the Perruchet effect, participants show an increase in CRs and a concurrent decrease in expectancy for the airpuff across runs of reinforced trials; conversely, participants show a decrease in CRs and a concurrent increase in expectancy for the airpuff across runs of nonreinforced trials. Three eyeblink conditioning experiments investigated whether the linear trend in eyeblink CRs in the Perruchet effect is a result of changes in associative strength of the conditioned stimulus (CS), US sensitization, or learning the precise timing of the US. Experiments 1 and 2 demonstrated that the linear trend in eyeblink CRs is not the result of US sensitization. Experiment 3 showed that the linear trend in eyeblink CRs is present with both a fixed and a variable CS-US interval and so is not the result of learning the precise timing of the US. The results are difficult to reconcile with a single learning process model of associative learning in which expectancy mediates CRs. Copyright (c) 2009 APA, all rights reserved.

  9. beta-Blockade used in precision sports: effect on pistol shooting performance.

    PubMed

    Kruse, P; Ladefoged, J; Nielsen, U; Paulev, P E; Sørensen, J P

    1986-08-01

    In a double-blind cross-over study of 33 marksmen (standard pistol, 25 m) the adrenergic beta 1-receptor blocker, metoprolol, was compared to placebo. Metoprolol obviously improved the pistol shooting performance compared with placebo. Shooting improved by 13.4% of possible improvement (i.e., 600 points minus actual points obtained) as an average (SE = 4%, 2P less than 0.002). The most skilled athletes demonstrated the clearest metoprolol improvement. We found no correlation between the shooting improvement and changes in the cardiovascular variables (i.e., changes of heart rate and systolic blood pressure) and no correlation to the estimated maximum O2 uptake. The shooting improvement is an effect of metoprolol on hand tremor. Emotional increase of heart rate and systolic blood pressure seem to be a beta 1-receptor phenomenon.

  10. Intraocular lens based on double-liquid variable-focus lens.

    PubMed

    Peng, Runling; Li, Yifan; Hu, Shuilan; Wei, Maowei; Chen, Jiabi

    2014-01-10

    In this work, the crystalline lens in the Gullstrand-Le Grand human eye model is replaced by a double-liquid variable-focus lens, the structure data of which are based on theoretical analysis and experimental results. When the pseudoaphakic eye is built in Zemax, aspherical surfaces are introduced to the double-liquid variable-focus lens to reduce the axial spherical aberration existent in the system. After optimization, the zoom range of the pseudoaphakic eye greatly exceeds that of normal human eyes, and the spot size on an image plane basically reaches the normal human eye's limit of resolution.

  11. STT Doubles with Large Delta_M - Part VIII: Tau Per Ori Cam Mon Cnc Peg

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Nanson, John

    2017-04-01

    The results of visual double star observing sessions suggested a pattern for STT doubles with large delta_M of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. As with the other STT objects covered so far, several of the components show parameters quite different from the current WDS data.

  12. Fabrication of large diffractive optical elements in thick film on a concave lens surface.

    PubMed

    Xie, Yongjun; Lu, Zhenwu; Li, Fengyou

    2003-05-05

    We demonstrate experimentally the technique of fabricating large diffractive optical elements (DOEs) in thick film on a concave lens surface (mirrors) with precise alignment by using the strategy of double exposure. We adopt the method of double exposure to overcome the difficulty of processing thick photoresist on a large curved substrate. A uniform thick film with arbitrary thickness on a concave lens can be obtained with this technique. We fabricate a large concentric circular grating with a 10-μm period on a concave lens surface in film with a thickness of 2.0 μm after development. It is believed that this technique can also be used to fabricate larger DOEs in thicker film on the concave or convex lens surface with precise alignment. There are other potential applications of this technique, such as fabrication of micro-optoelectromechanical systems (MOEMS) or microelectromechanical systems (MEMS) and fabrication of microlens arrays on a large concave lens surface or convex lens surface with precise alignment.

  13. Precision Measurements of A_1^n in the Deep Inelastic Regime

    DOE PAGES

    Parno, Diana; Flay, David; Posik, Matthew; ...

    2015-04-07

    We have performed precision measurements of the double-spin virtual-photon asymmetry A_1^n on the neutron in the deep inelastic scattering regime, using an open-geometry, large-acceptance spectrometer and a longitudinally and transversely polarized ³He target. Our data cover a wide kinematic range 0.277 ≤ x ≤ 0.548 at an average Q² value of 3.078 (GeV/c)², doubling the available high-precision neutron data in this x range. We have combined our results with world data on proton targets to make a leading-order extraction of the ratio of polarized-to-unpolarized parton distribution functions for up quarks and for down quarks in the same kinematic range. Our data are consistent with a previous observation of an A_1^n zero crossing near x = 0.5. We find no evidence of a transition to a positive slope in (Δd + Δd̄)/(d + d̄) up to x = 0.548.

  14. High-precision two-dimensional atom localization from four-wave mixing in a double-Λ four-level atomic system

    NASA Astrophysics Data System (ADS)

    Shui, Tao; Yang, Wen-Xing; Chen, Ai-Xi; Liu, Shaopeng; Li, Ling; Zhu, Zhonghu

    2018-03-01

    We propose a scheme for high-precision two-dimensional (2D) atom localization via four-wave mixing (FWM) in a four-level double-Λ atomic system. Due to the position-dependent atom-field interaction, the 2D position information of the atoms can be directly determined by measuring the normalized light intensity of the output FWM-generated field. We further show that, when the position-dependent generated FWM field becomes sufficiently intense, efficient back-coupling to the FWM generating state becomes important. This back-coupling pathway leads to competitive multiphoton destructive interference of the FWM generating state by the three supplied fields and one internally generated field. We find that the precision of 2D atom localization can be improved significantly by the multiphoton destructive interference and depends sensitively on the frequency detunings and the pump field intensity. Interestingly enough, we show that adjusting the frequency detunings and the pump field intensity can significantly modify the FWM efficiency and consequently lead to a redistribution of the atoms. As a result, the atom can be localized in one of four quadrants while maintaining the precision of atom localization.

  15. SU(2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we achieved an excellent performance, a 200× speed-up over one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
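
    The single- versus double-precision comparison made in the paper can be illustrated on the CPU with the quaternion parametrization of SU(2) (U = a0 + i a·σ with a0² + |a|² = 1), where loss of unitarity shows up as drift of the quaternion norm. This is a NumPy sketch with invented parameters, not the CUDA code the paper describes.

```python
import numpy as np

def su2_mul(x, y):
    """Multiply two SU(2) elements stored as real quaternions
    (a0, a1, a2, a3) with a0^2 + |a|^2 = 1."""
    a0, a1, a2, a3 = x
    b0, b1, b2, b3 = y
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ], dtype=x.dtype)

def norm_drift(dtype, n_links=10_000, seed=1):
    """Chain-multiply random SU(2) links and report how far the product
    drifts from unit norm -- a proxy for the unitarity loss that forces
    more frequent re-unitarization in single precision."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal((n_links, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # normalize in float64
    q = q.astype(dtype)                             # store at test precision
    p = np.array([1, 0, 0, 0], dtype=dtype)         # identity element
    for link in q:
        p = su2_mul(p, link)
    return abs(float(np.dot(p, p)) - 1.0)

print("float32 norm drift:", norm_drift(np.float32))
print("float64 norm drift:", norm_drift(np.float64))
```

    The quaternion form also halves the storage per link relative to a full complex 2×2 matrix, which is one reason it is popular in GPU lattice codes where memory bandwidth dominates.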

  16. Double the dates and go for Bayes - Impacts of model choice, dating density and quality on chronologies

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.

    2018-05-01

    Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.

  17. Kinematics and design of a class of parallel manipulators

    NASA Astrophysics Data System (ADS)

    Hertz, Roger Barry

    1998-12-01

    This dissertation is concerned with the kinematic analysis and design of a class of three-degree-of-freedom, spatial parallel manipulators. The class of manipulators is characterized by two platforms, between which are three legs, each possessing a succession of revolute, spherical, and revolute joints. The class is termed the "revolute-spherical-revolute" class of parallel manipulators. Two members of this class are examined. The first mechanism is a double-octahedral variable-geometry truss, and the second is termed a double tripod. The history of the mechanisms is explored: the variable-geometry truss dates back to 1984, while predecessors of the double tripod mechanism date back to 1869. This work centers on the displacement analysis of these three-degree-of-freedom mechanisms. Two types of problem are solved: the forward displacement analysis (forward kinematics) and the inverse displacement analysis (inverse kinematics). The kinematic model of the class of mechanism is general in nature. A classification scheme for the revolute-spherical-revolute class of mechanism is introduced, which uses dominant geometric features to group designs into 8 different sub-classes. The forward kinematics problem is discussed: given a set of independently controllable input variables, solve for the relative position and orientation between the two platforms. For the variable-geometry truss, the controllable input variables are assumed to be the linear (prismatic) joints. For the double tripod, the controllable input variables are the three revolute joints adjacent to the base (proximal) platform. Multiple solutions are presented to the forward kinematics problem, indicating that there are many different positions (assemblies) that the manipulator can assume with equivalent inputs. 
    For the double tripod these solutions can be expressed as a 16th degree polynomial in one unknown, while for the variable-geometry truss there exist two 16th degree polynomials, giving rise to 256 solutions. For special cases of the double tripod, the forward kinematics problem is shown to have a closed-form solution. Numerical examples are presented for the solution to the forward kinematics. A double tripod is presented that admits 16 unique and real forward kinematics solutions. Another example, for a variable-geometry truss, is given that possesses 64 real solutions: 8 for each 16th degree polynomial. The inverse kinematics problem is also discussed: given the relative position of the hand (end-effector), which is rigidly attached to one platform, solve for the independently controlled joint variables. Iterative solutions are proposed for both the variable-geometry truss and the double tripod. For special cases of both mechanisms, closed-form solutions are given. The practical problems of designing, building, and controlling a double-tripod manipulator are addressed. The resulting manipulator is a first-of-its-kind prototype of a tapered (asymmetric) double-tripod manipulator. Real-time forward and inverse kinematics algorithms on an industrial robot controller are presented. The resulting performance of the prototype is impressive: it was able to achieve a maximum tool-tip speed of 4064 mm/s, a maximum acceleration of 5 g, and a cycle time of 1.2 seconds for a typical pick-and-place pattern.

  18. Effects of Simple Leaching of Crushed and Powdered Materials on High-precision Pb Isotope Analyses

    NASA Astrophysics Data System (ADS)

    Todd, E.; Stracke, A.

    2013-12-01

    We present new results of simple leaching experiments on the Pb isotope composition of USGS standard reference material powders and on ocean island basalt whole-rock splits and powders. Rock samples were leached with 6N HCl in two steps, first hot and then in an ultrasonic bath, and washed with ultrapure H2O before conventional sample digestion and chromatographic purification of Pb. Pb isotope ratios were determined by Tl-doped MC-ICP-MS. Intra- and inter-session analytical reproducibility of repeated analyses of both synthetic Pb solutions and Pb from single digests of chemically processed natural samples was generally < 100 ppm (2 S.D.). The comparison of leached and unleached samples shows that leaching reliably removes variable amounts of different contaminants for different starting materials. For repeated digests of a single sample, the leached samples reproduce better than the unleached ones, showing that leaching effectively removes heterogeneously distributed extraneous Pb. However, the reproducibility of repeated digests of variably contaminated natural samples is up to an order of magnitude worse than the analytical reproducibility of ca. 100 ppm. More complex leaching methods (e.g., Nobre Silva et al., 2009) yield Pb isotope ratios within error of and with similar reproducibility to our method, showing that the simple leaching method is reliable. The remaining Pb isotope heterogeneity of natural samples, which typically exceeds 100 ppm, is thus attributed to inherent isotopic sample heterogeneity. Tl-doped MC-ICP-MS Pb ratio determination is therefore a sufficiently precise method for Pb isotope analyses of natural rocks. More precise Pb double- or triple-spike methods (e.g., Galer, 1999; Thirlwall, 2000) may exploit their full potential only in cases where natural isotopic sample heterogeneity is demonstrably negligible. References: Galer, S., 1999, Chem. Geol. 157, 255-274. Nobre Silva et al., 2009, Geochemistry Geophysics Geosystems 10, Q08012. Thirlwall, M.F., 2000, Chem. Geol. 163, 299-322.

  19. Nonlocal Poisson-Fermi double-layer models: Effects of nonuniform ion sizes on double-layer structure

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Jiang, Yi

    2018-05-01

    This paper reports a nonuniform ionic size nonlocal Poisson-Fermi double-layer model (nuNPF) and a uniform ionic size nonlocal Poisson-Fermi double-layer model (uNPF) for an electrolyte mixture of multiple ionic species, variable voltages on electrodes, and variable induced charges on boundary segments. The finite element solvers of nuNPF and uNPF are developed and applied to typical double-layer tests defined on a rectangular box, a hollow sphere, and a hollow rectangle with a charged post. Numerical results show that nuNPF can significantly improve the quality of the ionic concentrations and electric fields generated from uNPF, implying that the effect of nonuniform ion sizes is a key consideration in modeling the double-layer structure.

  20. Nonalcoholic steatohepatitis in precision medicine: Unraveling the factors that contribute to individual variability.

    PubMed

    Clarke, John D; Cherrington, Nathan J

    2015-07-01

    There are numerous factors in individual variability that make the development and implementation of precision medicine a challenge in the clinic. One of the main goals of precision medicine is to identify the correct dose for each individual in order to maximize therapeutic effect and minimize the occurrence of adverse drug reactions. Many promising advances have been made in identifying and understanding how factors such as genetic polymorphisms can influence drug pharmacokinetics (PK) and contribute to variable drug response (VDR), but it is clear that there remain many unidentified variables. Underlying liver diseases such as nonalcoholic steatohepatitis (NASH) alter absorption, distribution, metabolism, and excretion (ADME) processes and must be considered in the implementation of precision medicine. There is still a profound need for clinical investigation into how NASH-associated changes in ADME mediators, such as metabolism enzymes and transporters, affect the pharmacokinetics of individual drugs known to rely on these pathways for elimination. This review summarizes the key PK factors in individual variability and VDR and highlights NASH as an essential underlying factor that must be considered as the development of precision medicine advances. A multifactorial approach to precision medicine that considers the combination of two or more risk factors (e.g. genetics and NASH) will be required in our effort to provide a new era of benefit for patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Incorporating harvest rates into the sex-age-kill model for white-tailed deer

    USGS Publications Warehouse

    Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.

    2013-01-01

    Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, their precision and reliability against empirical measures of variability and bias remain largely untested. We were able to statistically evaluate the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured using data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates for adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. 
Because obtaining appropriate sample sizes, by management unit, to estimate harvest rate parameters each year may be too expensive, assumptions of constant annual harvest rates may be necessary. However, if changes in harvest regulations or hunter behavior influence subadult male harvest rates, the PASAK model could provide an unreliable index to population changes. 

  2. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W. S.

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6], which require double-precision accuracy but are performance limited by the cost of data motion.
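The mechanism described here, cheap sweeps in reduced precision with accuracy restored by high-precision residuals, is the same one that underlies classical mixed-precision iterative refinement. The sketch below illustrates that idea on a linear system rather than on SDC itself: the solves run in float32, the residuals are evaluated in float64, and the final answer reaches double-precision accuracy. The matrix and sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# A diagonally dominant (well-conditioned) system, stored in double precision
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
A32 = A.astype(np.float32)  # reduced-precision copy used for all solves

# Initial solve entirely in single precision
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
res0 = np.linalg.norm(b - A @ x) / np.linalg.norm(b)

# Refinement: double-precision residual, single-precision correction solve
for _ in range(5):
    r = b - A @ x                       # the only double-precision work
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

After a few sweeps the relative residual drops from single-precision level (about 1e-7 here) to double-precision level, even though every solve ran in float32; this is the sense in which reduced-precision sweeps need not degrade the final accuracy.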

  4. Inductance Calculations of Variable Pitch Helical Inductors

    DTIC Science & Technology

    2015-08-01

    ' Integral solution using Simpson's Rule
    Dim i As Integer
    Dim Pi As Double, uo As Double, kc As Double
    Dim a As Double, amax As Double, da As ... Double
    Dim steps As Integer
    Dim func1a As Double, func1b As Double
    On Error GoTo err_TorisV1
    steps = 1000
    Pi = 3.14159
    uo = 4 * Pi * 0.0000001 ...
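The fragment above applies composite Simpson's rule. A minimal Python equivalent is sketched below, checked against a known integral; the actual variable-pitch inductance integrand is not recoverable from the fragment, so a standard test function stands in for it.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n (number of panels) must be even."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3

# Sanity check against a known result: the integral of sin(x) over [0, pi] is 2
approx = simpson(math.sin, 0.0, math.pi)
```

With n = 1000 the O(h^4) error of Simpson's rule is far below single-precision level for smooth integrands, which is why the routine above suffices for inductance-style integrals.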

  5. STT Doubles with Large DM - Part IV: Ophiuchus and Hercules

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Nanson, John

    2016-04-01

    The results of visual double star observing sessions suggested a pattern for STT doubles with large DM of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. We found that, as in the other constellations covered so far (Gem, Leo, UMa, etc.), at least several of the selected objects in Ophiuchus and Hercules show parameters quite different from the current WDS data.

  6. STT Doubles with Large DM - Part V: Aquila, Delphinus, Cygnus, Aquarius

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Nanson, John

    2016-07-01

    The results of visual double star observing sessions suggested a pattern for STT doubles with large DM of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. We found that, as in the other constellations covered so far (Gem, Leo, UMa, etc.), at least several of the selected objects in Aql, Del, Cyg and Aqr show parameters quite different from the current WDS data.

  7. Stellar Astrophysics with a Dispersed Fourier Transform Spectrograph. II. Orbits of Double-lined Spectroscopic Binaries

    NASA Astrophysics Data System (ADS)

    Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.; McMillan, Robert S.; Murison, Marc; Meade, Jeff; Hindsley, Robert

    2011-07-01

    We present orbital parameters for six double-lined spectroscopic binaries (ι Pegasi, ω Draconis, 12 Boötis, V1143 Cygni, β Aurigae, and Mizar A) and two double-lined triple star systems (κ Pegasi and η Virginis). The orbital fits are based upon high-precision radial velocity (RV) observations made with a dispersed Fourier Transform Spectrograph, or dFTS, a new instrument that combines interferometric and dispersive elements. For some of the double-lined binaries with known inclination angles, the quality of our RV data permits us to determine the masses M1 and M2 of the stellar components with relative errors as small as 0.2%.

  8. A novel double fine guide sensor design on space telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Xu-xu; Yin, Da-yi

    2018-02-01

    To obtain high-precision attitude for a space telescope, a double marginal-FOV (field of view) FGS (fine guide sensor) is proposed. It is composed of two large-area APS CMOS sensors that share the same lens in the main line of sight. More star vectors can be obtained from the two sensor heads and used for high-precision attitude determination. To improve star identification speed, a vector cross-product formulation of the inter-star angles, suited to the small marginal FOV and different from the traditional approach, is elaborated, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then fused into an attitude solution with the traditional QUEST algorithm. Simulation results show that the system can achieve high-accuracy three-axis attitude and that the scheme is feasible.

  9. Field potential soil variability index to identify precision agriculture opportunity

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture (PA) technologies used for identifying and managing within-field variability are not widely used despite decades of advancement. Technological innovations in agronomic tools, such as canopy reflectance or electrical conductivity sensors, have created opportunities to achieve a ...

  10. Airborne 2-Micron Double Pulsed Direct Detection IPDA Lidar for Atmospheric CO2 Measurement

    NASA Technical Reports Server (NTRS)

    Yu, Jirong; Petros, Mulugeta; Refaat, Tamer F.; Reithmaier, Karl; Remus, Ruben; Singh, Upendra; Johnson, Will; Boyer, Charlie; Fay, James; Johnston, Susan; hide

    2015-01-01

    An airborne 2-micron double-pulsed Integrated Path Differential Absorption (IPDA) lidar has been developed for atmospheric CO2 measurements. This new 2-micron pulsed IPDA lidar was flown in the spring of 2014 for a total of ten flights with 27 flight hours. It provides high-precision measurement capability by unambiguously eliminating contamination from aerosols and clouds that can bias the IPDA measurement.

  11. Eclipsing Binary V1178 Tau: A Reddening Independent Determination of the Age and Distance to NGC 1817

    NASA Astrophysics Data System (ADS)

    Hedlund, Anne; Sandquist, Eric L.; Arentoft, Torben; Brogaard, Karsten; Grundahl, Frank; Stello, Dennis; Bedin, Luigi R.; Libralato, Mattia; Malavolta, Luca; Nardiello, Domenico; Molenda-Zakowicz, Joanna; Vanderburg, Andrew

    2018-06-01

    V1178 Tau is a double-lined spectroscopic eclipsing binary in NGC 1817, one of the more massive clusters observed in the K2 mission. We have determined the orbital period (P = 2.20 d) for the first time, and we model radial velocity measurements from the HARPS and ALFOSC spectrographs, light curves collected by Kepler, and ground-based light curves using the Eclipsing Light Curve code (ELC, Orosz & Hauschildt 2000). We present masses and radii for the stars in the binary, allowing for a reddening-independent means of determining the cluster age. V1178 Tau is particularly useful for calculating the age of the cluster because the stars are close to the cluster turnoff, providing a more precise age determination. Furthermore, because one of the stars in the binary is a delta Scuti variable, the analysis provides improved insight into its pulsations.

  12. Finite element computation on nearest neighbor connected machines

    NASA Technical Reports Server (NTRS)

    Mcaulay, A. D.

    1984-01-01

    Research aimed at faster, more cost effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest neighbor connected arrays of commercially available floating point chips and substantial memory, are inserted into a commercially available machine. One-tenth Mflop (64 bit operation) processors provide an 89% efficiency when solving the equations arising in a finite element problem for a single variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
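The overlap of data transfer and computation relies on an ordering in which half the unknowns can be updated independently while the other half's boundary values are in flight. A serial sketch of such a red-black SOR iteration for the model problem -∇²u = f on the unit square follows; the grid size and relaxation factor are illustrative, and the paper's cyclic-reduction variant differs in detail.

```python
import numpy as np

n = 32
h = 1.0 / (n + 1)
f = np.ones((n, n))                       # right-hand side of -∇²u = f
u = np.zeros((n + 2, n + 2))              # solution with Dirichlet boundary u = 0
omega = 2.0 / (1.0 + np.sin(np.pi * h))   # near-optimal SOR relaxation factor

# Checkerboard masks over the interior: every "red" point has only "black"
# neighbours, so all red updates are independent of each other. On a
# nearest-neighbour array this lets the halo exchange for one colour overlap
# with the computation of the other colour.
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
red = (ii + jj) % 2 == 0

def sweep(u, mask):
    """One SOR half-sweep over the points selected by mask."""
    interior = u[1:-1, 1:-1]
    gs = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                 + u[1:-1, :-2] + u[1:-1, 2:] + h * h * f)
    interior[mask] += omega * (gs[mask] - interior[mask])

for _ in range(200):
    sweep(u, red)     # all red points at once...
    sweep(u, ~red)    # ...then all black points, using the fresh red values

# residual of the 5-point discrete Laplacian
res = (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
       - u[1:-1, :-2] - u[1:-1, 2:]) - h * h * f
```

Each half-sweep touches only one colour, so on a message-passing array the exchange of red boundary values can proceed while black points are being computed, which is the overlap property the abstract's algorithm is designed to exploit.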

  13. Asthenopia (eyestrain) in working children of gem-polishing industries.

    PubMed

    Tiwari, Rajnarayan R; Saha, Asim; Parikh, Jagdish R

    2011-04-01

    Working children in gem-polishing units are exposed to poor illumination and improper workstations. The processes also require a great deal of visual and mental concentration for precision work, which may result in eyestrain. The study included 432 exposed and 569 comparison-group subjects. Self-reported eyestrain was recorded through personal interview. Eyestrain included symptoms like itching, burning, or irritated eyes; tired or heavy eyes; difficulty seeing clearly (including blurred or double vision); and headache. The study variables included age, gender, daily working hours, and duration of exposure. The prevalence of eyestrain in child labourers was 32.2%, significantly higher than in the comparison-group subjects, and the working children of gem-polishing units were at 1.4 times higher risk of developing eyestrain. Age ≥ 14 years and female gender were significantly associated with eyestrain.

  14. Photometric observations of nine Transneptunian objects and Centaurs

    NASA Astrophysics Data System (ADS)

    Hromakina, T.; Perna, D.; Belskaya, I.; Dotto, E.; Rossi, A.; Bisi, F.

    2018-02-01

    We present the results of photometric observations of six Transneptunian objects and three Centaurs, estimates of their rotational periods, and the corresponding amplitudes. For six of them we also present lower limits on their density. All observations were made using the 3.6-m TNG telescope (La Palma, Spain). For four objects - (148975) 2001 XA255, (281371) 2008 FC76, (315898) 2008 QD4, and 2008 CT190 - an estimation of short-term variability was made for the first time. We confirm the rotation period values for two objects, (55636) 2002 TX300 and (202421) 2005 UQ513, and improve the precision of previously reported rotation period values for the other three - (120178) 2003 OP32, (145452) 2005 RN43, (444030) 2004 NT33 - by using both our and literature data. We also discuss that small distant bodies, like asteroids in the Main Belt, tend to show double-peaked rotational light curves caused by elongated shape rather than by surface albedo variations.

  15. Measurement of differential cross sections in the φ* variable for inclusive Z boson production in pp collisions at √s = 8 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, Albert M; et al.

    Measurements of differential cross sections dσ/dφ* and double-differential cross sections d²σ/(dφ* d|y|) for inclusive Z boson production are presented using the dielectron and dimuon final states. The kinematic observable φ* correlates with the dilepton transverse momentum but has better resolution, and y is the dilepton rapidity. The analysis is based on data collected with the CMS experiment at a centre-of-mass energy of 8 TeV corresponding to an integrated luminosity of 19.7 fb⁻¹. The normalised cross section (1/σ) dσ/dφ*, within the fiducial kinematic region, is measured with a precision of better than 0.5% for φ* < 1. The measurements are compared to theoretical predictions, and they typically agree within a few percent.

  16. Harvester-based sensing system for cotton fiber-quality mapping

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture in cotton production attempts to maximize profitability by exploiting information on field spatial variability to optimize the fiber yield and quality. For precision agriculture to be economically viable, collection of spatial variability data within a field must be automated a...

  17. Field variability and vulnerability index to identify precision agriculture opportunity

    USDA-ARS?s Scientific Manuscript database

    Innovations in precision agriculture (PA) have created opportunities to achieve a greater understanding of within-field variability. However, PA adoption has been hindered due to uncertainty about field-specific performance and return on investment. Uncertainty could be better addressed by analyzing...

  18. SU (2) lattice gauge theory simulations on Fermi GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p

    2011-05-10

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. To obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2,000 configurations with APE smearing. With two Fermi GPUs we achieved an excellent performance of around 110 Gflops/s in single precision, a 200× speedup over one CPU. We also find that, on the Fermi architecture, double-precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single-precision computations.

  19. Double soft limit of the graviton amplitude from the Cachazo-He-Yuan formalism

    NASA Astrophysics Data System (ADS)

    Saha, Arnab Priya

    2017-08-01

    We present a complete analysis of the double soft limit of the graviton scattering amplitude using the formalism proposed by Cachazo, He, and Yuan. Our results agree with those obtained via Britto-Cachazo-Feng-Witten (BCFW) recursion relations in T. Klose, T. McLoughlin, D. Nandan, J. Plefka, and G. Travaglini, Double-soft limits of gluons and gravitons, J. High Energy Phys. 07 (2015) 135. In addition we find precise relations between degenerate and nondegenerate solutions of the scattering equations with local and nonlocal terms in the soft factor.

  20. In vivo Three-Dimensional Superresolution Fluorescence Tracking using a Double-Helix Point Spread Function

    PubMed Central

    Lew, Matthew D.; Thompson, Michael A.; Badieirostami, Majid; Moerner, W. E.

    2010-01-01

    The point spread function (PSF) of a widefield fluorescence microscope is not suitable for three-dimensional super-resolution imaging. We characterize the localization precision of a unique method for 3D superresolution imaging featuring a double-helix point spread function (DH-PSF). The DH-PSF is designed to have two lobes that rotate about their midpoint in any transverse plane as a function of the axial position of the emitter. In effect, the PSF appears as a double helix in three dimensions. By comparing the Cramer-Rao bound of the DH-PSF with the standard PSF as a function of the axial position, we show that the DH-PSF has a higher and more uniform localization precision than the standard PSF throughout a 2 μm depth of field. Comparisons between the DH-PSF and other methods for 3D super-resolution are briefly discussed. We also illustrate the applicability of the DH-PSF for imaging weak emitters in biological systems by tracking the movement of quantum dots in glycerol and in live cells. PMID:20563317
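The axial encoding of the DH-PSF can be illustrated with a toy model: two Gaussian lobes whose common axis rotates linearly with the emitter's axial position z; estimating the axis orientation from image second moments and dividing by the (assumed) rotation rate recovers z. All numbers below (rotation rate, lobe separation and width, noise level) are hypothetical stand-ins, not the calibrated DH-PSF parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
slope = np.deg2rad(30.0)          # assumed lobe rotation: 30 degrees per micron
sep, sigma, size = 6.0, 1.5, 33   # lobe separation, lobe width, image size (px)

def dh_psf_image(z_um):
    """Two Gaussian lobes rotated about the emitter position by theta(z)."""
    theta = slope * z_um
    c = (size - 1) / 2
    y, x = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for s in (+1.0, -1.0):
        cx = c + s * (sep / 2) * np.cos(theta)
        cy = c + s * (sep / 2) * np.sin(theta)
        img += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return img + 0.002 * rng.standard_normal((size, size))  # faint camera noise

def estimate_z(img):
    """Lobe-axis orientation from intensity second moments, mapped back to z."""
    img = np.clip(img, 0.0, None)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img / img.sum()
    mx, my = (w * x).sum(), (w * y).sum()
    mu20 = (w * (x - mx) ** 2).sum()
    mu02 = (w * (y - my) ** 2).sum()
    mu11 = (w * (x - mx) * (y - my)).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # principal-axis angle
    return theta / slope

z_true = 0.5  # micron
z_est = estimate_z(dh_psf_image(z_true))
```

A real DH-PSF fitter would localize each lobe (e.g. by Gaussian fitting) rather than use global moments, and the angle-to-z calibration would be measured, but the moment estimator above captures how the rotation angle carries the axial information.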

  1. Modeling early events in Francisella tularensis pathogenesis.

    PubMed

    Gillard, Joseph J; Laws, Thomas R; Lythe, Grant; Molina-París, Carmen

    2014-01-01

    Computational models can provide valuable insights into the mechanisms of infection and be used as investigative tools to support development of medical treatments. We develop a stochastic, within-host, computational model of the infection process in the BALB/c mouse, following inhalational exposure to Francisella tularensis SCHU S4. The model is mechanistic and governed by a small number of experimentally verifiable parameters. Given an initial dose, the model generates bacterial load profiles corresponding to those produced experimentally, with a doubling time of approximately 5 h during the first 48 h of infection. Analytical approximations for the mean number of bacteria in phagosomes and cytosols for the first 24 h post-infection are derived and used to verify the stochastic model. In our description of the dynamics of macrophage infection, the number of bacteria released per rupturing macrophage is a geometrically-distributed random variable. When combined with doubling time, this provides a distribution for the time taken for infected macrophages to rupture and release their intracellular bacteria. The mean and variance of these distributions are determined by model parameters with a precise biological interpretation, providing new mechanistic insights into the determinants of immune and bacterial kinetics. Insights into the dynamics of macrophage suppression and activation gained by the model can be used to explore the potential benefits of interventions that stimulate macrophage activation.
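The rupture-time construction described above can be sketched directly: if each infected macrophage releases N bacteria with N geometrically distributed, and the intracellular population doubles every t_double hours, the time to rupture is roughly t_double · log2(N). The doubling time of about 5 h is quoted in the abstract; the geometric parameter below is an illustrative stand-in, not the fitted model value.

```python
import math
import random

random.seed(3)
t_double = 5.0   # hours; approximate doubling time from the abstract
p = 0.05         # assumed geometric parameter -> mean burst size 1/p = 20

def rupture_time():
    """Time for one founding bacterium to double up to a geometric burst size."""
    n = 1
    while random.random() > p:   # N ~ Geometric(p) with support {1, 2, ...}
        n += 1
    return t_double * math.log2(n)

samples = [rupture_time() for _ in range(20000)]
mean_t = sum(samples) / len(samples)
var_t = sum((t - mean_t) ** 2 for t in samples) / len(samples)
```

The mean and variance of the sampled rupture times are set entirely by t_double and p, which mirrors the abstract's point that the rupture-time distribution is determined by parameters with a direct biological interpretation.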

  2. Estimation of satellite position, clock and phase bias corrections

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick; Psychas, Dimitrios; Günther, Christoph; Hugentobler, Urs

    2018-05-01

    Precise point positioning with integer ambiguity resolution requires precise knowledge of satellite position, clock and phase bias corrections. In this paper, a method for the estimation of these parameters with a global network of reference stations is presented. The method processes uncombined and undifferenced measurements of an arbitrary number of frequencies such that the obtained satellite position, clock and bias corrections can be used for any type of differenced and/or combined measurements. We perform a clustering of reference stations. The clustering enables a common satellite visibility within each cluster and an efficient fixing of the double difference ambiguities within each cluster. Additionally, the double difference ambiguities between the reference stations of different clusters are fixed. We use an integer decorrelation for ambiguity fixing in dense global networks. The performance of the proposed method is analysed with both simulated Galileo measurements on E1 and E5a and real GPS measurements of the IGS network. We defined 16 clusters and obtained satellite position, clock and phase bias corrections with a precision of better than 2 cm.

  3. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Treesearch

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...

  4. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. 
The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.

  5. Calculations Supporting Management Zones

    USDA-ARS?s Scientific Manuscript database

    Since the early 1990’s the tools of precision farming (GPS, yield monitors, soil sensors, etc.) have documented how spatial and temporal variability are important factors impacting crop yield response. For precision farming, variability can be measured then used to divide up a field so that manageme...

  6. A historical perspective of VR water management for improved crop production

    USDA-ARS?s Scientific Manuscript database

    Variable-rate water management, or the combination of precision agriculture technology and irrigation, has been enabled by many of the same technologies as other precision agriculture tools. However, adding variable-rate capability to existing irrigation equipment design, or designing new equipment ...

  7. Microphysical variability of Amazonian deep convective cores observed by CloudSat and simulated by a multi-scale modeling framework

    NASA Astrophysics Data System (ADS)

    Brant Dodson, J.; Taylor, Patrick C.; Branson, Mark

    2018-05-01

    Recently launched cloud observing satellites provide information about the vertical structure of deep convection and its microphysical characteristics. In this study, CloudSat reflectivity data is stratified by cloud type, and the contoured frequency by altitude diagrams reveal a double-arc structure in deep convective cores (DCCs) above 8 km. This suggests two distinct hydrometeor modes (snow versus hail/graupel) controlling variability in reflectivity profiles. The day-night contrast in the double arcs is about four times larger than the wet-dry season contrast. Using QuickBeam, the vertical reflectivity structure of DCCs is analyzed in two versions of the Superparameterized Community Atmospheric Model (SP-CAM) with single-moment (no graupel) and double-moment (with graupel) microphysics. Double-moment microphysics shows better agreement with observed reflectivity profiles; however, neither model variant captures the double-arc structure. Ultimately, the results show that simulating realistic DCC vertical structure and its variability requires accurate representation of ice microphysics, in particular the hail/graupel modes, though this alone is insufficient.

  8. Atomically Precise Interfaces from Non-stoichiometric Deposition

    NASA Astrophysics Data System (ADS)

    Nie, Yuefeng; Zhu, Ye; Lee, Che-Hui; Kourkoutis, Lena; Mundy, Julia; Junquera, Javier; Ghosez, Philippe; Baek, David; Sung, Suk Hyun; Xi, Xiaoxing; Shen, Kyle; Muller, David; Schlom, Darrell

    2015-03-01

Complex oxide heterostructures display some of the most chemically abrupt, atomically precise interfaces, which is advantageous when constructing new interface phases with emergent properties by juxtaposing incompatible ground states. One might assume that atomically precise interfaces result from stoichiometric growth. Here we show that the most precise control is, however, obtained by using deliberate and specific non-stoichiometric growth conditions. For the precise growth of Srn+1TinO3n+1 Ruddlesden-Popper (RP) phases, stoichiometric deposition leads to the loss of the first RP rock-salt double layer, but growing with a strontium-rich surface layer restores the bulk stoichiometry and ordering of the subsurface RP structure. Our results dramatically expand the materials that can be prepared in epitaxial heterostructures with precise interface control--from just the n = ∞ end members (perovskites) to the entire RP homologous series--enabling the exploration of novel quantum phenomena at a richer variety of oxide interfaces.

  9. Atomically precise interfaces from non-stoichiometric deposition

    NASA Astrophysics Data System (ADS)

    Nie, Y. F.; Zhu, Y.; Lee, C.-H.; Kourkoutis, L. F.; Mundy, J. A.; Junquera, J.; Ghosez, Ph.; Baek, D. J.; Sung, S.; Xi, X. X.; Shen, K. M.; Muller, D. A.; Schlom, D. G.

    2014-08-01

Complex oxide heterostructures display some of the most chemically abrupt, atomically precise interfaces, which is advantageous when constructing new interface phases with emergent properties by juxtaposing incompatible ground states. One might assume that atomically precise interfaces result from stoichiometric growth. Here we show that the most precise control is, however, obtained by using deliberate and specific non-stoichiometric growth conditions. For the precise growth of Srn+1TinO3n+1 Ruddlesden-Popper (RP) phases, stoichiometric deposition leads to the loss of the first RP rock-salt double layer, but growing with a strontium-rich surface layer restores the bulk stoichiometry and ordering of the subsurface RP structure. Our results dramatically expand the materials that can be prepared in epitaxial heterostructures with precise interface control—from just the n=∞ end members (perovskites) to the entire RP homologous series—enabling the exploration of novel quantum phenomena at a richer variety of oxide interfaces.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utama, Muhammad Reza July, E-mail: muhammad.reza@bmkg.go.id; Indonesian Meteorological, Climatological and Geophysical Agency; Nugraha, Andri Dian

Precise hypocenter locations were determined using the double-difference method around the subduction zone in the Moluccas area, eastern Indonesia. The initial hypocenter locations were taken from the MCGA data catalogue of 1,945 earthquake events. The double-difference algorithm assumes that if the distance between two earthquake hypocenters is very small compared to the distance from the stations to the earthquake sources, the ray paths of the two earthquakes can be considered nearly identical. The results show that earthquakes initially assigned a fixed depth of 10 km were relocated and can be interpreted more reliably in terms of seismicity and geological setting. The relocated intra-slab earthquakes beneath the Banda Arc are also clearly observed down to a depth of about 400 km. The precisely relocated hypocenters will provide invaluable seismicity information for other seismological and tectonic studies, especially seismic hazard analysis in this region.
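The double-difference principle summarized above can be sketched numerically: for two nearby events i and j observed at the same station k, relocation minimizes the residual between observed and calculated differential travel times. The following Python sketch uses a toy straight-ray, uniform-velocity model; all coordinates, the velocity, and the trial locations are hypothetical, and a real relocation (e.g. hypoDD) iteratively inverts many such residuals.

```python
import numpy as np

V = 6.0  # assumed uniform P-wave speed, km/s (illustrative)

def travel_time(event, station):
    """Straight-ray travel time (s) from a hypocenter to a station."""
    return np.linalg.norm(np.asarray(event) - np.asarray(station)) / V

# Two nearby hypocenters (x, y, depth in km) and one distant station:
# their separation (<1 km) is small compared to the ~100 km epicentral
# distance, so both ray paths are nearly identical.
ev_i, ev_j = (0.0, 0.0, 10.0), (0.5, 0.3, 10.4)
station = (80.0, 60.0, 0.0)

# "Observed" differential travel time (synthetic truth here).
t_obs_i = travel_time(ev_i, station)
t_obs_j = travel_time(ev_j, station)

# Calculated differential time for trial locations; a relocation scheme
# perturbs the trial hypocenters to drive this residual toward zero.
trial_i, trial_j = (0.2, -0.1, 11.0), (0.4, 0.5, 9.8)
dd_residual = (t_obs_i - t_obs_j) - (
    travel_time(trial_i, station) - travel_time(trial_j, station))
print(f"double-difference residual: {dd_residual:+.4f} s")
```

Because the common path from source region to station cancels in the difference, errors in the velocity model largely drop out, which is why relative locations become much sharper than the initial catalogue locations.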

  11. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    PubMed Central

    Cramer, Emily

    2016-01-01

Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital-acquired pressure ulcer rates and evaluate a standard signal-noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step-down, medical, surgical, and medical-surgical nursing units from 1,299 US hospitals were analyzed. Using beta-binomial models, we estimated between-unit variability (signal) and within-unit variability (noise) in annual unit pressure ulcer rates. Signal-noise reliability was computed as the ratio of between-unit variability to the total of between- and within-unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal-noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal-noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
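The signal-noise reliability defined above is simply between-unit variance divided by total (between- plus within-unit) variance. A minimal simulation makes the decomposition concrete; this is an illustrative sketch using a plug-in binomial noise estimate rather than the authors' beta-binomial fit, and every number (unit count, patient count, beta parameters) is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate annual pressure-ulcer rates for many units: each unit has a
# "true" rate (between-unit signal) plus binomial sampling noise
# (within-unit noise) from a finite patient census.
n_units, n_patients = 500, 120
true_rates = rng.beta(a=2.0, b=48.0, size=n_units)        # unit-level true rates
observed = rng.binomial(n_patients, true_rates) / n_patients

# Variance decomposition: total = between-unit + within-unit (sampling).
within_var = np.mean(observed * (1 - observed) / n_patients)  # plug-in noise estimate
total_var = observed.var(ddof=1)
between_var = max(total_var - within_var, 0.0)

reliability = between_var / total_var  # signal / (signal + noise)
print(f"signal-noise reliability ≈ {reliability:.2f}")
```

Shrinking the patient sample inflates `within_var` and drives the ratio toward zero, which mirrors the paper's finding that a single year of survey data is too noisy to rank units precisely.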

  12. Effect of oil liquid viscosity on hysteresis in double-liquid variable-focus lens based on electrowetting

    NASA Astrophysics Data System (ADS)

    Zeng, Zhi; Peng, Runling; He, Mei

    2017-02-01

The double-liquid variable-focus lens based on electrowetting has the advantages of small size, light weight, fast response, and low price. In this paper, the principle and structure of the double-liquid variable-focus lens are introduced. The causes of contact angle hysteresis, and ways to reduce it, are explained according to the improved Young's equation. Finally, 1-bromododecane is mixed with silicone oil in different proportions to obtain insulating oil liquids of different viscosities. External voltages are applied to three such liquid lenses, and the focal length of each lens versus applied voltage is investigated. Experiments show that decreasing the viscosity of the oil liquid reduces the focal-length hysteresis.

  13. Current status and future directions of precision agriculture for aerial application in the USA

    USDA-ARS?s Scientific Manuscript database

    Precision aerial application in the USA is less than a decade old since the development of the first variable-rate aerial application system. Many areas of the United States rely on readily available agricultural airplanes or helicopters for pest management. Variable-rate aerial application provides...

  14. Microfluidic single-cell whole-transcriptome sequencing.

    PubMed

    Streets, Aaron M; Zhang, Xiannian; Cao, Chen; Pang, Yuhong; Wu, Xinglong; Xiong, Liang; Yang, Lu; Fu, Yusi; Zhao, Liang; Tang, Fuchou; Huang, Yanyi

    2014-05-13

    Single-cell whole-transcriptome analysis is a powerful tool for quantifying gene expression heterogeneity in populations of cells. Many techniques have, thus, been recently developed to perform transcriptome sequencing (RNA-Seq) on individual cells. To probe subtle biological variation between samples with limiting amounts of RNA, more precise and sensitive methods are still required. We adapted a previously developed strategy for single-cell RNA-Seq that has shown promise for superior sensitivity and implemented the chemistry in a microfluidic platform for single-cell whole-transcriptome analysis. In this approach, single cells are captured and lysed in a microfluidic device, where mRNAs with poly(A) tails are reverse-transcribed into cDNA. Double-stranded cDNA is then collected and sequenced using a next generation sequencing platform. We prepared 94 libraries consisting of single mouse embryonic cells and technical replicates of extracted RNA and thoroughly characterized the performance of this technology. Microfluidic implementation increased mRNA detection sensitivity as well as improved measurement precision compared with tube-based protocols. With 0.2 M reads per cell, we were able to reconstruct a majority of the bulk transcriptome with 10 single cells. We also quantified variation between and within different types of mouse embryonic cells and found that enhanced measurement precision, detection sensitivity, and experimental throughput aided the distinction between biological variability and technical noise. With this work, we validated the advantages of an early approach to single-cell RNA-Seq and showed that the benefits of combining microfluidic technology with high-throughput sequencing will be valuable for large-scale efforts in single-cell transcriptome analysis.

  15. Solving the Orientation Specific Constraints in Transcranial Magnetic Stimulation by Rotating Fields

    PubMed Central

    Neef, Nicole E.; Agudelo-Toro, Andres; Rakhmilevitch, David; Paulus, Walter; Moses, Elisha

    2014-01-01

    Transcranial Magnetic Stimulation (TMS) is a promising technology for both neurology and psychiatry. Positive treatment outcome has been reported, for instance in double blind, multi-center studies on depression. Nonetheless, the application of TMS towards studying and treating brain disorders is still limited by inter-subject variability and lack of model systems accessible to TMS. The latter are required to obtain a deeper understanding of the biophysical foundations of TMS so that the stimulus protocol can be optimized for maximal brain response, while inter-subject variability hinders precise and reliable delivery of stimuli across subjects. Recent studies showed that both of these limitations are in part due to the angular sensitivity of TMS. Thus, a technique that would eradicate the need for precise angular orientation of the coil would improve both the inter-subject reliability of TMS and its effectiveness in model systems. We show here how rotation of the stimulating field relieves the angular sensitivity of TMS and provides improvements in both issues. Field rotation is attained by superposing the fields of two coils positioned orthogonal to each other and operated with a relative phase shift in time. Rotating field TMS (rfTMS) efficiently stimulates both cultured hippocampal networks and rat motor cortex, two neuronal systems that are notoriously difficult to excite magnetically. This opens the possibility of pharmacological and invasive TMS experiments in these model systems. Application of rfTMS to human subjects overcomes the orientation dependence of standard TMS. Thus, rfTMS yields optimal targeting of brain regions where correct orientation cannot be determined (e.g., via motor feedback) and will enable stimulation in brain regions where a preferred axonal orientation does not exist. PMID:24505266

  16. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  17. Tellurium Stable Isotope Fractionation in Chondritic Meteorites

    NASA Astrophysics Data System (ADS)

    Fehr, M. A.; Hammond, S. J.; Parkinson, I. J.

    2014-09-01

New Te double-spike procedures were set up to obtain accurate, high-precision Te stable isotope data. Tellurium stable isotope data for 16 chondrite falls are presented, providing evidence for significant Te stable isotope fractionation.

  18. Sterile Neutrino Search with the Double Chooz Experiment

    NASA Astrophysics Data System (ADS)

    Hellwig, D.; Matsubara, T.; Double Chooz Collaboration

    2017-09-01

Double Chooz is a reactor antineutrino disappearance experiment located in Chooz, France. A far detector at a distance of about 1 km from the reactor cores has been operating since 2011; a near detector of identical design at a distance of about 400 m has been operating since the beginning of 2015. Beyond the precise measurement of θ13, Double Chooz has a strong sensitivity to so-called light sterile neutrinos: neutrino mass states that do not take part in weak interactions but may mix with the known neutrino states. In this paper, we present an analysis method to search for sterile neutrinos and the expected sensitivity given the baselines of our detectors.

  19. Double Star Measurements at the Northern Sky with a 10 inch Newtonian in 2014 and 2015

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2017-07-01

    A 10 inch Newtonian was used for recordings of double stars with a CCD webcam, and measurements of 120 pairs were done with the technique of “lucky imaging”. A rather accurate value of the image scale was obtained with reference systems from the recently published Gaia catalogue of very precise position data. For several pairs, deviations from currently assumed orbits were found. Some images of noteworthy systems are also presented.

  20. Occurrence and Nature of Double Alleles in Variable-Number Tandem-Repeat Patterns of More than 8,000 Mycobacterium tuberculosis Complex Isolates in The Netherlands

    PubMed Central

    Kamst, Miranda; van Hunen, Rianne; de Zwaan, Carolina Catherina; Mulder, Arnout; Supply, Philip; Anthony, Richard; van der Hoek, Wim; van Soolingen, Dick

    2017-01-01

Since 2004, variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates has been applied on a structural basis in The Netherlands to study the epidemiology of tuberculosis (TB). Although this technique is faster and technically less demanding than the previously used restriction fragment length polymorphism (RFLP) typing, reproducibility remains a concern. In the period from 2004 to 2015, 8,532 isolates were subjected to VNTR typing in The Netherlands, with 186 (2.2%) of these exhibiting double alleles at one locus. Double alleles were most common in loci 4052 and 2163b. The variables significantly associated with double alleles were urban living (odds ratio [OR], 1.503; 95% confidence interval [CI], 1.084 to 2.084; P = 0.014) and pulmonary TB (OR, 1.703; 95% CI, 1.216 to 2.386; P = 0.002). Single-colony cultures of double-allele strains were produced and revealed single-allele profiles; a maximum of five single nucleotide polymorphisms (SNPs) was observed between the single- and double-allele isolates from the same patient when whole-genome sequencing (WGS) was applied. This indicates the presence of two bacterial populations with slightly different VNTR profiles in the parental population, related to genetic drift. This observation is confirmed by the fact that secondary cases from TB source cases with double-allele isolates sometimes display only one of the two alleles present in the source case. Double alleles occur at a frequency of 2.2% in VNTR patterns in The Netherlands. They are caused by biological variation rather than by technical aberrations and can be transmitted either as single- or double-allele variants. PMID:29142049

  1. Variable Stars in the Draco Dwarf Spheroidal Galaxy

    NASA Astrophysics Data System (ADS)

    Harris, H. C.; Silberman, N. A.; Smith, H. A.

    A new survey of the variable stars in the Draco dwarf spheroidal galaxy updates the pioneering study of this galaxy by Baade and Swope (1961). Our improved data, taken in BVI filters with CCD cameras on three telescopes at more than 80 epochs, allow us to investigate the known variables and to discover new, mostly low-amplitude variables. Approximately 300 variables are found and classified, more than double the number of variables analyzed previously. Most are RR Lyraes, with a small fraction of Anomalous Cepheids. This large sample of variables provides a unique opportunity to study the properties of these stars in a single system. This paper discusses the census of RR Lyraes, including RRc-type, double-mode, and Blazhko-effect RR Lyraes, as well as Anomalous Cepheids, and Type II Cepheids in Draco.

  2. Precision of information, sensational information, and self-efficacy information as message-level variables affecting risk perceptions.

    PubMed

    Dahlstrom, Michael F; Dudo, Anthony; Brossard, Dominique

    2012-01-01

    Studies that investigate how the mass media cover risk issues often assume that certain characteristics of content are related to specific risk perceptions and behavioral intentions. However, these relationships have seldom been empirically assessed. This study tests the influence of three message-level media variables--risk precision information, sensational information, and self-efficacy information--on perceptions of risk, individual worry, and behavioral intentions toward a pervasive health risk. Results suggest that more precise risk information leads to increased risk perceptions and that the effect of sensational information is moderated by risk precision information. Greater self-efficacy information is associated with greater intention to change behavior, but none of the variables influence individual worry. The results provide a quantitative understanding of how specific characteristics of informational media content can influence individuals' responses to health threats of a global and uncertain nature. © 2011 Society for Risk Analysis.

  3. Double metric, generalized metric, and α' -deformed double field theory

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-03-01

    We relate the unconstrained "double metric" of the "α' -geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b -field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.

  4. The Double ABCX Model of Family Stress and Adaptation: An Empirical Test by Analysis of Structural Equations with Latent Variables.

    ERIC Educational Resources Information Center

    Lavee, Yoav; And Others

    1985-01-01

    Examined relationships among major variables of the Double ABCX model of family stress and adaptation using data on Army families' adaptation to the crisis of relocation overseas. Results support the notion of pile-up of demands. Family system resources and social support are both found to facilitate adaptation. (Author/BL)

  5. Testing Precision of Movement of Curiosity Robotic Arm

    NASA Image and Video Library

    2012-02-22

A NASA Mars Science Laboratory test rover called the Vehicle System Test Bed, or VSTB, at NASA's Jet Propulsion Laboratory in Pasadena, CA, serves as the closest double for Curiosity in evaluations of the mission hardware and software.

  6. 3D Printing in Surgical Management of Double Outlet Right Ventricle.

    PubMed

    Yoo, Shi-Joon; van Arsdell, Glen S

    2017-01-01

Double outlet right ventricle (DORV) is a heterogeneous group of congenital heart diseases that requires an individualized surgical approach based on precise understanding of the complex cardiovascular anatomy. Physical 3-dimensional (3D) print models not only allow fast and unequivocal perception of the complex anatomy but also eliminate misunderstanding or miscommunication among imagers and surgeons. Except for cases showing the well-recognized classic surgical anatomy of DORV, such as those with a typical subaortic or subpulmonary ventricular septal defect, 3D print models are of enormous value in surgical decision-making and planning. Furthermore, 3D print models can also be used for rehearsal of the intended procedure before the actual surgery on the patient, so that the outcome of the procedure is precisely predicted and the procedure can be optimally tailored to the patient's specific anatomy. 3D print models are an invaluable resource for hands-on surgical training of congenital heart surgeons.

  7. Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code

    NASA Astrophysics Data System (ADS)

    Payne, J.; McCune, D.; Prater, R.

    2010-11-01

    NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.
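The value of the GPU double-precision support mentioned above can be illustrated on the CPU: a long particle-tracking run accumulates many updates into a large running value, and single precision can drop such updates entirely. The sketch below is a generic NumPy illustration of the float32/float64 difference, not NUBEAM code.

```python
import numpy as np

# float32 has a 24-bit significand, so above 2**24 = 16,777,216 consecutive
# integers are no longer representable and an increment of 1 is rounded away.
big32 = np.float32(2**24)
lost = np.float32(big32 + np.float32(1.0)) == big32   # increment vanishes

# float64 (53-bit significand) performs the same update exactly.
big64 = np.float64(2**24)
kept = (big64 + 1.0) == 2**24 + 1

print(f"float32 drops the increment: {lost}")   # True
print(f"float64 keeps the increment: {kept}")   # True
```

For a Monte Carlo code tracking hundreds of thousands of fast ions over many small time steps, this kind of lost-update error compounds, which is why hardware double precision (rather than emulated software doubles) made GPUs a practical target for codes like NUBEAM.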

  8. Longitudinal double-spin asymmetry A1p and spin-dependent structure function g1p of the proton at small values of x and Q2

    NASA Astrophysics Data System (ADS)

    Aghasyan, M.; Alexeev, M. G.; Alexeev, G. D.; Amoroso, A.; Andrieux, V.; Anfimov, N. V.; Anosov, V.; Antoshkin, A.; Augsten, K.; Augustyniak, W.; Austregesilo, A.; Azevedo, C. D. R.; Badełek, B.; Balestra, F.; Ball, M.; Barth, J.; Beck, R.; Bedfer, Y.; Bernhard, J.; Bicker, K.; Bielert, E. R.; Birsa, R.; Bodlak, M.; Bordalo, P.; Bradamante, F.; Bressan, A.; Büchele, M.; Burtsev, V. E.; Capozza, L.; Chang, W.-C.; Chatterjee, C.; Chiosso, M.; Choi, I.; Chumakov, A. G.; Chung, S.-U.; Cicuttin, A.; Crespo, M. L.; Dalla Torre, S.; Dasgupta, S. S.; Dasgupta, S.; Denisov, O. Yu.; Dhara, L.; Donskov, S. V.; Doshita, N.; Dreisbach, Ch.; Dünnweber, W.; Dusaev, R. R.; Dziewiecki, M.; Efremov, A.; Eversheim, P. D.; Faessler, M.; Ferrero, A.; Finger, M.; Finger, M.; Fischer, H.; Franco, C.; Du Fresne von Hohenesche, N.; Friedrich, J. M.; Frolov, V.; Fuchey, E.; Gautheron, F.; Gavrichtchouk, O. P.; Gerassimov, S.; Giarra, J.; Giordano, F.; Gnesi, I.; Gorzellik, M.; Grasso, A.; Gridin, A.; Grosse Perdekamp, M.; Grube, B.; Grussenmeyer, T.; Guskov, A.; Hahne, D.; Hamar, G.; von Harrach, D.; Heinsius, F. H.; Heitz, R.; Herrmann, F.; Horikawa, N.; D'Hose, N.; Hsieh, C.-Y.; Huber, S.; Ishimoto, S.; Ivanov, A.; Iwata, T.; Jary, V.; Joosten, R.; Jörg, P.; Kabuß, E.; Kerbizi, A.; Ketzer, B.; Khaustov, G. V.; Khokhlov, Yu. A.; Kisselev, Yu.; Klein, F.; Koivuniemi, J. H.; Kolosov, V. N.; Kondo, K.; Königsmann, K.; Konorov, I.; Konstantinov, V. F.; Kotzinian, A. M.; Kouznetsov, O. M.; Kral, Z.; Krämer, M.; Kremser, P.; Krinner, F.; Kroumchtein, Z. V.; Kulinich, Y.; Kunne, F.; Kurek, K.; Kurjata, R. P.; Kuznetsov, I. I.; Kveton, A.; Lednev, A. A.; Levchenko, E. A.; Levillain, M.; Levorato, S.; Lian, Y.-S.; Lichtenstadt, J.; Longo, R.; Lyubovitskij, V. E.; Maggiora, A.; Magnon, A.; Makins, N.; Makke, N.; Mallot, G. K.; Mamon, S. A.; Marianski, B.; Martin, A.; Marzec, J.; Matoušek, J.; Matsuda, H.; Matsuda, T.; Meshcheryakov, G. V.; Meyer, M.; Meyer, W.; Mikhailov, Yu. 
V.; Mikhasenko, M.; Mitrofanov, E.; Mitrofanov, N.; Miyachi, Y.; Moretti, A.; Nagaytsev, A.; Nerling, F.; Neyret, D.; Nový, J.; Nowak, W.-D.; Nukazuka, G.; Nunes, A. S.; Olshevsky, A. G.; Orlov, I.; Ostrick, M.; Panzieri, D.; Parsamyan, B.; Paul, S.; Peng, J.-C.; Pereira, F.; Pešek, M.; Pešková, M.; Peshekhonov, D. V.; Pierre, N.; Platchkov, S.; Pochodzalla, J.; Polyakov, V. A.; Pretz, J.; Quaresma, M.; Quintans, C.; Ramos, S.; Regali, C.; Reicherz, G.; Riedl, C.; Rogacheva, N. S.; Ryabchikov, D. I.; Rybnikov, A.; Rychter, A.; Salac, R.; Samoylenko, V. D.; Sandacz, A.; Santos, C.; Sarkar, S.; Savin, I. A.; Sawada, T.; Sbrizzai, G.; Schiavon, P.; Schmidt, K.; Schmieden, H.; Schönning, K.; Seder, E.; Selyunin, A.; Silva, L.; Sinha, L.; Sirtl, S.; Slunecka, M.; Smolik, J.; Srnka, A.; Steffen, D.; Stolarski, M.; Subrt, O.; Sulc, M.; Suzuki, H.; Szabelski, A.; Szameitat, T.; Sznajder, P.; Tasevsky, M.; Tessaro, S.; Tessarotto, F.; Thiel, A.; Tomsa, J.; Tosello, F.; Tskhay, V.; Uhl, S.; Vasilishin, B. I.; Vauth, A.; Veloso, J.; Vidon, A.; Virius, M.; Wallner, S.; Weisrock, T.; Wilfert, M.; Windmolders, R.; Ter Wolbeek, J.; Zaremba, K.; Zavada, P.; Zavertyaev, M.; Zemlyanichkina, E.; Ziembicki, M.; Compass Collaboration

    2018-06-01

    We present a precise measurement of the proton longitudinal double-spin asymmetry A1p and the proton spin-dependent structure function g1p at photon virtualities 0.006 (GeV/c)²

  9. Performance and precision of double digestion RAD (ddRAD) genotyping in large multiplexed datasets of marine fish species.

    PubMed

    Maroso, F; Hillen, J E J; Pardo, B G; Gkagkavouzis, K; Coscia, I; Hermida, M; Franch, R; Hellemans, B; Van Houdt, J; Simionati, B; Taggart, J B; Nielsen, E E; Maes, G; Ciavaglia, S A; Webster, L M I; Volckaert, F A M; Martinez, P; Bargelloni, L; Ogden, R

    2018-06-01

    The development of Genotyping-By-Sequencing (GBS) technologies enables cost-effective analysis of large numbers of Single Nucleotide Polymorphisms (SNPs), especially in "non-model" species. Nevertheless, as such technologies enter a mature phase, biases and errors inherent to GBS are becoming evident. Here, we evaluated the performance of double digest Restriction-site Associated DNA (ddRAD) sequencing in SNP genotyping studies with large numbers of samples. Sequence datasets were generated from three marine teleost species (>5500 samples, >2.5 × 10¹² bases in total) using a standardized protocol. A common bioinformatics pipeline based on STACKS was established, with and without the use of a reference genome. We performed analyses throughout the production and analysis of the ddRAD data in order to explore (i) the loss of information due to heterogeneous raw read numbers across samples; (ii) the discrepancy between expected and observed tag length and coverage; (iii) the performance of reference-based vs. de novo approaches; and (iv) the sources of potential genotyping errors in the library preparation/bioinformatics protocol, assessed by comparing technical replicates. Our results showed that use of a reference genome and a posteriori genotype correction improved genotyping precision. Individual read coverage was a key variable for reproducibility; variance in sequencing depth between loci in the same individual was also identified as an important factor and found to correlate with tag length. A comparison of downstream analyses carried out with ddRAD vs. single-SNP allele-specific assay genotypes quantified the levels of genotyping imprecision, which can have a significant impact on allele frequency estimation and population assignment. The results and insights presented here will help to select and improve approaches to the analysis of large datasets based on RAD-like methodologies. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  10. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or "minimum counts" are still used as a basis for wildlife management decisions, yet uncorrected raw counts are not only negatively biased, because they fail to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio-collar-based mark-resight, and simultaneous double-count (double-observer) modeling to estimate the population size of elk in a high-elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio-collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates, and error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods shows how the various components of our method contribute to improving the final estimate and why each is necessary.
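
    The sightability-correction idea underlying the models above can be illustrated with a minimal Horvitz-Thompson-style sketch; the function name and detection probabilities below are hypothetical, whereas the study's actual models estimate detection from sighting covariates and telemetry:

```python
def sightability_estimate(group_sizes, detection_probs):
    """Horvitz-Thompson-style total: each observed group is weighted
    by the inverse of its estimated probability of being detected."""
    return sum(n / p for n, p in zip(group_sizes, detection_probs))

raw_count = sum([12, 5, 30])  # uncorrected "minimum count"
corrected = sightability_estimate([12, 5, 30], [0.9, 0.6, 0.95])
```

    The corrected total always exceeds the raw count, which is why uncorrected counts are negatively biased.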

  11. Precision controllability of the F-15 airplane

    NASA Technical Reports Server (NTRS)

    Sisk, T. R.; Matheny, N. W.

    1979-01-01

    A flying qualities evaluation conducted on a preproduction F-15 airplane permitted an assessment of its precision controllability in the high subsonic and low transonic flight regime over the allowable angle-of-attack range. Precision controllability, or gunsight tracking, was studied in windup turn maneuvers with the gunsight in the caged pipper mode and depressed 70 mils. The evaluation showed that the F-15 experiences severe buffet and mild-to-moderate wing rock at the higher angles of attack, and that its radial tracking precision varies from approximately 6 to 20 mils over the load factor range tested. Tracking in the presence of wing rock essentially doubled the radial tracking error generated at the lower angles of attack. The stability augmentation system affected the tracking precision of the F-15 more than it did that of previous aircraft studied.

  12. Double Star Measurements at the Southern Sky with a 50 cm Reflector in 2016

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2017-10-01

    A 50 cm Ritchey-Chrétien reflector was used for recordings of double stars with a CCD webcam. Measurements of 95 pairs were obtained, mostly from "lucky images" and in some cases by speckle interferometry. The image scale was calibrated with reference systems from the recently published Gaia catalogue of precise position data. For several pairs, deviations from currently assumed orbits were found. Some images of noteworthy systems are also presented.

  13. Accurate, precise, and efficient theoretical methods to calculate anion-π interaction energies in model structures.

    PubMed

    Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei

    2015-01-13

    A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. 
The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta-GGA functionals for the present test set.

  14. Precise Penning trap measurements of double β-decay Q-values

    NASA Astrophysics Data System (ADS)

    Redshaw, M.; Brodeur, M.; Bollen, G.; Bustabad, S.; Eibach, M.; Gulyuz, K.; Izzo, C.; Lincoln, D. L.; Novario, S. J.; Ringle, R.; Sandler, R.; Schwarz, S.; Valverde, A. A.

    2015-10-01

    The double β-decay (ββ-decay) Q-value, defined as the mass difference between parent and daughter atoms, is an important parameter for both two-neutrino ββ-decay (2νββ) and neutrinoless ββ-decay (0νββ) experiments. The Q-value enters into the calculation of the phase space factors, which relate the measured ββ-decay half-life to the nuclear matrix element and, in the case of 0νββ, the effective Majorana mass of the neutrino. In addition, the Q-value defines the total kinetic energy of the two electrons emitted in 0νββ, corresponding to the location of the single peak that is the sought-after signature of 0νββ. Hence, it is essential to have a precise and accurate Q-value determination. Over the last decade, the Penning trap mass spectrometry community has made a significant effort to provide precise ββ-decay Q-value determinations. Here we report on recent measurements with the Low Energy Beam and Ion Trap (LEBIT) facility at the National Superconducting Cyclotron Laboratory (NSCL) of the 48Ca, 82Se, and 96Zr Q-values. These measurements complete the determination of ββ-decay Q-values for the 11 "best" candidates (those with Q > 2 MeV). We also report on a measurement of the 78Kr double electron capture (2EC) Q-value and discuss ongoing Penning trap measurements relating to ββ-decay and 2EC. Supported by NSF Contract No. PHY-1102511 and DOE Grant No. 03ER-41268.

  15. Combining FIA plot data with topographic variables: Are precise locations needed?

    Treesearch

    Stephen P. Prisley; Huei-Jin Wang; Philip J Radtke; John Coulston

    2009-01-01

    Plot data from the USFS FIA program could be combined with terrain variables to attempt to explain how terrain characteristics influence forest growth, species composition, productivity, fire behavior, wildlife habitat, and other phenomena. While some types of analyses using FIA data have been shown to be insensitive to precision of plot locations, it has been...

  16. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in the variability within the sample, descriptive data should be summarized with SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
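
    The distinction the abstract draws can be made concrete with a short sketch; the helper name and sample values are illustrative:

```python
import math
import statistics

def summarize(sample, z=1.96):
    """Contrast SD (spread of the data) with SEM (uncertainty of the mean)."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)          # dispersion of the data from the mean
    sem = sd / math.sqrt(n)                # precision of the mean estimate
    ci = (mean - z * sem, mean + z * sem)  # approximate 95% CI for the mean
    return mean, sd, sem, ci

mean, sd, sem, ci = summarize([4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3])
```

    Note that the SEM shrinks as the sample grows while the SD does not, which is why reporting SEM in place of SD understates the variability of the data themselves.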

  17. High precision locating control system based on VCM for Talbot lithography

    NASA Astrophysics Data System (ADS)

    Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song

    2016-10-01

    Aiming at the high-precision and high-efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we built a mathematical model of the VCM and analyzed its motion characteristics. A double-closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop was realized in the driver to achieve rapid following of the system current. The position loop was handled by a digital signal processor (DSP), with position feedback provided by high-precision linear scales. Feed-forward control and Proportion-Integration-Differentiation (PID) position-feedback control were applied to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrated that the performance of the Z-direction gantry was markedly improved, with high precision, quick response, strong real-time behavior, and easy extension to higher precision.
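
    The position-loop structure described above (feed-forward plus PID on the position error) can be sketched as follows; the class name, gains, and sample time are illustrative assumptions, not the paper's tuned implementation:

```python
class PositionLoop:
    """Feed-forward plus PID on position error, fixed sample time dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, feedforward=0.0):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulate error
        deriv = (err - self.prev_err) / self.dt     # rate of change of error
        self.prev_err = err
        return (feedforward + self.kp * err
                + self.ki * self.integral + self.kd * deriv)

loop = PositionLoop(kp=2.0, ki=0.0, kd=0.0, dt=0.001)
command = loop.update(setpoint=1.0, measured=0.0)  # purely proportional with these gains
```

    The feed-forward term supplies the expected drive for the commanded trajectory, so the PID terms only have to correct the residual dynamic lag.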

  18. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
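
    The recommended 64-bit IEEE format can be inspected directly; a minimal sketch (the function name is ours) that splits a double into its 1 sign bit, 11 exponent bits, and 52 mantissa bits:

```python
import struct

def decompose(x: float):
    """Split an IEEE 754 double into sign (1 bit), biased exponent
    (11 bits), and mantissa/fraction (52 bits)."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # 11 bits, biased by 1023
    mantissa = bits & ((1 << 52) - 1)     # 52 bits
    return sign, exponent, mantissa

sign, exponent, mantissa = decompose(1.0)
# 1.0 is stored as sign 0, biased exponent 1023, zero fraction
```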

  19. Reproducibility and reliability of the ankle-brachial index as assessed by vascular experts, family physicians and nurses.

    PubMed

    Holland-Letz, Tim; Endres, Heinz G; Biedermann, Stefanie; Mahn, Matthias; Kunert, Joachim; Groh, Sabine; Pittrow, David; von Bilderling, Peter; Sternitzky, Reinhardt; Diehm, Curt

    2007-05-01

    The reliability of ankle-brachial index (ABI) measurements performed by different observer groups in primary care has not yet been determined. The aims of the study were to provide precise estimates of all effects influencing the variability of the ABI (patients' individual variability, intra- and inter-observer variability), with particular focus on the performance of different observer groups. Using a partially balanced incomplete block design, 144 unselected individuals aged ≥ 65 years underwent double ABI measurements by one vascular surgeon or vascular physician, one family physician, and one nurse with training in Doppler sonography. Three groups comprising a total of 108 individuals were analyzed (only two with ABI < 0.90). Errors for two repeated measurements did not differ among the three observer groups (experts 8.5%, family physicians 7.7%, and nurses 7.5%; p = 0.39), and there was no relevant bias among observer groups. Intra-observer variability, expressed as standard deviation divided by the mean, was 8%, and inter-observer variability was 9%. In conclusion, reproducibility of the ABI measurement was good in this cohort of elderly patients, almost all of whom had values in the normal range. The mean error of 8-9% within or between observers is smaller than that of established screening measures. Since there were no differences among observers with different training backgrounds, our study confirms the appropriateness of ABI assessment for screening for peripheral arterial disease (PAD) and generalized atherosclerosis in the primary care setting. Given the importance of early detection and management of PAD, this diagnostic tool should be used routinely as a standard for PAD screening. Additional studies will be required to confirm our observations in patients with PAD of various severities.

  20. Quasi-Speckle Measurements of Close Double Stars With a CCD Camera

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2017-01-01

    CCD measurements of visual double stars have been an active area of amateur observing for several years now. However, most CCD measurements rely on "lucky imaging" (selecting a very small percentage of the best frames of a larger frame set so as to get the best "frozen" atmosphere for the image), a technique that has limitations with regard to how close the stars can be and still be cleanly resolved in the lucky image. In this paper, the author reports how using deconvolution stars in the analysis of close double stars can greatly enhance the quality of the autocorrelogram, leading to a more precise solution using speckle reduction software rather than lucky imaging.

  1. Prediction of radial breathing-like modes of double-walled carbon nanotubes with arbitrary chirality

    NASA Astrophysics Data System (ADS)

    Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad

    2014-10-01

    The radial breathing-like modes (RBLMs) of double-walled carbon nanotubes (DWCNTs) with arbitrary chirality are investigated with a simple analytical model. For this purpose, the DWCNT is modeled as two concentric elastic thin cylindrical shells coupled through van der Waals (vdW) forces between the adjacent tubes. A Lennard-Jones potential and a molecular mechanics model are used to calculate the vdW forces and to predict the mechanical properties, respectively. The validity of the theoretical results is confirmed through comparison with experimental results. Finally, a new approach is proposed to determine the diameters and the chiral indices of the inner and outer tubes of DWCNTs with high precision.

  2. Picometer Level Modeling of a Shared Vertex Double Corner Cube in the Space Interferometry Mission Kite Testbed

    NASA Technical Reports Server (NTRS)

    Kuan, Gary M.; Dekens, Frank G.

    2006-01-01

    The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.

  3. Doubling down on naturalness with a supersymmetric twin Higgs

    NASA Astrophysics Data System (ADS)

    Craig, Nathaniel; Howe, Kiel

    2014-03-01

    We show that naturalness of the weak scale can be comfortably reconciled with both LHC null results and observed Higgs properties provided the double protection of supersymmetry and the twin Higgs mechanism. This double protection radically alters conventional signs of naturalness at the LHC while respecting gauge coupling unification and precision electroweak limits. We find the measured Higgs mass, couplings, and percent-level naturalness of the weak scale are compatible with stops at ~ 3.5 TeV and higgsinos at ~ 1 TeV. The primary signs of naturalness in this scenario include modifications of Higgs couplings, a modest invisible Higgs width, resonant Higgs pair production, and an invisibly-decaying heavy Higgs.

  4. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal-Noise Reliability Measure Reflect This Precision?

    PubMed

    Staggs, Vincent S; Cramer, Emily

    2016-08-01

    Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital-acquired pressure ulcer rates and evaluate a standard signal-noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step-down, medical, surgical, and medical-surgical nursing units from 1,299 US hospitals were analyzed. Using beta-binomial models, we estimated between-unit variability (signal) and within-unit variability (noise) in annual unit pressure ulcer rates. Signal-noise reliability was computed as the ratio of between-unit variability to the total of between- and within-unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal-noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal-noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc.
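
    The signal-noise reliability measure described above is a simple variance ratio; a minimal sketch, where the variance components shown are hypothetical (in the study they come from beta-binomial model fits):

```python
def signal_noise_reliability(between_var, within_var):
    """Share of total variability attributable to true between-unit
    differences (signal) rather than sampling noise (within-unit)."""
    return between_var / (between_var + within_var)

r = signal_noise_reliability(between_var=0.3, within_var=0.7)  # hypothetical components
```

    A ratio near 1 means observed rate differences mostly reflect real quality differences; near 0, mostly noise.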

  5. STT Doubles with Large δM - Part VI: Cygnus Multiples

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Nanson, John

    2016-10-01

    The results of visual double star observing sessions suggested that STT doubles with large ΔM are harder to resolve than would be expected from the WDS catalog data. This might reflect a problem with expectations on the one hand, or indicate a need for new precise measurements on the other, so we decided to take a closer look at a selected sample of STT doubles and do some research. Among these objects we found three rather complex multiples in Cygnus of special interest, so we decided to write a separate report to leave more room to include the non-STT components as well. As with the other objects covered so far, several of the components show parameters quite different from the current WDS data.

  6. Control of DC gas flow in a single-stage double-inlet pulse tube cooler

    NASA Astrophysics Data System (ADS)

    Wang, C.; Thummes, G.; Heiden, C.

    The use of double-inlet mode in the pulse tube cooler opens up a possibility of DC gas flow circulating around the regenerator and pulse tube. Numerical analysis shows that the effects of DC flow in a single-stage pulse tube cooler differ in some respects from those in a 4 K pulse tube cooler. For highest cooler efficiency, the DC flow should be compensated to a small value, i.e. the ratio of DC flow to average AC flow at the regenerator inlet should be in the range -0.0013 to +0.00016. In this work, dual valves with reversed asymmetric geometries were used in the double-inlet bypass to control the DC flow. Experiments performed on a single-stage double-inlet pulse tube cooler verified that the cooler performance can be significantly improved by precisely controlling the DC flow.

  7. UAV low-altitude remote sensing for precision weed management

    USDA-ARS?s Scientific Manuscript database

    Precision weed management, an application of precision agriculture, accounts for within-field variability of weed infestation and herbicide damage. Unmanned aerial vehicles (UAVs) provide a unique platform for remote sensing of field crops. They are more efficient and flexible than manned agricultur...

  8. Precise time series photometry for the Kepler-2.0 mission

    NASA Astrophysics Data System (ADS)

    Aigrain, S.; Hodgkin, S. T.; Irwin, M. J.; Lewis, J. R.; Roberts, S. J.

    2015-03-01

    The recently approved NASA K2 mission has the potential to multiply by an order of magnitude the number of short-period transiting planets found by Kepler around bright and low-mass stars, and to revolutionize our understanding of stellar variability in open clusters. However, the data processing is made more challenging by the reduced pointing accuracy of the satellite, which has only two functioning reaction wheels. We present a new method to extract precise light curves from K2 data, combining list-driven, soft-edged aperture photometry with a star-by-star correction of systematic effects associated with the drift in the roll angle of the satellite about its boresight. The systematics are modelled simultaneously with the stars' intrinsic variability using a semiparametric Gaussian process model. We test this method on a week of data collected during an engineering test in 2014 January, perform checks to verify that our method does not alter intrinsic variability signals, and compute the precision as a function of magnitude on long-cadence (30 min) and planetary transit (2.5 h) time-scales. In both cases, we reach photometric precisions close to the precision reached during the nominal Kepler mission for stars fainter than 12th magnitude, and between 40 and 80 parts per million for brighter stars. These results confirm the bright prospects for planet detection and characterization, asteroseismology and stellar variability studies with K2. Finally, we perform a basic transit search on the light curves, detecting two bona fide transit-like events, seven detached eclipsing binaries and 13 classical variables.

  9. Practical sampling plans for Varroa destructor (Acari: Varroidae) in Apis mellifera (Hymenoptera: Apidae) colonies and apiaries.

    PubMed

    Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M

    2010-08-01

    The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of the numbers of adults and pupae and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
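
    The colony-level rule of thumb above (count mites dislodged from a ~300-bee sample, then double to include pupae) reduces to simple arithmetic; a sketch with an illustrative mite count:

```python
def colony_mite_density(mites_dislodged, bees_sampled=300):
    """Mites per 100 adult bees from one ~300-bee sample, and the
    doubled figure approximating density on adults and pupae combined."""
    per_100_adults = 100.0 * mites_dislodged / bees_sampled
    per_100_colony = 2.0 * per_100_adults  # the abstract's doubling rule
    return per_100_adults, per_100_colony

adults_per100, colony_per100 = colony_mite_density(9)
```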

  10. Performance of a newly designed continuous soot monitoring system (COSMOS).

    PubMed

    Miyazaki, Yuzo; Kondo, Yutaka; Sahu, Lokesh K; Imaru, Junichi; Fukushima, Nobuhiko; Kano, Minoru

    2008-10-01

    We designed a continuous soot monitoring system (COSMOS) for fully automated, high-sensitivity, continuous measurement of light absorption by black carbon (BC) aerosols. The instrument monitors changes in transmittance across an automatically advancing quartz fiber filter tape using an LED at a 565 nm wavelength. To achieve measurements with high sensitivity and a lower detectable light absorption coefficient, COSMOS uses a double-convex lens and optical bundle pipes to maintain high light intensity and signal data are obtained at 1000 Hz. In addition, sampling flow rate and optical unit temperature are actively controlled. The inlet line for COSMOS is heated to 400 degrees C to effectively volatilize non-refractory aerosol components that are internally mixed with BC. In its current form, COSMOS provides BC light absorption measurements with a detection limit of 0.45 Mm(-1) (0.045 microg m(-3) for soot) for 10 min. The unit-to-unit variability is estimated to be within +/- 1%, demonstrating its high reproducibility. The absorption coefficients determined by COSMOS agreed with those by a particle soot absorption photometer (PSAP) to within 1% (r2 = 0.97). The precision (+/- 0.60 Mm(-1)) for 10 min integrated data was better than that of PSAP and an aethalometer under our operating conditions. These results showed that COSMOS achieved both an improved detection limit and higher precision for the filter-based light absorption measurements of BC compared to the existing methods.
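
    Filter-based instruments such as COSMOS infer the light-absorption coefficient from the decline in filter transmittance; a generic Beer-Lambert-style sketch (the function name and parameter values are illustrative, not the instrument's full calibration, which also involves empirical correction factors):

```python
import math

def filter_absorption_coefficient(spot_area_m2, volume_m3, tr_start, tr_end):
    """Absorption coefficient (m^-1) from the transmittance drop across a
    filter spot over one sampling interval, for sampled air volume V."""
    return (spot_area_m2 / volume_m3) * math.log(tr_start / tr_end)

b_abs = filter_absorption_coefficient(spot_area_m2=2e-6, volume_m3=1e-2,
                                      tr_start=1.00, tr_end=0.95)
```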

  11. Potential for long-term, high-frequency, high-precision methane isotope measurements to improve UK emissions estimates

    NASA Astrophysics Data System (ADS)

    Rennick, Chris; Bausi, Francesco; Arnold, Tim

    2017-04-01

    On the global scale, methane (CH4) concentrations have more than doubled over the last 150 years, and their contribution to the enhanced greenhouse effect is almost half of that due to the increase in carbon dioxide (CO2) over the same period. Microbial, fossil fuel, biomass burning and landfill sources dominate methane emissions, with differing annual variabilities; in the UK, however, mixing-ratio measurements from a tall tower network and regional-scale inversion modelling have thus far been unable to disaggregate emissions from specific source categories with any significant certainty. Measurement of methane isotopologue ratios will provide the additional information needed for more robust sector attribution, which will be important for directing policy action. Here we explore the potential for isotope ratio measurements to improve the interpretation of atmospheric mixing ratios beyond calculation of total UK emissions, and describe current analytical work at the National Physical Laboratory that will realise deployment of such measurements. We simulate isotopic variations at the four UK greenhouse gas tall tower network sites to understand where deployment of the first isotope analyser would be best situated, and calculate the levels of precision needed in both δ-13C and δ-D in order to detect particular emission scenarios. Spectroscopic measurement in the infrared by quantum cascade laser (QCL) absorption is a well-established technique for quantifying the mixing ratios of trace species in atmospheric samples and, as was demonstrated in 2016, high-precision measurements are possible if it is coupled to a suitable preconcentrator. The preconcentration system under development at NPL is designed to make the highest-precision measurements yet of the standard isotope ratios, via a new large-volume cryogenic trap design and controlled thermal desorption into a QCL spectrometer.
Finally we explore the potential for the measurement of clumped isotopes at high frequency and precision. The doubly-substituted 13CH3D isotopologue is a tracer for methane formed at geological temperatures, and will provide additional information for identification of these sources.
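
    The δ values discussed above follow the standard per-mil convention, δ = (R_sample/R_standard − 1) × 1000. A minimal sketch; the sample ratio below is hypothetical and chosen to illustrate a microbial-like signature, not a value from this work:

```python
def delta_permil(r_sample, r_standard):
    """Convert an isotope ratio to delta notation in per mil (‰)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical example: 13C/12C ratio of a sample against the VPDB standard
R_VPDB = 0.0111802      # accepted 13C/12C ratio of the VPDB standard
r_sample = 0.0106212    # hypothetical measured sample ratio
d13C = delta_permil(r_sample, R_VPDB)   # ≈ -50 ‰, a depleted, microbial-like value
```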

  12. Isolation and characterization of eight novel microsatellite loci in the double-crested cormorant (Phalacrocorax auritus)

    USGS Publications Warehouse

    Mercer, Dacey; Haig, Susan; Mullins, Thomas

    2010-01-01

    We describe the isolation and characterization of eight microsatellite loci from the double-crested cormorant (Phalacrocorax auritus). Genetic variability was assessed using 60 individuals from three populations. All loci were variable with the number of alleles ranging from two to 17 per locus, and observed heterozygosity varying from 0.05 to 0.89. No loci showed signs of linkage disequilibrium and all loci conformed to Hardy–Weinberg equilibrium frequencies. Further, all loci amplified and were polymorphic in two related Phalacrocorax species. These loci should prove useful for population genetic studies of the double-crested cormorant and other pelecaniform species.
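
    The reported heterozygosity and Hardy–Weinberg conformance rest on standard population-genetics formulas. A minimal sketch with hypothetical allele counts, not the cormorant data:

```python
def expected_heterozygosity(allele_counts):
    """Hardy-Weinberg expected heterozygosity: He = 1 - sum(p_i^2)."""
    total = sum(allele_counts)
    return 1.0 - sum((n / total) ** 2 for n in allele_counts)

def observed_heterozygosity(genotypes):
    """Fraction of individuals carrying two different alleles."""
    het = sum(1 for a, b in genotypes if a != b)
    return het / len(genotypes)

# Hypothetical locus with three alleles counted across a sample of chromosomes
counts = [50, 30, 20]
he = expected_heterozygosity(counts)    # 1 - (0.5^2 + 0.3^2 + 0.2^2) = 0.62
```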

  13. Design and study on optic fiber sensor detection system

    NASA Astrophysics Data System (ADS)

    Jiang, Xuemei; Liu, Quan; Liang, Xiaoyu; Lin, Haiyan

    2005-11-01

    With the development of industry and agriculture, environmental pollution has become more and more serious, and poisonous gases are among the most important pollution sources. Gases such as carbon monoxide, sulfureted hydrogen, sulfur dioxide, methane and acetylene seriously threaten normal human life and production, especially today when industry and manufacturing develop at full speed. Acetylene is a gas with very lively chemical properties, extremely apt to burn, decompose and explode, and it is the most destructive among these gases. Compared with other inflammable and explosive gases, the explosion range of acetylene is wider. Therefore, monitoring acetylene pollution sources on site and in real time, and grasping the occurrence and development of pollution promptly, is of great importance. To address these problems, an optical fiber detection system for acetylene gas based on its characteristic spectral absorption is presented in this paper; the system includes a reference channel and performs on-line, real-time detection. In order to eliminate the effect of other factors on measurement precision, double light sources, double light paths and double gas cells are used in this system. Because a double-wavelength compensation method is employed, the system eliminates disturbances in the optical paths, solves the instability problem and greatly enhances the measurement precision. Some experimental results are presented at the end of this paper.
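
    The double-wavelength compensation idea can be sketched with the Beer–Lambert law: a second channel at an off-absorption-line wavelength shares all common losses (bending, connectors, source drift) with the measurement channel, so their ratio isolates the gas absorption. All numbers below are hypothetical:

```python
import math

def gas_concentration(i_meas, i_ref, i0_meas, i0_ref, alpha, path_len):
    """
    Dual-wavelength compensation sketch. i_meas / i_ref are received
    intensities at the on-line and off-line wavelengths; their ratio cancels
    attenuation common to both channels, leaving only the gas absorption.
    Beer-Lambert: I = I0 * exp(-alpha * C * L).
    """
    transmittance = (i_meas / i0_meas) / (i_ref / i0_ref)
    return -math.log(transmittance) / (alpha * path_len)

# Hypothetical case: a 20% common drift affects both channels equally
C_true, alpha, L = 0.01, 5.0, 10.0
drift = 0.8
i_meas = drift * math.exp(-alpha * C_true * L)   # on-line channel, I0 = 1
i_ref = drift * 1.0                              # off-line channel, no absorption
C_est = gas_concentration(i_meas, i_ref, 1.0, 1.0, alpha, L)  # recovers 0.01
```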

  14. AN EFFICIENT, COMPACT, AND VERSATILE FIBER DOUBLE SCRAMBLER FOR HIGH PRECISION RADIAL VELOCITY INSTRUMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halverson, Samuel; Roy, Arpita; Mahadevan, Suvrath

    2015-06-10

    We present the design and test results of a compact optical fiber double-scrambler for high-resolution Doppler radial velocity instruments. This device consists of a single optic: a high-index n ∼ 2 ball lens that exchanges the near and far fields between two fibers. When used in conjunction with octagonal fibers, this device yields very high scrambling gains (SGs) and greatly desensitizes the fiber output from any input illumination variations, thereby stabilizing the instrument profile of the spectrograph and improving the Doppler measurement precision. The system is also highly insensitive to input pupil variations, isolating the spectrograph from telescope illumination variations and seeing changes. By selecting the appropriate glass and lens diameter the highest efficiency is achieved when the fibers are practically in contact with the lens surface, greatly simplifying the alignment process when compared to classical double-scrambler systems. This prototype double-scrambler has demonstrated significant performance gains over previous systems, achieving SGs in excess of 10,000 with a throughput of ∼87% using uncoated Polymicro octagonal fibers. Adding a circular fiber to the fiber train further increases the SG to >20,000, limited by laboratory measurement error. While this fiber system is designed for the Habitable-zone Planet Finder spectrograph, it is more generally applicable to other instruments in the visible and near-infrared. Given the simplicity and low cost, this fiber scrambler could also easily be multiplexed for large multi-object instruments.
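
    A commonly used definition of scrambling gain (hedged; the paper may use a slightly different normalization) compares the normalized displacement of the input spot with the normalized shift of the output near-field centroid:

```python
def scrambling_gain(input_shift, fiber_diam_in, output_centroid_shift, fiber_diam_out):
    """
    One common definition: SG = (d_in / D_in) / (d_out / D_out), the ratio of
    the normalized input-spot displacement to the normalized shift of the
    output near-field centroid. Larger SG means better scrambling.
    """
    return (input_shift / fiber_diam_in) / (output_centroid_shift / fiber_diam_out)

# Hypothetical: moving the input spot by 10% of the core diameter moves the
# output centroid by only 1e-5 of the core diameter -> SG = 10,000
sg = scrambling_gain(10.0, 100.0, 0.001, 100.0)
```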

  15. GPS common-view time transfer

    NASA Technical Reports Server (NTRS)

    Lewandowski, W.

    1994-01-01

    The introduction of the GPS common-view method at the beginning of the 1980's led to an immediate and dramatic improvement of international time comparisons. Since then, further progress brought the precision and accuracy of GPS common-view intercontinental time transfer from tens of nanoseconds to a few nanoseconds, even with SA activated. This achievement was made possible by the use of the following: ultra-precise ground antenna coordinates, post-processed precise ephemerides, double-frequency measurements of ionosphere, and appropriate international coordination and standardization. This paper reviews developments and applications of the GPS common-view method during the last decade and comments on possible future improvements whose objective is to attain sub-nanosecond uncertainty.
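
    The common-view principle itself is simple arithmetic: both laboratories measure their local clock against the same satellite at the same scheduled epoch, and differencing the readings cancels the satellite clock error (including SA dither) exactly, while correlated path errors cancel partially. A minimal sketch with hypothetical readings:

```python
def common_view_difference(clock_a_minus_gps, clock_b_minus_gps):
    """
    GPS common view: labs A and B track the same satellite simultaneously.
    (A - GPS) - (B - GPS) = A - B, so the satellite clock error drops out.
    """
    return clock_a_minus_gps - clock_b_minus_gps

# Hypothetical readings in nanoseconds; both contain the same satellite
# clock offset, which cancels in the difference
a = 350.0   # clock A minus GPS time
b = 275.0   # clock B minus GPS time
diff = common_view_difference(a, b)   # 75.0 ns = A - B
```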

  16. Method for measuring retardation of infrared wave-plate by modulated-polarized visible light

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Song, Feijun

    2012-11-01

    A new method for precisely measuring the optical phase retardation of wave-plates in the infrared spectral region using modulated-polarized visible light is presented. An electro-optic modulator is used to accurately determine the zero point from the frequency-doubled signal of the modulated-polarized light. A Babinet-Soleil compensator is employed for phase-delay compensation. Based on this method, an instrument is set up to measure the retardations of infrared wave-plates with a visible-region laser. Measurement results with high accuracy and good repeatability are obtained by simple calculation; the repetitive precision is within 0.3%.

  17. Development of double-pair double difference location algorithm and its application to the regular earthquakes and non-volcanic tremors

    NASA Astrophysics Data System (ADS)

    Guo, H.; Zhang, H.

    2016-12-01

    High-precision earthquake relocation is a central task for monitoring earthquakes and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations, reducing the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, to relocate non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To combine the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin-time and station-correction terms from the inversion system and cancels out the effects of velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method to northern California regular earthquakes to validate its performance. Among the three DD location methods, the new double-pair DD method determines more accurate relative locations, while the station-pair DD method better improves absolute locations. Thus, we further propose a location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick first arrivals and derive WCC event-pair differential times, so the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered because of their low-precision relative locations. 
Because double-pair data can be constructed directly from station-pair data, the double-pair DD method can also be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared to previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
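
    The cancellation at the heart of the double-pair method can be sketched directly; the arrival times below are hypothetical:

```python
def double_pair_time(t, ev_a, ev_b, st_i, st_j):
    """
    Double-pair differential time for events (a, b) at stations (i, j):
    dt = (t[a][i] - t[b][i]) - (t[a][j] - t[b][j]).
    Event origin-time errors and static station terms both cancel.
    """
    return (t[ev_a][st_i] - t[ev_b][st_i]) - (t[ev_a][st_j] - t[ev_b][st_j])

# Hypothetical arrival times contaminated by an origin-time error (+2.0 s on
# event "a") and a station term (+0.5 s at station "i"); both cancel below.
t = {
    "a": {"i": 10.0 + 2.0 + 0.5, "j": 11.0 + 2.0},
    "b": {"i": 10.2 + 0.5,       "j": 11.3},
}
dt = double_pair_time(t, "a", "b", "i", "j")  # (10.0-10.2)-(11.0-11.3) = 0.1
```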

  18. Sensor-based precision fertilization for field crops

    USDA-ARS?s Scientific Manuscript database

    From the development of the first viable variable-rate fertilizer systems in the upper Midwest USA, precision agriculture is now approaching three decades old. Early precision fertilization practice relied on laboratory analysis of soil samples collected on a spatial pattern to define the nutrient-s...

  19. Singular Stokes-polarimetry as new technique for metrology and inspection of polarized speckle fields

    NASA Astrophysics Data System (ADS)

    Soskin, Marat S.; Denisenko, Vladimir G.; Egorov, Roman I.

    2004-08-01

    Polarimetry is an effective technique for characterizing polarized light fields. It was shown recently that the most complete "fingerprint" of a light field of arbitrary complexity is its network of polarization singularities: C points, where the polarization is circular, and L lines, where it is linear. The new singular Stokes-polarimetry (SSP) was elaborated for such measurements. It allows the azimuth, eccentricity and handedness of the elliptical vibration to be determined in each pixel of the receiving CCD camera, over megapixel arrays. It is based on precise measurement of the full set of Stokes parameters with the help of high-quality analyzers and quarter-wave plates with λ/500 precision and 4" adjustment. The matrices of the obtained data are processed in a PC by special programs to find the positions of polarization singularities and other topological features of interest. The developed SSP technique was successfully validated by measuring the topology of polarized speckle fields produced by multimode "photonic-crystal" fibers, double-side-rubbed polymer films and biomedical samples. Each singularity is localized with a precision of up to +/- 1 pixel, compared with the 500-pixel dimensions of a typical speckle. It was confirmed that the network of topological features appearing in a polarized light field after its interaction with a specimen under inspection is an exact individual "passport" for its characterization. Therefore, SSP can be used for smart-materials characterization. The presented data show that the SSP technique is promising for local analysis of the properties and defects of thin films, liquid-crystal cells, optical elements, biological samples, etc. It is able to discover heterogeneities and defects which essentially determine the merits of specimens under inspection and cannot be checked by usual polarimetry methods. 
The very high sensitivity of the positions and network of polarization singularities to any change of sample position or deformation opens quite new possibilities for sensing deformations and displacements of inspected elements in the sub-micron range.
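
    The per-pixel quantities SSP extracts follow from the standard Stokes-parameter relations; a minimal sketch assuming fully polarized light (sign conventions for handedness vary between texts):

```python
import math

def ellipse_params(s0, s1, s2, s3):
    """
    Polarization-ellipse parameters from Stokes parameters (fully polarized):
    azimuth psi = 0.5 * atan2(S2, S1); ellipticity angle chi = 0.5 * asin(S3/S0).
    C points (circular polarization) have S1 = S2 = 0, so psi is undefined;
    L lines (linear polarization) have S3 = 0, so handedness is undefined.
    """
    psi = 0.5 * math.atan2(s2, s1)
    chi = 0.5 * math.asin(s3 / s0)
    handedness = "right" if s3 > 0 else "left" if s3 < 0 else "linear"
    return psi, chi, handedness

# 45-degree linear polarization: S = (1, 0, 1, 0)
psi, chi, hand = ellipse_params(1.0, 0.0, 1.0, 0.0)   # psi = pi/4, chi = 0
```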

  20. Use of genome editing tools in human stem cell-based disease modeling and precision medicine.

    PubMed

    Wei, Yu-da; Li, Shuang; Liu, Gai-gai; Zhang, Yong-xian; Ding, Qiu-rong

    2015-10-01

    Precision medicine emerges as a new approach that takes into account individual variability. The successful conduct of precision medicine requires the use of precise disease models. Human pluripotent stem cells (hPSCs), as well as adult stem cells, can be differentiated into a variety of human somatic cell types that can be used for research and drug screening. The development of genome editing technology over the past few years, especially the CRISPR/Cas system, has made it feasible to precisely and efficiently edit the genetic background. Therefore, disease modeling by using a combination of human stem cells and genome editing technology has offered a new platform to generate "personalized" disease models, which allow the study of the contribution of individual genetic variabilities to disease progression and the development of precise treatments. In this review, recent advances in the use of genome editing in human stem cells and the generation of stem cell models for rare diseases and cancers are discussed.

  1. State Space Model with hidden variables for reconstruction of gene regulatory networks.

    PubMed

    Wu, Xi; Li, Peng; Wang, Nan; Gong, Ping; Perkins, Edward J; Deng, Youping; Zhang, Chaoyang

    2011-01-01

    State Space Model (SSM) is a relatively new approach to inferring gene regulatory networks. It requires less computational time than Dynamic Bayesian Networks (DBN). There are two types of variables in the linear SSM, observed variables and hidden variables. SSM uses an iterative method, namely Expectation-Maximization, to infer regulatory relationships from microarray datasets. The hidden variables cannot be directly observed from experiments. How to determine the number of hidden variables has a significant impact on the accuracy of network inference. In this study, we used SSM to infer gene regulatory networks (GRNs) from synthetic time series datasets, investigated Bayesian Information Criterion (BIC) and Principal Component Analysis (PCA) approaches to determining the number of hidden variables in SSM, and evaluated the performance of SSM in comparison with DBN. True GRNs and synthetic gene expression datasets were generated using GeneNetWeaver. Both DBN and linear SSM were used to infer GRNs from the synthetic datasets. The inferred networks were compared with the true networks. Our results show that inference precision varied with the number of hidden variables. For some regulatory networks, the inference precision of DBN was higher, but SSM performed better in other cases. Although the overall performance of the two approaches is comparable, SSM is much faster and capable of inferring much larger networks than DBN. This study provides useful information for handling the hidden variables and improving inference precision.
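
    The BIC-based selection of the hidden-variable count can be sketched generically; the log-likelihoods and parameter counts below are hypothetical stand-ins for EM fits at each candidate dimension:

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion: -2*lnL + k*ln(n); lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

def best_hidden_dim(candidates, n_samples):
    """
    Pick the number of hidden variables minimizing BIC. `candidates` maps
    dimension -> (log-likelihood, parameter count); in practice these come
    from fitting the SSM by Expectation-Maximization at each dimension.
    """
    return min(candidates, key=lambda d: bic(*candidates[d], n_samples))

# Hypothetical EM fits: likelihood improves with dimension, but the
# parameter penalty eventually dominates
fits = {1: (-520.0, 12), 2: (-470.0, 20), 3: (-462.0, 30), 4: (-460.0, 42)}
best = best_hidden_dim(fits, n_samples=100)   # 2
```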

  2. Numerical computation of spherical harmonics of arbitrary degree and order by extending exponent of floating point numbers

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2012-04-01

    By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without the underflow problem. The new method enables us to evaluate ALFs of extremely high degree, such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of spherical harmonic synthesis and/or analysis of arbitrary degree and order.
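
    The exponent-extension idea can be sketched as follows. This is not the authors' implementation; the radix 2^960 is one plausible "suitable large power of 2", and a production version would keep the fraction in a narrower band so products cannot overflow:

```python
import math

# A value is stored as a pair (x, e) representing x * B**e, with B a large
# power of two. Renormalizing keeps x representable, so recursions that
# would underflow in plain doubles (min normal ~2.2e-308) remain exact in form.
BIG = 2.0 ** 960     # radix B (assumption for this sketch)
BIGI = 2.0 ** -960   # 1/B

def normalize(x, e):
    """Renormalize so the fraction stays within [1/B, B)."""
    if x == 0.0:
        return 0.0, 0
    while abs(x) >= BIG:
        x, e = x * BIGI, e + 1
    while abs(x) < BIGI:
        x, e = x * BIG, e - 1
    return x, e

def xmul(a, b):
    """Multiply two extended numbers (xa, ea) * (xb, eb)."""
    (xa, ea), (xb, eb) = a, b
    return normalize(xa * xb, ea + eb)

# A product of 100 factors of 1e-10 = 1e-1000, far below double underflow:
acc = (1.0, 0)
for _ in range(100):
    acc = xmul(acc, (1e-10, 0))
x, e = acc
log10_value = math.log10(abs(x)) + e * 960 * math.log10(2)   # ≈ -1000
```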

  3. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.

  4. Precise automatic differential stellar photometry

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.; Genet, Russell M.; Boyd, Louis J.; Borucki, William J.; Lockwood, G. Wesley

    1991-01-01

    The factors limiting the precision of differential stellar photometry are reviewed. Errors due to variable atmospheric extinction can be reduced to below 0.001 mag at good sites by utilizing the speed of robotic telescopes. Existing photometric systems produce aliasing errors, which are several millimagnitudes in general but may be reduced to about a millimagnitude in special circumstances. Conventional differential photometry neglects several other important effects, which are discussed in detail. If all of these are properly handled, it appears possible to do differential photometry of variable stars with an overall precision of 0.001 mag with ground based robotic telescopes.

  5. Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope.

    PubMed

    Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo

    2018-01-17

    Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detections at a sub-second exposure time. We developed a method to perform the interference experiment by using an asymmetric double-slit fabricated by a focused ion beam instrument and by operating the microscope under a "pre-Fraunhofer" condition, different from the Fraunhofer condition of conventional double-slit experiments. Here, pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high-dose for calculation of electron probability distribution and low-dose for each single electron distribution. Finally, we exemplified the distribution of single electrons by color-coding according to the above three types of experiments as a composite image.

  6. The climate of HD 189733b from fourteen transits and eclipses measured by Spitzer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agol, E.; /Washington U., Seattle, Astron. Dept. /Santa Barbara, KITP /UC, Santa Barbara; Cowan, Nicolas B.

    We present observations of six transits and six eclipses of the transiting planet system HD 189733 taken with the Spitzer Space Telescope IRAC camera at 8 microns, as well as a re-analysis of previously published data. We use several novel techniques in our data analysis, the most important of which is a new correction for the detector 'ramp' variation with a double-exponential function, which performs better and is a better physical model for this detector variation. Our main scientific findings are: (1) an upper limit on the variability of the day-side planet flux of 2.7% (68% confidence); (2) the most precise set of transit times measured for a transiting planet, with an average accuracy of 3 seconds; (3) a lack of transit-timing variations, excluding the presence of second planets in this system above 20% of the mass of Mars in low-order mean-motion resonance at 95% confidence; (4) a confirmation of the planet's phase variation, finding the night side is 64% as bright as the day side, as well as an upper limit on the night-side variability of 17% (68% confidence); (5) a better correction for stellar variability at 8 microns, causing the phase function to peak 3.5 hours before secondary eclipse, confirming that the advection and radiation timescales are comparable at the 8 micron photosphere; (6) variation in the depth of transit, which possibly implies variations in the surface brightness of the portion of the star occulted by the planet, posing a fundamental limit on non-simultaneous multi-wavelength transit absorption measurements of planet atmospheres; (7) a measurement of the infrared limb-darkening of the star, which is in good agreement with stellar atmosphere models; (8) an offset in the times of secondary eclipse of 69 seconds, which is mostly accounted for by a 31 second light travel time delay and a 33 second delay due to the shift of ingress and egress by the planet hot spot; this confirms that the phase variation is due to an offset hot spot on 
the planet; (9) a retraction of the claimed eccentricity of this system due to the offset of secondary eclipse, which is now just an upper limit; and (10) high precision measurements of the parameters of this system. These results were enabled by the exquisite photometric precision of the Spitzer IRAC camera; for repeat observations the scatter is less than 0.35 mmag over the 590 day time scale of our observations after decorrelating with detector parameters.

  7. Causal diagrams and multivariate analysis II: precision work.

    PubMed

    Jupiter, Daniel C

    2014-01-01

    In this Investigators' Corner, I continue my discussion of when and why we researchers should include variables in multivariate regression. My examination focuses on studies comparing treatment groups and situations for which we can either exclude variables from multivariate analyses or include them for reasons of precision. Copyright © 2014 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  8. Double axis, two-crystal x-ray spectrometer.

    PubMed

    Erez, G; Kimhi, D; Livnat, A

    1978-05-01

    A two-crystal double-axis x-ray spectrometer, capable of goniometric accuracy on the order of 0.1", has been developed. Some of its unique design features are presented. These include (1) a modified commercial thrust bearing which furnishes a precise, full-circle θ:2θ coupling, (2) a new tangent drive system design in which a considerable reduction of the lead screw effective pitch is achieved, and (3) an automatic step scanning control which eliminates most of the mechanical deficiencies of the tangent drive by directly reading the tangent arm displacement.

  9. Thermomechanical CSM analysis of a superheater tube in transient state

    NASA Astrophysics Data System (ADS)

    Taler, Dawid; Madejski, Paweł

    2011-12-01

    The paper presents a thermomechanical computational solid mechanics (CSM) analysis of a "double omega" pipe used in the steam superheaters of circulating fluidized bed (CFB) boilers. The complex cross-section shape of the "double omega" tubes requires more precise analysis in order to prevent failure resulting from excessive temperature and thermal stresses. The results have been obtained using the finite volume method for the transient state of the superheater. The calculation was carried out for a section of pipe made of low-alloy steel.

  10. Longitudinal Double-Spin Asymmetry for Inclusive Jet Production in Polarized p+p Collisions at √s = 200 GeV

    NASA Astrophysics Data System (ADS)

    Abelev, B. I.; Aggarwal, M. M.; Ahammed, Z.; Anderson, B. D.; Arkhipkin, D.; Averichev, G. S.; Bai, Y.; Balewski, J.; Barannikova, O.; Barnby, L. S.; Baudot, J.; Baumgart, S.; Belaga, V. V.; Bellingeri-Laurikainen, A.; Bellwied, R.; Benedosso, F.; Betts, R. R.; Bhardwaj, S.; Bhasin, A.; Bhati, A. K.; Bichsel, H.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Blyth, S.-L.; Bombara, M.; Bonner, B. E.; Botje, M.; Bouchet, J.; Brandin, A. V.; Burton, T. P.; Bystersky, M.; Cai, X. Z.; Caines, H.; Calderón de La Barca Sánchez, M.; Callner, J.; Catu, O.; Cebra, D.; Cervantes, M. C.; Chajecki, Z.; Chaloupka, P.; Chattopadhyay, S.; Chen, H. F.; Chen, J. H.; Chen, J. Y.; Cheng, J.; Cherney, M.; Chikanian, A.; Christie, W.; Chung, S. U.; Clarke, R. F.; Codrington, M. J. M.; Coffin, J. P.; Cormier, T. M.; Cosentino, M. R.; Cramer, J. G.; Crawford, H. J.; Das, D.; Dash, S.; Daugherity, M.; de Moura, M. M.; Dedovich, T. G.; Dephillips, M.; Derevschikov, A. A.; Didenko, L.; Dietel, T.; Djawotho, P.; Dogra, S. M.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, F.; Dunin, V. B.; Dunlop, J. C.; Dutta Mazumdar, M. R.; Edwards, W. R.; Efimov, L. G.; Elhalhuli, E.; Emelianov, V.; Engelage, J.; Eppley, G.; Erazmus, B.; Estienne, M.; Fachini, P.; Fatemi, R.; Fedorisin, J.; Feng, A.; Filip, P.; Finch, E.; Fine, V.; Fisyak, Y.; Fu, J.; Gagliardi, C. A.; Gaillard, L.; Ganti, M. S.; Garcia-Solis, E.; Ghazikhanian, V.; Ghosh, P.; Gorbunov, Y. N.; Gos, H.; Grebenyuk, O.; Grosnick, D.; Grube, B.; Guertin, S. M.; Guimaraes, K. S. F. F.; Gupta, A.; Gupta, N.; Haag, B.; Hallman, T. J.; Hamed, A.; Harris, J. W.; He, W.; Heinz, M.; Henry, T. W.; Heppelmann, S.; Hippolyte, B.; Hirsch, A.; Hjort, E.; Hoffman, A. M.; Hoffmann, G. W.; Hofman, D. J.; Hollis, R. S.; Horner, M. J.; Huang, H. Z.; Hughes, E. W.; Humanic, T. J.; Igo, G.; Iordanova, A.; Jacobs, P.; Jacobs, W. W.; Jakl, P.; Jones, P. G.; Judd, E. 
G.; Kabana, S.; Kang, K.; Kapitan, J.; Kaplan, M.; Keane, D.; Kechechyan, A.; Kettler, D.; Khodyrev, V. Yu.; Kiryluk, J.; Kisiel, A.; Kislov, E. M.; Klein, S. R.; Knospe, A. G.; Kocoloski, A.; Koetke, D. D.; Kollegger, T.; Kopytine, M.; Kotchenda, L.; Kouchpil, V.; Kowalik, K. L.; Kravtsov, P.; Kravtsov, V. I.; Krueger, K.; Kuhn, C.; Kulikov, A. I.; Kumar, A.; Kurnadi, P.; Kuznetsov, A. A.; Lamont, M. A. C.; Landgraf, J. M.; Lange, S.; Lapointe, S.; Laue, F.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, C.-H.; Lehocka, S.; Levine, M. J.; Li, C.; Li, Q.; Li, Y.; Lin, G.; Lin, X.; Lindenbaum, S. J.; Lisa, M. A.; Liu, F.; Liu, H.; Liu, J.; Liu, L.; Ljubicic, T.; Llope, W. J.; Longacre, R. S.; Love, W. A.; Lu, Y.; Ludlam, T.; Lynn, D.; Ma, G. L.; Ma, J. G.; Ma, Y. G.; Mahapatra, D. P.; Majka, R.; Mangotra, L. K.; Manweiler, R.; Margetis, S.; Markert, C.; Martin, L.; Matis, H. S.; Matulenko, Yu. A.; McShane, T. S.; Meschanin, A.; Millane, J.; Miller, M. L.; Minaev, N. G.; Mioduszewski, S.; Mischke, A.; Mitchell, J.; Mohanty, B.; Morozov, D. A.; Munhoz, M. G.; Nandi, B. K.; Nattrass, C.; Nayak, T. K.; Nelson, J. M.; Nepali, C.; Netrakanti, P. K.; Nogach, L. V.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Okorokov, V.; Olson, D.; Pachr, M.; Pal, S. K.; Panebratsev, Y.; Pavlinov, A. I.; Pawlak, T.; Peitzmann, T.; Perevoztchikov, V.; Perkins, C.; Peryt, W.; Phatak, S. C.; Planinic, M.; Pluta, J.; Poljak, N.; Porile, N.; Poskanzer, A. M.; Potekhin, M.; Potrebenikova, E.; Potukuchi, B. V. K. S.; Prindle, D.; Pruneau, C.; Pruthi, N. K.; Putschke, J.; Qattan, I. A.; Raniwala, R.; Raniwala, S.; Ray, R. L.; Relyea, D.; Ridiger, A.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Rose, A.; Roy, C.; Ruan, L.; Russcher, M. J.; Sahoo, R.; Sakrejda, I.; Sakuma, T.; Salur, S.; Sandweiss, J.; Sarsour, M.; Sazhin, P. S.; Schambach, J.; Scharenberg, R. P.; Schmitz, N.; Seger, J.; Selyuzhenkov, I.; Seyboth, P.; Shabetai, A.; Shahaliev, E.; Shao, M.; Sharma, M.; Shen, W. 
Q.; Shimanskiy, S. S.; Sichtermann, E. P.; Simon, F.; Singaraju, R. N.; Skoby, M. J.; Smirnov, N.; Snellings, R.; Sorensen, P.; Sowinski, J.; Speltz, J.; Spinka, H. M.; Srivastava, B.; Stadnik, A.; Stanislaus, T. D. S.; Staszak, D.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Suaide, A. A. P.; Suarez, M. C.; Subba, N. L.; Sumbera, M.; Sun, X. M.; Sun, Z.; Surrow, B.; Symons, T. J. M.; Szanto de Toledo, A.; Takahashi, J.; Tang, A. H.; Tarnowsky, T.; Thomas, J. H.; Timmins, A. R.; Timoshenko, S.; Tokarev, M.; Trainor, T. A.; Tram, V. N.; Trentalange, S.; Tribble, R. E.; Tsai, O. D.; Ulery, J.; Ullrich, T.; Underwood, D. G.; van Buren, G.; van der Kolk, N.; van Leeuwen, M.; Vander Molen, A. M.; Varma, R.; Vasilevski, I. M.; Vasiliev, A. N.; Vernet, R.; Vigdor, S. E.; Viyogi, Y. P.; Vokal, S.; Voloshin, S. A.; Wada, M.; Waggoner, W. T.; Wang, F.; Wang, G.; Wang, J. S.; Wang, X. L.; Wang, Y.; Webb, J. C.; Westfall, G. D.; Whitten, C., Jr.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, J.; Wu, Y.; Xu, N.; Xu, Q. H.; Xu, Z.; Yepes, P.; Yoo, I.-K.; Yue, Q.; Yurevich, V. I.; Zawisza, M.; Zhan, W.; Zhang, H.; Zhang, W. M.; Zhang, Y.; Zhang, Z. P.; Zhao, Y.; Zhong, C.; Zhou, J.; Zoulkarneev, R.; Zoulkarneeva, Y.; Zubarev, A. N.; Zuo, J. X.

    2008-06-01

    We report a new STAR measurement of the longitudinal double-spin asymmetry ALL for inclusive jet production at midrapidity in polarized p+p collisions at a center-of-mass energy of √s = 200 GeV. The data, which cover jet transverse momenta 5
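
    A double-spin asymmetry of this kind is conventionally estimated from helicity-sorted yields; a hedged sketch in which the yields, relative luminosity, and beam polarizations are all hypothetical:

```python
def a_ll(n_same, n_opp, rel_lumi, pol1, pol2):
    """
    Standard longitudinal double-spin asymmetry estimator:
    A_LL = (1 / (P1*P2)) * (N++ - R*N+-) / (N++ + R*N+-),
    where R is the relative luminosity of same- vs opposite-helicity bunches
    and P1, P2 are the beam polarizations.
    """
    return (n_same - rel_lumi * n_opp) / (pol1 * pol2 * (n_same + rel_lumi * n_opp))

# Hypothetical jet yields with 50% beam polarizations and R = 1
asym = a_ll(10100.0, 10000.0, 1.0, 0.5, 0.5)   # ≈ 0.0199
```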

  11. The gait standard deviation, a single measure of kinematic variability.

    PubMed

    Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren

    2016-05-01

    Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on a combination of the waveforms' standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child, on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject, stride-to-stride variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative-pattern variability was smaller in kinematic variables (smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.
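
    One plausible reading of the waveform-SD building block (hedged; the published GaitSD additionally combines several joint angles, which this single-waveform sketch omits):

```python
import math

def waveform_sd(strides):
    """
    Stride-to-stride standard deviation of a gait waveform: compute the SD
    across strides at each point of the normalized gait cycle, then combine
    by root-mean-square over the cycle. `strides` is a list of equally
    sampled waveforms, one per stride.
    """
    n_pts = len(strides[0])
    variances = []
    for t in range(n_pts):
        vals = [s[t] for s in strides]
        mean = sum(vals) / len(vals)
        variances.append(sum((v - mean) ** 2 for v in vals) / (len(vals) - 1))
    return math.sqrt(sum(variances) / n_pts)

# Three hypothetical strides of a joint-angle waveform (degrees)
strides = [
    [10.0, 20.0, 30.0],
    [12.0, 22.0, 28.0],
    [11.0, 21.0, 29.0],
]
sd = waveform_sd(strides)   # SD is 1.0 deg at every cycle point -> 1.0
```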

  12. Automatic alignment of double optical paths in excimer laser amplifier

    NASA Astrophysics Data System (ADS)

    Wang, Dahui; Zhao, Xueqing; Hua, Hengqi; Zhang, Yongsheng; Hu, Yun; Yi, Aiping; Zhao, Jun

    2013-05-01

A beam automatic alignment method for double-path amplification in an electron-pumped excimer laser system is demonstrated. In this way, the beams from the amplifiers can be transferred along the designated direction and accordingly irradiate the target with high stability and accuracy. However, because excimer laser amplifiers contain no natural alignment references, a two-cross-hair structure is used to align the beams. One cross-hair placed in the input beam serves as the near-field reference, while the other, placed in the output beam, serves as the far-field reference. The two cross-hairs are imaged onto charge-coupled devices (CCDs) by separate image-relaying structures. The errors between the intersection points of the two cross-hair images and the centroid coordinates of the actual beam are recorded automatically and sent to a closed-loop feedback control mechanism. Negative feedback keeps running until the preset accuracy is reached. On the basis of the above design, the alignment optical path was built and the software written, after which the experiment of double-path automatic alignment in the electron-pumped excimer laser amplifier was carried out. The related influencing factors and the alignment precision were also analyzed. Experimental results indicate that the alignment system can achieve the aiming direction of automatically aligned beams in a short time. The analysis shows that the accuracy of the alignment system is 0.63 μrad and the maximum beam restoration error is 13.75 μm. Furthermore, the larger the distance between the two cross-hairs, the higher the precision of the system. The automatic alignment system has therefore been used in the angular-multiplexing excimer Main Oscillation Power Amplification (MOPA) system and satisfies the overall requirement on beam alignment precision.

  13. The Cu2+-nitrilotriacetic acid complex improves loading of α-helical double histidine site for precise distance measurements by pulsed ESR

    NASA Astrophysics Data System (ADS)

    Ghosh, Shreya; Lawless, Matthew J.; Rule, Gordon S.; Saxena, Sunil

    2018-01-01

Site-directed spin labeling using two strategically placed natural histidine residues allows for the rigid attachment of paramagnetic Cu2+. This double histidine (dHis) motif enables extremely precise, narrow distance distributions resolved by Cu2+-based pulsed ESR. Furthermore, the distance measurements are easily relatable to the protein backbone structure. The Cu2+ ion has, until now, been introduced as a complex with the chelating agent iminodiacetic acid (IDA) to prevent nonspecific binding. Recently, this method was found to have two limitations: poor selectivity toward α-helices and incomplete Cu2+-IDA complexation. Herein, we introduce an alternative method of dHis-Cu2+ loading using the nitrilotriacetic acid (NTA)-Cu2+ complex. We find that the Cu2+-NTA complex shows a four-fold increase in selectivity toward α-helical dHis sites. Furthermore, we show that 100% Cu2+-NTA complexation is achievable, enabling precise dHis loading and resulting in no free Cu2+ in solution. We analyze the optimum dHis loading conditions using both continuous wave and pulsed ESR. We implement these findings to show increased sensitivity of the Double Electron-Electron Resonance (DEER) experiment in two different protein systems. The DEER signal is increased within the immunoglobulin binding domain of protein G (called GB1). We measure distances between a dHis site on an α-helix and a dHis site either on a mid-strand or a non-hydrogen-bonded edge-strand β-sheet. Finally, the DEER signal is increased twofold between two α-helix dHis sites in the enzymatic dimer glutathione S-transferase, exemplifying the enhanced α-helical selectivity of Cu2+-NTA.

  14. Tests of general relativity from timing the double pulsar.

    PubMed

    Kramer, M; Stairs, I H; Manchester, R N; McLaughlin, M A; Lyne, A G; Ferdman, R D; Burgay, M; Lorimer, D R; Possenti, A; D'Amico, N; Sarkissian, J M; Hobbs, G B; Reynolds, J E; Freire, P C C; Camilo, F

    2006-10-06

    The double pulsar system PSR J0737-3039A/B is unique in that both neutron stars are detectable as radio pulsars. They are also known to have much higher mean orbital velocities and accelerations than those of other binary pulsars. The system is therefore a good candidate for testing Einstein's theory of general relativity and alternative theories of gravity in the strong-field regime. We report on precision timing observations taken over the 2.5 years since its discovery and present four independent strong-field tests of general relativity. These tests use the theory-independent mass ratio of the two stars. By measuring relativistic corrections to the Keplerian description of the orbital motion, we find that the "post-Keplerian" parameter s agrees with the value predicted by general relativity within an uncertainty of 0.05%, the most precise test yet obtained. We also show that the transverse velocity of the system's center of mass is extremely small. Combined with the system's location near the Sun, this result suggests that future tests of gravitational theories with the double pulsar will supersede the best current solar system tests. It also implies that the second-born pulsar may not have formed through the core collapse of a helium star, as is usually assumed.

  15. How to measure separations and angles between intra-molecular fluorescent markers

    NASA Astrophysics Data System (ADS)

    Flyvbjerg, Henrik; Mortensen, Kim I.; Sung, Jongmin; Spudich, James A.

    We demonstrate a novel, yet simple tool for the study of structure and function of biomolecules by extending two-colour co-localization microscopy to fluorescent molecules with fixed orientations and in intra-molecular proximity. From each color-separated microscope image in a time-lapse movie and using only simple means, we simultaneously determine both the relative (x,y)-separation of the fluorophores and their individual orientations in space with accuracy and precision. The positions and orientations of two domains of the same molecule are thus time-resolved. Using short double-stranded DNA molecules internally labelled with two fixed fluorophores, we demonstrate the accuracy and precision of our method using the known structure of double-stranded DNA as a benchmark, resolve 10-base-pair differences in fluorophore separations, and determine the unique 3D orientation of each DNA molecule, thereby establishing short, double-labelled DNA molecules as probes of 3D orientation of anything to which one can attach them firmly. This work was supported by a Lundbeck fellowship to K.I.M; a Stanford Bio-X fellowship to J.S. and Grants from the NIH (GM33289) to J.A.S. and the Human Frontier Science Program (GP0054/2009-C) to J.A.S. and H.F.

  16. A hidden analytic structure of the Rabi model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moroz, Alexander, E-mail: wavescattering@yahoo.com

    2014-01-15

The Rabi model describes the simplest interaction between a cavity mode with frequency ω_c and a two-level system with resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ϵ, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials ϕ_k(ϵ) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than can be obtained by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of ϕ_N(ϵ) of at least degree N = n + n_t. The value of n_t > 0, which increases slowly with n, depends on the required precision. For instance, n_t ≃ 26 for n = 1000 and dimensionless interaction constant κ = 0.2, if double precision is required. Given that the sequence of the lth zeros x_nl of the ϕ_n(ϵ) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1. -- Highlights: • A significantly simplified analytic solution of the Rabi model. • The spectrum is the lattice of discrete orthogonal polynomials. • Up to 1350 levels in double precision can be obtained for a given parity. • Omission of any level can be easily detected.
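In the Δ = 0 limit the spectrum is governed by Charlier polynomials, whose zeros can be computed as eigenvalues of the associated Jacobi matrix; this is a generic sketch assuming the standard monic three-term recurrence, not the author's stepping algorithm:

```python
import numpy as np

def charlier_zeros(N, a):
    """Zeros of the degree-N monic Charlier polynomial C_N(x; a),
    computed as eigenvalues of the symmetric Jacobi matrix built from
    the three-term recurrence  C_{n+1} = (x - n - a) C_n - a n C_{n-1}."""
    diag = np.arange(N) + a              # recurrence coefficients alpha_n = n + a
    off = np.sqrt(a * np.arange(1, N))   # sqrt(beta_n) with beta_n = a n
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(J))

# Charlier polynomials are orthogonal w.r.t. the Poisson measure on the
# equidistant lattice {0, 1, 2, ...}; their zeros are simple and positive.
z = charlier_zeros(8, a=1.0)
print(z)
```

The same Jacobi-matrix route applies to the deformed polynomials ϕ_k(ϵ) once their recurrence coefficients are known.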

  17. Optimization and application of ICPMS with dynamic reaction cell for precise determination of 44Ca/40Ca isotope ratios.

    PubMed

    Boulyga, Sergei F; Klötzli, Urs; Stingeder, Gerhard; Prohaska, Thomas

    2007-10-15

An inductively coupled plasma mass spectrometer with dynamic reaction cell (ICP-DRC-MS) was optimized for determining (44)Ca/(40)Ca isotope ratios in aqueous solutions with respect to (i) repeatability, (ii) robustness, and (iii) stability. Ammonia as reaction gas allowed both the removal of the (40)Ar+ interference on (40)Ca+ and collisional damping of ion density fluctuations of an ion beam extracted from an ICP. The effect of laboratory conditions as well as ICP-DRC-MS parameters such as nebulizer gas flow rate, rf power, lens potential, dwell time, or DRC parameters on precision and mass bias was studied. Precision (calculated using the "unbiased" or "n - 1" method) of a single isotope ratio measurement of a 60 ng g(-1) calcium solution (analysis time of 6 min) is routinely achievable in the range of 0.03-0.05%, which corresponds to a standard error of the mean value (n = 6) of 0.012-0.020%. These experimentally observed RSDs were close to the theoretical precision given by counting statistics. Accuracy of measured isotope ratios was assessed by comparative measurements of the same samples by ICP-DRC-MS and thermal ionization mass spectrometry (TIMS) using isotope dilution with a (43)Ca-(48)Ca double spike. The analysis time in both cases was 1 h per analysis (10 blocks, each 6 min). The delta(44)Ca values measured by TIMS and ICP-DRC-MS with double-spike calibration in two samples (Ca ICP standard solution and digested NIST 1486 bone meal) coincided within the obtained precision. Although the applied isotope dilution with the (43)Ca-(48)Ca double spike compensates for time-dependent deviations of mass bias and allows accurate results to be achieved, this approach makes it necessary to measure an additional isotope pair, reducing the overall analysis time per isotope or increasing the total analysis time.
Further development of external calibration by using a bracketing method would allow a wider use of ICP-DRC-MS for routine calcium isotopic measurements, but it still requires particular software or hardware improvements aimed at reliable control of environmental effects, which might influence signal stability in ICP-DRC-MS and serve as potential uncertainty sources in isotope ratio measurements.
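The quoted single-measurement precision and standard error follow from the "unbiased" (n − 1) estimator named above; the six ratio values here are invented for illustration, not measured data:

```python
import math

# Hypothetical set of six replicate 44Ca/40Ca ratio measurements.
ratios = [0.02152, 0.02150, 0.02153, 0.02151, 0.02149, 0.02152]

n = len(ratios)
mean = sum(ratios) / n
# "Unbiased" (n - 1) sample standard deviation, as used for the quoted RSDs.
sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
rsd_pct = 100 * sd / mean            # relative SD of a single measurement
sem_pct = rsd_pct / math.sqrt(n)     # relative standard error of the mean

print(f"RSD = {rsd_pct:.3f}%  SEM = {sem_pct:.3f}%")
```

Dividing by √n is what takes the 0.03-0.05% single-measurement precision down to the 0.012-0.020% standard error of the mean for n = 6.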

  18. Precision Teaching.

    ERIC Educational Resources Information Center

    Couch, Richard W.

    Precision teaching (PT) is an approach to the science of human behavior that focuses on precise monitoring of carefully defined behaviors in an attempt to construct an environmental analysis of that behavior and its controlling variables. A variety of subjects have been used with PT, ranging in academic objectives from beginning reading to college…

  19. Latest Results from EXO-200

    NASA Astrophysics Data System (ADS)

    Kaufman, Lisa; EXO-200 Collaboration

    2017-09-01

    The EXO-200 experiment has made both the first observation of the double beta decay in Xe-136 and the most precisely measured half-life of any two-neutrino double beta decay to date. Consisting of an extremely low-background time projection chamber filled with 150 kg of enriched liquid Xe-136, it has provided one of the most sensitive searches for the neutrinoless double beta decay using the first two years of data. After a hiatus in operations during a temporary shutdown of its host facility, the Waste Isolation Pilot Plant, the experiment has restarted data taking with upgrades to its front-end electronics and a radon suppression system. This talk will cover the latest results of the collaboration including new data with improved energy resolution.

  20. Phase inversion and frequency doubling of reflection high-energy electron diffraction intensity oscillations in the layer-by-layer growth of complex oxides

    NASA Astrophysics Data System (ADS)

    Mao, Zhangwen; Guo, Wei; Ji, Dianxiang; Zhang, Tianwei; Gu, Chenyi; Tang, Chao; Gu, Zhengbin; Nie*, Yuefeng; Pan, Xiaoqing

In situ reflection high-energy electron diffraction (RHEED) and its intensity oscillations are extremely important for the growth of epitaxial thin films with atomic precision. The RHEED intensity oscillations of complex oxides are, however, rather complicated and a general model is still lacking. Here, we report the unusual phase inversion and frequency doubling of RHEED intensity oscillations observed in the layer-by-layer growth of SrTiO3 using oxide molecular beam epitaxy. In contrast to the common understanding that the maximum (minimum) intensity occurs at the SrO (TiO2) termination, we found that both maximum and minimum intensities can occur at SrO, TiO2, or even incomplete terminations depending on the incident angle of the electron beam, which raises a fundamental question of whether one can rely on the RHEED intensity oscillations to precisely control the growth of thin films. A general model including surface roughness and termination-dependent mean inner potential qualitatively explains the observed phenomena, and answers the question of how to prepare atomically and chemically precise surfaces and interfaces using RHEED oscillations for complex oxides. We thank the National Basic Research Program of China (No. 11574135, 2015CB654901) and the National Thousand-Young-Talents Program.

  1. Development and Evaluation of Math Library Routines for a 1750A Airborne Microcomputer.

    DTIC Science & Technology

    1985-12-04

Since each iteration doubles the number of correct significant digits in the square root, this assures an accuracy of 63.32 bits. (4: 23) The next...X, C1 + C2 represents ln(C) to more than working precision. This method gives extra digits of precision equivalent to the number of extra digits in...will not underflow for |x| < eps. Cody and Waite have suggested that eps = 2^(-t/2) where there are t base-2 digits in the significand. The next step

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B.

The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay T^{2ν}_{1/2} values is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
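A Benford first-digit check of the kind applied here can be sketched in a few lines; the half-life values below are hypothetical stand-ins (the real analysis used large evaluated data sets):

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's law: probability that the leading digit equals d."""
    return math.log10(1 + 1 / d)

def first_digit(x):
    # Scientific notation puts the leading digit first, e.g. '2.165000e+21'.
    return int(f"{abs(x):e}"[0])

# Hypothetical measured 2-neutrino double-beta-decay half-lives (years).
half_lives = [2.165e21, 8.2e20, 9.2e19, 1.926e21, 6.8e18,
              7.1e20, 2.74e19, 9.39e19, 2.3e24, 1.1e21]

observed = Counter(first_digit(t) for t in half_lives)
for d in range(1, 10):
    print(d, observed.get(d, 0), round(benford_expected(d), 3))
```

Comparing the observed first-digit frequencies against log10(1 + 1/d) is the consistency test the abstract refers to; a chi-squared statistic would quantify the agreement.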

  3. SU-F-J-203: Retrospective Assessment of Delivered Proton Dose in Prostate Cancer Patients Based On Daily In-Room CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuetzer, K; Paessler, T; Valentini, C

Purpose: Retrospective calculation of the delivered proton dose in prostate cancer patients based on a unique dataset of daily CT images. Methods: Inter-fractional motion in prostate cancer patients treated at our proton facility is counteracted by a water-filled endorectal balloon and a bladder filling protocol. Typical plans (XiO, Elekta Instruments AB, Stockholm) for 74 Gy(RBE) sequential boost treatment in 37 fractions include two series of opposing lateral double-scattered proton beams covering the respective iCTV. Stability of fiducial markers and anatomy was checked in 12 patients by daily scheduled in-room control CT (cCT) after immobilization and positioning according to bony anatomy utilizing orthogonal X-ray. In RayStation 4.6 (RaySearch Laboratories AB, Stockholm), all cCTs were delineated retrospectively and the treatment plans were recalculated on the planning CT and the registered cCTs. All fraction doses were accumulated on the planning CT after deformable registration. Parameters of delivered dose to iCTV (D98%>95%, D2%<107%), bladder (V75Gy<15%, V70Gy<25%, V65Gy<30%), rectum (V70Gy<10%, V50Gy<40%) and femoral heads (V50Gy<5%) are compared to those in the treatment plan. Intra-therapy variation is represented in DVH bands. Results: No alarming differences were observed between planned and retrospectively accumulated dose: iCTV constraints were met, except for one patient (D98%=94.6% in non-boosted iCTV). Considered bladder and femoral head values were below the limits. Rectum V70Gy was slightly exceeded (<11.3%) in two patients. A first intra-therapy variability analysis in 4 patients showed no time-dependent parameter drift and revealed the strongest variability for bladder dose. In some fractions, iCTV coverage (D98%) and rectum V70Gy were missed. Conclusion: Double-scattered proton plans are accurately delivered to prostate cancer patients due to fractionation effects and the applied precise positioning and immobilization protocols.
As a result of rare interventions after daily 3D imaging of the first 12 patients, in-room CT frequency for prostate cancer patients was reduced. The presented study supports this decision. The authors acknowledge the German Federal Ministry for Education and Research for funding the High Precision Radiotherapy Group at the OncoRay - National Center for Radiation Research in Oncology (BMBF- 03Z1N51).

  4. High-precision double-frequency interferometric measurement of the cornea shape

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.; Smirnov, Eugene M.; Ilchenko, Leonid M.; Goncharov, Vadym O.

    1996-11-01

To measure the shape of the cornea and its deviations from the required values before and after a PRK operation, as well as the shape of other spherical objects such as artificial pupils, a technique of double-frequency dual-beam interferometry was used. The technique is based on determination of the optical path difference between two neighboring laser beams reflected from the cornea or other surface under investigation. Knowing the distance between the beams, the local slope of the investigated surface is obtained; the shape itself is reconstructed by along-line integration. To adjust the wavefront orientation of the laser beam to the spherical shape of the cornea or artificial pupil in the course of scanning, an additional lens is used. The signal-to-noise ratio is improved by excluding losses in the acousto-optic deflectors. Polarization selection is used to choose the signal needed for measurement. 2D image presentation is accompanied by convenient PC accessories, permitting precise cross-section measurements along selected directions. A sensitivity of the order of 10^-2 micrometers is achieved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakhman, A.; Hafez, Mohamed A.; Nanda, Sirish K.

Here, a high-finesse Fabry-Perot cavity with a frequency-doubled continuous-wave green laser (532 nm) has been built and installed in Hall A of Jefferson Lab for high-precision Compton polarimetry. The infrared (1064 nm) beam from a ytterbium-doped fiber amplifier seeded by a Nd:YAG nonplanar ring oscillator laser is frequency doubled in a single-pass periodically poled MgO:LiNbO3 crystal. The maximum green power achieved at 5 W infrared pump power is 1.74 W, a total conversion efficiency of 34.8%. The green beam is injected into the optical resonant cavity and enhanced up to 3.7 kW, a corresponding enhancement factor of 3800. The polarization transfer function has been measured in order to determine the intra-cavity circular laser polarization within a measurement uncertainty of 0.7%. The PREx experiment at Jefferson Lab used this system for the first time and achieved 1.0% precision in polarization measurements of an electron beam with energy and current of 1.0 GeV and 50 μA.

  6. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrating a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment integrating two machine vision systems was therefore developed. In this system, a horizontal vision system measures the position of feature structures in the parts' side view, which the vertical system cannot see. The position measured by the horizontal camera is converted into the vertical vision system's frame using calibration information. With careful calibration, part alignment and positioning during the assembly process can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.

  7. Mechanical Properties and Microstructural Evolution of Variable-Plane-Rolled Mg-3Al-1Zn Alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Rong; Bian, Cunjian; Wu, Yanjun

    2017-04-01

    The microstructural evolution and mechanical properties of AZ31 magnesium alloy produced by variable-plane rolling (VPR) were investigated. Two types of weak textures were formed: basal texture in odd pass and double-peak basal texture in even pass. Dynamic recrystallization (DRX) was observed during the VPR treatment, and the nucleation of grains during DRX was dependent on the coalescence of subgrains. Three types of twins were observed in the VPR treatment: {10-12} extension twins, {10-13} contraction twins and {10-11}-{10-12} double twins. The {10-11}-{10-12} double twinning is the underlying mechanism in the formation of the double-peak texture. Tensile testing revealed improved strength without loss of ductility. The Hall-Petch relationship can be used to describe the strengths in any even pass with the same texture. The significant strengthening is ascribed to the refined grain, twin boundaries, texture hardening, and high dislocation density.

  8. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    PubMed

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure a long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on a HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results of the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that comparison of the precision (SDdiff) in marker-based RSA analysis was more precise than model-based RSA analysis in TT (p CAD < 0.001; p RE = 0.04) and TR (p CAD = 0.01; p RE < 0.001). The clinical precision (double examination in 8 patients) comparing the precision SDdiff was better evaluating the TT using the marker-based RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analysis regarding the TR (p = 0.91). Comparing the mean signed values regarding the TT and the TR at the 5-year follow-up in 13 patients, the TT was lower (p = 0.03) and the TR higher (p = 0.04) in the marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than model-based RSA. However, problems with occluded markers lead to exclusion of many patients which was not a problem with model-based RSA. HRA were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.
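The double-examination precision measure (SDdiff) compared above can be sketched as the standard deviation of paired differences between two zero-motion examinations; the translation values below are invented for illustration:

```python
import math

# Hypothetical double-examination total translations (mm) for 8 patients:
# each patient imaged twice with no true implant movement in between.
exam1 = [0.10, 0.15, 0.08, 0.20, 0.12, 0.18, 0.09, 0.14]
exam2 = [0.12, 0.13, 0.11, 0.17, 0.15, 0.16, 0.10, 0.18]

diffs = [a - b for a, b in zip(exam1, exam2)]
n = len(diffs)
mean_diff = sum(diffs) / n
# SDdiff: standard deviation of the paired differences, the precision
# figure compared between marker-based and model-based RSA.
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
print(round(sd_diff, 3))
```

A smaller SDdiff means the two analyses of the same (unmoved) implant agree more closely, i.e. better precision.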

  9. Assessing snow extent data sets over North America to inform and improve trace gas retrievals from solar backscatter

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew J.; Martin, Randall V.; Lyapustin, Alexei I.; McLinden, Chris A.

    2018-05-01

Accurate representation of surface reflectivity is essential to tropospheric trace gas retrievals from solar backscatter observations. Surface snow cover presents a significant challenge due to its variability and thus snow-covered scenes are often omitted from retrieval data sets; however, the high reflectance of snow is potentially advantageous for trace gas retrievals. We first examine the implications of surface snow on retrievals from the upcoming TEMPO geostationary instrument for North America. We use a radiative transfer model to examine how an increase in surface reflectivity due to snow cover changes the sensitivity of satellite retrievals to NO2 in the lower troposphere. We find that a substantial fraction (> 50 %) of the TEMPO field of regard can be snow covered in January and that the average sensitivity to the tropospheric NO2 column substantially increases (doubles) when the surface is snow covered. We then evaluate seven existing satellite-derived or reanalysis snow extent products against ground station observations over North America to assess their capability of informing surface conditions for TEMPO retrievals. The Interactive Multisensor Snow and Ice Mapping System (IMS) had the best agreement with ground observations (accuracy of 93 %, precision of 87 %, recall of 83 %). Multiangle Implementation of Atmospheric Correction (MAIAC) retrievals of MODIS-observed radiances had high precision (90 % for Aqua and Terra), but underestimated the presence of snow (recall of 74 % for Aqua, 75 % for Terra). MAIAC generally outperforms the standard MODIS products (precision of 51 %, recall of 43 % for Aqua; precision of 69 %, recall of 45 % for Terra). The Near-real-time Ice and Snow Extent (NISE) product had good precision (83 %) but missed a significant number of snow-covered pixels (recall of 45 %).
The Canadian Meteorological Centre (CMC) Daily Snow Depth Analysis Data set had strong performance metrics (accuracy of 91 %, precision of 79 %, recall of 82 %). We use the F-score, which balances precision and recall, to determine overall product performance (F = 85 %, 82 (82) %, 81 %, 58 %, 46 (54) % for IMS, MAIAC Aqua (Terra), CMC, NISE, MODIS Aqua (Terra), respectively) for providing snow cover information for TEMPO retrievals from solar backscatter observations. We find that using IMS to identify snow cover and enable inclusion of snow-covered scenes in clear-sky conditions across North America in January can increase both the number of observations by a factor of 2.1 and the average sensitivity to the tropospheric NO2 column by a factor of 2.7.
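The accuracy, precision, recall, and F-score figures used to rank the products follow the standard confusion-matrix definitions, sketched here with hypothetical daily snow/no-snow counts (not the study's actual tallies):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F-score from a snow/no-snow
    confusion matrix, treating ground stations as truth."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # fraction of predicted snow that is real
    recall = tp / (tp + fn)             # fraction of real snow that is detected
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# Hypothetical counts for one product evaluated against station observations.
acc, prec, rec, f = classification_metrics(tp=830, fp=120, fn=170, tn=880)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} F={f:.2f}")
```

The F-score (harmonic mean of precision and recall) penalizes products that achieve high precision by rarely reporting snow, which is why it is used for the overall ranking.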

  10. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the attitude determination and control system, especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time and estimates the model errors on-line in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; the filter therefore has the advantage of handling various kinds of model errors or noise. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors than the traditional unscented Kalman filter (UKF).
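The unscented-transform sampling shared by the UKF and the proposed UPVSF generates 2n+1 sigma points from the state mean and covariance; the following is a generic textbook sketch, not the authors' filter:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and mean weights for an
    n-dimensional state (2n+1 points)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of (n + lam) * cov via Cholesky decomposition.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    return pts, wm

mean = np.array([1.0, 2.0])
cov = np.eye(2) * 0.1
pts, wm = sigma_points(mean, cov)
# The weighted mean of the sigma points recovers the prior mean.
print(np.allclose(wm @ pts, mean))
```

Propagating these points through the nonlinear attitude dynamics and re-averaging is what lets the UT capture mean and covariance without linearization.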

  11. Belle II SVD ladder assembly procedure and electrical qualification

    NASA Astrophysics Data System (ADS)

    Adamczyk, K.; Aihara, H.; Angelini, C.; Aziz, T.; Babu, Varghese; Bacher, S.; Bahinipati, S.; Barberio, E.; Baroncelli, T.; Basith, A. K.; Batignani, G.; Bauer, A.; Behera, P. K.; Bergauer, T.; Bettarini, S.; Bhuyan, B.; Bilka, T.; Bosi, F.; Bosisio, L.; Bozek, A.; Buchsteiner, F.; Casarosa, G.; Ceccanti, M.; Červenkov, D.; Chendvankar, S. R.; Dash, N.; Divekar, S. T.; Doležal, Z.; Dutta, D.; Forti, F.; Friedl, M.; Hara, K.; Higuchi, T.; Horiguchi, T.; Irmler, C.; Ishikawa, A.; Jeon, H. B.; Joo, C.; Kandra, J.; Kang, K. H.; Kato, E.; Kawasaki, T.; Kodyš, P.; Kohriki, T.; Koike, S.; Kolwalkar, M. M.; Kvasnička, P.; Lanceri, L.; Lettenbicher, J.; Mammini, P.; Mayekar, S. N.; Mohanty, G. B.; Mohanty, S.; Morii, T.; Nakamura, K. R.; Natkaniec, Z.; Negishi, K.; Nisar, N. K.; Onuki, Y.; Ostrowicz, W.; Paladino, A.; Paoloni, E.; Park, H.; Pilo, F.; Profeti, A.; Rao, K. K.; Rashevskaya, I.; Rizzo, G.; Rozanska, M.; Sandilya, S.; Sasaki, J.; Sato, N.; Schultschik, S.; Schwanda, C.; Seino, Y.; Shimizu, N.; Stypula, J.; Tanaka, S.; Tanida, K.; Taylor, G. N.; Thalmeier, R.; Thomas, R.; Tsuboyama, T.; Uozumi, S.; Urquijo, P.; Vitale, L.; Volpi, M.; Watanuki, S.; Watson, I. J.; Webb, J.; Wiechczynski, J.; Williams, S.; Würkner, B.; Yamamoto, H.; Yin, H.; Yoshinobu, T.; Belle II SVD Collaboration

    2016-07-01

    The Belle II experiment at the SuperKEKB asymmetric e+e- collider in Japan will operate at a luminosity approximately 50 times larger than its predecessor (Belle). At its heart lies a six-layer vertex detector comprising two layers of pixelated silicon detectors (PXD) and four layers of double-sided silicon microstrip detectors (SVD). One of the key measurements for Belle II is time-dependent CP violation asymmetry, which hinges on a precise charged-track vertex determination. Towards this goal, a proper assembly of the SVD components with precise alignment ought to be performed and the geometrical tolerances should be checked to fall within the design limits. We present an overview of the assembly procedure that is being followed, which includes the precision gluing of the SVD module components, wire-bonding of the various electrical components, and precision three dimensional coordinate measurements of the jigs used in assembly as well as of the final SVD modules.

  12. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    To develop a more precise and accurate acupoint location method, and to identify a procedure for verifying whether an acupoint has been correctly located. On the face, we collected acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values using a consistency-based information fusion algorithm, through a virtual simulation of a facial orientation coordinate system. Because each expert's original data contain inconsistencies, systematic error distorts the general weight calculation. We therefore first corrected the systematic error in each expert's acupoint locations, to obtain a rational quantification of each expert's consistency support degree for acupoint location, and to obtain pointwise variable-precision fusion results, reducing each expert's acupoint-location fusion error to the pointwise variable precision. This makes more effective use of the measured characteristics of the different experts' acupoint locations, improving both the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the experts' acupoint-location values, each expert's location information can be weighted, and the most precise and accurate values of acupoint location can be obtained.
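
    The pointwise weighting idea can be illustrated with a minimal inverse-variance sketch: each expert's mean location is weighted by the inverse of that expert's internal scatter, so that more self-consistent experts dominate the fused estimate. This is a generic stand-in for the paper's consistency matrix pointwise fusion method, with invented data:

```python
import numpy as np

def fuse_locations(measurements):
    """Inverse-variance (consistency-weighted) fusion of expert locations.

    measurements: list of (n_i, 2) arrays of repeated (x, y) picks, one
    array per expert. Experts whose repeated picks scatter less (higher
    internal consistency) receive larger weights in the fused location.
    """
    fused = np.zeros(2)
    total_w = 0.0
    for m in measurements:
        m = np.asarray(m, dtype=float)
        w = 1.0 / (m.var(axis=0).mean() + 1e-9)  # consistency -> weight
        fused += w * m.mean(axis=0)
        total_w += w
    return fused / total_w

# A tightly clustered expert pulls the fused point towards its mean.
expert_a = [[10.0, 20.0], [10.1, 20.1], [9.9, 19.9]]   # consistent
expert_b = [[11.0, 21.0], [13.0, 23.0], [12.0, 22.0]]  # scattered
fused = fuse_locations([expert_a, expert_b])
```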

  13. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    PubMed

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
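
    A two-state Markov-modulated Poisson process can be sketched by competing exponential clocks: one for the next availability event at the current state's rate, and one for the next state switch. This produces the clustered availability events that a homogeneous Poisson model cannot. The parameter values below are illustrative, not from the paper:

```python
import random

def simulate_mmpp(t_end, rates=(2.0, 0.1), switch=(0.5, 0.5), seed=1):
    """Simulate availability-event times from a 2-state MMPP on [0, t_end].

    The hidden state alternates between a 'high' regime (event rate
    rates[0]) and a 'low' regime (rates[1]); switch[s] is the rate of
    leaving state s. Alternating high/low regimes yield bursts of
    events separated by quiet periods.
    """
    rng = random.Random(seed)
    t, state, events = 0.0, 0, []
    while t < t_end:
        # Competing exponential clocks: next event vs. next state switch.
        dt_event = rng.expovariate(rates[state]) if rates[state] > 0 else float("inf")
        dt_switch = rng.expovariate(switch[state])
        if dt_event < dt_switch:
            t += dt_event
            if t < t_end:
                events.append(t)
        else:
            t += dt_switch
            state = 1 - state
    return events

events = simulate_mmpp(100.0)
```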

  14. Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability

    ERIC Educational Resources Information Center

    von Oertzen, Timo; Boker, Steven M.

    2010-01-01

    This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
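
    The construction described above, building a data matrix of overlapping lagged samples, can be sketched in a few lines (a generic implementation, not the authors' code):

```python
import numpy as np

def time_delay_embed(x, dim, lag=1):
    """Build a data matrix whose rows are overlapping windows of x.

    Row i is [x[i], x[i+lag], ..., x[i+(dim-1)*lag]], so consecutive
    rows share dim-1 samples.
    """
    x = np.asarray(x)
    n_rows = len(x) - (dim - 1) * lag
    if n_rows <= 0:
        raise ValueError("series too short for this dim/lag")
    return np.column_stack([x[j * lag : j * lag + n_rows] for j in range(dim)])

# Example: embed a short series with embedding dimension 3.
X = time_delay_embed([1, 2, 3, 4, 5, 6], dim=3)
# X has 4 overlapping rows; the first is [1, 2, 3], the last [4, 5, 6].
```

    Each column of X is a lagged copy of the series, which is what lets a local model see several time points at once and is the source of the precision gain the paper reports.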

  15. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    PubMed

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p<.001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p<.001) and 1.34 versus 0.98 with rank order adjustment (p<.001) for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
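
    The predicate-based vector space idea can be sketched by treating each whole triple as a vocabulary term and applying plain tf-idf weighting with cosine similarity. The paper's adjusted tf-idf and boost function are not reproduced here, and the triples below are invented examples:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of documents, each a list of predicate triples such as
    ("p53", "regulates", "apoptosis"). Each distinct triple is treated
    as one vocabulary term and weighted with standard tf-idf."""
    n = len(docs)
    df = Counter(trip for doc in docs for trip in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

a = ("p53", "regulates", "apoptosis")
b = ("EGFR", "activates", "MAPK")
c = ("BRCA1", "repairs", "DNA")
vecs = tfidf_vectors([[a, b], [a, c], [c]])
```

    Documents sharing a triple get a positive similarity; documents with no shared triples score zero, which is the behavior a keyword model only approximates.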

  16. Principles of Precision Spectrophotometry: An Advanced Undergraduate Experiment

    ERIC Educational Resources Information Center

    Billmeyer, Fred W., Jr.

    1974-01-01

    Describes an experiment designed to familiarize students with the operation of a precision spectrophotometer, the effects of changes in operating variables, and the characteristics of such components as sources and detectors. (SLH)

  17. Study on dynamic deformation synchronized measurement technology of double-layer liquid surfaces

    NASA Astrophysics Data System (ADS)

    Tang, Huiying; Dong, Huimin; Liu, Zhanwei

    2017-11-01

    Accurate measurement of the dynamic deformation of double-layer liquid surfaces plays an important role in many fields, such as fluid mechanics, biomechanics, the petrochemical industry, and aerospace engineering. Traditional methods find it difficult to measure the dynamic deformation of double-layer liquid surfaces synchronously. In this paper, a novel and effective method for full-field static and dynamic deformation measurement of double-layer liquid surfaces has been developed: analysis of the wavefront distortion of double-wavelength transmitted light with the geometric phase analysis (GPA) method. The double-wavelength lattice patterns used here are produced by two techniques: one uses a double-wavelength laser, and the other a liquid crystal display (LCD). The techniques exploit characteristics of the liquid such as high transparency, low reflectivity, and fluidity. Two colored lattice patterns produced by the laser and the LCD were directed at a certain angle through the tested double-layer liquid surfaces simultaneously. On the basis of the refractive-index difference of the two transmitted lights, the double-layer liquid surfaces were decoupled with the GPA method. Combined with the derived relationship between the phase variation of the transmission-lattice patterns and the out-of-plane heights of the two surfaces, as well as the height curves of the liquid level, the double-layer liquid surfaces can be reconstructed successfully. Compared with traditional measurement methods, the developed method not only has the common advantages of optical measurement methods, such as high precision, full-field coverage, and non-contact operation, but is also simple, low-cost, and easy to set up.

  18. The double-deficit hypothesis: a comprehensive analysis of the evidence.

    PubMed

    Vukovic, Rose K; Siegel, Linda S

    2006-01-01

    The double-deficit hypothesis of developmental dyslexia proposes that deficits in phonological processing and naming speed represent independent sources of dysfunction in dyslexia. The present article is a review of the evidence for the double-deficit hypothesis, including a discussion of recent findings related to the hypothesis. Studies in this area have been characterized by variability in methodology: how dyslexia is defined and identified, and how dyslexia subtypes are classified. Such variability limits the extent to which conclusions may be drawn with respect to the double-deficit hypothesis. Furthermore, the literature is complicated by the persistent finding that measures of phonological processing and naming speed are significantly correlated, resulting in a statistical artifact that makes it difficult to disentangle the influence of naming speed from that of phonological processing. Longitudinal and intervention studies of the double-deficit hypothesis are needed to determine whether readers with dyslexia show a naming-speed deficit that is independent of a phonological deficit. The existing evidence does not support a persistent core deficit in naming speed for readers with dyslexia.

  19. Theoretical analysis for double-liquid variable focus lens

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Chen, Jiabi; Zhuang, Songlin

    2007-09-01

    In this paper, various structures for double-liquid variable-focus lenses are introduced. Based on an energy minimization method, explicit calculations and detailed analyses of an extended Young-type equation are given for double-liquid lenses with a cylindrical electrode. Such an equation is especially applicable to liquid-liquid-solid tri-phase systems; it differs slightly from the traditional Young equation, which was derived for vapor-liquid-solid tri-phase systems. The electrowetting effect caused by an external voltage changes the interface shape between the two liquids, and with it the focal length of the lens. Based on the extended Young-type equation, the relationship between the focal length and the external voltage can also be derived. Corresponding equations and simulation results are presented.
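
    The voltage-to-focal-length chain can be illustrated with the classical Young-Lippmann relation rather than the paper's extended Young-type equation. All parameter values below (zero-voltage contact angle, dielectric thickness, interfacial tension, refractive indices, cell radius) are invented for illustration:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle(V, theta0_deg=140.0, eps_r=3.0, d=1e-6, gamma=0.04):
    """Young-Lippmann contact angle under applied voltage V:
    cos(theta) = cos(theta0) + eps_r*EPS0*V**2 / (2*d*gamma),
    with theta0 the zero-voltage angle, d the dielectric thickness,
    and gamma the liquid-liquid interfacial tension."""
    c = math.cos(math.radians(theta0_deg)) + eps_r * EPS0 * V**2 / (2 * d * gamma)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def focal_length(V, cell_radius=1.5e-3, n1=1.38, n2=1.33, **kw):
    """Treat the liquid-liquid meniscus as a single spherical refracting
    surface: curvature radius R = r/cos(theta), focal length f = R/(n1 - n2).
    A negative f means the lens is diverging at that voltage."""
    theta = math.radians(contact_angle(V, **kw))
    R = cell_radius / math.cos(theta)
    return R / (n1 - n2)
```

    Increasing the voltage lowers the contact angle and flips the meniscus curvature, so the lens sweeps from diverging to converging, which is the qualitative behavior the paper derives from its extended equation.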

  20. Measurement of double-differential muon neutrino charged-current interactions on C8H8 without pions in the final state using the T2K off-axis beam

    NASA Astrophysics Data System (ADS)

    Abe, K.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Assylbekov, S.; Autiero, D.; Barbi, M.; Barker, G. J.; Barr, G.; Bartet-Friburg, P.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Cao, S.; Caravaca Rodríguez, J.; Cartwright, S. L.; Castillo, R.; Catanesi, M. G.; Cervera, A.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Clifton, A.; Coleman, J.; Collazuol, G.; Cremonesi, L.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Dewhurst, D.; Di Lodovico, F.; Di Luise, S.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dytman, S.; Dziewiecki, M.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Furmanski, A. P.; Galymov, V.; Garcia, A.; Giffin, S. G.; Giganti, C.; Gizzarelli, F.; Gonin, M.; Grant, N.; Hadley, D. R.; Haegel, L.; Haigh, M. D.; Hamilton, P.; Hansen, D.; Hara, T.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Helmer, R. L.; Hierholzer, M.; Hillairet, A.; Himmel, A.; Hiraki, T.; Hirota, S.; Hogan, M.; Holeczek, J.; Horikawa, S.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ieki, K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Irvine, T. J.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jacob, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jo, J. H.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Karpikov, I.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kielczewska, D.; Kikawa, T.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Konaka, A.; Kondo, K.; Kopylov, A.; Kormos, L. 
L.; Korzenev, A.; Koshio, Y.; Kropp, W.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Larkin, E.; Lasorak, P.; Laveder, M.; Lawe, M.; Lazos, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Marino, A. D.; Marteau, J.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Mezzetto, M.; Mijakowski, P.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Mueller, Th. A.; Murphy, S.; Myslik, J.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nirkko, M.; Nishikawa, K.; Nishimura, Y.; Nowak, J.; O'Keeffe, H. M.; Ohta, R.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Pavin, M.; Payne, D.; Perkin, J. D.; Petrov, Y.; Pickard, L.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Poutissou, R.; Przewlocki, P.; Quilain, B.; Radicioni, E.; Ratoff, P. N.; Ravonel, M.; Rayner, M. A. M.; Redij, A.; Reinherz-Aronis, E.; Riccio, C.; Rojas, P.; Rondio, E.; Roth, S.; Rubbia, A.; Rychter, A.; Sacco, R.; Sakashita, K.; Sánchez, F.; Sato, F.; Scantamburlo, E.; Scholberg, K.; Schoppmann, S.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Short, S.; Smy, M.; Sobczyk, J. T.; Sorel, M.; Southwell, L.; Stamoulis, P.; Steinmann, J.; Stewart, T.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, K.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takahashi, S.; Takeda, A.; Takeuchi, Y.; Tanaka, H. K.; Tanaka, H. A.; Terhorst, D.; Terri, R.; Thakore, T.; Thompson, L. 
F.; Tobayama, S.; Toki, W.; Tomura, T.; Touramanis, C.; Tsukamoto, T.; Tzanov, M.; Uchida, Y.; Vacheret, A.; Vagins, M.; Vallari, Z.; Vasseur, G.; Wachala, T.; Wakamatsu, K.; Walter, C. W.; Wark, D.; Warzycha, W.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilkes, R. J.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Yamada, Y.; Yamamoto, K.; Yamamoto, M.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yoshida, K.; Yuan, T.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; Żmuda, J.; T2K Collaboration

    2016-06-01

    We report the measurement of muon neutrino charged-current interactions on carbon without pions in the final state at the T2K beam energy using 5.734 × 10²⁰ protons on target. For the first time the measurement is reported as a flux-integrated, double-differential cross section in muon kinematic variables (cos θμ, pμ), without correcting for events where a pion is produced and then absorbed by final-state interactions. Two analyses are performed with different selections, background evaluations and cross-section extraction methods to demonstrate the robustness of the results against biases due to model-dependent assumptions. The measurements compare favorably with recent models which include nucleon-nucleon correlations but, given the present precision, the measurement does not distinguish among the available models. The data also agree with Monte Carlo simulations which use effective parameters that are tuned to external data to describe the nuclear effects. The total cross section in the full phase space is σ = (0.417 ± 0.047 (syst) ± 0.005 (stat)) × 10⁻³⁸ cm² nucleon⁻¹ and the cross section integrated in the region of phase space with largest efficiency and best signal-over-background ratio (cos θμ > 0.6 and pμ > 200 MeV) is σ = (0.202 ± 0.036 (syst) ± 0.003 (stat)) × 10⁻³⁸ cm² nucleon⁻¹.

  1. A microlens-array based pupil slicer and double scrambler for MAROON-X

    NASA Astrophysics Data System (ADS)

    Seifahrt, Andreas; Stürmer, Julian; Bean, Jacob L.

    2016-07-01

    We report on the design and construction of a microlens-array (MLA)-based pupil slicer and double scrambler for MAROON-X, a new fiber-fed, red-optical, high-precision radial-velocity spectrograph for one of the twin 6.5 m Magellan Telescopes in Chile. We have constructed a 3x slicer based on a single cylindrical MLA and show that geometric efficiencies of >=85% can be achieved, limited by the fill factor and optical surface quality of the MLA. We present here the final design of the 3x pupil slicer and double scrambler for MAROON-X, based on a dual-MLA design with (a)spherical lenslets. We also discuss the techniques used to create a pseudo-slit of rectangular-core fibers with low FRD levels.

  2. First direct determination of the 48Ca double-β decay Q value

    NASA Astrophysics Data System (ADS)

    Bustabad, S.; Bollen, G.; Brodeur, M.; Lincoln, D. L.; Novario, S. J.; Redshaw, M.; Ringle, R.; Schwarz, S.; Valverde, A. A.

    2013-08-01

    The Low-Energy Beam and Ion Trap (LEBIT) Penning trap mass spectrometer was used for an improved determination of the 48Ca double-β decay Q value: Qββ = 4268.121(79) keV. The new value is 1.2 keV greater than the value in the 2012 atomic mass evaluation [Chin. Phys. C 36, 1603 (2012)], a shift of three σ, and is a factor of 5 more precise. Accurate knowledge of this Q value is important for experimental searches for neutrinoless double-β decay (0νββ) in 48Ca and is essential for extracting the effective mass of the electron neutrino should the 0νββ half-life of 48Ca be experimentally determined.
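
    The quoted figures are mutually consistent, which can be checked with a few lines of arithmetic. The AME2012 uncertainty below is inferred from the quoted "factor of 5", not stated directly in the abstract:

```python
import math

# Quoted in the abstract: new Q-value uncertainty and shift vs. AME2012.
q_new_unc = 0.079          # keV
shift = 1.2                # keV
# "A factor of 5 more precise" implies the AME2012 uncertainty was
# roughly 5x larger (an inference, not a quoted number).
q_old_unc = 5 * q_new_unc  # ~0.40 keV
# Significance of the shift in combined standard deviations.
combined = math.sqrt(q_new_unc**2 + q_old_unc**2)
n_sigma = shift / combined  # ~3, matching the quoted "shift of three sigma"
```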

  3. Advanced supersonic propulsion study, phases 3 and 4. [variable cycle engines

    NASA Technical Reports Server (NTRS)

    Allan, R. D.; Joy, W.

    1977-01-01

    An evaluation of various advanced propulsion concepts for supersonic cruise aircraft resulted in the identification of the double-bypass variable cycle engine as the most promising concept. This engine design utilizes special variable geometry components and an annular exhaust nozzle to provide high take-off thrust and low jet noise. The engine also provides good performance at both supersonic cruise and subsonic cruise. Emission characteristics are excellent. The advanced technology double-bypass variable cycle engine offers an improvement in aircraft range performance relative to earlier supersonic jet engine designs and yet at a lower level of engine noise. Research and technology programs required in certain design areas for this engine concept to realize its potential benefits include refined parametric analysis of selected variable cycle engines, screening of additional unconventional concepts, and engine preliminary design studies. Required critical technology programs are summarized.

  4. Mass and Double-Beta-Decay Q Value of Xe136

    NASA Astrophysics Data System (ADS)

    Redshaw, Matthew; Wingfield, Elizabeth; McDaniel, Joseph; Myers, Edmund G.

    2007-02-01

    The atomic mass of Xe136 has been measured by comparing cyclotron frequencies of single ions in a Penning trap. The result, with 1 standard deviation uncertainty, is M(Xe136) = 135.907 214 484(11) u. Combined with previous results for the mass of Ba136 [Audi, Wapstra, and Thibault, Nucl. Phys. A 729, 337 (2003)], this gives a Q value (M[Xe136] - M[Ba136])c² = 2457.83(37) keV, sufficiently precise for ongoing searches for the neutrinoless double-beta decay of Xe136.

  5. Mass and Double-Beta-Decay Q Value of 136Xe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redshaw, Matthew; Wingfield, Elizabeth; McDaniel, Joseph

    The atomic mass of 136Xe has been measured by comparing cyclotron frequencies of single ions in a Penning trap. The result, with 1 standard deviation uncertainty, is M(136Xe) = 135.907 214 484(11) u. Combined with previous results for the mass of 136Ba [Audi, Wapstra, and Thibault, Nucl. Phys. A 729, 337 (2003)], this gives a Q value (M[136Xe] - M[136Ba])c² = 2457.83(37) keV, sufficiently precise for ongoing searches for the neutrinoless double-beta decay of 136Xe.

  6. Double-lined eclipsing binary system KIC 2306740 with pulsating component discovered from Kepler space photometry

    NASA Astrophysics Data System (ADS)

    Yakut, Kadri

    2015-08-01

    We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component. Archival Kepler satellite data were combined with newly obtained spectroscopic data from the 4.2 m William Herschel Telescope (WHT). This allowed us to determine rather precise orbital and physical parameters of this long-period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals. We modelled the detached binary system assuming non-conservative evolution models with the Cambridge STARS (TWIN) code.

  7. Precision measurement of the longitudinal double-spin asymmetry for inclusive jet production in polarized proton collisions at √s = 200 GeV

    DOE PAGES

    Adamczyk, L.

    2015-08-26

    We report a new measurement of the midrapidity inclusive jet longitudinal double-spin asymmetry, A_LL, in polarized pp collisions at center-of-mass energy √s = 200 GeV. The STAR data place stringent constraints on polarized parton distribution functions extracted at next-to-leading order from global analyses of inclusive deep-inelastic scattering (DIS), semi-inclusive DIS, and RHIC pp data. Furthermore, the measured asymmetries provide evidence at the 3σ level for positive gluon polarization in the Bjorken-x region x > 0.05.

  8. Variable ratio beam splitter for laser applications

    NASA Technical Reports Server (NTRS)

    Brown, R. M.

    1971-01-01

    A beam splitter employing birefringent optics provides either widely different or precisely equal beam ratios. It can be used with laser light source systems for interferometry of lossy media, holography, scattering measurements, and precise beam-ratio applications.

  9. Implosion Dynamics and Mix in Double-Shell ICF Capsule Designs

    NASA Astrophysics Data System (ADS)

    Gunderson, Mark; Daughton, William; Simakov, Andrei; Wilson, Douglas; Watt, Robert; Delamater, Norman; Montgomery, David

    2015-11-01

    From an implosion dynamics perspective, double-shell ICF capsule designs have several advantages over the single-shell NIF ICF capsule point design. Double-shell designs do not require precise shock sequencing, do not rely on hot-spot ignition, have lower peak implosion speed requirements, and have lower convergence ratio requirements. However, there are still hurdles that must be overcome. The timing of the two main shocks in these designs is important in achieving sufficient compression of the DT fuel. Instability of the inner gold shell due to preheat from the hohlraum environment can disrupt the implosion of the inner pill. Mix, in addition to quenching burn in the DT fuel, also decreases the transfer of energy between the beryllium ablator and the inner gold shell during collision, thus decreasing the implosion speed of the inner shell along with compression of the DT fuel. Herein, we discuss the practical implications of these effects for the double-shell designs we carry out in preparation for the NIF double-shell campaign. Work performed under the auspices of DOE by LANL under contract DE-AC52-06NA25396.

  10. EDDIX--a database of ionisation double differential cross sections.

    PubMed

    MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H

    2011-02-01

    Monte Carlo track structure simulation is a method of choice in biophysical modelling and calculations. To model 3D and 4D tracks precisely, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of an eight-parameter function fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, and electron energy and deflection angle. It is planned to freely distribute EDDIX and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.

  11. Compensation for Lithography Induced Process Variations during Physical Design

    NASA Astrophysics Data System (ADS)

    Chin, Eric Yiow-Bing

    This dissertation addresses the challenge of designing robust integrated circuits in the deep-submicron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to selected circuits demonstrate the electrical impact of these variations and provide insight into variability-aware compact models that capture the process-dependent circuit behavior. These variability-aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path-level circuit performance with high accuracy and very little runtime overhead. The Interconnect Variability Characterization (IVC) framework maps lithography-induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one-dimensional repeater circuits patterned with both 90 nm single-patterning and 32 nm double-patterning technologies, in the presence of focus, exposure, and overlay variability. Studies indicate that single- and double-patterning layouts generally exhibit small delay variations (between 1% and 3%) due to self-compensating RC effects associated with dense layouts, with overlay errors dominating for layouts without self-compensating RC effects. The delay response of each double-patterned interconnect structure is fit with a second-order polynomial model in focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1 ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out, reducing the overall delay variation. 
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography-aware circuit analysis by extending it to cell-level applications, utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process-parameter-dependent response to be captured in a nine-term variability-aware compact model based on Bossung fitting equations. For a two-input NAND gate, the variability-aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entirely new realm of circuit analysis and optimization and provides a foundation for path-level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability-aware compact models, the process-dependent performance of a three-stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path-level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability-aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. 
By including these variability-aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability-aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus-exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
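
    A nine-term Bossung-style compact model of the kind described above (quadratic in both focus and exposure, 3 × 3 = 9 coefficients) can be sketched as an ordinary least-squares fit. The model form, coefficient values, and data below are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np

def bossung_design(focus, dose):
    """Design matrix for a nine-term Bossung-style model:
    response = sum over i, j in {0, 1, 2} of c_ij * F**i * E**j."""
    F, E = np.asarray(focus), np.asarray(dose)
    return np.column_stack([F**i * E**j for i in range(3) for j in range(3)])

def fit_bossung(focus, dose, delay):
    """Least-squares estimate of the nine model coefficients."""
    A = bossung_design(focus, dose)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(delay), rcond=None)
    return coeffs

# Synthetic check: generate delays from known coefficients and refit them.
rng = np.random.default_rng(0)
F = rng.uniform(-0.1, 0.1, 200)  # focus offset (um), illustrative range
E = rng.uniform(0.9, 1.1, 200)   # normalized exposure dose
true_c = np.zeros(9)
true_c[0], true_c[3], true_c[6] = 5.0, 0.2, 30.0  # delay ~ 5 + 0.2*F + 30*F**2
delay = bossung_design(F, E) @ true_c
fitted = fit_bossung(F, E, delay)
```

    Once fitted, evaluating the nine-term polynomial replaces a full process-plus-circuit simulation at each focus/exposure point, which is where the reported speedups come from.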

  12. Dynamics of Reactive Microbial Hotspots in Concentration Gradient.

    NASA Astrophysics Data System (ADS)

    Hubert, A.; Farasin, J.; Tabuteau, H.; Dufresne, A.; Meheust, Y.; Le Borgne, T.

    2017-12-01

    In subsurface environments, bacteria play a major role in controlling the kinetics of a broad range of biogeochemical reactions. In such environments, the nutrient fluxes and solute concentrations needed for bacterial metabolism may be highly variable in space and intermittent in time. This can lead to the formation of reactive hotspots where and when conditions are favorable to particular microorganisms, hence inducing biogeochemical reaction kinetics that differ significantly from those measured in homogeneous model environments. To investigate the impact of chemical gradients on the spatial structure and temporal dynamics of subsurface microorganism populations, we developed microfluidic cells allowing precise control of flow and chemical-gradient conditions, as well as quantitative monitoring of the bacteria's spatial distribution and biofilm development. Using the non-motile Escherichia coli JW1908-1 strain and Gallionella capsiferriformans ES-2 as model organisms, we investigate the behavior and development of bacteria over a range of single and double gradients in the concentrations of nutrients, electron donors and electron acceptors. We measure bacterial activity and population growth locally in precisely known hydrodynamic and chemical environments. This approach allows time-resolved monitoring of the location and intensity of reactive hotspots in micromodels as a function of the flow and chemical-gradient conditions. We compare reactive microbial hotspot dynamics in our micromodels to classic growth laws and well-known growth parameters for the laboratory model bacterium Escherichia coli. We also discuss consequences for the formation and temporal dynamics of biofilms in the subsurface.

  13. Parallel mutual information estimation for inferring gene regulatory networks on GPUs

    PubMed Central

    2011-01-01

Background Mutual information is a measure of the statistical dependence between two variables. It has been widely used in various application domains including computational biology, machine learning, statistics, image processing, and financial computing. Previously used simple histogram-based mutual information estimators lack the precision of kernel-based methods. The recently introduced B-spline function based mutual information estimation method is competitive with kernel-based methods in terms of quality but at lower computational complexity. Results We present a new approach to accelerating the B-spline function based mutual information estimation algorithm with commodity graphics hardware. To derive an efficient mapping onto this type of architecture, we have used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, can achieve speedups of up to 82 using double precision on a single GPU compared to a multi-threaded implementation on a quad-core CPU for large microarray datasets. We have used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. Comparisons with existing methods, including ARACNE and TINGe, show that CUDA-MI produces GRNs of higher quality in less time. Conclusions CUDA-MI is publicly available open-source software, written in the CUDA and C++ programming languages. It obtains significant speedups over a multi-threaded CPU implementation by fully exploiting the compute capability of commonly used, CUDA-enabled, low-cost GPUs. PMID:21672264
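As context for the comparison the abstract draws, the simple histogram estimator that B-spline and kernel methods improve upon can be sketched in a few lines; the function name and bin count below are illustrative and are not part of CUDA-MI:

```python
import numpy as np

def histogram_mi(x, y, bins=8):
    """Histogram-based mutual information estimate in nats (illustrative,
    not part of CUDA-MI; this is the baseline estimator the abstract
    contrasts with B-spline and kernel methods)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells: 0*log(0) = 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=10_000)
noise = rng.normal(size=10_000)
mi_dep = histogram_mi(a, a + 0.1 * noise)   # strongly dependent pair
mi_ind = histogram_mi(a, noise)             # independent pair: near zero
```

The hard binning is exactly what loses precision relative to B-spline estimators, which spread each sample over neighbouring bins with smooth weights.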

  14. Development and Evaluation of a High Sensitivity DIAL System for Profiling Atmospheric CO2

    NASA Technical Reports Server (NTRS)

    Ismail, Syed; Koch, Grady J.; Refaat, Tamer F.; Abedin, M. N.; Yu, Jirong; Singh, Upendra N.

    2008-01-01

A ground-based 2-micron Differential Absorption Lidar (DIAL) CO2 profiling system for atmospheric boundary layer studies and for validation of space-based CO2 sensors is being developed and tested at NASA Langley Research Center as part of the NASA Instrument Incubator Program. To capture the variability of CO2 in the lower troposphere, a precision of 1-2 ppm of CO2 (less than 0.5%) at 0.5 to 1 km vertical resolution, from near the surface to the free troposphere (4-5 km), is one of the goals of this program. A 1% (3 ppm) absolute accuracy with 1 km resolution from 0.5 km to the free troposphere (4-5 km) is a further goal. This DIAL system leverages 2-micron laser technology developed under NASA's Laser Risk Reduction Program (LRRP) and other NASA programs: new solid-state laser technology that provides high-pulse-energy, tunable, wavelength-stabilized, double-pulsed lasers operable on pre-selected, temperature-insensitive, strong CO2 absorption lines suitable for profiling lower-tropospheric CO2. It also incorporates new high-quantum-efficiency, high-gain, relatively low-noise phototransistors and a new receiver/signal-processor system to achieve high-precision DIAL measurements. This presentation describes the capabilities of this system for atmospheric CO2 and aerosol profiling. Examples of atmospheric measurements in the lidar and DIAL modes will be presented.
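The underlying DIAL retrieval (a textbook sketch, not the Langley instrument's actual processing chain) follows from the ratio of on-line and off-line returns between two range gates. All numbers below are synthetic:

```python
import numpy as np

def dial_number_density(p_on, p_off, r, dsigma):
    """Average absorber number density in each range bin (m^-3) from the
    standard two-wavelength DIAL equation; dsigma is the on/off
    absorption cross-section difference (m^2)."""
    ratio = (p_on[:-1] * p_off[1:]) / (p_on[1:] * p_off[:-1])
    return np.log(ratio) / (2.0 * dsigma * np.diff(r))

r = np.linspace(0.0, 4000.0, 9)        # range gates, m (synthetic)
n_true = 1.0e22                        # uniform absorber density, m^-3
dsigma = 5.0e-27                       # assumed cross-section difference, m^2
atten = np.exp(-2.0 * n_true * dsigma * r)   # two-way on-line absorption
p_off = 1.0 / (r + 100.0) ** 2         # geometric 1/r^2 falloff only
p_on = p_off * atten                   # off-line shape plus extra absorption

n_est = dial_number_density(p_on, p_off, r, dsigma)
```

Because the equation uses the ratio of signals at two ranges and two wavelengths, the geometric and aerosol terms common to both lines cancel, which is what makes ppm-level precision plausible at all.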

  15. 7075-T6 and 2024-T351 Aluminum Alloy Fatigue Crack Growth Rate Data

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Wright, Christopher W.; Johnston, William M., Jr.

    2005-01-01

Experimental test procedures for the development of fatigue crack growth rate data have been standardized by the American Society for Testing and Materials (ASTM). Over the past 30 years, several gradual changes have been made to the standard without rigorous assessment of the effect these changes have on the precision or variability of the data generated. The ASTM committee on fatigue crack growth has therefore initiated an international round robin test program to assess the precision and variability of test results generated using standard E647-00. Crack growth rate data presented in this report, in support of the ASTM round robin, show excellent precision and repeatability.

  16. Heterogeneity in the 238U/235U Ratios of Angrites.

    NASA Astrophysics Data System (ADS)

    Tissot, F.; Dauphas, N.; Grove, T. L.

    2016-12-01

Angrites are differentiated meteorites of basaltic composition, of either volcanic or plutonic origin, that display minimal post-crystallization alteration, metamorphism, shock or impact brecciation. Because quenched angrites cooled very rapidly, all radiochronometric systems closed simultaneously in these samples. Quenched angrites are thus often used as anchors for cross-calibrating short-lived dating methods (e.g., 26Al-26Mg) against absolute dating techniques (e.g., Pb-Pb). Owing to the assumed constancy of the 238U/235U ratio in natural samples, Pb-Pb ages have long been calculated using a "consensus" 238U/235U ratio, but the discovery of resolvable variations in the 238U/235U ratio of natural samples means that the U isotopic composition of the material to be dated also has to be determined in order to obtain high-precision Pb-Pb ages. We set out (a) to measure at high precision the 238U/235U ratio of a large array of angrites to correct their Pb-Pb ages, and (b) to identify whether all angrites have a similar U isotopic composition and, if not, what processes were responsible for this variability. Recently, Brennecka & Wadhwa (2012) suggested that the angrite parent body had a homogeneous 238U/235U ratio. They reached this conclusion partly because they propagated the uncertainties of the U isotopic composition of the various U double spikes that they used onto the final 238U/235U ratio of each sample. Because this error is systematic (i.e., it affects all samples similarly), differences in the δ238U values of samples corrected with the same double spike are better known than one would be led to believe if uncertainties on the spike composition were propagated. At the conference, we will present the results of high-precision U isotope analyses for six angrite samples: NWA 4590, NWA 4801, NWA 6291, Angra dos Reis, D'Orbigny, and Sahara 99555.
We will show that there is some heterogeneity in the δ238U values of the angrites and will discuss the possible processes by which different angrite samples can acquire different U isotopic compositions. The U isotope data will then be used to correct Pb-Pb ages of angrites estimated using an assumed 238U/235U ratio. These ages will be used to discuss the degree of concordance between short-lived nuclides systems and the absolute Pb-Pb clock in early Solar System materials.
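The systematic-error argument can be illustrated numerically: a calibration bias shared by every analysis made with the same double spike shifts all δ238U values by the same amount, so sample-to-sample differences are unaffected. The values below are invented for illustration and are not the authors' data:

```python
import numpy as np

# Hypothetical per-mil delta-238U values for two samples, a shared
# double-spike calibration offset, and independent analytical noise.
rng = np.random.default_rng(1)
true_d238U = np.array([-0.31, -0.35])      # invented values, per mil
spike_bias = 0.05                          # same offset for both samples
noise = rng.normal(0.0, 0.003, size=2)     # independent measurement noise

measured = true_d238U + spike_bias + noise
diff_measured = measured[0] - measured[1]  # spike_bias cancels here
diff_true = true_d238U[0] - true_d238U[1]
```

The bias appears in each absolute value but drops out of the difference, which is why inter-sample δ238U contrasts are better constrained than the propagated spike uncertainty would suggest.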

  17. Advanced supersonic propulsion system technology study, phase 2

    NASA Technical Reports Server (NTRS)

    Allan, R. D.

    1975-01-01

Variable cycle engines based on the mixed-flow, low-bypass-ratio augmented turbofan cycle, which has shown excellent range capability in the AST airplane, were identified. The best mixed-flow augmented turbofan engine was selected based on range in the AST Baseline Airplane. Selected variable cycle engine features were added to this best conventional baseline engine, and the Dual-Cycle VCE and Double-Bypass VCE were defined. The conventional mixed-flow turbofan and the Double-Bypass VCE were the subjects of preliminary engine design studies to determine mechanical feasibility, confirm weight and dimensional estimates, and identify necessary technologies considered not yet available. Critical engine components were studied and incorporated into the variable cycle engine design.

18. N-fold Darboux transformation and double-Wronskian-typed solitonic structures for a variable-coefficient modified Korteweg-de Vries equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lei, E-mail: wanglei2239@126.com; Gao, Yi-Tian; State Key Laboratory of Software Development Environment, Beijing University of Aeronautics and Astronautics, Beijing 100191

    2012-08-15

Under investigation in this paper is a variable-coefficient modified Korteweg-de Vries (vc-mKdV) model describing certain situations in fluid mechanics, ocean dynamics and plasma physics. The N-fold Darboux transformation (DT) of a variable-coefficient Ablowitz-Kaup-Newell-Segur spectral problem is constructed via a gauge transformation. Multi-solitonic solutions in terms of the double Wronskian for the vc-mKdV model are derived by reduction of the N-fold DT. Three types of solitonic interactions are discussed through figures: (1) overtaking collision; (2) head-on collision; (3) parallel solitons. The nonlinear, dispersive and dissipative terms affect the velocities of the solitonic waves, while the amplitudes of the waves depend on the perturbation term. Highlights: The N-fold DT is applied to a vc-AKNS spectral problem for the first time. Seeking a double-Wronskian solution is reduced to solving two systems. Effects of the variable coefficients on the multi-solitonic waves are discussed in detail. This work solves the problem from Yi Zhang [Ann. Phys. 323 (2008) 3059].

  19. Multidetector computed tomography shows reverse cardiac remodeling after double lung transplantation for pulmonary hypertension.

    PubMed

    Mandich Crovetto, D; Alonso Charterina, S; Jiménez López-Guarch, C; Pont Vilalta, M; Pérez Núñez, M; de Pablo Gafas, A; Escribano Subías, P

    2016-01-01

To use multidetector computed tomography (MDCT) to evaluate the structural changes in the right heart and pulmonary arteries that occur in patients with severe pulmonary hypertension treated by double lung transplantation. This was a retrospective study of 21 consecutive patients diagnosed with severe pulmonary hypertension who underwent double lung transplantation at our center between 2010 and 2014. We analyzed the last MDCT study done before lung transplantation and the first MDCT study done after lung transplantation. We recorded the following variables: diameter of the pulmonary artery trunk, ratio of the diameter of the pulmonary artery trunk to the diameter of the ascending aorta, diameter of the right ventricle, ratio of the diameter of the left ventricle to the diameter of the right ventricle, and eccentricity index. Statistical analysis consisted of the comparison of the means of the variables recorded. In all cases analyzed, the MDCT study done a mean of 24±14 days after double lung transplantation showed a significant reduction in the size of the right heart chambers, with improvement in the ventricular interdependency indices and a reduction in the size of the pulmonary artery trunk (p<0.001 for all the variables analyzed). Patients with pulmonary hypertension treated by double lung transplantation present early reverse remodeling of the changes in the structures of the right heart and pulmonary arterial tree. MDCT is useful for detecting these changes. Copyright © 2016 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  20. Implementing NLO DGLAP evolution in parton showers

    DOE PAGES

    Hoche, Stefan; Krauss, Frank; Prestel, Stefan

    2017-10-13

    Here, we present a parton shower which implements the DGLAP evolution of parton densities and fragmentation functions at next-to-leading order precision up to effects stemming from local four-momentum conservation. The Monte-Carlo simulation is based on including next-to-leading order collinear splitting functions in an existing parton shower and combining their soft enhanced contributions with the corresponding terms at leading order. Soft double counting is avoided by matching to the soft eikonal. Example results from two independent realizations of the algorithm, implemented in the two event generation frameworks Pythia and Sherpa, illustrate the improved precision of the new formalism.

  1. Implementing NLO DGLAP evolution in parton showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höche, Stefan; Krauss, Frank; Prestel, Stefan

    2017-10-01

    We present a parton shower which implements the DGLAP evolution of parton densities and fragmentation functions at next-to-leading order precision up to effects stemming from local four-momentum conservation. The Monte-Carlo simulation is based on including next-to-leading order collinear splitting functions in an existing parton shower and combining their soft enhanced contributions with the corresponding terms at leading order. Soft double counting is avoided by matching to the soft eikonal. Example results from two independent realizations of the algorithm, implemented in the two event generation frameworks Pythia and Sherpa, illustrate the improved precision of the new formalism.

  2. Three-dimensional orbit and physical parameters of HD 6840

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Li; Ren, Shu-Lin; Fu, Yan-Ning

    2016-02-01

HD 6840 is a double-lined visual binary with an orbital period of ˜7.5 years. By fitting the speckle interferometric measurements made with the 6 m BTA telescope and the 3.5 m WIYN telescope, Balega et al. gave a preliminary astrometric orbital solution for the system in 2006. Recently, Griffin derived a precise spectroscopic orbital solution from radial velocities observed with OPH and the Cambridge Coravel. However, due to the low precision of the determined orbital inclination, the derived component masses are not satisfactory. By adding newly collected astrometric data from the Fourth Catalog of Interferometric Measurements of Binary Stars, we give a high-precision three-dimensional orbit solution and derive preliminary physical parameters of HD 6840 via a simultaneous fit to both the astrometric and radial velocity measurements.

  3. New instrumentation for precise (n,γ) measurements at ILL Grenoble

    NASA Astrophysics Data System (ADS)

    Urban, W.; Jentschel, M.; Märkisch, B.; Materna, Th; Bernards, Ch; Drescher, C.; Fransen, Ch; Jolie, J.; Köster, U.; Mutti, P.; Rzaca-Urban, T.; Simpson, G. S.

    2013-03-01

An array of eight Ge detectors for coincidence measurements of γ rays from neutron-capture reactions has been constructed at the PF1B cold-neutron facility of the Institut Laue-Langevin. The detectors, arranged in one plane every 45°, can be used for angular correlation measurements. The neutron collimation line of the setup provides a neutron beam of 12 mm in diameter and a capture flux of about 10^8/(s × cm^2) at the target position, with a negligible neutron halo. With this setup, up to 10^9 γγ and up to 10^8 triple-γ coincidence events have been collected in a one-day measurement. Precise energy and efficiency calibrations up to 10 MeV are easily performed with the 27Al(n,γ)28Al and 35Cl(n,γ)36Cl reactions. Test measurements have shown that neutron binding energies can be determined with an accuracy down to a few eV, and angular correlation coefficients can be measured with a precision down to the percent level. The triggerless data collected with digital electronics and acquisition allow half-lives of excited levels to be determined in the nano- to microsecond range. The high resolving power of double- and triple-γ time coincidences allows significant improvements of excitation schemes reported in previous (n,γ) works and complements high-resolution γ-energy measurements at the double-crystal Bragg spectrometer GAMS of the ILL.

  4. Variable-pulse switching circuit accurately controls solenoid-valve actuations

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1967-01-01

    Solid state circuit generating adjustable square wave pulses of sufficient power operates a 28 volt dc solenoid valve at precise time intervals. This circuit is used for precise time control of fluid flow in combustion experiments.

  5. Relations between basic and specific motor abilities and player quality of young basketball players.

    PubMed

    Marić, Kristijan; Katić, Ratko; Jelicić, Mario

    2013-05-01

Subjects from 5 first-league clubs from Herzegovina were tested with the purpose of determining the relations between basic and specific motor abilities, as well as the effect of specific abilities on player efficiency in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency were used on a sample of 83 basketball players. Two significant canonical correlations, i.e., linear combinations, explained the relation between the set of twelve variables of the basic motor space and the five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, the power of the overarm throw, shooting and passing precision, and the skill of handling the ball. The impact of the basic motor abilities of precision and balance on the specific abilities of passing and shooting precision and ball handling underlies the second linear combination. The results of regression correlation analysis between the variable set of specific motor abilities and game efficiency showed that the ability of ball handling has the largest impact on player quality in basketball cadets, followed by shooting precision and passing precision, and the power of the overarm throw.

  6. Comparison of split double and triple twists in pair figure skating.

    PubMed

    King, Deborah L; Smith, Sarah L; Brown, Michele R; McCrory, Jean L; Munkasy, Barry A; Scheirman, Gary I

    2008-05-01

    In this study, we compared the kinematic variables of the split triple twist with those of the split double twist to help coaches and scientists understand these landmark pair skating skills. High-speed video was taken during the pair short and free programmes at the 2002 Salt Lake City Winter Olympics and the 2003 International Skating Union Grand Prix Finals. Three-dimensional analyses of 14 split double twists and 15 split triple twists from eleven pairs were completed. In spite of considerable variability in the performance variables among the pairs, the main difference between the split double twists and split triple twists was an increase in rotational rate. While eight of the eleven pairs relied primarily on an increased rotational rate to complete the split triple twist, three pairs employed a combined strategy of increased rotational rate and increased flight time due predominantly to delayed or lower catches. These results were similar to observations of jumps in singles skating for which the extra rotation is typically due to an increase in rotational velocity; increases in flight time come primarily from delayed landings as opposed to additional height during flight. Combining an increase in flight time and rotational rate may be a good strategy for completing the split triple twist in pair skating.

  7. Precision Spectral Variability of L Dwarfs from the Ground

    NASA Astrophysics Data System (ADS)

    Burgasser, Adam J.; Schlawin, Everett; Teske, Johanna K.; Karalidi, Theodora; Gizis, John

    2017-01-01

L dwarf photospheres (1500 K < T < 2500 K) contain mineral and metal condensates, which appear to organize into cloud structures, as inferred from observed periodic photometric variations with amplitudes from <1% to 30%. Studying the vertical structure, composition, and long-term evolution of these clouds necessitates precision spectroscopic monitoring, until recently limited to space-based facilities. Building on techniques developed for ground-based exoplanet transit spectroscopy, we present a method for precision spectral monitoring of L dwarfs with nearby visual companions. Using IRTF/SpeX, we demonstrate <0.5% spectral variability precision across the 0.9-2.4 micron band, and present results for two known L5 dwarf variables, J0835-0819 and J1821+1414, both of which show evidence of 3D cloud structure similar to that seen in space-based observations. We describe a survey of 30 systems which would sample the full L dwarf sequence and allow characterization of temperature, surface gravity, metallicity, rotation period and orientation effects on cloud structure, composition and evolution. This research is supported by funding from the National Science Foundation under award No. AST-1517177, and the National Aeronautics and Space Administration under Grant No. NNX15AI75G.

  8. A terrain-based paired-site sampling design to assess biodiversity losses from eastern hemlock decline

    USGS Publications Warehouse

    Young, J.A.; Smith, D.R.; Snyder, C.D.; Lemarie, D.P.

    2002-01-01

Biodiversity surveys are often hampered by the inability to control extraneous sources of variability introduced into comparisons of populations across a heterogeneous landscape. If not specifically accounted for a priori, this noise can weaken comparisons between sites and can make it difficult to draw inferences about specific ecological processes. We developed a terrain-based, paired-site sampling design to analyze differences in aquatic biodiversity between streams draining eastern hemlock (Tsuga canadensis) forests and those draining mixed hardwood forests in Delaware Water Gap National Recreation Area (USA). The goal of this design was to minimize variance due to terrain influences on stream communities while representing the range of hemlock-dominated stream environments present in the park. We used geographic information systems (GIS) and cluster analysis to define and partition hemlock-dominated streams into terrain types based on topographic variables and stream order. We computed the similarity of forest stands within terrain types and used this information to pair hemlock-dominated streams with hardwood counterparts prior to sampling. We evaluated the effectiveness of the design through power analysis and found that power to detect differences in aquatic invertebrate taxa richness was highest when sites were paired and terrain type was included as a factor in the analysis. Precision of the estimated difference in mean richness was nearly doubled using the terrain-based, paired-site design in comparison to other evaluated designs. Use of this method allowed us to sample stream communities representative of park-wide forest conditions while effectively controlling for landscape variability.
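The variance-reduction logic behind the paired design can be sketched with synthetic numbers (not the park survey data): a terrain effect shared by both members of a pair cancels in the within-pair difference, so the paired comparison sees far less noise:

```python
import numpy as np

# Synthetic richness data: each pair of sites shares one terrain effect,
# plus independent site-level noise. Numbers are invented for illustration.
rng = np.random.default_rng(42)
n_pairs = 200
terrain = rng.normal(0.0, 4.0, n_pairs)            # shared terrain influence
hemlock = 20 + terrain + rng.normal(0, 1, n_pairs)   # hemlock-stream richness
hardwood = 22 + terrain + rng.normal(0, 1, n_pairs)  # hardwood counterpart

# Unpaired analysis: terrain noise inflates both sample variances.
var_unpaired = hemlock.var(ddof=1) + hardwood.var(ddof=1)
# Paired analysis: terrain cancels in the within-pair difference.
var_paired = (hardwood - hemlock).var(ddof=1)
```

The paired variance contains only the two site-level noise terms, which is the mechanism behind the near-doubling of precision the abstract reports.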

  9. Focal length hysteresis of a double-liquid lens based on electrowetting

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Wang, Dazhen; Hu, Zhiwei; Chen, Jiabi; Zhuang, Songlin

    2013-02-01

    In this paper, an extended Young equation especially suited for an ideal cylindrical double-liquid variable-focus lens is derived by means of an energy minimization method. Based on the extended Young equation, a kind of focal length hysteresis effect is introduced into the double-liquid variable-focus lens. Such an effect can be explained theoretically by adding a force of friction to the tri-phase contact line. Theoretical analysis shows that the focal length at a particular voltage can be different depending on whether the applied voltage is increasing or decreasing, that is, there is a focal length hysteresis effect. Moreover, the focal length at a particular voltage must be larger when the voltage is rising than when it is dropping. These conclusions are also verified by experiments.
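A minimal sketch of the effect, assuming the standard Young-Lippmann relation plus a constant contact-line friction term f (a simplification of the paper's extended Young equation; all parameter values are invented), shows how sweeping the voltage up and back down yields two different contact angles at the same voltage:

```python
import numpy as np

# Illustrative electrowetting parameters (assumed, not from the paper).
eps0, eps_r = 8.854e-12, 3.0      # vacuum permittivity, dielectric constant
gamma, d = 0.04, 1.0e-6           # interfacial tension (N/m), film thickness (m)
cos0 = np.cos(np.radians(140.0))  # zero-voltage contact angle cosine
f = 0.05                          # contact-line friction term (dimensionless)

def cos_theta(v, direction):
    """Contact-angle cosine while voltage is rising (+1) or falling (-1):
    the friction term makes the realized angle lag the Young-Lippmann
    equilibrium, in opposite senses for the two sweep directions."""
    eq = cos0 + eps_r * eps0 * v**2 / (2.0 * gamma * d)
    return np.clip(eq - direction * f, -1.0, 1.0)

v = 20.0
up = cos_theta(v, +1)    # state reached with voltage increasing
down = cos_theta(v, -1)  # state reached with voltage decreasing
```

The two branches differ by 2f, i.e. the contact angle (and hence the focal length of the liquid lens) at a given voltage depends on the sweep direction, which is the hysteresis the paper describes.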

  10. Design of permanent magnet eddy current brake for a small scaled electromagnetic launch model

    NASA Astrophysics Data System (ADS)

    Zhou, Shigui; Yu, Haitao; Hu, Minqiang; Huang, Lei

    2012-04-01

    A variable pole-pitch double-sided permanent magnet (PM) linear eddy current brake (LECB) is proposed for a small scaled electromagnetic launch model. A two-dimensional (2D) analytical steady state model is presented for the double-sided PM-LECB, and the expression for the braking force is derived. Based on the analytical model, the material and eddy current skin effect of the conducting plate are analyzed. Moreover, a variable pole-pitch double-sided PM-LECB is proposed for the effective braking of the moving plate. In addition, the braking force is predicted by finite element (FE) analysis, and the simulated results are in good agreement with the analytical model. Finally, a prototype is presented to test the braking profile for validation of the proposed design.

  11. Mean centering of double divisor ratio spectra, a novel spectrophotometric method for analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Hassan, Said A.; Elzanfaly, Eman S.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-01-01

A novel spectrophotometric method was developed for the determination of ternary mixtures without prior separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra; the mathematical basis of the procedure is illustrated. The method was evaluated by determination of a model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory-prepared mixtures and in a commercial pharmaceutical preparation. To properly present the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) method and two similar methods used for analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported method for analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines, and its accuracy, precision, repeatability and robustness were found to be within acceptable limits.
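The core idea can be sketched with synthetic Gaussian bands (not the AML/ALI/HCT spectra, and a simplified special case of the published procedure): dividing a mixture spectrum by the summed spectra of two components turns their joint contribution into a constant, which mean centering removes, leaving a signal proportional to the third component's concentration:

```python
import numpy as np

wl = np.linspace(200.0, 400.0, 401)    # wavelength grid, nm (synthetic)

def band(center, width):
    """Synthetic Gaussian absorption band (unit-concentration spectrum)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

A, B, C = band(250, 40), band(300, 40), band(350, 40)

def mean_center(v):
    return v - v.mean()

def mcdd_signal(mixture, divisor):
    """Amplitude of the mean-centered double-divisor ratio spectrum."""
    return np.abs(mean_center(mixture / divisor)).max()

divisor = B + C   # double divisor built from the two interfering components
# Mixtures in which B and C keep the divisor's 1:1 proportion, so their
# contribution divides out to a constant that mean centering removes.
sig_1 = mcdd_signal(1.0 * A + 0.5 * (B + C), divisor)
sig_2 = mcdd_signal(2.0 * A + 0.5 * (B + C), divisor)
```

Doubling the concentration of A doubles the residual signal, which is the linearity the calibration step of the method relies on.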

  12. Two-phase strategy of neural control for planar reaching movements: I. XY coordination variability and its relation to end-point variability.

    PubMed

    Rand, Miya K; Shimansky, Yury P

    2013-03-01

A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements was developed in our previous studies. The utilization of that model for data analysis allowed, for the first time, examination of the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of a lower extent of control optimality. To successfully grasp the target object, the CNS increases the precision demand during the final phase, resulting in a higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between the X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating a higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy. The paper further discusses the relationship between motor variability and XYC variability.

  13. Modelling the balance between quiescence and cell death in normal and tumour cell populations.

    PubMed

    Spinelli, Lorenzo; Torricelli, Alessandro; Ubezio, Paolo; Basse, Britta

    2006-08-01

When considering either human adult tissues (in vivo) or cell cultures (in vitro), cell number is regulated by the relationship between quiescent cells, proliferating cells, cell death and other controls of cell cycle duration. By formulating a mathematical description, we see that even small alterations of this relationship may cause a non-growing population to start growing, with doubling times characteristic of human tumours. Our model consists of two age-structured partial differential equations for the proliferating and quiescent cell compartments. Model parameters are the death rates from and transition rates between these compartments. The partial differential equations can be solved for the steady-age distributions, giving the distribution of the cells through the cell cycle as a function of specific model parameter values. Appropriate formulas can then be derived for various population characteristics such as the labelling index, proliferation fraction, doubling time and potential doubling time of the cell population. Such characteristic quantities can be estimated experimentally, although with decreasing precision from in vitro to in vivo experimental systems to the clinic. The model can be used to investigate the effects of a single alteration of either quiescence or cell death control on the growth of the whole population, and the non-trivial dependence of the doubling time and other observable quantities on particular underlying cell-cycle scenarios of death and quiescence. The model indicates that tumour evolution in vivo is a sequence of steady states, each characterised by particular death and quiescence rate functions. We suggest that a key passage of carcinogenesis is a loss of communication between the quiescence, death and cell cycle machineries, causing a defect in their precise, cell-cycle-dependent relationship.
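A minimal two-compartment ODE reduction of such an age-structured model (all rate values invented for illustration) shows how a small change in one death rate can turn a balanced population into a growing one with a tumour-like doubling time:

```python
import numpy as np

def growth_rate(b, k_q, k_p, m_p, m_q):
    """Dominant eigenvalue (per day) of a linear two-compartment system:
    proliferating cells P divide at rate b, enter quiescence at rate k_q,
    die at rate m_p; quiescent cells Q re-enter the cycle at rate k_p or
    die at rate m_q. This is a crude ODE stand-in for the age-structured
    PDE model, not the paper's formulation."""
    M = np.array([[b - k_q - m_p, k_p],
                  [k_q, -(k_p + m_q)]])
    return np.linalg.eigvals(M).real.max()

# Rates chosen so death and quiescence exactly balance proliferation.
balanced = growth_rate(b=1.0, k_q=0.5, k_p=0.2, m_p=0.70, m_q=0.3)
# A small drop in the death rate of proliferating cells tips the balance.
perturbed = growth_rate(b=1.0, k_q=0.5, k_p=0.2, m_p=0.65, m_q=0.3)
doubling_days = np.log(2.0) / perturbed
```

The balanced parameter set has zero net growth, while reducing the proliferating-cell death rate by 0.05/day yields exponential growth with a doubling time of roughly three weeks, illustrating the sensitivity the abstract describes.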

  14. A New Method of High-Precision Positioning for an Indoor Pseudolite without Using the Known Point Initialization.

    PubMed

    Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe

    2018-06-20

Because of the great influence of multipath effects, noise, and clock errors on pseudorange measurements, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is mostly determined by the known point initialization (KPI) method, after which the ambiguities can be fixed with the LAMBDA method. In this paper, a new method that achieves high-precision indoor pseudolite positioning without using KPI is proposed. The initial coordinates can be obtained quickly enough to meet the accuracy requirement of the indoor LAMBDA method. The method proceeds as follows: for a low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by AFM meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following these steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, were conducted to verify the feasibility of the new method. According to the results, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope with two-centimeter or four-centimeter search steps is used to ensure centimeter-level precision and high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value for the LAMBDA method, and the ambiguities can be fixed quickly.
The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
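The AFM search sketched in this abstract can be illustrated in a few lines. This is a toy sketch under stated assumptions, not the authors' implementation: the geometry is simplified to between-transmitter differences (the base-receiver terms of a true double difference are folded into the observed values), the names (`grid_search`, `ambiguity_function_value`) are hypothetical, and the 0.1903 m GPS L1 wavelength is an assumed value.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def ambiguity_function_value(observed_cycles, sat_pairs, candidate, wavelength=0.1903):
    """Sum of cosines of phase residuals (in cycles) at a candidate position.
    The AFV peaks where all residuals are near integers, i.e. near the truth."""
    afv = 0.0
    for phi_obs, (sat_a, sat_b) in zip(observed_cycles, sat_pairs):
        # predicted differential range at the candidate point, in cycles
        predicted = (dist(candidate, sat_a) - dist(candidate, sat_b)) / wavelength
        afv += math.cos(2.0 * math.pi * (phi_obs - predicted))
    return afv

def grid_search(observed, sat_pairs, center, half_width=0.5, step=0.02):
    """Scan a cube of +/- half_width meters around the coarse DPL fix in
    `step`-meter increments and return the AFV-maximizing candidate."""
    best, best_afv = None, -math.inf
    n = int(2 * half_width / step) + 1
    for i in range(n):
        for j in range(n):
            for k in range(n):
                cand = tuple(c - half_width + idx * step
                             for c, idx in zip(center, (i, j, k)))
                afv = ambiguity_function_value(observed, sat_pairs, cand)
                if afv > best_afv:
                    best, best_afv = cand, afv
    return best, best_afv
```

The returned maximum-AFV coordinates would then seed the LAMBDA ambiguity fixing, as the abstract describes; a coarser first pass (4 cm steps) refined by a 2 cm pass keeps the search cheap.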

  15. Year-class formation of upper St. Lawrence River northern pike

    USGS Publications Warehouse

    Smith, B.M.; Farrell, J.M.; Underwood, H.B.; Smith, S.J.

    2007-01-01

Variables associated with year-class formation in upper St. Lawrence River northern pike Esox lucius were examined to explore population trends. A partial least-squares (PLS) regression model (PLS 1) was used to relate a year-class strength index (YCSI; 1974-1997) to explanatory variables associated with spawning and nursery areas (seasonal water level and temperature and their variability, number of ice days, and last day of ice presence). A second model (PLS 2) incorporated four additional ecological variables: potential predators (abundance of double-crested cormorants Phalacrocorax auritus and yellow perch Perca flavescens), female northern pike biomass (as a measure of stock-recruitment effects), and total phosphorus (productivity). Trends in adult northern pike catch revealed a decline (1981-2005), and year-class strength was positively related to catch per unit effort (CPUE; R² = 0.58). The YCSI exceeded the 23-year mean in only 2 of the last 10 years. Cyclic patterns in the YCSI time series (along with strong year-classes every 4-6 years) were apparent, as was a dampening of amplitude beginning around 1990. The PLS 1 model explained over 50% of variation in both explanatory variables and the dependent variable, YCSI first-order moving-average residuals. Variables retained (N = 10; Wold's statistic ≥ 0.8) included negative YCSI associations with high summer water levels, high variability in spring and fall water levels, and variability in fall water temperature. The YCSI exhibited positive associations with high spring, summer, and fall water temperature, variability in spring temperature, and high winter and spring water level. The PLS 2 model led to positive YCSI associations with phosphorus and yellow perch CPUE and a negative correlation with double-crested cormorant abundance.
Environmental variables (water level and temperature) are hypothesized to regulate northern pike YCSI cycles, and dampening in YCSI magnitude may be related to a combination of factors, including wetland habitat changes, reduced nutrient loading, and increased predation by double-crested cormorants. © Copyright by the American Fisheries Society 2007.

  16. Double-Balloon-Assisted n-Butyl-2-Cyanoacrylate Embolization of Intrahepatic Arterioportal Shunt Prior to Chemoembolization of Hepatocellular Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takao, Hidemasa, E-mail: takaoh-tky@umin.ac.jp; Shibata, Eisuke; Ohtomo, Kuni

A case of multiple hepatocellular carcinomas with a severe intrahepatic arterioportal shunt that was successfully embolized with n-butyl-2-cyanoacrylate under coaxial double-balloon occlusion prior to transcatheter arterial chemoembolization is presented. A proximal balloon positioned at the proper hepatic artery was used for flow control, and a coaxial microballoon, positioned in the closest of three arterial feeding branches to the arterioportal shunt, was used to control the delivery of n-butyl-2-cyanoacrylate. This coaxial double-balloon technique can prevent proximal embolization and distal migration of n-butyl-2-cyanoacrylate and enables precise control of the distribution of n-butyl-2-cyanoacrylate. It could also be applicable to n-butyl-2-cyanoacrylate embolization for shunts other than intrahepatic arterioportal shunts.

  17. Doppler Lidar Measurements of Tropospheric Wind Profiles Using the Aerosol Double Edge Technique

    NASA Technical Reports Server (NTRS)

    Gentry, Bruce M.; Li, Steven X.; Mathur, Savyasachee; Korb, C. Laurence; Chen, Huailin

    2000-01-01

The development of a ground based direct detection Doppler lidar based on the recently described aerosol double edge technique is reported. A pulsed, injection seeded Nd:YAG laser operating at 1064 nm is used to make range resolved measurements of atmospheric winds in the free troposphere. The wind measurements are determined by measuring the Doppler shift of the laser signal backscattered from atmospheric aerosols. The lidar instrument and double edge method are described and initial tropospheric wind profile measurements are presented. Wind profiles are reported for both day and night operation. The measurements extend to altitudes as high as 14 km and are compared to rawinsonde wind profile data from Dulles airport in Virginia. The vertical resolution of the lidar measurements is 330 m and the rms precision of the measurements is as low as 0.6 m/s.
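The measurement principle here reduces to a one-line relation: the round trip to the aerosol and back doubles the Doppler shift, so the line-of-sight wind speed is half the wavelength times the measured shift. A sketch with a hypothetical function name, assuming the 1064 nm wavelength quoted above:

```python
def radial_wind_speed(doppler_shift_hz, wavelength_m=1.064e-6):
    """Line-of-sight wind speed from the Doppler shift of backscattered
    light; the round-trip geometry gives df = 2 * v / wavelength,
    hence v = wavelength * df / 2."""
    return 0.5 * wavelength_m * doppler_shift_hz
```

At 1064 nm the quoted 0.6 m/s rms precision corresponds to resolving shifts of roughly 1.1 MHz, which indicates the frequency discrimination the double edge filter must deliver.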

  18. Development of a multispectral sensor for crop canopy temperature measurement

    USDA-ARS?s Scientific Manuscript database

    Quantifying spatial and temporal variability in plant stress has precision agriculture applications in controlling variable rate irrigation and variable rate nutrient application. One approach to plant stress detection is crop canopy temperature measurement by the use of thermographic or radiometric...

  19. Temperature-Induced Protein Release from Water-in-Oil-in-Water Double Emulsions

    PubMed Central

    Rojas, Edith C.; Staton, Jennifer A.; John, Vijay T.; Papadopoulos, Kyriakos D.

    2009-01-01

    A model water-in-oil-in-water (W1/O/W2) double emulsion was prepared by a two-step emulsification procedure and subsequently subjected to temperature changes that caused the oil phase to freeze and thaw while the two aqueous phases remained liquid. Our previous work on individual double-emulsion globules1 demonstrated that crystallizing the oil phase (O) preserves stability, while subsequent thawing triggers coalescence of the droplets of the internal aqueous phase (W1) with the external aqueous phase (W2), termed external coalescence. Activation of this instability mechanism led to instant release of fluorescently tagged bovine serum albumin (fluorescein isothiocyanate (FITC)-BSA) from the W1 droplets and into W2. These results motivated us to apply the proposed temperature-induced globule-breakage mechanism to bulk double emulsions. As expected, no phase separation of the emulsion occurred if stored at temperatures below 18 °C (freezing point of the model oil n-hexadecane), whereas oil thawing readily caused instability. Crucial variables were identified during experimentation, and found to greatly influence the behavior of bulk double emulsions following freeze-thaw cycling. Adjustment of these variables accounted for a more efficient release of the encapsulated protein. PMID:18543998

  20. Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.

    PubMed

    McShane, L M; Clark, L C; Combs, G F; Turnbull, B W

    1991-06-01

Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods for various replication schemes and developing effective quality control-checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma by high-performance liquid chromatography.

  1. Breathing is affected by dopamine D2-like receptors in the basolateral amygdala.

    PubMed

    Sugita, Toshihisa; Kanamaru, Mitsuko; Iizuka, Makito; Sato, Kanako; Tsukada, Setsuro; Kawamura, Mitsuru; Homma, Ikuo; Izumizaki, Masahiko

    2015-04-01

The precise mechanisms underlying how emotions change breathing patterns remain unclear, but dopamine is a candidate neurotransmitter in the process of emotion-associated breathing. We investigated whether basal dopamine release occurs in the basolateral amygdala (BLA), where sensory-related inputs are received and lead to fear or anxiety responses, and whether D1- and D2-like receptor antagonists affect breathing patterns and dopamine release in the BLA. Adult male mice (C57BL/6N) were perfused with artificial cerebrospinal fluid, a D1-like receptor antagonist (SCH 23390), or a D2-like receptor antagonist ((S)-(-)-sulpiride) through a microdialysis probe in the BLA. Respiratory variables were measured using a double-chamber plethysmograph. Dopamine release was measured by HPLC. Perfusion of (S)-(-)-sulpiride, but not SCH 23390, in the BLA specifically decreased respiratory rate without changes in local release of dopamine. These results suggest that basal dopamine release in the BLA, at least partially, increases respiratory rate only through post-synaptic D2-like receptors, not autoreceptors, which might be associated with emotional responses. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Advanced CLIPS capabilities

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1991-01-01

    The C Language Integrated Production System (CLIPS) is a forward chaining rule based language developed by NASA. CLIPS was designed specifically to provide high portability, low cost, and easy integration with external systems. The current release of CLIPS, version 4.3, is being used by over 2500 users throughout the public and private community. The primary addition to the next release of CLIPS, version 5.0, will be the CLIPS Object Oriented Language (COOL). The major capabilities of COOL are: class definition with multiple inheritance and no restrictions on the number, types, or cardinality of slots; message passing which allows procedural code bundled with an object to be executed; and query functions which allow groups of instances to be examined and manipulated. In addition to COOL, numerous other enhancements were added to CLIPS including: generic functions (which allow different pieces of procedural code to be executed depending upon the types or classes of the arguments); integer and double precision data type support; multiple conflict resolution strategies; global variables; logical dependencies; type checking on facts; full ANSI compiler support; and incremental reset for rules.

  3. Loss of Urinary Macromolecules in Mice Causes Interstitial and Intratubular Renal Calcification Dependent on the Underlying Conditions

    NASA Astrophysics Data System (ADS)

    Wu, Xue-Ru; Lieske, John C.; Evan, Andrew P.; Sommer, Andre J.; Liaw, Lucy; Mo, Lan

    2008-09-01

    Urinary protein macromolecules have long been thought to play a role in influencing the various phases of urolithiasis including nucleation, growth, aggregation of mineral crystals and their subsequent adhesion to the renal epithelial cells. However, compelling evidence regarding their precise role was lacking, due partly to the fact that most prior studies were done in vitro and results were highly variable depending on the experimental conditions. The advent of genetic engineering technology has made it possible to study urinary protein macromolecules within an in vivo biological system. Indeed, recent studies have begun to shed light on the net effects of loss of one or more macromolecules on the earliest steps of urolithiasis. This paper focuses on the in vivo consequences of inactivating Tamm-Horsfall protein and/or osteopontin, two major urinary glycoproteins, using the knockout approach. The renal phenotypes of both single and double knockout mice under spontaneous or hyperoxaluric conditions will be described. The functional significance of the urinary macromolecules as critical defense factors against renal calcification will also be discussed.

  4. Salesperson Communication Style: The Neglected Dimension in Sales Performance.

    ERIC Educational Resources Information Center

    Dion, Paul A.; Notarantonio, Elaine M.

    1992-01-01

    Reports two studies investigating the relationship between communication style variables and sales performance. Relates findings which indicate that the "precise" dimension of communication is strongly associated with effective sales performance and that different combinations of the "precise" and "friendly"…

  5. Dr. Kenneth Sudduth: A giant pioneering precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Dr. Ken Sudduth is nationally and internationally recognized for his precision agriculture research and leadership, especially in the areas of soil sensing and assessment of spatial variability for site-specific management. His many noteworthy contributions include novel techniques and methodology f...

  6. Variable Permanent Magnet Quadrupole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mihara, T.; Iwashita, Y.; /Kyoto U.

A permanent magnet quadrupole (PMQ) is one of the candidates for the final focus lens in a linear collider. A variable permanent magnet quadrupole stronger than 120 T/m is achieved by the introduction of saturated iron and a 'double ring structure'. A fabricated PMQ achieved a 24 T integrated gradient with a 20 mm bore diameter, 100 mm magnet diameter and 20 cm pole length. The strength of the PMQ is adjustable in 1.4 T steps, owing to its 'double ring structure': the PMQ is split into two nested rings; the outer ring is sliced along the beam line into four parts and is rotated to change the strength. This paper describes the variable PMQ from fabrication to recent adjustments.

  7. Justifying scale type for a latent variable: Formative or reflective?

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Bahron, Arsiah; Bagul, Awangku Hassanal Bahar Pengiran

    2015-12-01

The study explored the possibility of creating a procedure at the experimental level to double-check whether a manifest variable's scale type is formative or reflective. At present, the criteria for making such a decision depend heavily on researchers' judgment at the conceptual and operational level. The study created an experimental procedure that appears able to confirm the decisions from the conceptual- and operational-level judgments. The procedure includes the following tests: Variance Inflation Factor (VIF), Tolerance (TOL), Ridge Regression, Cronbach's alpha, Dillon-Goldstein's rho, and the first and second eigenvalues. The procedure considers both the multicollinearity and the consistency of the manifest variables. As a result, the procedure arrived at the same judgment as the carefully established decision making at the conceptual and operational level.

  8. Study of a double bubbler for material balance in liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hugues Lambert

The objective of this project was to determine the potential of a double bubbler to measure the density and fluid level of the molten salt contained in an electrorefiner. Such in-situ real-time measurements can provide key information for material balances in the pyroprocessing of spent nuclear fuel. This theoretical study showed the technique has a lot of promise. Four different experiments were designed and performed. The first three experiments studied the influence of factors such as the depth difference between the two tubes, the gas flow rate, and the radius of the tubes, and determined the best operating conditions. The purpose of the last experiment was to determine the precision and accuracy of the apparatus under specific conditions. The operating conditions selected for the characterization of the system were a depth difference of 25 cm and a flow rate of 55 ml/min in each tube. The measured densities were between 1,000 g/l and 1,400 g/l and the levels between 34 cm and 40 cm. The depth difference between the tubes is critical: the larger, the better. The experiments showed that the flow rate should be the same in each tube. The agreement with theoretical predictions was very good. The density precision was very satisfactory (spread < 0.1%) and the accuracy was about 1%. For the level determination, the precision was also very satisfactory (spread < 0.1%), but the accuracy was about 3%. However, these two biases could be corrected with calibration curves. In addition to the aqueous systems studied in the present work, future work will focus on examining the behavior of the double bubbler instrumentation in molten salt systems. The two main challenges identified in this work are the effect of temperature and the variation of surface tension.
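The hydrostatics behind a double bubbler are simple: the back-pressure difference between two tubes whose tips sit a known depth apart gives the density, and one tube's absolute back-pressure then gives the level above its tip. A sketch with hypothetical function names, using the 25 cm depth difference reported above (note 1,000-1,400 g/l equals 1,000-1,400 kg/m³):

```python
G = 9.80665  # standard gravity, m/s^2

def density_from_double_bubbler(delta_p_pa, depth_diff_m=0.25):
    """Fluid density from the back-pressure difference between two bubbler
    tubes whose tips are depth_diff_m apart: dP = rho * g * dh."""
    return delta_p_pa / (G * depth_diff_m)

def level_above_tip(p_pa, rho_kg_per_m3):
    """Liquid level above the deeper tube tip from that tube's
    back-pressure: P = rho * g * h."""
    return p_pa / (rho_kg_per_m3 * G)
```

Because the density estimate scales as 1/dh, a larger tip separation dilutes the pressure-sensor error, which is consistent with the abstract's finding that a larger depth difference is better.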

  9. Validation of an automatic system (DoubleCage) for detecting the location of animals during preference tests.

    PubMed

    Tsai, P P; Nagelschmidt, N; Kirchner, J; Stelzer, H D; Hackbarth, H

    2012-01-01

    Preference tests have often been performed for collecting information about animals' acceptance of environmental refinement objects. In numerous published studies animals were individually tested during preference experiments, as it is difficult to observe group-housed animals with an automatic system. Thus, videotaping is still the most favoured method for observing preferences of socially-housed animals. To reduce the observation workload and to be able to carry out preference testing of socially-housed animals, an automatic recording system (DoubleCage) was developed for determining the location of group-housed animals in a preference test set-up. This system is able to distinguish the transition of individual animals between two cages and to record up to 16 animals at the same time (four animals per cage). The present study evaluated the reliability of the DoubleCage system. The data recorded by the DoubleCage program and the data obtained by human observation were compared. The measurements of the DoubleCage system and manual observation of the videotapes are comparable and significantly correlated (P < 0.0001) with good agreement. Using the DoubleCage system enables precise and reliable recording of the preferences of group-housed animals and a considerable reduction of animal observation time.

  10. New mainstream double-end carbon dioxide capnograph for human respiration

    NASA Astrophysics Data System (ADS)

    Yang, Jiachen; An, Kun; Wang, Bin; Wang, Lei

    2010-11-01

Most current respiratory devices for monitoring CO2 concentration use a side-stream structure. In this work, we design a new double-end mainstream device for monitoring the CO2 concentration of gas breathed out of the human body. The device can accurately monitor cardiopulmonary status during anesthesia and mechanical ventilation in real time. Meanwhile, to decrease the negative influence of device noise and the low sampling precision caused by temperature drift, wavelet packet denoising and temperature drift compensation are used. Clinical trials show the new capnograph helps improve the accuracy of capnography.

  11. Rearrangement of valence neutrons in the neutrinoless double-β decay of 136Xe

    NASA Astrophysics Data System (ADS)

    Szwec, S. V.; Kay, B. P.; Cocolios, T. E.; Entwisle, J. P.; Freeman, S. J.; Gaffney, L. P.; Guimarães, V.; Hammache, F.; McKee, P. P.; Parr, E.; Portail, C.; Schiffer, J. P.; de Séréville, N.; Sharp, D. K.; Smith, J. F.; Stefan, I.

    2016-11-01

    A quantitative description of the change in ground-state neutron occupancies between 136Xe and 136Ba, the initial and final state in the neutrinoless double-β decay of 136Xe, has been extracted from precision measurements of the cross sections of single-neutron-adding and -removing reactions. Comparisons are made to recent theoretical calculations of the same properties using various nuclear-structure models. These are the same calculations used to determine the magnitude of the nuclear matrix elements for the process, which at present disagree with each other by factors of 2 or 3. The experimental neutron occupancies show some disagreement with the theoretical calculations.

  12. Double-Pulse Two-Micron IPDA Lidar Simulation for Airborne Carbon Dioxide Measurements

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta

    2015-01-01

    An advanced double-pulsed 2-micron integrated path differential absorption lidar has been developed at NASA Langley Research Center for measuring atmospheric carbon dioxide. The instrument utilizes a state-of-the-art 2-micron laser transmitter with tunable on-line wavelength and advanced receiver. Instrument modeling and airborne simulations are presented in this paper. Focusing on random errors, results demonstrate instrument capabilities of performing precise carbon dioxide differential optical depth measurement with less than 3% random error for single-shot operation from up to 11 km altitude. This study is useful for defining CO2 measurement weighting, instrument setting, validation and sensitivity trade-offs.
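The quantity an IPDA lidar retrieves is the differential optical depth between the on-line and off-line pulses. As a hedged textbook sketch of that relation only (not the instrument's actual processing chain, and with hypothetical parameter names):

```python
import math

def differential_optical_depth(p_on, p_off, e_on, e_off):
    """One-way differential optical depth from the on-line and off-line
    return powers P, each normalized by its transmitted pulse energy E:
    tau = 0.5 * ln((P_off / E_off) / (P_on / E_on)).
    The factor 0.5 converts the two-way (round-trip) absorption to one way."""
    return 0.5 * math.log((p_off / e_off) / (p_on / e_on))
```

Since the random error of tau is driven by the shot-to-shot noise on the four measured quantities, the quoted sub-3% single-shot random error is a statement about how precisely these power ratios can be measured from 11 km altitude.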

  13. The precision problem in conservation and restoration

    USGS Publications Warehouse

    Hiers, J. Kevin; Jackson, Stephen T.; Hobbs, Richard J.; Bernhardt, Emily S.; Valentine, Leonie E.

    2016-01-01

    Within the varied contexts of environmental policy, conservation of imperilled species populations, and restoration of damaged habitats, an emphasis on idealized optimal conditions has led to increasingly specific targets for management. Overly-precise conservation targets can reduce habitat variability at multiple scales, with unintended consequences for future ecological resilience. We describe this dilemma in the context of endangered species management, stream restoration, and climate-change adaptation. Inappropriate application of conservation targets can be expensive, with marginal conservation benefit. Reduced habitat variability can limit options for managers trying to balance competing objectives with limited resources. Conservation policies should embrace habitat variability, expand decision-space appropriately, and support adaptation to local circumstances to increase ecological resilience in a rapidly changing world.

  14. Effects of color scheme and message lines of variable message signs on driver performance.

    PubMed

    Lai, Chien-Jung

    2010-07-01

Advances in variable message sign (VMS) technology have made it possible to display messages in various formats. This paper presents an ergonomic study of the message design of Chinese variable message signs on urban roads in Taiwan. The effects of color scheme (one, two or three colors) and number of message lines (single, double or triple) on participants' response performance were investigated in a laboratory experiment. The analysis showed that color scheme and number of message lines are significant factors in participants' response time to VMS. Participants responded faster to the two-color scheme than to the one- and three-color schemes. Participants also took less time to respond to double-line messages than to single- and triple-line messages. Neither color scheme nor number of message lines had a significant effect on response accuracy. The preference survey after the experiment showed that most participants preferred the two-color scheme and double-line messages over the other combinations. These results can assist in choosing an appropriate color scheme and number of message lines for Chinese VMS. Copyright 2009 Elsevier Ltd. All rights reserved.

  15. Mechanisms of double stratification and magnetic field in flow of third grade fluid over a slendering stretching surface with variable thermal conductivity

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Qayyum, Sajid; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

This article addresses the magnetohydrodynamic (MHD) stagnation point flow of a third grade fluid towards a nonlinear stretching sheet. The energy expression incorporates a variable thermal conductivity. Heat and mass transfer aspects are described within the frame of double stratification effects. Boundary layer partial differential systems are deduced and then converted into ordinary differential systems by invoking appropriate variables. The transformed expressions are solved through the homotopic technique. The impact of the embedded variables on the velocity, thermal and concentration fields is displayed and discussed. Numerical computations are presented for the skin friction coefficient and the local Nusselt and Sherwood numbers. It is revealed that larger values of the magnetic parameter reduce the velocity field, while the opposite behavior is observed for the wall thickness variable. The temperature field and local Nusselt number show opposite trends with respect to the heat generation/absorption parameter. Moreover, the qualitative behaviors of the concentration field and local Sherwood number are similar for the solutal stratification parameter.

  16. Effect analysis of design variables on the disc in a double-eccentric butterfly valve.

    PubMed

    Kang, Sangmo; Kim, Da-Eun; Kim, Kuk-Kyeom; Kim, Jun-Oh

    2014-01-01

    We have performed a shape optimization of the disc in an industrial double-eccentric butterfly valve using the effect analysis of design variables to enhance the valve performance. For the optimization, we select three performance quantities such as pressure drop, maximum stress, and mass (weight) as the responses and three dimensions regarding the disc shape as the design variables. Subsequently, we compose a layout of orthogonal array (L16) by performing numerical simulations on the flow and structure using a commercial package, ANSYS v13.0, and then make an effect analysis of the design variables on the responses using the design of experiments. Finally, we formulate a multiobjective function consisting of the three responses and then propose an optimal combination of the design variables to maximize the valve performance. Simulation results show that the disc thickness makes the most significant effect on the performance and the optimal design provides better performance than the initial design.

  17. Which Fields Need Precision Nitrogen Management the Most?

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture (PA) technologies used for identifying and managing within-field variability are not widely used despite decades of advancement. Many producers are hesitant to adopt PA because uncertainty exists about field-specific performance or the potential return on investment. These con...

  18. Hyperspectral imagery for mapping crop yield for precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Crop yield is perhaps the most important piece of information for crop management in precision agriculture. It integrates the effects of various spatial variables such as soil properties, topographic attributes, tillage, plant population, fertilization, irrigation, and pest infestations. A yield map...

  19. Development of low-altitude remote sensing systems for crop production management

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture accounts for within-field variability for targeted treatment rather than uniform treatment of an entire field. Precision agriculture is built on agricultural mechanization and state-of-the-art technologies of geographical information systems (GIS), global positioning systems (G...

  20. Airborne and satellite remote sensors for precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Remote sensing provides an important source of information to characterize soil and crop variability for both within-season and after-season management despite the availability of numerous ground-based soil and crop sensors. Remote sensing applications in precision agriculture have been steadily inc...

  1. Combination of GPS and GLONASS IN PPP algorithms and its effect on site coordinates determination

    NASA Astrophysics Data System (ADS)

    Hefty, J.; Gerhatova, L.; Burgan, J.

    2011-10-01

    Precise Point Positioning (PPP) approach using the un-differenced code and phase GPS observations, precise orbits and satellite clocks is an important alternative to the analyses based on double differences. We examine the extension of the PPP method by introducing the GLONASS satellites into the processing algorithms. The procedures are demonstrated on the software package ABSOLUTE developed at the Slovak University of Technology. Partial results, like ambiguities and receiver clocks obtained from separate solutions of the two GNSS are mutually compared. Finally, the coordinate time series from combination of GPS and GLONASS observations are compared with GPS-only solutions.

  2. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. 
Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
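The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned in this abstract are the classical error-free transforms. A minimal sketch of Knuth's TwoSum and a simplified double-double addition built on it — illustrative only, not the COSY INFINITY implementation:

```python
def two_sum(a, b):
    """Knuth's error-free transform: returns (s, e) where s = fl(a + b)
    and a + b == s + e exactly, using only double-precision additions."""
    s = a + b
    b_virtual = s - a
    e = (a - (s - b_virtual)) + (b - b_virtual)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers, each a (hi, lo) pair of doubles with
    |lo| much smaller than |hi|, giving roughly twice the precision of a
    single double. Simplified renormalization, accurate enough to illustrate."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi = s + e
    lo = e - (hi - s)
    return hi, lo
```

Chaining such pairs (and analogous error-free products via fused multiply-add) is one standard way to extend precision beyond 64 bits using only hardware floating point, which is the spirit of the arbitrary-precision Taylor Model coefficients described above.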

  3. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double- and/or single-precision arithmetic) are capable of scaling to systems as large as allowed by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
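
    The numerical trade-off behind mixed single/double-precision kernels can be illustrated independently of OpenACC. A small NumPy sketch (illustrative values only, not the paper's quantum chemical kernels) of why bulk data can be kept in single precision while energy-like reductions are accumulated in double:

```python
import numpy as np

# In single precision, a small increment to a large accumulator can be
# lost outright (the ulp of 1e8 in float32 is 8):
acc32 = np.float32(1e8)
assert acc32 + np.float32(1.0) == acc32   # the +1 vanishes

# The same update survives in double precision:
acc64 = np.float64(1e8)
assert acc64 + np.float64(1.0) > acc64

# Mixed-precision pattern: bulk data stored in float32, but the
# reduction carried out with a float64 accumulator:
x = np.full(10**6, 0.001, dtype=np.float32)
energy = np.sum(x, dtype=np.float64)      # float64 accumulator
```

    The same pattern, with the loops offloaded via compute directives, is what lets single-precision storage halve memory traffic without degrading the accumulated result.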

  4. Prenatal and accurate perinatal diagnosis of type 2 H or ductular duplicate gallbladder.

    PubMed

    Maggi, Umberto; Farris, Giorgio; Carnevali, Alessandra; Borzani, Irene; Clerici, Paola; Agosti, Massimo; Rossi, Giorgio; Leva, Ernesto

    2018-02-07

    Double gallbladder is a rare biliary anomaly. Perinatal diagnosis of the disorder has been reported in only 6 cases, and in 5 of them the diagnosis was based on ultrasound imaging only. However, the ultrasound technique alone does not provide a sufficiently precise description of the cystic ducts and biliary anatomy, information that is crucial for a correct classification and for possible future surgery. At 21 weeks of gestational age of an uneventful pregnancy in a 38-year-old primipara mother, a routine ultrasound screening detected a biliary anomaly in the fetus suggestive of a double gallbladder. A neonatal abdominal ultrasonography performed on postnatal day 2 confirmed the diagnosis. On day 12 the newborn underwent magnetic resonance cholangiopancreatography (MRCP), which clearly characterized the anatomy of the anomaly: both gallbladders had their own cystic duct and both had a separate insertion in the main biliary duct. We report a case of an early prenatally suspected duplicate gallbladder that was confirmed by a precise neonatal diagnosis of a type 2, H or ductular duplicate gallbladder, using for the first time 3D MRCP images in a newborn. An accurate anatomical diagnosis is mandatory in patients facing a possible future cholecystectomy, to avoid surgical complications or reoperations. Therefore, in case of a perinatal suspicion of a double gallbladder, neonates should undergo MRCP. A review of the literature about this variant is included.

  5. Double-Targeting Explosible Nanofirework for Tumor Ignition to Guide Tumor-Depth Photothermal Therapy.

    PubMed

    Zhang, Ming-Kang; Wang, Xiao-Gang; Zhu, Jing-Yi; Liu, Miao-Deng; Li, Chu-Xin; Feng, Jun; Zhang, Xian-Zheng

    2018-04-17

    This study reports a double-targeting "nanofirework" for tumor-ignited imaging to guide effective tumor-depth photothermal therapy (PTT). Typically, ≈30 nm upconversion nanoparticles (UCNP) are enveloped with a hybrid corona composed of ≈4 nm CuS tethered hyaluronic acid (CuS-HA). The HA corona provides active tumor-targeted functionality together with excellent stability and improved biocompatibility. The dimension of UCNP@CuS-HA is specifically set within the optimal size window for passive tumor-targeting effect, demonstrating significant contributions to both the in vivo prolonged circulation duration and the enhanced size-dependent tumor accumulation compared with ultrasmall CuS nanoparticles. The tumors featuring hyaluronidase (HAase) overexpression could induce the escape of CuS away from UCNP@CuS-HA due to HAase-catalyzed HA degradation, in turn activating the recovery of initially CuS-quenched luminescence of UCNP and also driving the tumor-depth infiltration of ultrasmall CuS for effective PTT. This in vivo transition has proven to be highly dependent on tumor occurrence like a tumor-ignited explosible firework. Together with the double-targeting functionality, the pathology-selective tumor ignition permits precise tumor detection and imaging-guided spatiotemporal control over PTT operation, leading to complete tumor ablation under near infrared (NIR) irradiation. This study offers a new paradigm of utilizing pathological characteristics to design nanotheranostics for precise detection and personalized therapy of tumors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. High Maneuverability Airframe: Investigation of Fin and Canard Sizing for Optimum Maneuverability

    DTIC Science & Technology

    2014-09-01

    overset grids (unified-grid); 5) total variation diminishing discretization based on a new multidimensional interpolation framework; 6) Riemann solvers to ... describes the methodology used for the simulations. 3.1.1 Solver: The double-precision solver of a commercially available code, CFD++ v12.1.1, ...

  7. Memory for a single object has differently variable precisions for relevant and irrelevant features.

    PubMed

    Swan, Garrett; Collins, John; Wyble, Brad

    2016-01-01

    Working memory is a limited resource. To further characterize its limitations, it is vital to understand exactly what is encoded about a visual object beyond the "relevant" features probed in a particular task. We measured the memory quality of a task-irrelevant feature of an attended object by coupling a delayed estimation task with a surprise test. Participants were presented with a single colored arrow and were asked to retrieve just its color for the first half of the experiment before unexpectedly being asked to report its direction. Mixture modeling of the data revealed that participants had highly variable precision on the surprise test, indicating a coarse-grained memory for the irrelevant feature. Following the surprise test, all participants could precisely recall the arrow's direction; however, this improvement in direction memory came at a cost in precision for color memory even though only a single object was being remembered. We attribute these findings to varying levels of attention to different features during memory encoding.
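
    The mixture modeling referred to above can be sketched as the standard two-component model for delayed estimation: a von Mises "memory" component centred on the target plus a uniform "guessing" component. The parameters and the crude grid-search fit below are illustrative only, not the authors' actual analysis pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic response errors (radians): 80% noisy memories of the target
# (von Mises) plus 20% uniform random guesses. Parameters are
# hypothetical, chosen only to illustrate the model.
kappa_true, guess_true, n = 8.0, 0.2, 2000
is_guess = rng.random(n) < guess_true
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  stats.vonmises.rvs(kappa_true, size=n, random_state=rng))

def neg_log_lik(guess_rate, kappa, err):
    # Mixture density: uniform guessing + von Mises memory component.
    dens = (guess_rate / (2 * np.pi)
            + (1 - guess_rate) * stats.vonmises.pdf(err, kappa))
    return -np.sum(np.log(dens))

# Crude grid-search fit; the fitted kappa indexes memory precision.
grid = [(g, k) for g in np.linspace(0.01, 0.5, 50)
               for k in np.linspace(2.0, 16.0, 57)]
g_hat, k_hat = min(grid, key=lambda p: neg_log_lik(p[0], p[1], errors))
```

    "Highly variable precision", as reported for the surprise test, corresponds to the concentration parameter itself varying across trials rather than taking a single fixed value.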

  8. Are CT Scans a Satisfactory Substitute for the Follow-Up of RSA Migration Studies of Uncemented Cups? A Comparison of RSA Double Examinations and CT Datasets of 46 Total Hip Arthroplasties

    PubMed Central

    Zeleznik, Michael P.; Nilsson, Kjell G.; Olivecrona, Henrik

    2017-01-01

    As part of the 14-year follow-up of a prospectively randomized radiostereometry (RSA) study on uncemented cup fixation, two pairs of stereo radiographs and a CT scan of 46 hips were compared. Tantalum beads, inserted during the primary operation, were detected in the CT volume and the stereo radiographs and used to produce datasets of 3D coordinates. The limit of agreement between the combined CT and RSA datasets was calculated in the same way as the precision of the double RSA examination. The precision of RSA corresponding to the 99% confidence interval was 1.36°, 1.36°, and 0.60° for X-, Y-, and Z-rotation and 0.40, 0.17, and 0.37 mm for X-, Y-, and Z-translation. The limit of agreement between CT and RSA was 1.51°, 2.17°, and 1.05° for rotation and 0.59, 0.56, and 0.74 mm for translation. The differences between CT and RSA are close to the described normal 99% confidence interval for precision in RSA: 0.3° to 2° for rotation and 0.15 to 0.6 mm for translation. We conclude that measurements using CT and RSA are comparable and that CT can be used for migration studies for longitudinal evaluations of patients with RSA markers. PMID:28243598
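
    The limit-of-agreement calculation used to compare the CT and RSA datasets can be sketched as follows (hypothetical data; the study's exact RSA precision convention may differ in detail from this Bland-Altman style computation):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical paired CT-minus-RSA differences (mm) for one translation
# axis, standing in for the study's 46 hips (illustrative data only):
diff = rng.normal(loc=0.02, scale=0.2, size=46)

mean_d = diff.mean()
sd_d = diff.std(ddof=1)

# 99% limits of agreement (z_0.995 ~= 2.576):
loa_low = mean_d - 2.576 * sd_d
loa_high = mean_d + 2.576 * sd_d
```

    Applying the same computation to the double RSA examinations yields the precision figures quoted in the abstract, which is what makes the two sets of limits directly comparable.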

  9. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit

    PubMed Central

    Xu, Liangliang; Xu, Nengxiong

    2017-01-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points’ spatial distribution pattern and achieve more accurate predictions than those of IDW. In this paper, we first present two versions of the GPU-accelerated AIDW: a naive version that does not use shared memory and a tiled version that takes advantage of it. We also implement both versions using two data layouts, structure of arrays and array of aligned structures, in both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in computational efficiency between the two data layouts; (ii) the tiled version is always slightly faster than the naive version; and (iii) in single precision the achieved speed-up can be up to 763 (on the GPU M5000), while in double precision the highest speed-up obtained is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available. PMID:28989754
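
    The core of IDW, on which AIDW builds, is a distance-weighted average. A plain NumPy sketch with a fixed power parameter (AIDW's adaptive choice of the power according to local point density, described in the paper, is omitted here):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Standard inverse distance weighting. AIDW would additionally
    adapt `power` to the local spatial distribution of the points."""
    # Pairwise distances: (n_query, n_known)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer points weigh more
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
center = idw(pts, vals, np.array([[0.5, 0.5]]))
# symmetric configuration -> equal weights -> plain mean, 1.0
```

    The doubly nested distance loop is exactly the structure that maps well onto a GPU grid, with the tiled variant staging blocks of known points through shared memory.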

  10. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    PubMed

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those of IDW. In this paper, we first present two versions of the GPU-accelerated AIDW: a naive version that does not use shared memory and a tiled version that takes advantage of it. We also implement both versions using two data layouts, structure of arrays and array of aligned structures, in both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in computational efficiency between the two data layouts; (ii) the tiled version is always slightly faster than the naive version; and (iii) in single precision the achieved speed-up can be up to 763 (on the GPU M5000), while in double precision the highest speed-up obtained is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.

  11. Are CT Scans a Satisfactory Substitute for the Follow-Up of RSA Migration Studies of Uncemented Cups? A Comparison of RSA Double Examinations and CT Datasets of 46 Total Hip Arthroplasties.

    PubMed

    Otten, Volker; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Nilsson, Kjell G; Olivecrona, Henrik

    2017-01-01

    As part of the 14-year follow-up of a prospectively randomized radiostereometry (RSA) study on uncemented cup fixation, two pairs of stereo radiographs and a CT scan of 46 hips were compared. Tantalum beads, inserted during the primary operation, were detected in the CT volume and the stereo radiographs and used to produce datasets of 3D coordinates. The limit of agreement between the combined CT and RSA datasets was calculated in the same way as the precision of the double RSA examination. The precision of RSA corresponding to the 99% confidence interval was 1.36°, 1.36°, and 0.60° for X-, Y-, and Z-rotation and 0.40, 0.17, and 0.37 mm for X-, Y-, and Z-translation. The limit of agreement between CT and RSA was 1.51°, 2.17°, and 1.05° for rotation and 0.59, 0.56, and 0.74 mm for translation. The differences between CT and RSA are close to the described normal 99% confidence interval for precision in RSA: 0.3° to 2° for rotation and 0.15 to 0.6 mm for translation. We conclude that measurements using CT and RSA are comparable and that CT can be used for migration studies for longitudinal evaluations of patients with RSA markers.

  12. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
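
    The double-observer component of the hybrid rests on a simple independence calculation, sketched below with hypothetical detection probabilities (model MH itself additionally models covariates and unmodeled heterogeneity):

```python
def combined_detection(p_front, p_back):
    """Independent double-observer logic: a group is missed only if
    both observer pairs miss it."""
    return 1.0 - (1.0 - p_front) * (1.0 - p_back)

def abundance_estimate(raw_count, p_detect):
    """Canonical count correction: N_hat = C / p."""
    return raw_count / p_detect

# Hypothetical front- and back-pair detection probabilities:
p = combined_detection(0.7, 0.6)      # 1 - 0.3 * 0.4 = 0.88
n_hat = abundance_estimate(88, p)     # corrected abundance, 100.0
```

    A raw count that is 88% of the corrected estimate, as in this toy example, is in line with the 80-93% range reported for the Mount Rainier surveys.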

  13. Impact of coil design on the contrast-to-noise ratio, precision, and consistency of quantitative cartilage morphometry at 3 Tesla: a pilot study for the osteoarthritis initiative.

    PubMed

    Eckstein, Felix; Kunz, Manuela; Hudelmaier, Martin; Jackson, Rebecca; Yu, Joseph; Eaton, Charles B; Schneider, Erika

    2007-02-01

    Phased-array (PA) coils generally provide higher signal-to-noise ratios (SNRs) than quadrature knee coils. In this pilot study for the Osteoarthritis Initiative (OAI) we compared these two types of coils in terms of contrast-to-noise ratio (CNR), precision, and consistency of quantitative femorotibial cartilage measurements. Test-retest measurements were acquired using coronal fast low-angle shot with water excitation (FLASHwe) and coronal multiplanar reconstruction (MPR) of sagittal double-echo steady state with water excitation (DESSwe) at 3T. The precision errors for cartilage volume and thickness were

  14. Measuring double-electron capture with liquid xenon experiments

    NASA Astrophysics Data System (ADS)

    Mei, D.-M.; Marshall, I.; Wei, W.-Z.; Zhang, C.

    2014-01-01

    We investigate the possibilities of observing the decay mode for 124Xe in which two electrons are captured, two neutrinos are emitted, and the final daughter nucleus is in its ground state, using dark matter experiments with liquid xenon. The first upper limit of the decay half-life is calculated to be 1.66 × 10^21 years at a 90% confidence level (C.L.), obtained with the published background data from the XENON100 experiment. Employing a known background model from the Large Underground Xenon (LUX) experiment, we predict that the detection of double-electron capture of 124Xe to the ground state of 124Te with LUX will yield approximately 115 events, assuming a half-life of 2.9 × 10^21 years. We conclude that measuring 124Xe 2ν double-electron capture to the ground state of 124Te can be performed more precisely with the proposed LUX-Zeplin (LZ) experiment.
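
    The expected event count for such a rare decay follows from simple exponential-decay bookkeeping. A hedged sketch (the exposure below is hypothetical and the 124Xe abundance approximate; the paper's prediction uses the actual LUX exposure and efficiency):

```python
import math

N_A = 6.02214076e23  # Avogadro's number (1/mol)

def expected_decays(mass_kg, isotope_abundance, molar_mass_g,
                    t_half_yr, exposure_yr):
    """Expected decays for exposure << half-life:
    N_atoms * ln(2) * t / T_half."""
    n_atoms = mass_kg * 1e3 / molar_mass_g * isotope_abundance * N_A
    return n_atoms * math.log(2.0) * exposure_yr / t_half_yr

# Illustrative only: 100 kg of natural xenon for 1 yr, ~0.095% natural
# abundance of 124Xe, half-life taken from the abstract:
n_events = expected_decays(100.0, 9.5e-4, 131.29, 2.9e21, 1.0)
# of order 100 events, i.e. the same scale as the quoted ~115
```
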

  15. Earthquake hypocenter relocation using double difference method in East Java and surrounding areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C, Aprilia Puspita; Meteorological, Climatological, and Geophysical Agency; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    Determination of precise hypocenter locations is very important in order to provide information about subsurface fault planes and for seismic hazard analysis. In this study, we have relocated earthquake hypocenters in the eastern part of Java and surrounding areas from the local earthquake catalog compiled by the Meteorological, Climatological, and Geophysical Agency of Indonesia (MCGA) for the time period 2009-2012 by using the double-difference method. The results show significant post-relocation changes in the positions and orientations of earthquake hypocenters, which correlate with the geological setting of this region. We observed an indication of a double seismic zone at depths of 70-120 km within the subducting slab south of the eastern part of the Java region. Our results will provide useful information for advanced seismological studies and seismic hazard analysis in this area.
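
    The double-difference method minimizes residuals of travel-time differences between nearby event pairs rather than absolute travel times. A one-line sketch of the residual being minimized, with hypothetical travel times (not MCGA data):

```python
def double_difference(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
    """Double-difference residual for an event pair (i, j) observed at
    one station: differencing cancels shared path and station errors,
    which is what sharpens the relative locations."""
    return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

# A common 0.2 s station delay affects both observed times and drops
# out of the residual entirely (times in seconds, hypothetical):
r = double_difference(10.25 + 0.2, 9.85 + 0.2, 10.20, 9.95)
# r = 0.40 - 0.25 = 0.15 s, independent of the shared delay
```
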

  16. Identification of young stellar variables with KELT for K2 - II. The Upper Scorpius association

    NASA Astrophysics Data System (ADS)

    Ansdell, Megan; Oelkers, Ryan J.; Rodriguez, Joseph E.; Gaidos, Eric; Somers, Garrett; Mamajek, Eric; Cargile, Phillip A.; Stassun, Keivan G.; Pepper, Joshua; Stevens, Daniel J.; Beatty, Thomas G.; Siverd, Robert J.; Lund, Michael B.; Kuhn, Rudolf B.; James, David; Gaudi, B. Scott

    2018-01-01

    High-precision photometry from space-based missions such as K2 and Transiting Exoplanet Survey Satellite enables detailed studies of young star variability. However, because space-based observing campaigns are often short (e.g. 80 d for K2), complementary long-baseline photometric surveys are critical for obtaining a complete understanding of young star variability, which can change on time-scales of minutes to years. We therefore present and analyse light curves of members of the Upper Scorpius association made over 5.5 yr by the ground-based Kilodegree Extremely Little Telescope (KELT), which complement the high-precision observations of this region taken by K2 during its Campaigns 2 and 15. We show that KELT data accurately identify the periodic signals found with high-precision K2 photometry, demonstrating the power of ground-based surveys in deriving stellar rotation periods of young stars. We also use KELT data to identify sources exhibiting variability that is likely related to circumstellar material and/or stellar activity cycles; these signatures are often unseen in the short-term K2 data, illustrating the importance of long-term monitoring surveys for studying the full range of young star variability. We provide the KELT light curves as electronic tables in an ongoing effort to establish legacy time series data sets for young stellar clusters.
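
    Deriving rotation periods from an unevenly sampled ground-based light curve is typically done with a Lomb-Scargle periodogram. A minimal sketch on synthetic data (illustrative baseline, period and noise level; not actual KELT photometry or the survey's pipeline):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(7)

# Unevenly sampled synthetic light curve over a KELT-like 5.5 yr
# baseline, carrying a 3-day rotation signal plus noise:
t = np.sort(rng.uniform(0.0, 5.5 * 365.25, 2000))   # days
p_true = 3.0
flux = (0.02 * np.sin(2 * np.pi * t / p_true)
        + 0.005 * rng.normal(size=2000))

# Scan trial periods; lombscargle expects angular frequencies.
periods = np.linspace(2.0, 10.0, 20000)
power = lombscargle(t, flux - flux.mean(), 2 * np.pi / periods)
p_best = periods[np.argmax(power)]                  # recovered period
```

    The long baseline is what makes the frequency resolution fine enough to pin down a short rotation period; an 80-day campaign alone yields a far broader periodogram peak.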

  17. Simulation of changes on the psychosocial risk in the nursing personnel after implementing the policy of good practices on the risk treatment.

    PubMed

    Bolívar Murcia, María Paula; Cruz González, Joan Paola; Rodríguez Bello, Luz Angélica

    2018-02-01

    To evaluate the change over time of psychosocial risk management for the nursing personnel of an intermediate-complexity clinic in Bogota (Colombia). Descriptive and correlational research performed under the risk management approach (identification, analysis, assessment and treatment). The psychosocial risk of the nursing personnel was studied through 10-year system dynamics models (with and without the implementation of the policy of good practices on risk treatment) in two scenarios: when the nursing personnel works shifts of 6 hours (morning or afternoon) and when they work shifts of over 12 hours (double shift or night shift). When implementing a policy of good practices on risk treatment, the double-shift scenario shows an improvement of 25% to 88% in the variables of health, labor motivation, burnout, service level and productivity, as well as in the variables of the organization associated with number of patients, nursing personnel and profit. Likewise, the single-shift scenario with good practices improves in all the above-mentioned variables and generates stability in the variables of absenteeism and resignations. The best scenario is the single-shift scenario with the application of good practices of risk treatment, in comparison with the double-shift scenario with good practices, which leads to the conclusion that the good practices have a positive effect on the variables of the nursing personnel and on those associated with the organization. Copyright© by the Universidad de Antioquia.

  18. Delay times of a LiDAR-guided precision sprayer control system

    USDA-ARS?s Scientific Manuscript database

    Accurate flow control systems in triggering sprays against detected targets are needed for precision variable-rate sprayer development. System delay times due to the laser-sensor data buffer, software operation, and hydraulic-mechanical component response were determined for a control system used fo...

  19. Rapid assessment of pulmonary gas transport with hyperpolarized 129Xe MRI using a 3D radial double golden-means acquisition with variable flip angles.

    PubMed

    Ruppert, Kai; Amzajerdian, Faraz; Hamedani, Hooman; Xin, Yi; Loza, Luis; Achekzai, Tahmina; Duncan, Ian F; Profka, Harrilla; Siddiqui, Sarmad; Pourfathi, Mehrdad; Cereda, Maurizio F; Kadlecek, Stephen; Rizi, Rahim R

    2018-04-22

    To demonstrate the feasibility of using a 3D radial double golden-means acquisition with variable flip angles to monitor pulmonary gas transport in a single breath hold with hyperpolarized xenon-129 MRI. Hyperpolarized xenon-129 MRI scans with interleaved gas-phase and dissolved-phase excitations were performed using a 3D radial double golden-means acquisition in mechanically ventilated rabbits. The flip angle was either held fixed at 15° or 5°, or varied linearly in ascending or descending order between 5° and 15° over a sampling interval of 1000 spokes. Dissolved-phase and gas-phase images were reconstructed at high resolution (32 × 32 × 32 matrix size) using all 1000 spokes, or at low resolution (22 × 22 × 22 matrix size) using 400 spokes at a time in a sliding-window fashion. Based on these sliding-window images, relative change maps were obtained using the highest mean flip angle as the reference, and aggregated pixel-based changes were tracked. Although the signal intensities in the dissolved-phase maps were mostly constant in the fixed flip-angle acquisitions, they varied significantly as a function of average flip angle in the variable flip-angle acquisitions. The latter trend reflects the underlying changes in the observed dissolved-phase magnetization distribution due to pulmonary gas uptake and transport. 3D radial double golden-means acquisitions with variable flip angles provide a robust means for rapidly assessing lung function during a single breath hold, thereby constituting a particularly valuable tool for imaging uncooperative or pediatric patient populations. © 2018 International Society for Magnetic Resonance in Medicine.

  20. The Precision Problem in Conservation and Restoration.

    PubMed

    Hiers, J Kevin; Jackson, Stephen T; Hobbs, Richard J; Bernhardt, Emily S; Valentine, Leonie E

    2016-11-01

    Within the varied contexts of environmental policy, conservation of imperilled species populations, and restoration of damaged habitats, an emphasis on idealized optimal conditions has led to increasingly specific targets for management. Overly-precise conservation targets can reduce habitat variability at multiple scales, with unintended consequences for future ecological resilience. We describe this dilemma in the context of endangered species management, stream restoration, and climate-change adaptation. Inappropriate application of conservation targets can be expensive, with marginal conservation benefit. Reduced habitat variability can limit options for managers trying to balance competing objectives with limited resources. Conservation policies should embrace habitat variability, expand decision-space appropriately, and support adaptation to local circumstances to increase ecological resilience in a rapidly changing world. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Cattaneo-Christov double-diffusion theory for three-dimensional flow of viscoelastic nanofluid with the effect of heat generation/absorption

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Qayyum, Sajid; Shehzad, Sabir Ali; Alsaedi, Ahmed

    2018-03-01

    The present research article focuses on three-dimensional flow of a viscoelastic (second-grade) nanofluid in the presence of the Cattaneo-Christov double-diffusion theory. The flow is caused by a stretching sheet. Characteristics of heat transfer are interpreted by considering heat generation/absorption. The nanofluid model comprises Brownian motion and thermophoresis. Cattaneo-Christov double-diffusion theory is introduced in the energy and concentration expressions. Such diffusions are developed as part of formulating the thermal and solutal relaxation times framework. Suitable variables are implemented for the conversion of the partial differential systems into sets of ordinary differential equations. The transformed expressions have been explored through a homotopic algorithm. The behavior of sundry variables on the velocities, temperature and concentration is scrutinized graphically. Numerical values of skin friction coefficients are also calculated and examined. The thermal field is enhanced for the heat generation parameter, while the reverse situation is noticed for the heat absorption parameter.

  2. Frequency Analysis of the RRc Variables of the MACHO Database for the LMC

    NASA Astrophysics Data System (ADS)

    Kovács, G.; Alcock, C.; Allsman, R.; Alves, D.; Axelrod, T.; Becker, A.; Bennett, D.; Clement, C.; Cook, K. H.; Drake, A.; Freeman, K.; Geha, M.; Griest, K.; Kurtz, D. W.; Lehner, M.; Marshall, S.; Minniti, D.; Nelson, C.; Peterson, B.; Popowski, P.; Pratt, M.; Quinn, P.; Rodgers, A.; Rowe, J.; Stubbs, C.; Sutherland, W.; Tomaney, A.; Vandehei, T.; Welch, D. L.; MACHO Collaboration

    We present the first massive frequency analysis of the 1200 first-overtone RR Lyrae stars in the Large Magellanic Cloud observed in the first 4.3 yr of the MACHO project. Besides the many new double-mode variables, we also discovered stars with closely spaced frequencies. These variables are most probably nonradial pulsators.

  3. Entering the Two-Detector Phase of Double Chooz: First Near Detector Data and Prospects for Future Analyses

    NASA Astrophysics Data System (ADS)

    Carr, Rachel; Double Chooz Collaboration

    2015-04-01

    In 2011, Double Chooz reported the first evidence for θ13-driven reactor antineutrino oscillation, derived from observations of inverse beta decay (IBD) events in a single detector located ~ 1 km from two nuclear reactors. Since then, the collaboration has honed the precision of its sin²2θ13 measurement by reducing backgrounds, improving detection efficiency and systematics, and including additional statistics from IBD events with neutron captures on hydrogen. By 2014, the overwhelmingly dominant contribution to the sin²2θ13 uncertainty was the reactor flux uncertainty, which is irreducible in a single-detector experiment. Now, as Double Chooz collects the first data with a near detector, we can begin to suppress that uncertainty and approach the experiment's full potential. In this talk, we show quality checks on initial data from the near detector. We also present our two-detector sensitivity to both sin²2θ13 and sterile neutrino mixing, which are enhanced by analysis strategies developed in our single-detector phase. In particular, we discuss prospects for the first two-detector results from Double Chooz, expected in 2015.
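
    The measurement rests on the two-flavour reactor antineutrino survival probability. A minimal sketch with illustrative, global-fit-scale oscillation parameters (not Double Chooz's published values):

```python
import math

def survival_probability(sin2_2theta13, dm2_eV2, L_km, E_MeV):
    """Two-flavour reactor antineutrino survival probability:
    P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2[eV^2] * L[m] / E[MeV])."""
    phase = 1.267 * dm2_eV2 * (L_km * 1000.0) / E_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Illustrative parameters: sin^2(2*theta13) ~ 0.09, dm2 ~ 2.4e-3 eV^2,
# a far detector at ~1.05 km, and a 4 MeV antineutrino:
p_far = survival_probability(0.09, 2.4e-3, 1.05, 4.0)
```

    Because the near detector sits too close for the oscillation to develop, its spectrum pins down the unoscillated reactor flux, which is precisely the uncertainty the single-detector phase could not reduce.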

  4. Effect of cation ordering on oxygen vacancy diffusion pathways in double perovskites

    DOE PAGES

    Uberuaga, Blas Pedro; Pilania, Ghanshyam

    2015-07-08

    Perovskite structured oxides (ABO3) are attractive for a number of technological applications, including as superionics because of the high oxygen conductivities they exhibit. Double perovskites (AA'BB'O6) provide even more flexibility for tailoring properties. Using accelerated molecular dynamics, we examine the role of cation ordering on oxygen vacancy mobility in one model double perovskite, SrLaTiAlO6. We find that the mobility of the vacancy is very sensitive to the cation ordering, with a migration energy that varies from 0.6 to 2.7 eV. In the extreme cases, the mobility is both higher and lower than in either of the two end-member single perovskites. Further, the nature of oxygen vacancy diffusion, whether one-dimensional, two-dimensional, or three-dimensional, also varies with cation ordering. We correlate the dependence of oxygen mobility on cation structure to the distribution of Ti4+ cations, which provide unfavorable environments for the positively charged oxygen vacancy. The results demonstrate the potential of using tailored double perovskite structures to precisely control the behavior of oxygen vacancies in these materials.

  5. Diffraction-based overlay metrology for double patterning technologies

    NASA Astrophysics Data System (ADS)

    Dasari, Prasad; Korlahalli, Rahul; Li, Jie; Smith, Nigel; Kritsun, Oleg; Volkman, Cathy

    2009-03-01

    The extension of optical lithography to 32nm and beyond is made possible by Double Patterning Techniques (DPT) at critical levels of the process flow. The ease of DPT implementation is hindered by the increased significance of critical dimension uniformity and overlay errors. Diffraction-based overlay (DBO) has been shown to be an effective metrology solution for accurate determination of the overlay errors associated with double patterning [1, 2] processes. In this paper we report its use in litho-freeze-litho-etch (LFLE) and spacer double patterning technology (SDPT), which are pitch-splitting solutions that reduce the significance of overlay errors. Since the control of overlay between various mask/level combinations is critical for fabrication, precise and accurate assessment of errors by advanced metrology techniques such as spectroscopic diffraction-based overlay (DBO) and traditional image-based overlay (IBO) using advanced target designs will be reported. A comparison between DBO, IBO and CD-SEM measurements will be reported, along with a discussion of TMU requirements for 32nm technology and TMU performance data of LFLE and SDPT targets by different overlay approaches.

  6. Precision genome editing using CRISPR-Cas9 and linear repair templates in C. elegans.

    PubMed

    Paix, Alexandre; Folkmann, Andrew; Seydoux, Geraldine

    2017-05-15

    The ability to introduce targeted edits in the genome of model organisms is revolutionizing the field of genetics. State-of-the-art methods for precision genome editing use RNA-guided endonucleases to create double-strand breaks (DSBs) and DNA templates containing the edits to repair the DSBs. Following this strategy, we have developed a protocol to create precise edits in the C. elegans genome. The protocol takes advantage of two innovations to improve editing efficiency: direct injection of CRISPR-Cas9 ribonucleoprotein complexes and use of linear DNAs with short homology arms as repair templates. The protocol requires no cloning or selection, and can be used to generate base and gene-size edits in just 4 days. Point mutations, insertions, deletions and gene replacements can all be created using the same experimental pipeline. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.

    PubMed

    Moon, Sung Joon; Cook, Katherine A; Rajendran, Karthikeyan; Kevrekidis, Ioannis G; Cisternas, Jaime; Laing, Carlo R

    2015-12-01

    The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the synaptic coupling characteristic time is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition, through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations, to a variety of multiple-cluster states. The clustering dynamic behavior is computationally studied both at the detailed and the coarse-grained levels, and a numerical approach that can enable studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among a number of cluster states formed, double clusters, composed of sub-networks of nearly equal size, are seen to be stable; interestingly, the heterogeneity parameter in each of the double-cluster components tends to be consistent with the random variable over the entire network: Given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar "fine" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between the neuron's "identity" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses the Polynomial Chaos expansion to succinctly describe the dynamics by these quickly established "identity-state" correlations. This coarse-graining approach is utilized, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics.

  8. Assessment of xylem phenology: a first attempt to verify its accuracy and precision.

    PubMed

    Lupi, C; Rossi, S; Vieira, J; Morin, H; Deslauriers, A

    2014-01-01

    This manuscript aims to evaluate the precision and accuracy of current methodology for estimating xylem phenology and tracheid production in trees. Through a simple approach, sampling at two positions on the stem of co-dominant black spruce trees in two sites of the boreal forest of Quebec, we were able to quantify variability among sites, between trees and within a tree for different variables. We demonstrated that current methodology is accurate for the estimation of the onset of xylogenesis, while the accuracy for the evaluation of the ending of xylogenesis may be improved by sampling at multiple positions on the stem. The pattern of variability in different phenological variables and cell production allowed us to advance a novel hypothesis on the shift in the importance of various drivers of xylogenesis, from factors mainly varying at the level of site (e.g., climate) at the beginning of the growing season to factors varying at the level of individual trees (e.g., possibly genetic variability) at the end of the growing season.

  9. Metronome Cueing of Walking Reduces Gait Variability after a Cerebellar Stroke.

    PubMed

    Wright, Rachel L; Bevins, Joseph W; Pratt, David; Sackley, Catherine M; Wing, Alan M

    2016-01-01

    Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory cueing reduces excessive variability in conditions such as Parkinson's disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook three standard gait trials and three gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. SDs of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time, and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee, and ankle angles were reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance, and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke and whether this leads to a decreased risk of falling.

  10. Metronome Cueing of Walking Reduces Gait Variability after a Cerebellar Stroke

    PubMed Central

    Wright, Rachel L.; Bevins, Joseph W.; Pratt, David; Sackley, Catherine M.; Wing, Alan M.

    2016-01-01

    Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory cueing reduces excessive variability in conditions such as Parkinson’s disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook three standard gait trials and three gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. SDs of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time, and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee, and ankle angles were reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance, and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke and whether this leads to a decreased risk of falling. PMID:27313563

  11. Development of laser-guided precision sprayers for tree crop applications

    USDA-ARS?s Scientific Manuscript database

    Tree crops in nurseries and orchards have great variations in shapes, sizes, canopy densities and gaps between in-row trees. The variability requires future sprayers to be flexible to spray the amount of chemicals that can match tree structures. A precision air-assisted sprayer was developed to appl...

  12. Spatial variability effects on precision and power of forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...

  13. Variable-Length Computerized Adaptive Testing Based on Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying

    2013-01-01

    Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…

  14. Precision agriculture: Data to knowledge decision

    USDA-ARS?s Scientific Manuscript database

    From the development of the first viable variable-rate fertilizer systems in the upper Midwest USA, precision agriculture is now about two decades old. In that time, new technologies have come into play, but the overall goal of using spatial data to create actionable knowledge that can then be used ...

  15. Development of a laser-guided embedded-computer-controlled air-assisted precision sprayer

    USDA-ARS?s Scientific Manuscript database

    An embedded computer-controlled, laser-guided, air-assisted, variable-rate precision sprayer was developed to automatically adjust spray outputs on both sides of the sprayer to match presence, size, shape, and foliage density of tree crops. The sprayer was the integration of an embedded computer, a ...

  16. Masses of Te130 and Xe130 and Double-β-Decay Q Value of Te130

    NASA Astrophysics Data System (ADS)

    Redshaw, Matthew; Mount, Brianna J.; Myers, Edmund G.; Avignone, Frank T., III

    2009-05-01

    The atomic masses of Te130 and Xe130 have been obtained by measuring cyclotron frequency ratios of pairs of triply charged ions simultaneously trapped in a Penning trap. The results, with 1 standard deviation uncertainty, are M(Te130)=129.906222744(16) u and M(Xe130)=129.903509351(15) u. From the mass difference, the double-β-decay Q value of Te130 is determined to be Qββ(Te130)=2527.518(13) keV. This is a factor of 150 more precise than the result of the AME2003 [G. Audi et al., Nucl. Phys. A729, 337 (2003)].
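
The quoted Q value follows directly from the difference of the two measured atomic masses. A minimal numerical check (a sketch; the u-to-keV conversion constant is a CODATA value assumed here, not taken from the record, and small corrections such as electron binding energies are ignored):

```python
# Q value of the double-beta decay of Te-130 from the measured atomic masses.
# U_TO_KEV is an assumed CODATA conversion constant, not from the paper.
M_TE130 = 129.906222744   # atomic mass of Te-130, in u
M_XE130 = 129.903509351   # atomic mass of Xe-130, in u
U_TO_KEV = 931494.10242   # 1 u in keV/c^2 (CODATA 2018)

mass_diff = M_TE130 - M_XE130          # ~0.0027134 u
q_bb = mass_diff * U_TO_KEV            # convert the mass difference to keV
print(f"Q_bb(Te130) = {q_bb:.3f} keV")  # close to the reported 2527.518(13) keV
```

The tiny residual difference from the published value reflects the choice of conversion constant and corrections beyond this back-of-the-envelope calculation.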

  17. Time Resolved Precision Differential Photometry with OAFA's Double Astrograph

    NASA Astrophysics Data System (ADS)

    González, E. P. A.; Podestá, F.; Podestá, R.; Pacheco, A. M.

    2018-01-01

    For the last 50 years, the Double Astrograph located at the Carlos U. Cesco station of the Observatorio Astronómico Félix Aguilar (OAFA), San Juan province, Argentina, was used for astrometric observations and research. The main programs involved the study of asteroid positions and proper motions of stars in the Southern hemisphere, the latter being a long-term project that is nearing completion and whose most recent version is the SPM4 catalog (Girard et al. 2011). In this paper, new scientific applications in the field of photometry that can be accomplished with this telescope are presented. These first attempts show the potential of the instrument for such tasks.

  18. First Measurement of the Muon Anti-Neutrino Charged Current Quasielastic Double-Differential Cross-Section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grange, Joseph M.

    2013-01-01

    This dissertation presents the first measurement of the muon antineutrino charged current quasi-elastic double-differential cross section. These data significantly extend the knowledge of neutrino and antineutrino interactions in the GeV range, a region that has recently come under scrutiny due to a number of conflicting experimental results. To maximize the precision of this measurement, three novel techniques were employed to measure the neutrino background component of the data set. Representing the first measurements of the neutrino contribution to an accelerator-based antineutrino beam in the absence of a magnetic field, the successful execution of these techniques carries implications for current and future neutrino experiments.

  19. Rearrangement of valence neutrons in the neutrinoless double-β decay of 136Xe

    DOE PAGES

    Szwec, S. V.; Kay, B. P.; Cocolios, T. E.; ...

    2016-11-15

    Here, a quantitative description of the change in ground-state neutron occupancies between 136Xe and 136Ba, the initial and final state in the neutrinoless double-β decay of 136Xe, has been extracted from precision measurements of the cross sections of single-neutron-adding and -removing reactions. Comparisons are made to recent theoretical calculations of the same properties using various nuclear-structure models. These are the same calculations used to determine the magnitude of the nuclear matrix elements for the process, which at present disagree with each other by factors of 2 or 3. The experimental neutron occupancies show some disagreement with the theoretical calculations.

  20. Construct Validation of a Multidimensional Computerized Adaptive Test for Fatigue in Rheumatoid Arthritis

    PubMed Central

    Nikolaus, Stephanie; Bode, Christina; Taal, Erik; Vonkeman, Harald E.; Glas, Cees A. W.; van de Laar, Mart A. F. J.

    2015-01-01

    Objective Multidimensional computerized adaptive testing enables precise measurements of patient-reported outcomes at an individual level across different dimensions. This study examined the construct validity of a multidimensional computerized adaptive test (CAT) for fatigue in rheumatoid arthritis (RA). Methods The ‘CAT Fatigue RA’ was constructed based on a previously calibrated item bank. It contains 196 items and three dimensions: ‘severity’, ‘impact’ and ‘variability’ of fatigue. The CAT was administered to 166 patients with RA. They also completed a traditional, multidimensional fatigue questionnaire (BRAF-MDQ) and the SF-36 in order to examine the CAT’s construct validity. A priori criterion for construct validity was that 75% of the correlations between the CAT dimensions and the subscales of the other questionnaires were as expected. Furthermore, comprehensive use of the item bank, measurement precision and score distribution were investigated. Results The a priori criterion for construct validity was supported for two of the three CAT dimensions (severity and impact but not for variability). For severity and impact, 87% of the correlations with the subscales of the well-established questionnaires were as expected but for variability, 53% of the hypothesised relations were found. Eighty-nine percent of the items were selected between one and 137 times for CAT administrations. Measurement precision was excellent for the severity and impact dimensions, with more than 90% of the CAT administrations reaching a standard error below 0.32. The variability dimension showed good measurement precision with 90% of the CAT administrations reaching a standard error below 0.44. No floor- or ceiling-effects were found for the three dimensions. Conclusion The CAT Fatigue RA showed good construct validity and excellent measurement precision on the dimensions severity and impact. The dimension variability had less ideal measurement characteristics, pointing to the need to recalibrate the CAT item bank with a two-dimensional model consisting solely of severity and impact. PMID:26710104

  1. A double-blind atropine trial for active learning of autonomic function.

    PubMed

    Fry, Jeffrey R; Burr, Steven A

    2011-12-01

    Here, we describe a human physiology laboratory class measuring changes in autonomic function over time in response to atropine. Students use themselves as subjects, generating ownership and self-interest in the learning as well as directly experiencing the active link between physiology and pharmacology in people. The class is designed to concomitantly convey the importance of bias in experimentation by adopting a double-blind placebo-controlled approach. We have used this class effectively in various forms with ∼600 students receiving atropine over the last 16 yr. This class has received favorable feedback from staff and students of medicine, pharmacy, and neuroscience, and we recommend it for such undergraduates. The learning objectives that students are expected to achieve are to be able to 1) know the ethical, safety, and hygiene requirements for using human volunteers as subjects; 2) implement and explain a double-blind placebo-controlled trial; 3) design, agree, and execute a protocol for making (and accurately recording) precise reproducible measurements of pulse rate, pupil diameter, and salivary flow; 4) evaluate the importance of predose periods and measurement consistency to detect effects (including any reversibility) after an intervention; 5) experience direct cause-and-effect relationships integrating physiology with pharmacology in people; 6) calculate appropriate summary statistics to describe the data and determine the data's statistical significance; 7) recognize normal variability both within and between subjects in baseline physiological parameters and also recognize normal variability in response to pharmacological treatment; 8) infer the distribution and role of muscarinic receptors in the autonomic nervous system with respect to the heart, eye, and mouth; 9) identify and explain the clinical significance of differences in effect due to the route and formulation of atropine; 10) produce and deliver a concise oral presentation of experimental findings; and 11) produce a written report in the form of a short scientific research article. The results of a typical study are presented, which demonstrate that the administration of atropine by a subcutaneous injection elicited a significant increase in pulse rate and pupil diameter and a significant decrease in salivary flow, whereas administration of atropine in an oral liquid elicited significant effects on pulse rate and salivary flow, and an oral solid format elicited a significant alteration in salivary flow alone. More detailed analysis of the salivary flow data demonstrated clear differences between the routes of administration and formulation in the onset and magnitude of action of atropine.

  2. Variable-Length Computerized Adaptive Testing: Adaptation of the A-Stratified Strategy in Item Selection with Content Balancing

    ERIC Educational Resources Information Center

    Huo, Yan

    2009-01-01

    Variable-length computerized adaptive testing (CAT) can provide examinees with tailored test lengths. With the fixed standard error of measurement ("SEM") termination rule, variable-length CAT can achieve predetermined measurement precision by using relatively shorter tests compared to fixed-length CAT. To explore the application of…
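
The fixed-SEM termination rule described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: item informations are made-up stand-ins for a real IRT item bank, and item selection and ability estimation are omitted.

```python
import math

def cat_length(item_informations, target_sem=0.30, max_items=40):
    """Administer items until the standard error of measurement (SEM)
    drops to target_sem; return (number of items used, final SEM).
    SEM = 1/sqrt(total Fisher information), which is additive across items."""
    total_info = 0.0
    sem = float("inf")
    for n, info in enumerate(item_informations[:max_items], start=1):
        total_info += info
        sem = 1.0 / math.sqrt(total_info)
        if sem <= target_sem:
            return n, sem
    return max_items, sem

# A highly informative (hypothetical) pool reaches the target precision
# with fewer items than a weakly informative one -- the variable length.
n_strong, _ = cat_length([1.2] * 40)
n_weak, _ = cat_length([0.4] * 40)
print(n_strong, n_weak)
```

Because the loop stops at a precision target rather than a fixed count, every simulated examinee ends with (approximately) the same SEM, which is the advantage over fixed-length CAT noted in the abstract.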

  3. A ring test of in vitro neutral detergent fiber digestibility: analytical variability and sample ranking

    USDA-ARS?s Scientific Manuscript database

    In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement used to describe fermentability of NDF by rumen microbes. Variability is inherent in assays and affects the precision that can be expected for replicated samples. The study objective was to evaluate variability w...

  4. A ring test of in vitro neutral detergent fiber digestibility: Analytical variability and sample ranking

    USDA-ARS?s Scientific Manuscript database

    In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement used to describe fermentability of NDF by rumen microbes. Variability is inherent in assays and affects the precision that can be expected for replicated samples. The study objective was to evaluate variability w...

  5. [Subcortical laminar heterotopia 'double cortex syndrome'].

    PubMed

    Teplyshova, A M; Gaskin, V V; Kustov, G V; Gudkova, A A; Luzin, R V; Trifonov, I S; Lebedeva, A V

    2017-01-01

    This article presents a clinical case of a 29-year-old patient with 'Double cortex syndrome' with epilepsy, intellectual and mental disorders. Subcortical band heterotopia is a rare disorder of neuronal migration. Such patients typically present with epilepsy and variable degrees of mental retardation and behavioral and intellectual disturbances. The main diagnostic method is magnetic resonance imaging (MRI).

  6. Double Linear Damage Rule for Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Halford, G.; Manson, S.

    1985-01-01

    Double Linear Damage Rule (DLDR) method for use by structural designers to determine fatigue-crack-initiation life when structure is subjected to unsteady, variable-amplitude cyclic loadings. Method calculates, in advance of service, how many loading cycles can be imposed on a structural component before a macroscopic crack initiates. Approach eventually to be used in design of high-performance systems and incorporated into design handbooks and codes.

  7. Dual-CGH interferometry test for x-ray mirror mandrels

    NASA Astrophysics Data System (ADS)

    Gao, Guangjun; Lehan, John P.; Griesmann, Ulf

    2009-06-01

    We describe a glancing-incidence interferometric double-pass test, based on a pair of computer-generated holograms (CGHs), for mandrels used to fabricate x-ray mirrors for space-based x-ray telescopes. The design of the test and its realization are described. The application illustrates the advantage of dual-CGH tests for the complete metrology of precise optical surfaces.

  8. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.

  9. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
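
The single-versus-double precision trade-off the authors mention can be seen in a toy accumulation, the kind of operation MC photon-weight tallies perform. This illustrative sketch (not the paper's code) emulates IEEE 754 single precision by round-tripping through the `struct` module:

```python
import struct

def to_float32(x):
    """Round a Python float (a double) to the nearest IEEE 754 single."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Accumulate many tiny contributions, as an MC tally would.
N = 100_000
w = 1e-6   # per-step weight; the exact sum would be 0.1

acc64 = 0.0                # double-precision accumulator
acc32 = 0.0                # emulated single-precision accumulator
for _ in range(N):
    acc64 += w
    acc32 = to_float32(acc32 + to_float32(w))

# acc64 stays extremely close to 0.1; acc32 drifts by many more ulps,
# which is the accuracy cost of the faster single-precision path.
print(acc64, acc32)
```

The same effect on GPUs motivates using doubles where accuracy matters even though single-precision arithmetic is substantially faster on most consumer hardware.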

  10. Analytics-Driven Lossless Data Compression for Rapid In-situ Indexing, Storing, and Querying

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, John; Arkatkar, Isha; Lakshminarasimhan, Sriram

    2013-01-01

    The analysis of scientific simulations is highly data-intensive and is becoming an increasingly important challenge. Peta-scale data sets require the use of light-weight query-driven analysis methods, as opposed to heavy-weight schemes that optimize for speed at the expense of size. This paper is an attempt in the direction of query processing over losslessly compressed scientific data. We propose a co-designed double-precision compression and indexing methodology for range queries by performing unique-value-based binning on the most significant bytes of double precision data (sign, exponent, and most significant mantissa bits), and inverting the resulting metadata to produce an inverted index over a reduced data representation. Without the inverted index, our method matches or improves compression ratios over both general-purpose and floating-point compression utilities. The inverted index is light-weight, and the overall storage requirement for both reduced column and index is less than 135%, whereas existing DBMS technologies can require 200-400%. As a proof-of-concept, we evaluate univariate range queries that additionally return column values, a critical component of data analytics, against state-of-the-art bitmap indexing technology, showing multi-fold query performance improvements.
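
The core idea, binning doubles by their most significant bytes and inverting the bins into an index, can be sketched briefly. This is a hypothetical illustration of the technique, not the authors' file format: the two-byte split and the in-memory index layout are assumptions.

```python
import struct
from collections import defaultdict

def build_index(values, hi_bytes=2):
    """Split each float64 into its high-order bytes (sign, exponent, top
    mantissa bits) and the residual low-order bytes, then invert the
    high-byte bins into an index mapping bin pattern -> row positions."""
    index = defaultdict(list)   # high-byte pattern -> list of row ids
    low_parts = []              # residual low-order bytes, kept in row order
    for row, v in enumerate(values):
        raw = struct.pack(">d", v)          # big-endian IEEE 754 double
        index[raw[:hi_bytes]].append(row)   # bin on the significant bytes
        low_parts.append(raw[hi_bytes:])    # reduced representation
    return index, low_parts

data = [273.15, 273.16, 1013.25, 273.15, -40.0]
index, low = build_index(data)

# Values whose top bytes agree share a bin, so a range query only needs to
# scan candidate bins (refining with the low bytes) instead of every row.
rows = index[struct.pack(">d", 273.15)[:2]]
print(sorted(rows))
```

Because nearby magnitudes share exponent and leading mantissa bits, the bins line up with value ranges, which is what makes the inverted index useful for range queries.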

  11. Direct compression of chitosan: process and formulation factors to improve powder flow and tablet performance.

    PubMed

    Buys, Gerhard M; du Plessis, Lissinda H; Marais, Andries F; Kotze, Awie F; Hamman, Josias H

    2013-06-01

    Chitosan is a polymer derived from chitin that is widely available at relatively low cost, but due to compression challenges it has limited application for the production of direct compression tablets. The aim of this study was to use certain process and formulation variables to improve manufacturing of tablets containing chitosan as bulking agent. Chitosan particle size and flow properties were determined, which included bulk density, tapped density, compressibility and moisture uptake. The effect of process variables (i.e. compression force, punch depth, percentage compaction in a novel double fill compression process) and formulation variables (i.e. type of glidant, citric acid, pectin, coating with Eudragit S®) on chitosan tablet performance (i.e. mass variation, tensile strength, dissolution) was investigated. Moisture content of the chitosan powder, particle size and the inclusion of glidants had a pronounced effect on its flowability. Varying the percentage compaction during the first cycle of a double fill compression process produced chitosan tablets with more acceptable tensile strength and dissolution rate properties. The inclusion of citric acid and pectin into the formulation significantly decreased the dissolution rate of isoniazid from the tablets due to gel formation. Direct compression of chitosan powder into tablets can be significantly improved by the investigated process and formulation variables as well as applying a double fill compression process.

  12. Intensity-level assessment of lower body plyometric exercises based on mechanical output of lower limb joints.

    PubMed

    Sugisaki, Norihide; Okada, Junichi; Kanehisa, Hiroaki

    2013-01-01

    The present study aimed to quantify the intensity of lower extremity plyometric exercises by determining joint mechanical output. Ten men (age, 27.3 ± 4.1 years; height, 173.6 ± 5.4 cm; weight, 69.4 ± 6.0 kg; 1-repetition maximum [1RM] load in back squat 118.5 ± 12.0 kg) performed the following seven plyometric exercises: two-foot ankle hop, repeated squat jump, double-leg hop, depth jumps from 30 and 60 cm, and single-leg and double-leg tuck jumps. Mechanical output variables (torque, angular impulse, power, and work) at the lower limb joints were determined using inverse-dynamics analysis. For all measured variables, ANOVA revealed significant main effects of exercise type for all joints (P < 0.05) along with significant interactions between joint and exercise (P < 0.01), indicating that the influence of exercise type on mechanical output varied among joints. Paired comparisons revealed that there were marked differences in mechanical output at the ankle and hip joints; most of the variables at the ankle joint were greatest for two-foot ankle hop and tuck jumps, while most hip joint variables were greatest for repeated squat jump or double-leg hop. The present results indicate the necessity for determining mechanical output for each joint when evaluating the intensity of plyometric exercises.

  13. Precision diet formulation to improve performance and profitability across various climates: Modeling the implications of increasing the formulation frequency of dairy cattle diets.

    PubMed

    White, Robin R; Capper, Judith L

    2014-03-01

    The objective of this study was to use a precision nutrition model to simulate the relationship between diet formulation frequency and dairy cattle performance across various climates. Agricultural Modeling and Training Systems (AMTS) CattlePro diet-balancing software (Cornell Research Foundation, Ithaca, NY) was used to compare 3 diet formulation frequencies (weekly, monthly, or seasonal) and 3 levels of climate variability (hot, cold, or variable). Predicted daily milk yield (MY), metabolizable energy (ME) balance, and dry matter intake (DMI) were recorded for each frequency-variability combination. Economic analysis was conducted to calculate the predicted revenue over feed and labor costs. Diet formulation frequency affected ME balance and MY but did not affect DMI. Climate variability affected ME balance and DMI but not MY. The interaction between climate variability and formulation frequency did not affect ME balance, MY, or DMI. Formulating diets more frequently increased MY, DMI, and ME balance. Economic analysis showed that formulating diets weekly rather than seasonally could improve returns over variable costs by $25,000 per year for a moderate-sized (300-cow) operation. To achieve this increase in returns, an entire feeding system margin of error of <1% was required. Formulating monthly, rather than seasonally, may be a more feasible alternative as this requires a margin of error of only 2.5% for the entire feeding system. Feeding systems with a low margin of error must be developed to better take advantage of the benefits of precision nutrition. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Deep infection in total hip arthroplasty

    PubMed Central

    Hamilton, Henry; Jamieson, John

    2008-01-01

    Objective To report on a 30-year prospective study of deep infection in 1993 consecutive total hip arthroplasties performed by a single surgeon. Methods The relations of numerous variables to the incidence of deep infection were studied. Results The cumulative infection rate after the index total hip arthroplasties rose from 0.8% at 2 years to 1.4% at 20 years; 9.6% of the index operations required further surgery. When infections attributed to these secondary procedures were included, the infection rate rose from 0.9% at 2 years to 2% at 20 years. Although the usual variables increased the incidence of infection, the significant and most precise predictors of infection were radiologic diagnoses of upper pole grade III and protrusio acetabuli, an elevated erythrocyte sedimentation rate, alcoholism and units of blood transfused. Conclusion From 2–20 years, the incidence of deep infection doubled. Preoperative recognition of the first 4 risk factors permits the use of additional prophylactic measures. Spinal or epidural anesthesia reduced the units of blood transfused (the fifth risk factor) and, hence, the risk of infection. Although most deep infections are seeded while the wound is open, there are many possible postoperative causes. In this study, fewer than one-third of the infections that presented after 2 years were related to hematogenous spread. The efficacy of clean air technology was supported, and it is recommended that all measures that may reduce the incidence of deep infection be employed. PMID:18377751

  15. [Application of water jet ERBEJET 2 in salivary glands surgery].

    PubMed

    Gasiński, Mateusz; Modrzejewski, Maciej; Cenda, Paweł; Nazim-Zygadło, Elzbieta; Kozok, Andrzej; Dobosz, Paweł

    2009-09-01

    The anatomical location of the salivary glands demands high surgical precision when operating at this site. The waterjet is one of the modern tools that enable minimally invasive operating procedures. It separates pathological structures from healthy tissue using a stream of high-pressure saline pumped into the operating field through specially designed applicators. The fluid stream is generated by a double-piston pump at a pressure that can be regulated between 1 and 80 bar. This allows tumors to be removed precisely, nerves and vessels within the glandular tissue to be spared, and the use of electrocoagulation to be minimized. The waterjet is thus a modern tool that can improve both patient safety and the surgeon's working comfort.

  16. In vivo blunt-end cloning through CRISPR/Cas9-facilitated non-homologous end-joining

    PubMed Central

    Geisinger, Jonathan M.; Turan, Sören; Hernandez, Sophia; Spector, Laura P.; Calos, Michele P.

    2016-01-01

    The CRISPR/Cas9 system facilitates precise DNA modifications by generating RNA-guided blunt-ended double-strand breaks. We demonstrate that guide RNA pairs generate deletions that are repaired with a high level of precision by non-homologous end-joining in mammalian cells. We present a method called knock-in blunt ligation for exploiting these breaks to insert exogenous PCR-generated sequences in a homology-independent manner without loss of additional nucleotides. This method is useful for making precise additions to the genome such as insertions of marker gene cassettes or functional elements, without the need for homology arms. We successfully utilized this method in human and mouse cells to insert fluorescent protein cassettes into various loci, with efficiencies up to 36% in HEK293 cells without selection. We also created versions of Cas9 fused to the FKBP12-L106P destabilization domain in an effort to improve Cas9 performance. Our in vivo blunt-end cloning method and destabilization-domain-fused Cas9 variant increase the repertoire of precision genome engineering approaches. PMID:26762978

  17. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare the statistical behaviour of reduced-precision CPU implementations, using these comparisons to guide reconfigurable designs of a chaotic system, and then analyse the accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
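    The comparison strategy described above (compare statistics, not trajectories) can be sketched on a toy system. The code below is illustrative only: it substitutes the one-dimensional logistic map for the paper's atmospheric model, and compares float64 against float32 long-run histograms with the Hellinger distance H(P,Q) = sqrt((1/2) * sum((sqrt(p_i) - sqrt(q_i))^2)).

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def simulate(x0, n, dtype):
    """Iterate the logistic map x -> 3.9 x (1 - x) at a given float precision."""
    x = dtype(x0)
    r = dtype(3.9)
    one = dtype(1.0)
    out = np.empty(n, dtype=dtype)
    for i in range(n):
        x = r * x * (one - x)
        out[i] = x
    return out

# Individual trajectories diverge quickly, so equally valid runs cannot
# be compared value by value; compare long-run histograms instead.
ref = simulate(0.2, 50_000, np.float64)
low = simulate(0.2, 50_000, np.float32)
bins = np.linspace(0.0, 1.0, 41)
p = np.histogram(ref, bins=bins)[0] / len(ref)
q = np.histogram(low, bins=bins)[0] / len(low)
print(f"Hellinger distance (float64 vs float32): {hellinger(p, q):.4f}")
```

A distance near 0 means the reduced-precision run reproduces the system's climatology even though its trajectory differs point by point.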

  18. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
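    As a hedged illustration of the design-of-experiments machinery mentioned above, the following sketch builds a two-level 2^(5-2) fractional factorial for five hypothetical incubation-time factors. The generators D = AB and E = AC are a common textbook choice, not the authors' published design.

```python
from itertools import product

# Base factors A, B, C form a full 2^3 design; D and E are defined
# through the generators D = A*B and E = A*C, giving a 2^(5-2)
# fractional factorial: 8 runs instead of the 32 of a full 2^5 design.
levels = (-1, 1)
design = []
for a, b, c in product(levels, repeat=3):
    d = a * b   # generator D = AB
    e = a * c   # generator E = AC
    design.append((a, b, c, d, e))

for run in design:
    print(run)

# Main effects of all 5 factors are estimable from these 8 runs,
# provided interactions aliased with them are assumed negligible.
```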

  19. The precision of locomotor odometry in humans.

    PubMed

    Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody

    2009-03-01

    Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. The precision of human odometry was found to follow Weber's law: variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production over similar durations. Gait parameters recorded during the task (average step length and step frequency) were even less variable, suggesting that step integration could be the basis for non-visual human odometry.
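    The coefficient-of-variation analysis described above can be sketched as follows; the numbers are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical distance-production data (metres): for each target
# distance, the walked reproductions of several subjects.
productions = {
    4.6:   [4.2, 5.0, 4.8, 4.4, 4.9],
    25.0:  [23.0, 27.5, 26.0, 24.1, 25.8],
    100.0: [92.0, 108.0, 104.5, 96.3, 103.0],
}

for target, walked in productions.items():
    walked = np.asarray(walked)
    cv = walked.std(ddof=1) / walked.mean()  # coefficient of variation
    print(f"target {target:6.1f} m  ->  CV = {cv:.3f}")

# Weber's law predicts the CV stays roughly constant across distances,
# because the standard deviation grows in proportion to the mean.
```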

  20. The Precision of Mapping Between Number Words and the Approximate Number System Predicts Children’s Formal Math Abilities

    PubMed Central

    Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-01-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475

  1. The precision of mapping between number words and the approximate number system predicts children's formal math abilities.

    PubMed

    Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-10-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. The role of precision agriculture for improved nutrient management on farms.

    PubMed

    Hedley, Carolyn

    2015-01-01

    Precision agriculture uses proximal and remote sensor surveys to delineate and monitor within-field variations in soil and crop attributes, guiding variable rate control of inputs, so that in-season management can be responsive, e.g. matching strategic nitrogen fertiliser application to site-specific field conditions. It has the potential to improve production and nutrient use efficiency, ensuring that nutrients do not leach from, or accumulate in excessive concentrations in, parts of the field, which creates environmental problems. The discipline emerged in the 1980s with the advent of affordable global positioning systems (GPS), and has further developed with access to an array of affordable soil and crop sensors, improved computer power and software, and equipment with precision application control, e.g. variable rate fertiliser and irrigation systems. Precision agriculture focusses on improving nutrient use efficiency at the appropriate scale, requiring (1) appropriate decision support systems (e.g. digital prescription maps), and (2) equipment capable of varying application at these different scales, e.g. the footprint of a single irrigation sprinkler or of a fertiliser top-dressing aircraft. This article reviews the rapid development of this discipline, and uses New Zealand as a case study example, as it is a country where agriculture drives economic growth. Here, the high yield potentials on often young, variable soils provide opportunities for effective financial return from investment in these new technologies. © 2014 Society of Chemical Industry.

  3. EVEREST: Pixel Level Decorrelation of K2 Light Curves

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Agol, Eric; Kruse, Ethan; Barnes, Rory; Becker, Andrew; Foreman-Mackey, Daniel; Deming, Drake

    2016-10-01

    We present EPIC Variability Extraction and Removal for Exoplanet Science Targets (EVEREST), an open-source pipeline for removing instrumental noise from K2 light curves. EVEREST employs a variant of pixel level decorrelation to remove systematics introduced by the spacecraft’s pointing error and a Gaussian process to capture astrophysical variability. We apply EVEREST to all K2 targets in campaigns 0-7, yielding light curves with precision comparable to that of the original Kepler mission for stars brighter than {K}p≈ 13, and within a factor of two of the Kepler precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that EVEREST achieves the highest average precision of any of these pipelines for unsaturated K2 stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The EVEREST pipeline can also easily be applied to future surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The EVEREST light curves and the source code used to generate them are freely available online.
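    A minimal sketch of the pixel-level-decorrelation idea on synthetic data (this is not the EVEREST implementation): regress the aperture flux on the normalized pixel time series, so that pointing-driven redistribution of light among pixels is fitted out, while signals common to all pixels largely cancel in the normalized basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: n cadences, m pixels. Pointing jitter perturbs the
# per-pixel weights; summing the pixels gives the (systematics-laden)
# aperture flux.
n, m = 500, 9
pixels = rng.uniform(0.5, 1.5, size=(1, m)) + 0.05 * rng.standard_normal((n, m))
flux = pixels.sum(axis=1)

# PLD basis: each pixel time series normalized by the total flux at
# that cadence, so a purely astrophysical (all-pixel) signal drops out.
basis = pixels / flux[:, None]
coef, *_ = np.linalg.lstsq(basis, flux, rcond=None)
model = basis @ coef                     # fitted systematics model
detrended = flux - model + flux.mean()   # preserve the mean flux level

print(f"raw scatter:       {flux.std():.4f}")
print(f"detrended scatter: {detrended.std():.4f}")
```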

  4. Doubling down on peptide phosphorylation as a variable mass modification

    USDA-ARS?s Scientific Manuscript database

    Some mass spectrometrists believe that searching for variable post-translational modifications like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false positive rates. The basis for this is the premise that the al...

  5. Manufacture of threads with variable pitch by using noncircular gears

    NASA Astrophysics Data System (ADS)

    Slătineanu, L.; Dodun, O.; Coteață, M.; Coman, I.; Nagîț, G.; Beșliu, I.

    2016-08-01

    Some mechanical equipment incorporates shafts threaded with a variable pitch. Such a shaft is found, for example, in the worm specific to double-enveloping worm gearing. Over the years, researchers have investigated possibilities for geometrically defining and manufacturing the shaft zones characterized by a variable pitch. One method able to facilitate the manufacture of threads with variable pitch is based on the use of noncircular gears in the kinematic chain for threading by cutting. In order to design the noncircular gears, the mathematical law of pitch variation has to be known. An analysis of pitch variation based on geometrical considerations was developed for a double-enveloping globoid worm. Subsequently, a numerical model was determined for a representative case. In this way, an approximate law of pitch variation was obtained that can be taken into consideration when designing the noncircular gears included in the kinematic chain of the cutting machine tool.

  6. Preliminary evaluation of a nest usage sensor to detect double nest occupations of laying hens.

    PubMed

    Zaninelli, Mauro; Costa, Annamaria; Tangorra, Francesco Maria; Rossi, Luciana; Agazzi, Alessandro; Savoini, Giovanni

    2015-01-26

    Conventional cage systems will be replaced by housing systems that allow hens to move freely. These systems may improve hens' welfare, but they lead to some disadvantages: disease, bone fractures, cannibalism, piling and lower egg production. New selection criteria for existing commercial strains should be identified considering individual data about laying performance and the behavior of hens. Many recording systems have been developed to collect these data. However, the management of double nest occupations remains critical for the correct egg-to-hen assignment. To limit such events, most systems adopt specific trap devices and additional mechanical components. Others, instead, only prevent these occurrences by narrowing the nest, without any detection and management. The aim of this study was to develop and test a nest usage "sensor", based on imaging analysis, that is able to automatically detect a double nest occupation. Results showed that the developed sensor correctly identified double nest occupations. Imaging analysis therefore proved to be a useful solution that could simplify the nest construction for this type of recording system, allowing the collection of more precise and accurate data, since double nest occupations would be managed and the normal laying behavior of hens would not be discouraged by the presence of trap devices.

  7. Preliminary Evaluation of a Nest Usage Sensor to Detect Double Nest Occupations of Laying Hens

    PubMed Central

    Zaninelli, Mauro; Costa, Annamaria; Tangorra, Francesco Maria; Rossi, Luciana; Agazzi, Alessandro; Savoini, Giovanni

    2015-01-01

    Conventional cage systems will be replaced by housing systems that allow hens to move freely. These systems may improve hens' welfare, but they lead to some disadvantages: disease, bone fractures, cannibalism, piling and lower egg production. New selection criteria for existing commercial strains should be identified considering individual data about laying performance and the behavior of hens. Many recording systems have been developed to collect these data. However, the management of double nest occupations remains critical for the correct egg-to-hen assignment. To limit such events, most systems adopt specific trap devices and additional mechanical components. Others, instead, only prevent these occurrences by narrowing the nest, without any detection and management. The aim of this study was to develop and test a nest usage “sensor”, based on imaging analysis, that is able to automatically detect a double nest occupation. Results showed that the developed sensor correctly identified double nest occupations. Imaging analysis therefore proved to be a useful solution that could simplify the nest construction for this type of recording system, allowing the collection of more precise and accurate data, since double nest occupations would be managed and the normal laying behavior of hens would not be discouraged by the presence of trap devices. PMID:25629704

  8. Application status and its affecting factors of double standard for multinational corporations in Korea.

    PubMed

    Ki, Myung; Choi, Jaewook; Lee, Juneyoung; Park, Heechan; Yoon, Seokjoon; Kim, Namhoon; Heo, Jungyeon

    2004-02-01

    We intended to evaluate the double standard status and to identify factors determining double standard criteria in multinational corporations in Korea, specifically those in the occupational health and safety area. A postal questionnaire was sent, between August 2002 and September 2002, to multinational corporations in Korea. A double standard company was defined as one that answered in more than one item as adopting a different standard among the five items regarding double standard identification. By comparing double standard companies with equivalent standard companies, determinants for double standards were then identified using logistic regression analysis. Of the multinational corporations, 45.1% had adopted a double standard. At the questionnaire's scale level, the factor of 'characteristic and size of multinational corporation' was found to have the most potent impact on increasing double standard risk. At the variable level, the factors 'number of affiliated companies' and 'existence of an auditing system with the parent company' showed a strong negative impact on double standard risk. Our study suggests that a distinctive approach is needed to manage occupational safety and health in multinational corporations. This approach should be focused on the specific level of a corporation, not on a country level.

  9. The Mechanism of Viral Replication. Structure of Replication Complexes of Encephalomyocarditis Virus

    PubMed Central

    Thach, Sigrid S.; Dobbertin, Darrell; Lawrence, Charles; Golini, Fred; Thach, Robert E.

    1974-01-01

    The structure of the purified replicative intermediate of encephalomyocarditis virus was determined by electron microscopy. Approximately 80% of the replicative intermediate complexes were characterized by a filament of double-stranded RNA of widely variable length, which had a “bush” of single-stranded RNA at one end. In many examples one or more additional single-stranded bushes were appended internally to the double-stranded RNA filament. These results support the view that before deproteinization, replicative intermediate contains little if any double-stranded RNA. PMID:4366773

  10. A mechanical comparison of linear and double-looped hung supplemental heavy chain resistance to the back squat: a case study.

    PubMed

    Neelly, Kurt R; Terry, Joseph G; Morris, Martin J

    2010-01-01

    A relatively new and scarcely researched technique to increase strength is the use of supplemental heavy chain resistance (SHCR) in conjunction with plate weights to provide variable resistance to free weight exercises. The purpose of this case study was to determine the actual resistance being provided by a double-looped versus a linear hung SHCR to the back squat exercise. The linear technique simply hangs the chain directly from the bar, whereas the double-looped technique uses a smaller chain to adjust the height of the looped chain. In both techniques, as the squat descends, chain weight is unloaded onto the floor, and as the squat ascends, chain weight is progressively loaded back as resistance. One experienced and trained male weight lifter (age = 33 yr; height = 1.83 m; weight = 111.4 kg) served as the subject. Plate weight was set at 84.1 kg, approximately 50% of the subject's 1 repetition maximum. The SHCR was affixed to load cells, sampling at a frequency of 500 Hz, which were affixed to the Olympic bar. Data were collected as the subject completed the back squat under the following conditions: double-looped 1 chain (9.6 kg), double-looped 2 chains (19.2 kg), linear 1 chain, and linear 2 chains. The double-looped SHCR resulted in a 78-89% unloading of the chain weight at the bottom of the squat, whereas the linear hanging SHCR resulted in only a 36-42% unloading. The double-looped technique provided nearly 2 times the variable resistance at the top of the squat compared with the linear hanging technique, showing that attention must be given to the technique used to hang SHCR.
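    The unloading percentages reported above translate directly into the variable-resistance figures; a small sketch using the study's one-chain numbers, taking the upper end of each reported unloading range.

```python
# Variable resistance provided by supplemental heavy chain resistance
# (SHCR): the portion of the chain weight unloaded onto the floor at
# the bottom of the squat is re-loaded onto the bar during the ascent.
chain = 9.6   # kg, one chain (from the case study)

def variable_resistance(unload_bottom):
    """Load difference (kg) between top and bottom of the squat."""
    return chain * unload_bottom

double_looped = variable_resistance(0.89)  # 89% unloaded at the bottom
linear = variable_resistance(0.42)         # 42% unloaded at the bottom
print(f"double-looped: {double_looped:.1f} kg of variable resistance")
print(f"linear:        {linear:.1f} kg of variable resistance")
print(f"ratio: {double_looped / linear:.2f}")
```

The ratio of roughly 2 matches the study's observation that the double-looped technique provided nearly twice the variable resistance of the linear hang.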

  11. Glioblastoma adaptation traced through decline of an IDH1 clonal driver and macro-evolution of a double-minute chromosome

    PubMed Central

    Favero, F.; McGranahan, N.; Salm, M.; Birkbak, N. J.; Sanborn, J. Z.; Benz, S. C.; Becq, J.; Peden, J. F.; Kingsbury, Z.; Grocok, R. J.; Humphray, S.; Bentley, D.; Spencer-Dene, B.; Gutteridge, A.; Brada, M.; Roger, S.; Dietrich, P.-Y.; Forshew, T.; Gerlinger, M.; Rowan, A.; Stamp, G.; Eklund, A. C.; Szallasi, Z.; Swanton, C.

    2015-01-01

    Background Glioblastoma (GBM) is the most common malignant brain cancer occurring in adults, and is associated with a dismal outcome and few therapeutic options. GBM has been shown to predominantly disrupt three core pathways through somatic aberrations, rendering it ideal for precision medicine approaches. Methods We describe a 35-year-old female patient with recurrent GBM following surgical removal of the primary tumour, adjuvant treatment with temozolomide and a 3-year disease-free period. Rapid whole-genome sequencing (WGS) of three separate tumour regions at recurrence was carried out and interpreted relative to WGS of two regions of the primary tumour. Results We found extensive mutational and copy-number heterogeneity within the primary tumour. We identified a TP53 mutation and two focal amplifications involving PDGFRA, KIT and CDK4, on chromosomes 4 and 12. A clonal IDH1 R132H mutation in the primary, a known GBM driver event, was detectable at only very low frequency in the recurrent tumour. After sub-clonal diversification, evidence was found for a whole-genome doubling event and a translocation between the amplified regions of PDGFRA, KIT and CDK4, encoded within a double-minute chromosome also incorporating miR26a-2. The WGS analysis uncovered progressive evolution of the double-minute chromosome converging on the KIT/PDGFRA/PI3K/mTOR axis, superseding the IDH1 mutation in dominance in a mutually exclusive manner at recurrence; consequently, the patient was treated with imatinib. Despite rapid sequencing and cancer genome-guided therapy against amplified oncogenes, the disease progressed, and the patient died shortly after. Conclusion This case sheds light on the dynamic evolution of a GBM tumour, defining the origins of the lethal sub-clone, the macro-evolutionary genomic events dominating the disease at recurrence and the loss of a clonal driver. 
Even in the era of rapid WGS analysis, cases such as this illustrate the significant hurdles for precision medicine success. PMID:25732040

  12. Single-row versus double-row rotator cuff repair: techniques and outcomes.

    PubMed

    Dines, Joshua S; Bedi, Asheesh; ElAttrache, Neal S; Dines, David M

    2010-02-01

    Double-row rotator cuff repair techniques incorporate a medial and lateral row of suture anchors in the repair configuration. Biomechanical studies of double-row repair have shown increased load to failure, improved contact areas and pressures, and decreased gap formation at the healing enthesis, findings that have provided impetus for clinical studies comparing single-row with double-row repair. Clinical studies, however, have not yet demonstrated a substantial improvement over single-row repair with regard to either the degree of structural healing or functional outcomes. Although double-row repair may provide an improved mechanical environment for the healing enthesis, several confounding variables have complicated attempts to establish a definitive relationship with improved rates of healing. Appropriately powered rigorous level I studies that directly compare single-row with double-row techniques in matched tear patterns are necessary to further address these questions. These studies are needed to justify the potentially increased implant costs and surgical times associated with double-row rotator cuff repair.

  13. A Linear Variable-θ Model for Measuring Individual Differences in Response Precision

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2011-01-01

    Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…

  14. Evaluation of three aging techniques and back-calculated growth for introduced Blue Catfish from Lake Oconee, Georgia

    USGS Publications Warehouse

    Homer, Michael D.; Peterson, James T.; Jennings, Cecil A.

    2015-01-01

    Back-calculation of length-at-age from otoliths and spines is a common technique employed in fisheries biology, but few studies have compared the precision of data collected with this method for catfish populations. We compared the precision of back-calculated lengths-at-age for an introduced Ictalurus furcatus (Blue Catfish) population among 3 commonly used cross-sectioning techniques. We used gillnets to collect Blue Catfish (n = 153) from Lake Oconee, GA. We estimated ages from a basal recess, articulating process, and otolith cross-section from each fish. We employed the Fraser-Lee method to back-calculate length-at-age for each fish, and compared the precision of back-calculated lengths among techniques using hierarchical linear models. Precision in age assignments was highest for otoliths (83.5%) and lowest for basal recesses (71.4%). Back-calculated lengths were variable among fish ages 1–3 for the techniques compared; otoliths and basal recesses yielded variable lengths at age 8. We concluded that otoliths and articulating processes are adequate for age estimation of Blue Catfish.
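    The Fraser-Lee back-calculation used above has a simple closed form, L_i = c + (L_c - c) * S_i / S_c. The sketch below applies it to invented fish measurements; all numbers are hypothetical, not the study's data.

```python
def fraser_lee_length(L_c, S_c, S_i, c):
    """Back-calculated body length at annulus i (Fraser-Lee method):
        L_i = c + (L_c - c) * S_i / S_c
    L_c: body length at capture
    S_c: structure (otolith/spine) radius at capture
    S_i: structure radius at annulus i
    c:   intercept of the body length vs. structure radius regression
    """
    return c + (L_c - c) * S_i / S_c

# Hypothetical fish: 520 mm at capture, otolith radius 1.80 mm,
# radius at the third annulus 0.95 mm, regression intercept 35 mm.
L3 = fraser_lee_length(L_c=520.0, S_c=1.80, S_i=0.95, c=35.0)
print(f"back-calculated length at age 3: {L3:.1f} mm")
```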

  15. Epidemiology in wonderland: Big Data and precision medicine.

    PubMed

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable, or combination of variables, suitable for prediction of disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health, (b) training epidemiologists, (c) investigating the impact on clinical practices and the doctor-patient relation of the influx of Big Data and computerized medicine, and (d) clarifying whether "health" may today be redefined, as some maintain, in purely technological terms.

  16. High-precision mass measurements for the rp-process at JYFLTRAP

    NASA Astrophysics Data System (ADS)

    Canete, Laetitia; Eronen, Tommi; Jokinen, Ari; Kankainen, Anu; Moore, Ian D.; Nesterenko, Dimitry; Rinta-Antila, Sami

    2018-01-01

    The double Penning trap JYFLTRAP at the University of Jyväskylä has been successfully used to achieve high-precision mass measurements of nuclei involved in the rapid proton-capture (rp) process. A precise mass measurement of 31Cl is essential to estimate the waiting point condition of 30S in the rp-process occurring in type I x-ray bursts (XRBs). The mass excess of 31Cl measured at JYFLTRAP, -7034.7(3.4) keV, is 15 times more precise than the value given in the Atomic Mass Evaluation 2012. The proton separation energy Sp determined from the new mass-excess value confirmed that 30S is a waiting point, with a lower-temperature limit of 0.44 GK. The mass of 52Co affects both the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions. The measured mass-excess value, -34331.6(6.6) keV, is 30 times more precise than the value given in AME2012. The Q values for the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions are now known with high precision, 1418(11) keV and 2588(26) keV, respectively. The results show that 52Co is more proton bound and 53Ni less proton bound than expected from the extrapolated values.
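    The Q-value bookkeeping behind these statements is plain mass-excess arithmetic: for a (p,γ) capture, Q = Δ(target) + Δ(1H) - Δ(product). The sketch below combines the reported 52Co mass excess and 51Fe(p,γ)52Co Q-value with the standard 1H mass excess to recover the implied 51Fe mass excess; treat the derived number as illustrative arithmetic, not an evaluated datum.

```python
# Mass-excess arithmetic for a (p, gamma) capture reaction:
#     Q = Delta(target) + Delta(1H) - Delta(product)
# Mass numbers balance (51 + 1 = 52), so mass excesses can be used
# directly in keV.
DELTA_H1 = 7288.97      # keV, mass excess of 1H (standard value)
DELTA_CO52 = -34331.6   # keV, measured at JYFLTRAP
Q_FE51_PG = 1418.0      # keV, reported Q-value for 51Fe(p,g)52Co

# Rearranged: Delta(51Fe) = Q + Delta(52Co) - Delta(1H)
delta_fe51 = Q_FE51_PG + DELTA_CO52 - DELTA_H1
print(f"implied 51Fe mass excess: {delta_fe51:.1f} keV")
```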

  17. Using a remote sensing-based, percent tree cover map to enhance forest inventory estimation

    Treesearch

    Ronald E. McRoberts; Greg C. Liknes; Grant M. Domke

    2014-01-01

    For most national forest inventories, the variables of primary interest to users are forest area and growing stock volume. The precision of estimates of parameters related to these variables can be increased using remotely sensed auxiliary variables, often in combination with stratified estimators. However, acquisition and processing of large amounts of remotely sensed...
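    The stratified-estimator idea sketched in the abstract can be illustrated with synthetic numbers: a percent-tree-cover map assigns plots to strata with known area weights, and the stratified mean combines the within-stratum sample means. All values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# A remote-sensing cover map defines strata; within each stratum the
# field variable (say, growing stock volume) varies less, so the
# stratified mean has lower variance than the simple sample mean.
strata = {                       # weight = stratum share of total area
    "low cover":  {"weight": 0.5, "samples": rng.normal( 40, 15, 60)},
    "mid cover":  {"weight": 0.3, "samples": rng.normal(120, 20, 40)},
    "high cover": {"weight": 0.2, "samples": rng.normal(220, 25, 30)},
}

y_strat = sum(s["weight"] * s["samples"].mean() for s in strata.values())
var_strat = sum(
    s["weight"] ** 2 * s["samples"].var(ddof=1) / len(s["samples"])
    for s in strata.values()
)
print(f"stratified mean estimate: {y_strat:.1f}")
print(f"standard error:           {np.sqrt(var_strat):.2f}")
```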

  18. Role of Imaging in the Era of Precision Medicine.

    PubMed

    Giardino, Angela; Gupta, Supriya; Olson, Emmi; Sepulveda, Karla; Lenchik, Leon; Ivanidze, Jana; Rakow-Penner, Rebecca; Patel, Midhir J; Subramaniam, Rathan M; Ganeshan, Dhakshinamoorthy

    2017-05-01

    Precision medicine is an emerging approach for treating medical disorders, which takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition among clinicians, healthcare systems, pharmaceutical companies, patients, and the government. Imaging plays a critical role in precision medicine, including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing the likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarized its findings in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine. Published by Elsevier Inc.

  19. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Treesearch

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  20. A 3D Model of Double-Helical DNA Showing Variable Chemical Details

    ERIC Educational Resources Information Center

    Cady, Susan G.

    2005-01-01

    Since the first DNA model was created approximately 50 years ago using molecular models, students and teachers have been building simplified DNA models from various practical materials. A 3D double-helical DNA model, made by placing beads on a wire and stringing beads through holes in plastic canvas, is described. Suggestions are given to enhance…

  1. Childhood Epilepsy and Asthma: A Test of an Extension of the Double ABCX Model.

    ERIC Educational Resources Information Center

    Austin, Joan Kessner

    The Double ABCX Model of Family Adjustment and Adaptation, a model that predicts adaptation to chronic stressors on the family, was extended by dividing it into attitudes, coping, and adaptation of parents and child separately, and by including variables relevant to child adaptation to epilepsy or asthma. The extended model was tested on 246…

  2. Development of a femoral template for computer-assisted tunnel placement in anatomical double-bundle ACL reconstruction.

    PubMed

    Luites, J W H; Wymenga, A B; Blankevoort, L; Kooloos, J M G; Verdonschot, N

    2011-01-01

    Femoral graft placement is an important factor in the success of anterior cruciate ligament (ACL) reconstruction. In addition to improving the accuracy of femoral tunnel placement, Computer Assisted Surgery (CAS) can be used to determine the anatomic location. This is achieved by using a 3D femoral template which indicates the position of the anatomical ACL center based on endoscopically measurable landmarks. This study describes the development and application of this method. The template is generated through statistical shape analysis of the ACL insertion, with respect to the anteromedial (AM) and posterolateral (PL) bundles. The ligament insertion data, together with the osteocartilage edge on the lateral notch, were mapped onto a cylinder fitted to the intercondylar notch surface (n = 33). Anatomic variation, in terms of standard variation of the positions of the ligament centers in the template, was within 2.2 mm. The resulting template was programmed in a computer-assisted navigation system for ACL replacement and its accuracy and precision were determined on 31 femora. It was found that with the navigation system the AM and PL tunnels could be positioned with an accuracy of 2.5 mm relative to the anatomic insertion centers; the precision was 2.4 mm. This system consists of a template that can easily be implemented in 3D computer navigation software. Requiring no preoperative images and planning, the system provides adequate accuracy and precision to position the entrance of the femoral tunnels for anatomical single- or double-bundle ACL reconstruction.

  3. Metabolic modeling of dynamic brain 13C NMR multiplet data: Concepts and simulations with a two-compartment neuronal-glial model

    PubMed Central

    Shestov, Alexander A.; Valette, Julien; Deelchand, Dinesh K.; Uğurbil, Kâmil; Henry, Pierre-Gilles

    2016-01-01

    Metabolic modeling of dynamic 13C labeling curves during infusion of 13C-labeled substrates allows quantitative measurements of metabolic rates in vivo. However, metabolic modeling studies performed in the brain to date have modeled only time courses of total isotopic enrichment at individual carbon positions (positional enrichments), not taking advantage of the additional dynamic 13C isotopomer information available from fine-structure multiplets in 13C spectra. Here we introduce a new 13C metabolic modeling approach using the concept of bonded cumulative isotopomers, or bonded cumomers. The direct relationship between bonded cumomers and 13C multiplets enables fitting of the dynamic multiplet data. The potential of this new approach is demonstrated using Monte-Carlo simulations with a brain two-compartment neuronal-glial model. The precisions of the positional and cumomer approaches are compared for two different metabolic models (with and without glutamine dilution) and for different infusion protocols ([1,6-13C2]glucose, [1,2-13C2]acetate, and double infusion [1,6-13C2]glucose + [1,2-13C2]acetate). In all cases, the bonded cumomer approach gives better precision than the positional approach. In addition, of the three different infusion protocols considered here, the double infusion protocol combined with dynamic bonded cumomer modeling appears the most robust for precise determination of all fluxes in the model. The concepts and simulations introduced in the present study set the foundation for taking full advantage of the available dynamic 13C multiplet data in metabolic modeling. PMID:22528840

  4. In the eye of the beholder: the effect of rater variability and different rating scales on QTL mapping.

    PubMed

    Poland, Jesse A; Nelson, Rebecca J

    2011-02-01

    The agronomic importance of developing durably resistant cultivars has led to substantial research in the field of quantitative disease resistance (QDR) and, in particular, mapping quantitative trait loci (QTL) for disease resistance. The assessment of QDR is typically conducted by visual estimation of disease severity, which raises concern over the accuracy and precision of visual estimates. Although previous studies have examined the factors affecting the accuracy and precision of visual disease assessment in relation to the true value of disease severity, the impact of this variability on the identification of disease resistance QTL has not been assessed. In this study, the effects of rater variability and rating scales on mapping QTL for northern leaf blight resistance in maize were evaluated in a recombinant inbred line population grown under field conditions. The population of 191 lines was evaluated by 22 different raters using a direct percentage estimate, a 0-to-9 ordinal rating scale, or both. It was found that more experienced raters had higher precision and that using a direct percentage estimation of diseased leaf area produced higher precision than using an ordinal scale. QTL mapping was then conducted using the disease estimates from each rater using stepwise general linear model (GLM) selection and inclusive composite interval mapping (ICIM). For GLM, the same QTL were largely found across raters, though some QTL were only identified by a subset of raters. The magnitudes of estimated allele effects at identified QTL varied drastically, sometimes by as much as threefold. ICIM produced highly consistent results across raters and for the different rating scales in identifying the location of QTL. We conclude that, despite variability between raters, the identification of QTL was largely consistent among raters, particularly when using ICIM. However, care should be taken in estimating QTL allele effects, because these were highly variable and rater-dependent.

  5. High-precision 41K/39K measurements by MC-ICP-MS indicate terrestrial variability of δ41K

    USGS Publications Warehouse

    Morgan, Leah; Santiago Ramos, Danielle P.; Davidheiser-Kroll, Brett; Faithfull, John; Lloyd, Nicholas S.; Ellam, Rob M.; Higgins, John A.

    2018-01-01

    Potassium is a major component in continental crust, the fourth-most abundant cation in seawater, and a key element in biological processes. Until recently, difficulties with existing analytical techniques hindered our ability to identify natural isotopic variability of potassium isotopes in terrestrial materials. However, measurement precision has greatly improved and a range of K isotopic compositions has now been demonstrated in natural samples. In this study, we present a new technique for high-precision measurement of K isotopic ratios using high-resolution, cold plasma multi-collector mass spectrometry. We apply this technique to demonstrate natural variability in the ratio of 41K to 39K in a diverse group of geological and biological samples, including silicate and evaporite minerals, seawater, and plant and animal tissues. The total range in 41K/39K ratios is ca. 2.6‰, with a long-term external reproducibility of 0.17‰ (2σ, N = 108). Seawater and seawater-derived evaporite minerals are systematically enriched in 41K compared to silicate minerals by ca. 0.6‰, a result consistent with recent findings [1, 2]. Although our average bulk-silicate Earth value (-0.54‰) is indistinguishable from previously published values, we find systematic δ41K variability in some high-temperature sample suites, particularly those with evidence for the presence of fluids. The δ41K values of biological samples span a range of ca. 1.2‰ between terrestrial mammals, plants, and marine organisms. Implications of terrestrial K isotope variability for the atomic weight of K and K-based geochronology are discussed. Our results indicate that high-precision measurements of stable K isotopes, made using commercially available mass spectrometers, can provide unique insights into the chemistry of potassium in geological and biological systems.
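    The δ41K values quoted above use the standard per-mil delta notation for isotope ratios. A minimal sketch, with a hypothetical reference 41K/39K ratio (the standard ratio below is made up for illustration):

```python
def delta_per_mil(r_sample, r_standard):
    """Per-mil deviation of a sample isotope ratio from a reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample enriched in 41K by 0.6 per mil relative to the standard (the
# seawater-vs-silicate offset reported above) yields delta41K = +0.6:
r_std = 0.072168  # hypothetical reference 41K/39K ratio
print(round(delta_per_mil(r_std * 1.0006, r_std), 2))  # 0.6
```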

  6. Highly Precise and Developmentally Programmed Genome Assembly in Paramecium Requires Ligase IV–Dependent End Joining

    PubMed Central

    Marmignon, Antoine; Ku, Michael; Silve, Aude; Meyer, Eric; Forney, James D.; Malinsky, Sophie; Bétermier, Mireille

    2011-01-01

    During the sexual cycle of the ciliate Paramecium, assembly of the somatic genome includes the precise excision of tens of thousands of short, non-coding germline sequences (Internal Eliminated Sequences or IESs), each one flanked by two TA dinucleotides. It has been reported previously that these genome rearrangements are initiated by the introduction of developmentally programmed DNA double-strand breaks (DSBs), which depend on the domesticated transposase PiggyMac. These DSBs all exhibit a characteristic geometry, with 4-base 5′ overhangs centered on the conserved TA, and may readily align and undergo ligation with minimal processing. However, the molecular steps and actors involved in the final and precise assembly of somatic genes have remained unknown. We demonstrate here that Ligase IV and Xrcc4p, core components of the non-homologous end-joining pathway (NHEJ), are required both for the repair of IES excision sites and for the circularization of excised IESs. The transcription of LIG4 and XRCC4 is induced early during the sexual cycle and a Lig4p-GFP fusion protein accumulates in the developing somatic nucleus by the time IES excision takes place. RNAi–mediated silencing of either gene results in the persistence of free broken DNA ends, apparently protected against extensive resection. At the nucleotide level, controlled removal of the 5′-terminal nucleotide occurs normally in LIG4-silenced cells, while nucleotide addition to the 3′ ends of the breaks is blocked, together with the final joining step, indicative of a coupling between NHEJ polymerase and ligase activities. Taken together, our data indicate that IES excision is a “cut-and-close” mechanism, which involves the introduction of initiating double-strand cleavages at both ends of each IES, followed by DSB repair via highly precise end joining. 
This work broadens our current view on how the cellular NHEJ pathway has cooperated with domesticated transposases for the emergence of new mechanisms involved in genome dynamics. PMID:21533177

  7. Highly precise and developmentally programmed genome assembly in Paramecium requires ligase IV-dependent end joining.

    PubMed

    Kapusta, Aurélie; Matsuda, Atsushi; Marmignon, Antoine; Ku, Michael; Silve, Aude; Meyer, Eric; Forney, James D; Malinsky, Sophie; Bétermier, Mireille

    2011-04-01

    During the sexual cycle of the ciliate Paramecium, assembly of the somatic genome includes the precise excision of tens of thousands of short, non-coding germline sequences (Internal Eliminated Sequences or IESs), each one flanked by two TA dinucleotides. It has been reported previously that these genome rearrangements are initiated by the introduction of developmentally programmed DNA double-strand breaks (DSBs), which depend on the domesticated transposase PiggyMac. These DSBs all exhibit a characteristic geometry, with 4-base 5' overhangs centered on the conserved TA, and may readily align and undergo ligation with minimal processing. However, the molecular steps and actors involved in the final and precise assembly of somatic genes have remained unknown. We demonstrate here that Ligase IV and Xrcc4p, core components of the non-homologous end-joining pathway (NHEJ), are required both for the repair of IES excision sites and for the circularization of excised IESs. The transcription of LIG4 and XRCC4 is induced early during the sexual cycle and a Lig4p-GFP fusion protein accumulates in the developing somatic nucleus by the time IES excision takes place. RNAi-mediated silencing of either gene results in the persistence of free broken DNA ends, apparently protected against extensive resection. At the nucleotide level, controlled removal of the 5'-terminal nucleotide occurs normally in LIG4-silenced cells, while nucleotide addition to the 3' ends of the breaks is blocked, together with the final joining step, indicative of a coupling between NHEJ polymerase and ligase activities. Taken together, our data indicate that IES excision is a "cut-and-close" mechanism, which involves the introduction of initiating double-strand cleavages at both ends of each IES, followed by DSB repair via highly precise end joining. 
This work broadens our current view on how the cellular NHEJ pathway has cooperated with domesticated transposases for the emergence of new mechanisms involved in genome dynamics.

  8. Apparatus and method for variable angle slant hole collimator

    DOEpatents

    Lee, Seung Joon; Kross, Brian J.; McKisson, John E.

    2017-07-18

    A variable angle slant hole (VASH) collimator for providing collimation of high energy photons such as gamma rays during radiological imaging of humans. The VASH collimator includes a stack of multiple collimator leaves and a means of quickly aligning each leaf to provide various projection angles. Rather than rotate the detector around the subject, the VASH collimator enables the detector to remain stationary while the projection angle of the collimator is varied for tomographic acquisition. High collimator efficiency is achieved by maintaining the leaves in accurate alignment through the various projection angles. Individual leaves include unique angled cuts to maintain a precise target collimation angle. Matching wedge blocks driven by two actuators with twin-lead screws accurately position each leaf in the stack resulting in the precise target collimation angle. A computer interface with the actuators enables precise control of the projection angle of the collimator.

  9. Estimating annual suspended-sediment loads in the northern and central Appalachian Coal region

    USGS Publications Warehouse

    Koltun, G.F.

    1985-01-01

    Multiple-regression equations were developed for estimating the annual suspended-sediment load, for a given year, from small to medium-sized basins in the northern and central parts of the Appalachian coal region. The regression analysis was performed with data for land use, basin characteristics, streamflow, rainfall, and suspended-sediment load for 15 sites in the region. Two variables, the maximum mean-daily discharge occurring within the year and the annual peak discharge, explained much of the variation in the annual suspended-sediment load. Separate equations were developed employing each of these discharge variables. Standard errors for both equations are relatively large, which suggests that future predictions will probably have a low level of precision. This level of precision, however, may be acceptable for certain purposes. It is therefore left to the user to assess whether the level of precision provided by these equations is acceptable for the intended application.
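    The kind of rating equation described, a log-linear regression of annual load on a discharge variable, can be sketched on synthetic data. All numbers below (power-law exponent, scatter, discharge range) are made up for illustration; only the sample size of 15 sites comes from the record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: annual peak discharge and suspended-sediment load
# for 15 sites, with an assumed power-law relation plus lognormal scatter.
peak_q = rng.uniform(200.0, 5000.0, size=15)
load = 3.0 * peak_q**1.2 * rng.lognormal(0.0, 0.3, size=15)

# Fit log10(load) = b0 + b1 * log10(peak_q) by ordinary least squares.
X = np.column_stack([np.ones(15), np.log10(peak_q)])
coef, *_ = np.linalg.lstsq(X, np.log10(load), rcond=None)
b0, b1 = coef
print(f"exponent estimate b1 = {b1:.2f}")  # should land near the assumed 1.2
```

The large standard errors the authors report correspond to the residual scatter about such a fitted line, which propagates into wide prediction intervals for individual years.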

  10. Low-Crosstalk Composite Optical Crosspoint Switches

    NASA Technical Reports Server (NTRS)

    Pan, Jing-Jong; Liang, Frank

    1993-01-01

    Composite optical switch includes two elementary optical switches in tandem, plus optical absorbers. Like elementary optical switches, composite optical switches assembled into switch matrix. Performance enhanced by increasing number of elementary switches. Advantage of concept: crosstalk reduced to acceptably low level at moderate cost of doubling number of elementary switches rather than at greater cost of tightening manufacturing tolerances and exerting more-precise control over operating conditions.
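    The claimed crosstalk reduction is simple leakage arithmetic: two cascaded switches multiply their leakage power fractions, so the crosstalk figure in dB roughly doubles. A sketch with an illustrative per-switch value not taken from the record:

```python
import math

def cascade_crosstalk_db(xtalk_db):
    """Crosstalk of two identical switches in tandem (dominant path only)."""
    leak = 10 ** (xtalk_db / 10)       # dB -> leakage power fraction
    return 10 * math.log10(leak ** 2)  # two stages multiply their leakages

# A hypothetical elementary switch with -25 dB crosstalk, doubled up:
print(round(cascade_crosstalk_db(-25.0), 1))  # -50.0
```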

  11. A HIGH-PRECISION NEAR-INFRARED SURVEY FOR RADIAL VELOCITY VARIABLE LOW-MASS STARS USING CSHELL AND A METHANE GAS CELL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagné, Jonathan; Plavchan, Peter; Gao, Peter

    2016-05-01

    We present the results of a precise near-infrared (NIR) radial velocity (RV) survey of 32 low-mass stars with spectral types K2–M4 using CSHELL at the NASA InfraRed Telescope Facility in the K band, with an isotopologue methane gas cell for wavelength calibration and a novel, iterative RV extraction method. We surveyed 14 members of young (≈25–150 Myr) moving groups, the young field star ε Eridani, and 18 nearby (<25 pc) low-mass stars, and achieved typical single-measurement precisions of 8–15 m s⁻¹ with a long-term stability of 15–50 m s⁻¹ over longer baselines. We obtain the best NIR RV constraints to date on 27 targets in our sample, 19 of which were never followed by high-precision RV surveys. Our results indicate that very active stars can display long-term RV variations as low as ∼25–50 m s⁻¹ at ≈2.3125 μm, thus constraining the effect of jitter at these wavelengths. We provide the first multiwavelength confirmation of GJ 876 bc and independently retrieve orbital parameters consistent with previous studies. We recovered RV variability for HD 160934 AB and GJ 725 AB consistent with their known binary orbits, and nine other targets are candidate RV variables with a statistical significance of 3σ–5σ. Our method, combined with the new iSHELL spectrograph, will yield long-term RV precisions of ≲5 m s⁻¹ in the NIR, which will allow the detection of super-Earths near the habitable zone of mid-M dwarfs.

  12. How Deep is Shallow? Improving Absolute and Relative Locations of Upper Crustal Seismicity in Switzerland

    NASA Astrophysics Data System (ADS)

    Diehl, T.; Kissling, E. H.; Singer, J.; Lee, T.; Clinton, J. F.; Waldhauser, F.; Wiemer, S.

    2017-12-01

    Information on the structure of upper-crustal fault systems and their connection with seismicity is key to the understanding of neotectonic processes. Precisely determined focal depths in combination with structural models can provide important insight into deformation styles of the upper crust (e.g. thin- versus thick-skinned tectonics). Detailed images of seismogenic fault zones in the upper crust, on the other hand, will contribute to the assessment of the hazard related to natural and induced earthquakes, especially in regions targeted for radioactive waste repositories or geothermal energy production. The complex velocity structure of the uppermost crust and unfavorable network geometries, however, often hamper precise locations (i.e., focal depths) of shallow seismicity and therefore limit tectonic interpretations. In this study we present a new high-precision catalog of absolute locations of seismicity in Switzerland. High-quality travel-time data from local and regional earthquakes in the period 2000-2017 are used to solve the coupled hypocenter-velocity structure problem in 1D. For this purpose, the well-known VELEST inversion software was revised and extended to improve the quality assessment of travel-time data and to facilitate the identification of erroneous picks in the bulletin data. Results from the 1D inversion are used as initial parameters for a 3D local earthquake tomography. Well-studied earthquakes and high-quality quarry blasts are used to assess the quality of 1D and 3D relocations. In combination with information available from various controlled-source experiments, borehole data, and geological profiles, focal depths and associated host formations are assessed through comparison with the resolved 3D velocity structure. The new absolute locations and velocity models are used as initial values for relative double-difference relocation of earthquakes in Switzerland.
Differential times are calculated from bulletin picks and waveform cross-correlation. The resulting double-difference catalog is used as a regional background catalog for a real-time double-difference approach. We will present our implementation strategy and test its performance for local applications using examples from well-recorded natural and induced earthquake sequences in Switzerland.

  13. A double-spike method for K-Ar measurement: A technique for high precision in situ dating on Mars and other planetary surfaces

    NASA Astrophysics Data System (ADS)

    Farley, K. A.; Hurowitz, J. A.; Asimow, P. D.; Jacobson, N. S.; Cartwright, J. A.

    2013-06-01

    A new method for K-Ar dating using a double isotope dilution technique is proposed and demonstrated. The method is designed to eliminate known difficulties facing in situ dating on planetary surfaces, especially instrument complexity and power availability. It may also have applicability in some terrestrial dating applications. Key to the method is the use of a solid tracer spike enriched in both 39Ar and 41K. When mixed with lithium borate flux in a Knudsen effusion cell, this tracer spike and a sample to be dated can be successfully fused and degassed of Ar at <1000 °C. The evolved 40Ar*/39Ar ratio can be measured to high precision using noble gas mass spectrometry. After argon measurement the sample melt is heated to a slightly higher temperature (~1030 °C) to volatilize potassium, and the evolved 39K/41K ratio measured by Knudsen effusion mass spectrometry. Combined with the known composition of the tracer spike, these two ratios define the K-Ar age using a single sample aliquot and without the need for extreme temperature or a mass determination. In principle the method can be implemented using a single mass spectrometer. Experiments indicate that quantitative extraction of argon from a basalt sample occurs at a sufficiently low temperature that potassium loss in this step is unimportant. Similarly, potassium isotope ratios measured in the Knudsen apparatus indicate good sample-spike equilibration and acceptably small isotopic fractionation. When applied to a flood basalt from the Viluy Traps, Siberia, a K-Ar age of 351 ± 19 Ma was obtained, a result within 1% of the independently known age. For practical reasons this measurement was made on two separate mass spectrometers, but a scheme for combining the measurements in a single analytical instrument is described.
Because both parent and daughter are determined by isotope dilution, the precision on K-Ar ages obtained by the double isotope dilution method should routinely approach that of a pair of isotope ratio determinations, likely better than ±5%.
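    Once parent and daughter are both known by isotope dilution, the age follows from the standard K-Ar age equation. A minimal sketch with the conventional 40K decay constants, applied to an illustrative (hypothetical) radiogenic 40Ar*/40K ratio:

```python
import math

# Standard K-Ar age equation:
#   t = (1/lambda_total) * ln(1 + (lambda_total/lambda_ec) * 40Ar*/40K)
LAMBDA_TOTAL = 5.543e-10  # /yr, total decay constant of 40K
LAMBDA_EC = 0.581e-10     # /yr, electron-capture branch (40K -> 40Ar)

def k_ar_age(ar40_star_over_k40):
    """K-Ar age in years from the radiogenic 40Ar*/40K atomic ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_over_k40) / LAMBDA_TOTAL

# An illustrative ratio of 0.02 gives roughly 315 Myr, the same order as the
# ~351 Ma Viluy basalt discussed above:
print(round(k_ar_age(0.02) / 1e6))  # 315
```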

  14. Numerical Simulation Analysis of High-precision Dispensing Needles for Solid-liquid Two-phase Grinding

    NASA Astrophysics Data System (ADS)

    Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming

    2018-03-01

    To investigate the effect of abrasive flow polishing on variable-diameter parts, high-precision dispensing needles were taken as the research object and the polishing process was simulated numerically. The distributions of dynamic pressure and turbulent viscosity of the abrasive flow field inside the needle were analyzed under different volume-fraction conditions. The comparative analysis demonstrates the effectiveness of abrasive-grain polishing of high-precision dispensing needles: controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during polishing, so that the polishing quality achieved by the abrasive grains can be controlled.

  15. Precision Medicine: The New Frontier in Idiopathic Pulmonary Fibrosis.

    PubMed

    Brownell, Robert; Kaminski, Naftali; Woodruff, Prescott G; Bradford, Williamson Z; Richeldi, Luca; Martinez, Fernando J; Collard, Harold R

    2016-06-01

    Precision medicine is defined by the National Institutes of Health's Precision Medicine Initiative Working Group as an approach to disease treatment that takes into account individual variability in genes, environment, and lifestyle. There has been increased interest in applying the concept of precision medicine to idiopathic pulmonary fibrosis, in particular to search for genetic and molecular biomarker-based profiles (so-called endotypes) that identify mechanistically distinct disease subgroups. The relevance of precision medicine to idiopathic pulmonary fibrosis is yet to be established, but we believe that it holds great promise to provide targeted and highly effective therapies to patients. In this manuscript, we describe the field's nascent efforts in genetic/molecular endotype identification and how environmental and behavioral subgroups may also be relevant to disease management.

  16. The search for majoron emission in xenon-136 and two-neutrino double-beta decay of xenon-134 with the enriched xenon observatory

    NASA Astrophysics Data System (ADS)

    Walton, Josiah

    Despite neutrino oscillation experiments firmly establishing that neutrinos have non-zero mass, the absolute mass scale is unknown. Moreover, it is unknown whether the neutrino is distinguishable from its antiparticle. The most promising approach to measuring the neutrino mass scale and resolving the question of neutrino-antineutrino distinguishability is to search for neutrinoless double-beta decay, a very rare theorized process not allowed under the current theoretical framework of particle physics. Positive observation of neutrinoless double-beta decay would usher in a revolution in particle physics, since it would determine the neutrino mass scale, establish that neutrinos and antineutrinos are indistinguishable, and show that the particle-physics conservation law of total lepton number is violated in nature. The latter two consequences are particularly salient, as they lead to potential explanations of neutrino mass generation and the observed large asymmetry of matter over antimatter in the universe. The Enriched Xenon Observatory (EXO-200) is an international collaboration searching for the neutrinoless double-beta decay of the isotope 136Xe. EXO-200 operates a unique world-class low-radioactivity detector containing 110 kg of liquefied xenon isotopically enriched to 80.6% in 136Xe. Recently, EXO-200 published the most precise two-neutrino double-beta decay half-life ever measured and one of the strongest limits on the half-life of the neutrinoless double-beta decay mode of 136Xe. This work presents an improved experimental search for the majoron-mediated neutrinoless double-beta decay modes of 136Xe and a novel search for the as-yet-unobserved two-neutrino double-beta decay of 134Xe.

  17. Some aspects over the quality of thin films deposited on special steels used in hydraulic blades

    NASA Astrophysics Data System (ADS)

    Tugui, C. A.; Vizureanu, P.; Iftimie, N.; Steigmann, R.

    2016-08-01

    The experimental research in this paper aims to obtain superior physical, chemical, and mechanical properties in stainless steels used in the construction of hydraulic turbine blades. These properties are obtained by depositing hard thin films that improve wear resistance and increase hardness while maintaining the tenacious core of the material. The chosen deposition methods are electrospark deposition, which has relatively low cost, is easy to apply, produces layers with good adherence to the substrate, and allows the thickness to be varied as a function of the established conditions, and pulsed laser deposition, which yields high-quality films with nanometric precision. The samples were prepared for structural analysis by optical methods and to obtain the optimal roughness for deposition. The physical, chemical, and mechanical properties were determined after deposition using SEM and EDX, in order to characterize the film-substrate structure and the distribution of the deposited elements on the surface and in transversal section. Non-destructive testing showed good adherence between the deposited layer and the metallic substrate; owing to the double deposition, no spallation regions appear.

  18. Lyapunov exponents for one-dimensional aperiodic photonic bandgap structures

    NASA Astrophysics Data System (ADS)

    Kissel, Glen J.

    2011-10-01

    Existing in the "gray area" between perfectly periodic and purely randomized photonic bandgap structures are the so-called aperiodic structures, whose layers are chosen according to some deterministic rule. We consider here a one-dimensional photonic bandgap structure, a quarter-wave stack, with the layer thickness of one of the bilayers subject to being either thin or thick according to five deterministic sequence rules and binary random selection. To produce these aperiodic structures we examine the following sequences: Fibonacci, Thue-Morse, period-doubling, Rudin-Shapiro, as well as the triadic Cantor sequence. We model these structures numerically with a long chain (approximately 5,000,000) of transfer matrices, and then use the reliable algorithm of Wolf to calculate the (upper) Lyapunov exponent for the long product of matrices. The Lyapunov exponent is the statistically well-behaved variable used to characterize the Anderson localization effect (exponential confinement) when the layers are randomized, so its calculation allows us to compare the purely randomized structure more precisely with its aperiodic counterparts. It is found that the aperiodic photonic systems show much fine structure in their Lyapunov exponents as a function of frequency, and, in a number of cases, the exponents are quite obviously fractal.
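    The Wolf-style Lyapunov calculation described above can be sketched for the binary-random case: propagate a vector through the product of 2x2 slab transfer matrices, renormalizing at each step and accumulating the log growth. The refractive indices, thicknesses, probe frequency, and chain length below are illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative quarter-wave stack at reference wavelength lam0; layer B is
# randomly "thin" or "thick" (binary random selection, as in the abstract).
lam0 = 1.0
nA, nB = 1.5, 2.3
dA = lam0 / (4 * nA)
dB_thin, dB_thick = lam0 / (4 * nB), lam0 / (2 * nB)
k0 = 2 * np.pi / (0.9 * lam0)  # probe frequency slightly off the design point

def layer_matrix(n, d):
    """Unimodular transfer matrix of a dielectric slab in the (E, E') basis."""
    delta = n * k0 * d
    return np.array([[np.cos(delta), np.sin(delta) / (n * k0)],
                     [-n * k0 * np.sin(delta), np.cos(delta)]])

MA = layer_matrix(nA, dA)
MB = [layer_matrix(nB, dB_thin), layer_matrix(nB, dB_thick)]

# Wolf-style estimate: renormalize the propagated vector every step and
# accumulate the log of the growth factor to avoid overflow.
v = np.array([1.0, 0.0])
log_sum, n_cells = 0.0, 100_000
for choice in rng.integers(0, 2, size=n_cells):
    v = MB[choice] @ (MA @ v)
    norm = np.linalg.norm(v)
    log_sum += np.log(norm)
    v /= norm

gamma = log_sum / n_cells  # Lyapunov exponent per bilayer
print(f"Lyapunov exponent per bilayer: {gamma:.4f}")
```

A positive exponent signals exponential confinement (localization); repeating the loop with a deterministic sequence rule in place of `rng.integers` gives the aperiodic counterparts compared in the paper.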

  19. Sternal instability measured with radiostereometric analysis. A study of method feasibility, accuracy and precision.

    PubMed

    Vestergaard, Rikke Falsig; Søballe, Kjeld; Hasenkam, John Michael; Stilling, Maiken

    2018-05-18

    A small but unstable saw-gap may hinder bone-bridging and induce development of painful sternal dehiscence. We propose the use of Radiostereometric Analysis (RSA) for evaluation of sternal instability and present a method validation. Four bone analogs (phantoms) were sternotomized and tantalum beads were inserted in each half. The models were reunited with wire cerclage and placed in a radiolucent separation device. Stereoradiographs (n = 48) of the phantoms in 3 positions were recorded at 4 imposed separation points. The accuracy and precision were compared statistically and presented as translations along the 3 orthogonal axes. Seven sternotomized patients were evaluated for clinical RSA precision by double-examination stereoradiographs (n = 28). In the phantom study, we found no systematic error (p > 0.3) between the three phantom positions, and precision for evaluation of sternal separation was 0.02 mm. Phantom accuracy was mean 0.13 mm (SD 0.25). In the clinical study, we found a detection limit of 0.42 mm for sternal separation and of 2 mm for anterior-posterior dislocation of the sternal halves for the individual patient. RSA is a precise and low-dose image modality feasible for clinical evaluation of sternal stability in research. ClinicalTrials.gov Identifier: NCT02738437, retrospectively registered.

  20. Toward precision medicine in Alzheimer's disease.

    PubMed

    Reitz, Christiane

    2016-03-01

    In Western societies, Alzheimer's disease (AD) is the most common form of dementia and the sixth leading cause of death. In recent years, the concept of precision medicine, an approach for disease prevention and treatment that is personalized to an individual's specific pattern of genetic variability, environment and lifestyle factors, has emerged. While for some diseases, in particular select cancers and a few monogenetic disorders such as cystic fibrosis, significant advances in precision medicine have been made over the past years, for most other diseases precision medicine is still in its infancy. To advance the application of precision medicine to a wider spectrum of disorders, governments around the world are starting to launch Precision Medicine Initiatives, major efforts to generate the extensive scientific knowledge needed to integrate the model of precision medicine into everyday clinical practice. In this article we summarize the state of precision medicine in AD, review major obstacles in its development, and discuss its benefits in this highly prevalent, clinically and pathologically complex disease.

  1. High-resolution rock-magnetic variability in shallow marine sediment: a sensitive paleoclimatic metronome

    NASA Astrophysics Data System (ADS)

    Arai, Kohsaku; Sakai, Hideo; Konishi, Kenji

    1997-05-01

    An outer shelf deposit in central Japan centered on the Olduvai normal polarity event in the reversed Matuyama chron reveals a close correlation of both the magnetic susceptibility and remanent intensity with the sedimentary cyclicities apparent in lithologies and molluscan assemblages. Two sedimentary cycles are characterized by distinctly similar, but double-peaked magnetic cyclicities. The rock-magnetic variability is primarily attributed to the relative abundance of terrigenous magnetic minerals, and the double peak of the variability is characterized by the concentration of finer-grained magnetic minerals. The concentration is suspected to be controlled by both climatic change and shifting proximity of the shoreline as a function of rise and fall of the sea level due to glacio-eustasy. Rock-magnetic study reveals the record of a 21 ka period of orbital precession cycles within the sedimentary cyclicity attributable to a 41 ka period of orbital obliquity forcing.

  2. FUZZY LOGIC BASED INTELLIGENT CONTROL OF A VARIABLE SPEED CAGE MACHINE WIND GENERATION SYSTEM

    EPA Science Inventory

    The paper describes a variable-speed wind generation system where fuzzy logic principles are used to optimize efficiency and enhance performance control. A squirrel cage induction generator feeds the power to a double-sided pulse width modulated converter system which either pump...

  3. FUZZY LOGIC BASED INTELLIGENT CONTROL OF A VARIABLE SPEED CAGE MACHINE WIND GENERATION SYSTEM

    EPA Science Inventory

    The report gives results of a demonstration of the successful application of fuzzy logic to enhance the performance and control of a variable-speed wind generation system. A squirrel cage induction generator feeds the power to either a double-sided pulse-width modulation converte...

  4. Precision Farming and Precision Pest Management: The Power of New Crop Production Technologies

    PubMed Central

    Strickland, R. Mack; Ess, Daniel R.; Parsons, Samuel D.

    1998-01-01

    The use of new technologies including Geographic Information Systems (GIS), the Global Positioning System (GPS), Variable Rate Technology (VRT), and Remote Sensing (RS) is gaining acceptance in the present high-technology, precision agricultural industry. GIS provides the ability to link multiple data values for the same geo-referenced location, and provides the user with a graphical visualization of such data. When GIS is coupled with GPS and RS, management decisions can be applied in a more precise "micro-managed" manner by using VRT techniques. Such technology holds the potential to reduce agricultural crop production costs as well as crop and environmental damage. PMID:19274236

  5. Kinematic modeling of a double octahedral Variable Geometry Truss (VGT) as an extensible gimbal

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1994-01-01

    This paper presents the complete forward and inverse kinematics solutions for control of the three degree-of-freedom (DOF) double octahedral variable geometry truss (VGT) module as an extensible gimbal. A VGT is a truss structure partially comprised of linearly actuated members. A VGT can be used as joints in a large, lightweight, high load-bearing manipulator for earth- and space-based remote operations, plus industrial applications. The results have been used to control the NASA VGT hardware as an extensible gimbal, demonstrating the capability of this device to be a joint in a VGT-based manipulator. This work is an integral part of a VGT-based manipulator design, simulation, and control tool.

  6. On Matrices, Automata, and Double Counting

    NASA Astrophysics Data System (ADS)

    Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin

    Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.
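
    The first, linear kind of necessary condition can be illustrated with a toy double-counting check. The row bounds and column cardinalities below are invented for the illustration; they stand in for what the automaton A and the gcc constraints would supply.

```python
def double_counting_bounds(n_rows, lo, hi):
    """If the row automaton A only accepts rows containing between lo and
    hi occurrences of a value v, then counting occurrences of v row-wise
    and column-wise must agree, so the column gcc cardinality variables
    for v must sum to a total within [n_rows*lo, n_rows*hi]."""
    return n_rows * lo, n_rows * hi

def cardinalities_feasible(col_cards, n_rows, lo, hi):
    """Check the linear necessary condition for one value v."""
    low, high = double_counting_bounds(n_rows, lo, hi)
    return low <= sum(col_cards) <= high

# 5 rows, each row must contain 1 or 2 occurrences of v (toy automaton).
ok = cardinalities_feasible([2, 1, 1, 1], n_rows=5, lo=1, hi=2)   # 5 in [5, 10]
bad = cardinalities_feasible([0, 1, 0, 1], n_rows=5, lo=1, hi=2)  # 2 < 5
```

    In a constraint solver the same inequality would be posted over the cardinality variables themselves, pruning their domains before search.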

  7. Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz

    NASA Astrophysics Data System (ADS)

    Nikitenko, Ya.

    2016-11-01

    Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities to study the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are reviewed. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions for the precision of the simple mean estimator of the neutrino direction have been obtained for normal and exponential distributions, both for a finite sample and in the limiting case of many events.

  8. Masses of {sup 130}Te and {sup 130}Xe and Double-{beta}-Decay Q Value of {sup 130}Te

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redshaw, Matthew; Mount, Brianna J.; Myers, Edmund G.

    The atomic masses of {sup 130}Te and {sup 130}Xe have been obtained by measuring cyclotron frequency ratios of pairs of triply charged ions simultaneously trapped in a Penning trap. The results, with 1 standard deviation uncertainty, are M({sup 130}Te)=129.906 222 744(16) u and M({sup 130}Xe)=129.903 509 351(15) u. From the mass difference the double-{beta}-decay Q value of {sup 130}Te is determined to be Q{sub {beta}}{sub {beta}}({sup 130}Te)=2527.518(13) keV. This is a factor of 150 more precise than the result of the AME2003 [G. Audi et al., Nucl. Phys. A729, 337 (2003)].
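
    Subtracting the quoted atomic masses and converting with the u-to-keV energy equivalent reproduces the quoted Q value to within about 10 eV; exact agreement is not expected, because the published figure comes from the directly measured frequency ratio rather than from the rounded masses. The conversion constant below is a CODATA value assumed for the sketch, not taken from the abstract.

```python
U_TO_KEV = 931_494.10242   # 1 u in keV/c^2 (CODATA value; an assumption here)

m_te130 = 129.906222744    # u, from the measurement above
m_xe130 = 129.903509351    # u, from the measurement above

# Double-beta-decay Q value of 130Te from the mass difference.
q_value = (m_te130 - m_xe130) * U_TO_KEV
```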

  9. Micrometeorite penetration effects in gold foil

    NASA Technical Reports Server (NTRS)

    Hallgren, D. S.; Radigan, W.; Hemenway, C. L.

    1976-01-01

    Penetration structures revealed by a Skylab experiment dealing with exposure of single and double layers of 500-800 A thick gold foil to micrometeorites are examined. Examination of all double-layered gold foils revealed that particles producing holes of any type greater than 5 microns in diameter in the first foil break up into many fragments which in turn produce many more holes in the second foil. Evidence of an original particle is not found on any stainless steel plate below the foils, except in one instance. A precise relationship between the size of the event and the mass of the particle producing it could not be determined due to the extreme morphological variety in penetration effects. Fluxes from gold foil and crater experiments are briefly discussed.

  10. The Origin of High-angle Dip-slip Earthquakes at Geothermal Fields in California

    NASA Astrophysics Data System (ADS)

    Barbour, A. J.; Schoenball, M.; Martínez-Garzón, P.; Kwiatek, G.

    2016-12-01

    We examine the source mechanisms of earthquakes occurring in three California geothermal fields: The Geysers, Salton Sea, and Coso. We find source mechanisms ranging from strike-slip faulting, consistent with the tectonic settings, to dip slip with unusually steep dip angles that are inconsistent with local structures. For example, we identify a fault zone in the Salton Sea Geothermal Field, imaged using precisely relocated hypocenters, with a dip angle of 60°, yet double-couple focal mechanisms indicate higher-angle dip slip on planes dipping ≥75°. We observe considerable temporal variability in the distribution of source mechanisms. For example, at the Salton Sea we find that the number of high-angle dip-slip events increased after 1989, when net-extraction rates were highest. There is a concurrent decline in strike-slip and strike-slip-normal faulting, the mechanisms expected from regional tectonics. These unusual focal mechanisms and their spatio-temporal patterns are enigmatic in terms of our understanding of faulting in geothermal regions. While near-vertical fault planes are expected to slip in a strike-slip sense, and dip slip is expected to occur on moderately dipping faults, we observe dip slip on near-vertical fault planes. However, for plausible stress states, and accounting for geothermal production, the resolved fault planes should be stable. We systematically analyze the source mechanisms of these earthquakes using full moment tensor inversion to understand the constraints imposed by assuming a double-couple source. Applied to The Geysers field, we find a significant reduction in the number of high-angle dip-slip mechanisms using the full moment tensor. The remaining mechanisms displaying high-angle dip slip could be consistent with faults accommodating subsidence and compaction associated with volumetric strain changes in the geothermal reservoir.

  11. Ambiguity and variability of database and software names in bioinformatics.

    PubMed

    Duck, Geraint; Kovacevic, Aleksandar; Robertson, David L; Stevens, Robert; Nenadic, Goran

    2015-01-01

    There are numerous options available to achieve various tasks in bioinformatics, but until recently, there were no tools that could systematically identify mentions of databases and tools within the literature. In this paper we explore the variability and ambiguity of database and software name mentions and compare dictionary and machine learning approaches to their identification. Through the development and analysis of a corpus of 60 full-text documents manually annotated at the mention level, we report high variability and ambiguity in database and software mentions. On a test set of 25 full-text documents, a baseline dictionary look-up achieved an F-score of 46 %, highlighting not only variability and ambiguity but also the extensive number of new resources introduced. A machine learning approach achieved an F-score of 63 % (with precision of 74 %) and 70 % (with precision of 83 %) for strict and lenient matching respectively. We characterise the issues with various mention types and propose potential ways of capturing additional database and software mentions in the literature. Our analyses show that identification of mentions of databases and tools is a challenging task that cannot be achieved by relying on current manually-curated resource repositories. Although machine learning shows improvement and promise (primarily in precision), more contextual information needs to be taken into account to achieve a good degree of accuracy.
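
    The quoted precision/F-score pairs are tied together by the harmonic-mean definition of the F1 score. The recall value used below is an illustrative assumption chosen to be roughly consistent with the strict-matching figures reported, not a number from the paper.

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With precision 0.74 (strict matching, as quoted), a recall near 0.55
# (assumed) yields an F-score close to the reported 63 %.
f_strict = f_score(0.74, 0.55)
```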

  12. Within- and between-laboratory precision in the measurement of body volume using air displacement plethysmography and its effect on body composition assessment.

    PubMed

    Collins, A L; Saunders, S; McCarthy, H D; Williams, J E; Fuller, N J

    2004-01-01

    To determine and compare the extent of within- and between-laboratory precision in body volume (BV) measurements using air displacement plethysmography (ADP), the BOD POD body composition system, and to interpret any such variability in terms of body composition estimates. Repeated test procedures of BV assessment using the BOD POD ADP were reproduced at two laboratories for the estimation of precision, both within and between laboratories. In total, 30 healthy adult volunteers, 14 men (age, 19-48 y; body mass index (BMI), 19.7-30.3 kg/m2) and 16 women (age, 19-40 y; BMI, 16.3-35.7 kg/m2), were each subjected to two test procedures at both laboratories. Two additional volunteers were independently subjected to 10 repeated test procedures at both laboratories. Repeated measurements of BV, uncorrected for the effects of isothermal air in the lungs and the surface area artifact, were obtained using the BOD POD ADP, with the identical protocol being faithfully applied at both laboratories. Uncorrected BV measurements were adjusted to give estimates of actual BV that were used to calculate body density (body weight (BWt)/actual BV) from which estimates of body composition were derived. The differences between repeated BV measurements or body composition estimates were used to assess within-laboratory precision (repeatability), as standard deviation (SD) and coefficient of variation; the differences between measurements reproduced at each laboratory were used to determine between-laboratory precision (reproducibility), as bias and 95% limits of agreement (from SD of the differences between laboratories). The extent of within-laboratory methodological precision for BV (uncorrected and actual) was variable according to subject, sample group and laboratory conditions (range of SD, 0.04-0.13 l), and was mostly due to within-individual biological variability (typically 78-99%) rather than to technical imprecision. 
There was a significant (P<0.05) bias between laboratories for the 10 repeats on the two independent subjects (up to 0.29 l). Although no significant bias (P=0.077) was evident for the sample group of 30 volunteers (-0.05 l), the 95% limits of agreement were considerable (-0.68 to 0.58 l). The effects of this variability in BV on body composition were relatively greater: for example, within-laboratory precision (SD) for body fat as % BWt was between 0.56 and 1.34% depending on the subject and laboratory; the bias (-0.59%) was not significant between laboratories, but there were large 95% limits of agreement (-3.67 to 2.50%). Within-laboratory precision for each BOD POD instrument was reasonably good, but was variable according to the prevailing conditions. Although the bias between the two instruments was not significant for the BV measurements, implying that they can be used interchangeably for groups of similar subjects, the relatively large 95% limits of agreement indicate that greater consideration may be needed for assessing individuals with different ADP instruments. Therefore, use of a single ADP instrument is apparently preferable when assessing individuals on a longitudinal basis.
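
    The bias and 95% limits of agreement reported above follow the standard Bland-Altman recipe, which can be sketched as follows; the paired body-volume values below are synthetic, for illustration only.

```python
import numpy as np

def limits_of_agreement(lab_a, lab_b):
    """Bland-Altman between-laboratory agreement: bias (mean of the
    paired differences) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    d = np.asarray(lab_a, float) - np.asarray(lab_b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic paired body-volume measurements (litres) from two laboratories.
a = [62.10, 55.43, 71.20, 48.95, 66.31]
b = [62.25, 55.30, 71.05, 49.10, 66.50]
bias, low, high = limits_of_agreement(a, b)
```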

  13. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching based on a novel scheme called double topological relationship consistency (DCTR). The combined double topological configuration comprises the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision in wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, S. G.; Marsh, T. R.; Gaensicke, B. T.

    Using Liverpool Telescope+RISE photometry we identify the 2.78 hr period binary star CSS 41177 as a detached eclipsing double white dwarf binary with a 21,100 K primary star and a 10,500 K secondary star. This makes CSS 41177 only the second known eclipsing double white dwarf binary after NLTT 11748. The 2 minute long primary eclipse is 40% deep and the secondary eclipse 10% deep. From Gemini+GMOS spectroscopy, we measure the radial velocities of both components of the binary from the H{alpha} absorption line cores. These measurements, combined with the light curve information, yield white dwarf masses of M{sub 1} = 0.283 {+-} 0.064 M{sub sun} and M{sub 2} = 0.274 {+-} 0.034 M{sub sun}, making them both helium core white dwarfs. As an eclipsing, double-lined spectroscopic binary, CSS 41177 is ideally suited to measuring precise, model-independent masses and radii. The two white dwarfs will merge in roughly 1.1 Gyr to form a single sdB star.

  15. Optical characterization of pancreatic normal and tumor tissues with double integrating sphere system

    NASA Astrophysics Data System (ADS)

    Kiris, Tugba; Akbulut, Saadet; Kiris, Aysenur; Gucin, Zuhal; Karatepe, Oguzhan; Bölükbasi Ates, Gamze; Tabakoǧlu, Haşim Özgür

    2015-03-01

    In order to develop minimally invasive, fast and precise optical diagnostic and therapeutic methods in medicine, the first step is to examine how light propagates through, scatters in, and is transmitted by the medium. To identify appropriate wavelengths, the optical properties of tissues must be determined correctly. The aim of this study is to measure the optical properties of both cancerous and normal ex-vivo pancreatic tissues, and to compare the results to detect how cancerous and normal tissues respond to different wavelengths. A double-integrating-sphere system and the computational inverse adding-doubling (IAD) technique were used in the study. Absorption and reduced scattering coefficients of normal and cancerous pancreatic tissues were measured within the range of 500-650 nm. Statistically significant differences in absorption coefficients between cancerous and normal tissues were obtained at 550 nm and 630 nm. On the other hand, no statistically significant difference was found for scattering coefficients at any wavelength.

  16. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
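
    The double divisor step itself can be sketched numerically. The Gaussian bands below are synthetic stand-ins for the ISN, RIF and PYZ spectra (the band centres, widths and concentrations are assumptions); the real method then convolves the ratio spectrum with trigonometric Fourier functions, which is omitted here.

```python
import numpy as np

wl = np.linspace(200, 400, 400)                       # wavelength grid, nm
band = lambda centre, width: np.exp(-((wl - centre) / width) ** 2)
eps = {"ISN": band(260, 15), "RIF": band(330, 20), "PYZ": band(270, 25)}

# Double divisor: the summed spectra of two standards at unit concentration.
divisor = eps["RIF"] + eps["PYZ"]

def ratio_spectrum(c_isn, c_rif=0.5, c_pyz=2.0):
    """Divide the ternary mixture spectrum by the double divisor; the
    ISN contribution to the ratio spectrum is linear in its
    concentration, which the subsequent Fourier-coefficient calibration
    exploits."""
    mixture = c_isn * eps["ISN"] + c_rif * eps["RIF"] + c_pyz * eps["PYZ"]
    return mixture / divisor
```

    Subtracting ratio spectra for two ISN concentrations isolates exactly (c2 - c1) * eps_ISN / divisor, the linearity on which the method's calibration rests.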

  17. Control system and method for a power delivery system having a continuously variable ratio transmission

    DOEpatents

    Frank, Andrew A.

    1984-01-01

    A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enable the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine.
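
    The decoupled control law can be sketched as a lookup on an ideal operating line: throttle is strictly a function of measured engine speed. The table values below are invented for illustration and are not from the patent.

```python
import bisect

# Illustrative ideal-operating-line table (assumed numbers):
# engine speed (rpm) -> throttle fraction for minimum-fuel operation.
IOL = [(800, 0.10), (1500, 0.25), (2500, 0.45), (3500, 0.70), (4500, 0.95)]

def throttle_from_speed(rpm):
    """Fuel requirement strictly as a function of measured engine speed:
    linear interpolation on the ideal operating line, clamped at the
    table ends."""
    speeds = [s for s, _ in IOL]
    if rpm <= speeds[0]:
        return IOL[0][1]
    if rpm >= speeds[-1]:
        return IOL[-1][1]
    i = bisect.bisect_right(speeds, rpm)
    (s0, t0), (s1, t1) = IOL[i - 1], IOL[i]
    return t0 + (t1 - t0) * (rpm - s0) / (s1 - s0)
```

    In the patented scheme the CVT ratio would be commanded separately, as a function of commanded power and measured load, leaving this throttle map untouched by load changes.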

  18. Spectrophotometer-Based Color Measurements

    DTIC Science & Technology

    2017-10-24

    Approved for public release; distribution is unlimited. U.S. Army Armament Research, Development and Engineering Center, Weapons and Software Engineering Center. Contents include: Summary; Introduction; Methods, Assumptions, and Procedures; CIE Values for Federal Color Standards; tables on instrument precision and on method precision and operator variability.

  19. The perennial problem of variability in adenosine triphosphate (ATP) tests for hygiene monitoring within healthcare settings.

    PubMed

    Whiteley, Greg S; Derry, Chris; Glasbey, Trevor; Fahey, Paul

    2015-06-01

    To investigate the reliability of commercial ATP bioluminometers and to document precision and variability measurements using known, quantitated standard materials. Four commercially branded ATP bioluminometers and their consumables were subjected to a series of controlled studies with quantitated materials in multiple repetitions of dilution series. The individual dilutions were applied directly to ATP swabs. To assess precision and reproducibility, each dilution step was tested in triplicate or quadruplicate and the RLU reading from each test point was recorded. Results across the multiple dilution series were normalized using the coefficient of variation. The results for pure ATP and bacterial ATP from suspensions of Staphylococcus epidermidis and Pseudomonas aeruginosa are presented graphically. The data indicate that precision and reproducibility are poor across all brands tested. The standard deviation was as high as 50% of the mean for all brands, and users in the field are given no indication of this level of imprecision. The variability of commercial ATP bioluminometers and their consumables is unacceptably high in the current technical configuration. The advantage of speed of response is undermined by instrument imprecision expressed in the numerical scale of relative light units (RLU).
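
    The normalization applied across the dilution series is the plain coefficient of variation; a minimal sketch with made-up RLU readings:

```python
import statistics

def coefficient_of_variation(readings):
    """CV = sample SD / mean, putting RLU spreads from different
    bioluminometer brands on a common, scale-free footing."""
    return statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical triplicate RLU readings at one dilution step: a 10% CV.
cv = coefficient_of_variation([90.0, 100.0, 110.0])
```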

  20. Gaia luminosities of pulsating A-F stars in the Kepler field

    NASA Astrophysics Data System (ADS)

    Balona, L. A.

    2018-06-01

    All stars in the Kepler field brighter than 12.5 magnitude have been classified according to variability type. A catalogue of δ Scuti and γ Doradus stars is presented. The problem of low frequencies in δ Sct stars, which occurs in over 98 percent of these stars, is discussed. Gaia DR2 parallaxes were used to obtain precise luminosities, enabling the instability strips of the two classes of variable to be precisely defined. Surprisingly, it turns out that the instability region of the γ Dor stars is entirely within the δ Sct instability strip. Thus γDor stars should not be considered a separate class of variable. The observed red and blue edges of the instability strip do not agree with recent model calculations. Stellar pulsation occurs in less than half of the stars in the instability region and arguments are presented to show that this cannot be explained by assuming pulsation at a level too low to be detected. Precise Gaia DR2 luminosities of high-amplitude δ Sct stars (HADS) show that most of these are normal δ Sct stars and not transition objects. It is argued that current ideas on A star envelopes need to be revised.
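
    The parallax-to-luminosity step is the standard distance-modulus conversion; the sketch below is generic (extinction is neglected and the solar bolometric magnitude is an assumed constant), not the paper's pipeline.

```python
import math

def absolute_magnitude(apparent_mag, parallax_mas):
    """Absolute magnitude from apparent magnitude and Gaia parallax in
    milliarcseconds, neglecting extinction:
    M = m + 5*log10(parallax/1000) + 5."""
    return apparent_mag + 5.0 * math.log10(parallax_mas / 1000.0) + 5.0

def luminosity_solar(abs_mag, m_bol_sun=4.74):
    """Luminosity in solar units from an absolute (bolometric) magnitude;
    bolometric corrections are ignored in this sketch."""
    return 10.0 ** (0.4 * (m_bol_sun - abs_mag))
```

    A star with a 100 mas parallax sits at 10 pc, so its absolute and apparent magnitudes coincide, which gives a convenient check of the formula.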

  1. On the precision of experimentally determined protein folding rates and φ-values

    PubMed Central

    De Los Rios, Miguel A.; Muralidhara, B.K.; Wildes, David; Sosnick, Tobin R.; Marqusee, Susan; Wittung-Stafshede, Pernilla; Plaxco, Kevin W.; Ruczinski, Ingo

    2006-01-01

    φ-Values, a relatively direct probe of transition-state structure, are an important benchmark in both experimental and theoretical studies of protein folding. Recently, however, significant controversy has emerged regarding the reliability with which φ-values can be determined experimentally: because φ is a ratio of differences between experimental observables, it is extremely sensitive to errors in those observations when the differences are small. Here we address this issue directly by performing blind, replicate measurements in three laboratories. By monitoring within- and between-laboratory variability, we have determined the precision with which folding rates and φ-values are measured using generally accepted laboratory practices and under conditions typical of our laboratories. We find that, unless the change in free energy associated with the probing mutation is quite large, the precision of φ-values is relatively poor when determined using rates extrapolated to the absence of denaturant. In contrast, when we employ rates estimated at nonzero denaturant concentrations or assume that the slopes of the chevron arms (mf and mu) are invariant upon mutation, the precision of our estimates of φ is significantly improved. Nevertheless, the reproducibility we thus obtain still compares poorly with the confidence intervals typically reported in the literature. This discrepancy appears to arise from differences in how precision is calculated, the dependence of precision on the number of data points employed in defining a chevron, and interlaboratory sources of variability that may have been largely ignored in the prior literature. PMID:16501226

  2. Acute Mental Discomfort Associated with Suicide Behavior in a Clinical Sample of Patients with Affective Disorders: Ascertaining Critical Variables Using Artificial Intelligence Tools.

    PubMed

    Morales, Susana; Barros, Jorge; Echávarri, Orietta; García, Fabián; Osses, Alex; Moya, Claudia; Maino, María Paz; Fischman, Ronit; Núñez, Catalina; Szmulewicz, Tita; Tomicic, Alemka

    2017-01-01

    In efforts to develop reliable methods for detecting the likelihood of impending suicidal behavior, we sought to gain a deeper understanding of the state of suicide risk by determining the combination of variables that distinguishes between groups with and without suicide risk. We conducted a study involving 707 patients consulting for mental health issues in three health centers in Greater Santiago, Chile. Using 345 variables, an analysis was carried out with artificial intelligence tools, Cross Industry Standard Process for Data Mining processes, and decision tree techniques. The basic algorithm was top-down; the most suitable division produced by the tree was selected using the lowest Gini index as a criterion, looping until the condition of belonging to the group with suicidal behavior was fulfilled. Four trees distinguishing the groups were obtained, of which the elements of one were analyzed in greater detail, since this tree included both clinical and personality variables. This specific tree consists of six nodes without suicide risk and eight nodes with suicide risk (tree decision 01: accuracy 0.674, precision 0.652, recall 0.678, specificity 0.670, F measure 0.665, receiver operating characteristic (ROC) area under the curve (AUC) 73.35%; tree decision 02: accuracy 0.669, precision 0.642, recall 0.694, specificity 0.647, F measure 0.667, ROC AUC 68.91%; tree decision 03: accuracy 0.681, precision 0.675, recall 0.638, specificity 0.721, F measure 0.656, ROC AUC 65.86%; tree decision 04: accuracy 0.714, precision 0.734, recall 0.628, specificity 0.792, F measure 0.677, ROC AUC 58.85%). This study defines the interactions among a group of variables associated with suicidal ideation and behavior. By using these variables, it may be possible to create a quick and easy-to-use screening tool. 
Psychotherapeutic interventions could then be designed to mitigate the impact of these variables on the emotional state of individuals, thereby reducing the eventual risk of suicide. Such interventions may reinforce psychological well-being, feelings of self-worth, and reasons for living for each individual in certain groups of patients.
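
    The Gini-index split criterion used for the top-down induction can be sketched as follows; this is a generic illustration of the criterion, not the authors' code, and the class labels are invented.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a node: 1 - sum over classes of p_k squared."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def split_gini(groups):
    """Size-weighted Gini impurity of a candidate split; top-down
    induction keeps the split with the lowest value."""
    total = sum(len(g) for g in groups)
    return sum(len(g) / total * gini(g) for g in groups)

# A perfectly separating split scores 0; a useless split keeps the
# parent impurity of 0.5.
pure = split_gini([["risk", "risk"], ["no", "no"]])
mixed = split_gini([["risk", "no"], ["risk", "no"]])
```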

  3. Fat scoring: Sources of variability

    USGS Publications Warehouse

    Krementz, D.G.; Pendleton, G.W.

    1990-01-01

    Fat scoring is a widely used nondestructive method of assessing total body fat in birds. This method has not been rigorously investigated. We investigated inter- and intraobserver variability in scoring as well as the predictive ability of fat scoring using five species of passerines. Between-observer variation in scoring was variable and great at times. Observers did not consistently score species higher or lower relative to other observers nor did they always score birds with more total body fat higher. We found that within-observer variation was acceptable but was dependent on the species being scored. The precision of fat scoring was species-specific and for most species, fat scores accounted for less than 50% of the variation in true total body fat. Overall, we would describe fat scoring as a fairly precise method of indexing total body fat but with limited reliability among observers.

  4. LC/MS/MS quantitation assay for pharmacokinetics of naringenin and double peaks phenomenon in rats plasma.

    PubMed

    Ma, Yan; Li, Peibo; Chen, Dawei; Fang, Tiezheng; Li, Haitian; Su, Weiwei

    2006-01-13

    A highly sensitive and specific electrospray ionization (ESI) liquid chromatography-tandem mass spectrometry (LC/MS/MS) method for the quantitation of naringenin (NAR), together with an explanation for the double-peak phenomenon, was developed and validated. NAR was extracted from rat plasma and tissues, along with the internal standard (IS) hesperidin, with ethyl acetate. The analytes were analyzed in multiple-reaction-monitoring (MRM) mode using the precursor/product ion pairs m/z 273.4/151.3 for NAR and m/z 611.5/303.3 for the IS. The assay was linear over the concentration range of 5-2500 ng/mL. The lower limit of quantification was 5 ng/mL, sufficient for plasma pharmacokinetics of NAR in rats. Within- and between-run accuracy and precision showed good reproducibility. When NAR was administered orally, only a small amount, predominantly as its glucuronide, entered the circulation in plasma. A double-peak phenomenon in the plasma concentration-time curve led to relatively slow elimination of NAR from plasma. The results showed a linear relationship between the AUC of total NAR and dosage, and the double peaks are mainly due to enterohepatic circulation.

  5. Fluoroscopic Placement of Double-Pigtail Ureteral Stents

    PubMed Central

    Chen, Gregory L.

    2001-01-01

    Purpose: Double-pigtail ureteral stents are usually placed cystoscopically after ureteroscopy. We describe a technique for fluoroscopic placement of ureteral stents and demonstrate its use in a non-randomized prospective study. Materials and methods: Double-pigtail stents were placed either fluoroscopically or cystoscopically in 121 consecutive patients. In the fluoroscopic method, the stent was placed over a guide wire using a stent pusher without cystoscopy. In the cystoscopic method, stents were placed through the working channel of the cystoscope under direct vision. The procedure, stent length, width, type, placement method, ureteral dilation, and use of a retrieval string were noted. Results: A wide range of stent sizes was used. The success rate of fluoroscopic placement of double-pigtail ureteral stents was 100% (89 of 89 cases). No stents migrated or required replacement. Stents were placed after ureteroscopic laser lithotripsy (53/89) and ureteroscopic tumor treatment (22/89). Cystoscopic visualization was used in 32 additional procedures requiring precise control (15 ureteral strictures and nine retrograde endopyelotomies). Conclusions: Fluoroscopic placement of ureteral stents is a safe and simple technique with a very high success rate. We have used cystoscopic placement only after incisional procedures such as retrograde endopyelotomy, stricture incision, or ureterotomy. PMID:18493562

  6. Wenchuan Event Detection And Localization Using Waveform Correlation Coupled With Double Difference

    NASA Astrophysics Data System (ADS)

    Slinkard, M.; Heck, S.; Schaff, D. P.; Young, C. J.; Richards, P. G.

    2014-12-01

    The well-studied Wenchuan aftershock sequence, triggered by the May 12, 2008, Ms 8.0 mainshock, offers an ideal test case for evaluating the effectiveness of waveform correlation coupled with double-difference relocation for detecting and locating events in a large aftershock sequence. We use Sandia's SeisCorr detector to process 3 months of data recorded by permanent IRIS and temporary ASCENT stations, using templates from events listed in a global catalog to find similar events in the raw data stream. We then relocate the detections using the double-difference method. We explore both the performance that can be expected with just a small number of stations and the benefits of reprocessing a well-studied sequence such as this one with waveform correlation to find even more events. We benchmark our results against previously published relocations of regional catalog data. Before starting this project, we had examples in which, with just a few stations at far-regional distances, waveform correlation combined with double difference did an impressive job of detecting and locating events with precision at the few-hundred-meter and even tens-of-meters level.
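
    The core of a correlation detector such as SeisCorr can be sketched as a sliding normalized cross-correlation of a template event against a continuous stream. This is a generic illustration on synthetic data, not Sandia's implementation; the template shape, noise level, and threshold are assumptions:

```python
import numpy as np

def correlation_detect(stream, template, threshold=0.8):
    """Slide a template event waveform along a continuous data stream and
    return (offset, cc) pairs where the normalized correlation coefficient
    exceeds the detection threshold."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(stream) - n + 1):
        w = stream[i:i + n]
        s = w.std()
        if s == 0.0:
            continue                      # flat window: cc undefined
        cc = float(np.dot((w - w.mean()) / s, t)) / n
        if cc >= threshold:
            hits.append((i, cc))
    return hits

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)
stream = 0.05 * rng.normal(size=2000)
stream[700:900] += 0.6 * template         # bury a rescaled copy of the event
best = max(correlation_detect(stream, template), key=lambda h: h[1])
print(best)                               # offset near 700, cc near 1
```

In practice a detector of this kind is run per station and per channel, and detections are associated across stations before double-difference relocation.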

  7. Track chambers based on precision drift tubes housed inside 30 mm mylar pipe

    NASA Astrophysics Data System (ADS)

    Borisov, A.; Bozhko, N.; Fakhrutdinov, R.; Kozhin, A.; Leontiev, B.; Levin, A.

    2014-06-01

    We describe drift chambers consisting of 3 layers of 30 mm (OD) drift tubes made of double-sided aluminized mylar film with a thickness of 0.125 mm. A single drift tube is a self-supporting structure that withstands the 350 g tension of a 50 micron sense wire located at the tube center with 10 micron precision with respect to the end-plug outer surface. Such tubes make it possible to build drift chambers with a small amount of material; the construction of such chambers does not require rigid frames. Twenty-six chambers with working areas from 0.8 × 1.0 to 2.5 × 2.0 m2, comprising 4440 tubes, have been manufactured for experiments at the 70-GeV proton accelerator at IHEP (Protvino).

  8. Integrating DNA strand-displacement circuitry with DNA tile self-assembly

    PubMed Central

    Zhang, David Yu; Hariadi, Rizal F.; Choi, Harry M.T.; Winfree, Erik

    2013-01-01

    DNA nanotechnology has emerged as a reliable and programmable way of controlling matter at the nanoscale through the specificity of Watson–Crick base pairing, allowing both complex self-assembled structures with nanometer precision and complex reaction networks implementing digital and analog behaviors. Here we show how two well-developed frameworks, DNA tile self-assembly and DNA strand-displacement circuits, can be systematically integrated to provide programmable kinetic control of self-assembly. We demonstrate the triggered and catalytic isothermal self-assembly of DNA nanotubes over 10 μm long from precursor DNA double-crossover tiles activated by an upstream DNA catalyst network. Integrating more sophisticated control circuits and tile systems could enable precise spatial and temporal organization of dynamic molecular structures. PMID:23756381

  9. Leveraging prognostic baseline variables to gain precision in randomized trials

    PubMed Central

    Colantuoni, Elizabeth; Rosenblum, Michael

    2015-01-01

    We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
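
    The precision gain from covariate adjustment can be illustrated with a small simulation on synthetic data. This is a sketch of the general ANCOVA idea rather than the resampling-based comparison described above; the sample size, effect size, and baseline-outcome correlation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=200, rho=0.7, effect=1.0):
    """Simulate one randomized trial with a prognostic baseline variable."""
    x = rng.normal(size=n)                         # baseline covariate
    t = rng.integers(0, 2, size=n)                 # randomized arm
    y = effect * t + rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
    unadjusted = y[t == 1].mean() - y[t == 0].mean()
    # ANCOVA: regress outcome on treatment and the centered baseline
    X = np.column_stack([np.ones(n), t, x - x.mean()])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return unadjusted, beta[1]

estimates = np.array([one_trial() for _ in range(2000)])
var_unadjusted, var_ancova = estimates.var(axis=0)
print(var_ancova < var_unadjusted)  # baseline adjustment shrinks variance
```

With a baseline-outcome correlation of 0.7, the adjusted estimator's variance is roughly half that of the unadjusted difference in means, which is the kind of precision gain the abstract refers to.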

  10. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
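
    A two-point probability model of the kind referred to above can be sketched as follows. The toy head-response function and the input statistics below are illustrative assumptions, not the authors' two-layer groundwater model; only the evaluation pattern (model runs at mean plus or minus one standard deviation of each uncertain input) reflects the general two-point method:

```python
import numpy as np
from itertools import product

def two_point_estimate(model, means, sigmas):
    """Rosenblueth-style two-point estimate: evaluate the deterministic
    model at mean +/- one standard deviation of every uncertain input
    and weight the 2**n outcomes equally to approximate the output
    mean and standard deviation."""
    n = len(means)
    outs = np.array([
        model([m + sgn * s for m, s, sgn in zip(means, sigmas, signs)])
        for signs in product((-1.0, 1.0), repeat=n)
    ])
    return outs.mean(), outs.std()

# three uncertain inputs, echoing the study's reduced set:
# hydraulic conductivity K, storage coefficient S, source-sink term Q
head = lambda p: p[0] * p[1] + p[2]   # toy head response, not the authors' model
mean_h, std_h = two_point_estimate(head, means=[2.0, 3.0, 1.0],
                                   sigmas=[0.2, 0.3, 0.1])
print(round(mean_h, 3))  # -> 7.0
```

The appeal of the method is cost: with the three lumped variables of the study, only 2^3 = 8 deterministic model runs are needed per output statistic.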

  11. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  12. Variability Analysis: Detection and Classification

    NASA Astrophysics Data System (ADS)

    Eyer, L.

    2005-01-01

    The Gaia mission will offer an exceptional opportunity to perform variability studies. The data homogeneity, its optimised photometric systems, composed of 11 medium and 4-5 broad bands, the high photometric precision in G band of one milli-mag for V = 13-15, the radial velocity measurements and the exquisite astrometric precision for one billion stars will permit a detailed description of variable objects like stars, quasars and asteroids. However the time sampling and the total number of measurements change from one object to another because of the satellite scanning law. The data analysis is a challenge because of the huge amount of data, the complexity of the observed objects and the peculiarities of the satellite, and needs thorough preparation. Experience can be gained by the study of past and present survey analyses and results, and Gaia should be put in perspective with the future large scale surveys, like PanSTARRS or LSST. We present the activities of the Variable Star Working Group and a general plan to digest this unprecedented data set, focusing here on the photometry.

  13. The cataclysmic variable AE Aquarii: orbital variability in V band

    NASA Astrophysics Data System (ADS)

    Zamanov, R.; Latev, G.

    2017-07-01

    We present 62.7 hours of observations of the cataclysmic variable AE Aqr in the Johnson V band. These are unpublished archival electro-photometric data obtained between 1993 and 1999. We construct the orbital variability in V band and obtain a Fourier fit to the double-wave quiescent light curve. The strongest flares in our data set fall in the phase interval 0.6 - 0.8. The data can be downloaded from http://www.astro.bas.bg/~rz/DATA/AEAqr.elphot.dat.

  14. VLBI observations at 2.3 GHz of the compact galaxy 1934-638

    NASA Technical Reports Server (NTRS)

    Tzioumis, Anastasios K.; Jauncey, David L.; Preston, Robert A.; Meier, David L.; Morabito, David D.; Skjerve, Lyle; Slade, Martin A.; Nicolson, George D.; Niell, Arthur E.; Wehrle, Ann E.

    1989-01-01

    VLBI observations of the strong radio source 1934-638 show it to be a binary with a component separation of 42.0 ± 0.2 mas, a position angle of 90.5 ± 1 deg, and component sizes of about 2.5 mas. The results imply the presence of an additional elongated component aligned with, and between, the compact double components. The source's almost equal compact double structure, peaked spectrum, low variability, small polarization, and particle-dominated radio lobes suggest that it belongs to the class of symmetric compact double sources identified by Phillips and Mutel (1980, 1981, 1982).

  15. Mapping forest inventory and analysis data attributes within the framework of double sampling for stratification design

    Treesearch

    David C. Chojnacky; Randolph H. Wynne; Christine E. Blinn

    2009-01-01

    Methodology is lacking to easily map Forest Inventory and Analysis (FIA) inventory statistics for all attribute variables without having to develop separate models and methods for each variable. We developed a mapping method that can directly transfer tabular data to a map on which pixels can be added any way desired to estimate carbon (or any other variable) for a...

  16. Sex modifies the relationship between age and gait: a population-based study of older adults.

    PubMed

    Callisaya, Michele L; Blizzard, Leigh; Schmidt, Michael D; McGinley, Jennifer L; Srikanth, Velandai K

    2008-02-01

    Adequate mobility is essential to maintain an independent and active lifestyle. The aim of this cross-sectional study is to examine the associations of age with temporal and spatial gait variables in a population-based sample of older people, and whether these associations are modified by sex. Men and women aged 60-86 years were randomly selected from the Southern Tasmanian electoral roll (n = 223). Gait speed, step length, cadence, step width, and double-support phase were recorded with a GAITRite walkway. Regression analysis was used to model the relationship between age, sex, and gait variables. For men, after adjusting for height and weight, age was linearly associated with all gait variables (p <.05) except cadence (p =.11). For women, all variables demonstrated a curvilinear association, with age-related change in these variables commencing during the 7th decade. Significant interactions were found between age and sex for speed (p =.04), cadence (p =.01), and double-support phase (p =.03). Associations were observed between age and a broad range of temporal and spatial gait variables in this study. These associations differed by sex, suggesting that the aging process may affect gait in men and women differently. These results provide a basis for further research into sex differences and mechanisms underlying gait changes with advancing age.

  17. Resolution and Orbit Reconstruction of Spectroscopic Binary Stars with the Palomar Testbed Interferometer

    NASA Astrophysics Data System (ADS)

    Boden, A. F.; Lane, B. F.; Creech-Eakman, M. J.; Queloz, D.; Koresko, C. D.

    2000-05-01

    The Palomar Testbed Interferometer (PTI) is a long-baseline near-infrared interferometer located at Palomar Observatory. For the past several years we have had an ongoing program of resolving and reconstructing the visual and physical orbits of spectroscopic binary stars with PTI, with the goal of obtaining precise dynamical mass estimates and other physical parameters. We will present a number of new visual and physical orbit determinations derived from integrated reductions of PTI visibility data and archival and new spectroscopic radial velocity data. The systems for which we will discuss our orbit models are: iota Pegasi (HD 210027), 64 Psc (HD 4676), 12 Boo (HD 123999), 75 Cnc (HD 78418), 47 And (HD 8374), HD 205539, BY Draconis (HDE 234677), and 3 Boo (HD 120064). All of these systems are double-lined binary systems (SB2), and integrated astrometric/radial velocity orbit modeling provides precise fundamental parameters (mass, luminosity) and system distance determinations comparable with Hipparcos precisions.

  18. Anomalous double-mode RR Lyrae stars in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Soszyński, I.; Smolec, R.; Dziembowski, W. A.; Udalski, A.; Szymański, M. K.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Kozłowski, S.; Skowron, D.; Skowron, J.; Mróz, P.; Pawlak, M.

    2016-12-01

    We report the discovery of a new subclass of double-mode RR Lyrae stars in the Large and Small Magellanic Clouds. The sample of 22 pulsating stars has been extracted from the latest edition of the Optical Gravitational Lensing Experiment collection of RR Lyrae variables in the Magellanic System. These stars, pulsating simultaneously in the fundamental (F) and first-overtone (1O) modes, have distinctly different properties than regular double-mode RR Lyrae variables (RRd stars). The P1O/PF period ratios of our anomalous RRd stars are within a range of 0.725-0.738, while `classical' double-mode RR Lyrae variables have period ratios in the range of 0.742-0.748. In contrast to the typical RRd stars, in the majority of the anomalous pulsators the F-mode amplitudes are higher than the 1O-mode amplitudes. The light curves associated with the F-mode in the anomalous RRd stars show different morphology than the light curves of both regular RRd stars and single-mode RRab stars. Most of the anomalous double-mode stars show long-term modulations of the amplitudes (Blazhko-like effect). Translating the period ratios into the abundance parameter, Z, we find for our stars Z ∈ (0.002, 0.005), values an order of magnitude higher than typical for RR Lyrae stars. The mass range of the RRd stars inferred from the WI versus PF diagram is (0.55-0.75) M⊙. These parameters cannot be accounted for with single-star evolution assuming a Reimers-like mass loss. A much greater mass loss, caused by interaction with other stars, is postulated. We attribute the peculiar pulsation properties of our stars to the parametric resonance instability of the 1O mode to excitation of the F and 2O modes, since with the inferred parameters of the stars 2ω1O ≈ ωF + ω2O.

  19. Nano- and micro-structuring of fused silica using time-delay adjustable double flash ns-laser radiation

    NASA Astrophysics Data System (ADS)

    Lorenz, Pierre; Zhao, Xiongtao; Ehrhardt, Martin; Zagoranskiy, Igor; Zimmer, Klaus; Han, Bing

    2018-02-01

    Large-area, high-speed nanopatterning of surfaces by laser ablation is challenging due to the high accuracy required of the optical and mechanical systems to achieve nanoscale precision. Self-organization approaches can provide an alternative that decouples spot precision from the size of the machined field. Laser-induced front side etching (LIFE) and laser-induced back side dry etching (LIBDE) of fused silica were studied using single and double flash nanosecond laser pulses with a wavelength of 532 nm, where the time delay Δτ between the double flash laser pulses was adjusted from 50 ns to 10 μs. The fused silica can be etched by both processes assisted by a 10 nm chromium layer. For single flash laser pulses the etching depth Δz is linear in the laser fluence from 2 to 12 J/cm2 and independent of the number of laser pulses: Δz = δ_LIFE/LIBDE · Φ, with δ_LIFE ≈ 16 nm/(J/cm2) and δ_LIBDE ≈ 5.2 nm/(J/cm2) ≈ δ_LIFE/3. For double flash laser pulses, Δz depends on the time delay Δτ and increases slightly with decreasing Δτ. Furthermore, surface nanostructuring of fused silica using the IPSM-LIFE method (LIFE using an in-situ pre-structured metal layer) with a single double flash laser pulse was tested. The first pulse of the double flash melts the metal layer; the surface tension of the liquid metal then drives droplet formation and dewetting. If the liquid-phase lifetime Δt_LF is shorter than the droplet formation time, the metal can be "frozen" in an intermediate state such as bare metal structures. The second laser pulse evaporates the metal and partially evaporates and melts the fused silica surface, so that the resulting structures in the fused silica surface depend on the lateral geometry of the pre-structured metal layer. Successful IPSM-LIFE structuring was achieved with a 20 nm molybdenum layer at Δτ >= 174 ns. This paves the way for high-speed nanostructuring of dielectric surfaces by self-organizing processes. The resulting surface structures were analyzed by scanning electron microscopy (SEM) and white light interferometry (WLI).

  20. Precision Medicine: The New Frontier in Idiopathic Pulmonary Fibrosis

    PubMed Central

    Brownell, Robert; Kaminski, Naftali; Woodruff, Prescott G.; Bradford, Williamson Z.; Richeldi, Luca; Martinez, Fernando J.

    2016-01-01

    Precision medicine is defined by the National Institutes of Health's Precision Medicine Initiative Working Group as an approach to disease treatment that takes into account individual variability in genes, environment, and lifestyle. There has been increased interest in applying the concept of precision medicine to idiopathic pulmonary fibrosis, in particular to search for genetic and molecular biomarker-based profiles (so-called endotypes) that identify mechanistically distinct disease subgroups. The relevance of precision medicine to idiopathic pulmonary fibrosis is yet to be established, but we believe that it holds great promise to provide targeted and highly effective therapies to patients. In this manuscript, we describe the field's nascent efforts in genetic/molecular endotype identification and how environmental and behavioral subgroups may also be relevant to disease management. PMID:26991475

  1. Development of variable-rate precision spraying systems for tree crop production

    USDA-ARS?s Scientific Manuscript database

    Excessive pesticides are often applied to target and non-target areas in orchards and nurseries, resulting in greater production costs, worker exposure to unnecessary pesticide risks, and adverse contamination of the environment. To improve spray application efficiency, two types of variable-rate pr...

  2. An electronic flow control system for a variable-rate tree sprayer

    USDA-ARS?s Scientific Manuscript database

    Precise modulation of nozzle flow rates is a critical measure to achieve variable-rate spray applications. An electronic flow rate control system accommodating with microprocessors and pulse width modulation (PWM) controlled solenoid valves was designed to manipulate the output of spray nozzles inde...

  3. Development of digital flow control system for multi-channel variable-rate sprayers

    USDA-ARS?s Scientific Manuscript database

    Precision modulation of nozzle flow rates is a critical step for variable-rate spray applications in orchards and ornamental nurseries. An automatic flow rate control system activated with microprocessors and pulse width modulation (PWM) controlled solenoid valves was developed to control flow rates...

  4. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  5. High-Maneuverability Airframe: Initial Investigation of Configuration’s Aft End for Increased Stability, Range, and Maneuverability

    DTIC Science & Technology

    2013-09-01

    including the interaction effects between the fins and canards. 2. Solution Technique 2.1 Computational Aerodynamics The double-precision solver of a...and overset grids (unified-grid). • Total variation diminishing discretization based on a new multidimensional interpolation framework. • Riemann ... solvers to provide proper signal propagation physics including versions for preconditioned forms of the governing equations. • Consistent and

  6. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David G.; Dunham, Edward W.; Borucki, William J.; Jenkins, Jon M.

    2001-01-01

    Ground-based differential photometry is limited to a precision of order 10(exp -3) because of atmospheric effects. A space-based photometer should be limited only by the inherent instrument precision and shot noise. Laboratory tests have shown that a precision of order 10(exp -5) is achievable with commercially available charge-coupled devices (CCDs). We have proposed to take this one step further by performing measurements at a telescope using a Wollaston prism as a beam splitter. First-order atmospheric effects (e.g., extinction) will appear identical in the two images of each star formed by the prism and will be removed in the data analysis. This arrangement can determine the precision that is achievable under the influence of second-order atmospheric effects (e.g., variable point-spread function (PSF) from seeing). These telescopic observations will thus provide a lower limit to the precision that can be realized by a space-based differential photometer.
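
    The cancellation of first-order extinction in the two-image ratio can be sketched numerically. The star fluxes and the extinction curve below are synthetic assumed values, chosen only to show that a multiplicative factor common to both images divides out:

```python
import numpy as np

def differential_flux(img1_counts, img2_counts):
    """First-order atmospheric extinction multiplies both Wollaston-prism
    images of a star by the same factor, so their ratio cancels it."""
    return np.asarray(img1_counts, dtype=float) / np.asarray(img2_counts, dtype=float)

t = np.arange(5)
extinction = np.exp(-0.1 * t)        # common atmospheric attenuation factor
f1 = 1000.0 * extinction             # counts in prism image 1
f2 = 980.0 * extinction              # counts in prism image 2
ratio = differential_flux(f1, f2)
print(np.allclose(ratio, 1000.0 / 980.0))  # -> True: extinction cancels
```

Second-order effects such as a seeing-dependent PSF do not factor out this way, which is exactly what the proposed telescope test is designed to measure.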

  7. Diode-pumped DUV cw all-solid-state laser to replace argon ion lasers

    NASA Astrophysics Data System (ADS)

    Zanger, Ekhard; Liu, B.; Gries, Wolfgang

    2000-04-01

    The slim series DELTATRAIN(TM), the first integrated cw diode-pumped all-solid-state DUV laser at 266 nm with a compact, slim design, has been developed. The slim design minimizes the DUV DPSSL footprint and thus greatly facilitates the replacement of commonly used gas ion lasers, including those with intra-cavity frequency doubling, in numerous industrial and scientific applications. Such a replacement results in an operating cost reduction of several thousand US dollars per year per unit. Owing to its unique geometry-invariant frequency doubling cavity, based on the patent-pending LAS DeltaConcept architecture, this DUV laser provides excellent beam-pointing stability of <2 μrad/°C and power stability of <2%. The newest design of the cavity block adopts a cemented resonator with each component positioned precisely inside a compact monolithic metal block. An automatic and precise crystal shifter ensures a long operating lifetime of >5000 hours for the whole 266 nm laser. The microprocessor-controlled power supply provides automatic control of the whole 266 nm laser, making this DUV laser a hands-off system that can meet the tough requirements posed by numerous industrial and scientific applications. It is intended to replace the commonplace ion laser as the future DUV laser of choice.

  8. Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology

    PubMed Central

    Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.

    2012-01-01

    We present geometry based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON – a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built on a simple model of undistorted B-DNA double-helical domains. Simple point and click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733

  9. Statistical analysis of an RNA titration series evaluates microarray precision and sensitivity on a whole-array basis

    PubMed Central

    Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K

    2006-01-01

    Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209

  10. Analysis of the Precision of Variable Flip Angle T1 Mapping with Emphasis on the Noise Propagated from RF Transmit Field Maps.

    PubMed

    Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan

    2017-01-01

    In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for acquiring information applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured with the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field ([Formula: see text]) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a [Formula: see text] map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the [Formula: see text] map propagated noise into the T1 maps at levels comparable to either of the two SPGR images. Improving the precision of the [Formula: see text] measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments the error propagated from the [Formula: see text] map must be considered; optimizing the SPGR signals while neglecting to improve the precision of the [Formula: see text] map may grossly overestimate the precision of the estimated T1 values.
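
    A minimal sketch of two-point variable flip angle T1 estimation, including a transmit-field scale factor applied to the nominal flip angles, is shown below. The sequence parameters are illustrative, and this is the standard linearized VFA estimator on noiseless synthetic signals, not the authors' error-propagation analysis:

```python
import numpy as np

def spgr_signal(M0, T1, TR, alpha):
    """Steady-state spoiled gradient echo (SPGR) signal equation."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

def vfa_t1(S1, S2, a1, a2, TR, b1=1.0):
    """Two-point VFA T1 estimate; b1 rescales the nominal flip angles,
    which is where transmit-field (B1) map errors enter the estimate."""
    a1, a2 = b1 * a1, b1 * a2
    # linearization: S/sin(a) = E1 * S/tan(a) + M0 * (1 - E1)
    y1, y2 = S1 / np.sin(a1), S2 / np.sin(a2)
    x1, x2 = S1 / np.tan(a1), S2 / np.tan(a2)
    E1 = (y2 - y1) / (x2 - x1)            # slope of the linearized relation
    return -TR / np.log(E1)

TR, T1_true = 0.015, 1.2                  # seconds (illustrative values)
a1, a2 = np.deg2rad(3), np.deg2rad(18)    # two nominal flip angles
S1 = spgr_signal(1.0, T1_true, TR, a1)
S2 = spgr_signal(1.0, T1_true, TR, a2)
print(round(vfa_t1(S1, S2, a1, a2, TR), 3))  # -> 1.2 (recovers T1 exactly)
```

Passing a wrong b1 value to `vfa_t1` (e.g. 0.9 instead of 1.0) biases the recovered T1, which illustrates why the noise in the transmit-field map propagates into the T1 estimate.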

  11. Precision enhancement of pavement roughness localization with connected vehicles

    NASA Astrophysics Data System (ADS)

    Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.

    2016-02-01

    Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
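    The distance-tagging step can be sketched as follows. The sample times, speeds, and reference position are invented for illustration, and the integration is a plain trapezoid rule rather than whatever filtering a production system would apply; once a reference inertial marker (e.g. a known speed bump) fixes one sample's position, the speed signal assigns path distances to all other samples.

```python
# Hedged sketch of position tagging relative to a reference inertial marker.

def tag_positions(times, speeds, ref_index, ref_position):
    """Path distance of every sample, anchored at a reference sample."""
    # Cumulative distance from the first sample via the trapezoid rule.
    cum = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        cum.append(cum[-1] + 0.5 * (speeds[i] + speeds[i - 1]) * dt)
    offset = ref_position - cum[ref_index]
    return [c + offset for c in cum]

times = [0.0, 0.1, 0.2, 0.3, 0.4]        # s, 10 Hz samples (illustrative)
speeds = [20.0, 20.0, 21.0, 21.0, 20.0]  # m/s from the vehicle speed sensor
pos = tag_positions(times, speeds, ref_index=2, ref_position=1500.0)
print([round(p, 2) for p in pos])        # reference sample lands at 1500.0 m
```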

  12. A 256-channel, high throughput and precision time-to-digital converter with a decomposition encoding scheme in a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Song, Z.; Wang, Y.; Kuang, J.

    2018-05-01

    Field Programmable Gate Arrays (FPGAs) made with 28 nm and more advanced process technology have great potential for the implementation of high-precision time-to-digital converters (TDCs), because the delay cells in the tapped delay line (TDL) used for time interpolation are becoming smaller and smaller. However, the bubble problems in the TDL status are becoming more complicated, making it difficult to achieve high time precision with TDCs on these chips. In this paper, we propose a novel decomposition encoding scheme, which not only solves the bubble problem easily but also has a high encoding efficiency. With this scheme, the potential of these chips for TDC implementation can be fully realized. In a Xilinx Kintex-7 FPGA chip, we implemented a TDC system with 256 TDC channels, doubling the number of TDC channels that our previous technique could achieve. The performance of all these TDC channels was evaluated. The average RMS time precision among them is 10.23 ps over the time-interval measurement range of 0–10 ns, and their measurement throughput reaches 277 million measurements per second.
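
    The paper's decomposition scheme itself is not reproduced here; as a hedged illustration of the bubble problem it addresses, this sketch contrasts a naive leading-one encoder, which misreads a bubbled thermometer code sampled from the delay line, with a simple ones-counting encoder that tolerates isolated bubbles.

```python
# Toy thermometer-code encoders for a tapped-delay-line TDC sample.

def encode_leading_one(taps):
    """Position of the first 0 after the initial 1s; confused by bubbles."""
    for i, t in enumerate(taps):
        if t == 0:
            return i
    return len(taps)

def encode_ones_count(taps):
    """Population count of the code; robust to isolated bubbles."""
    return sum(taps)

clean   = [1, 1, 1, 1, 1, 0, 0, 0]   # ideal thermometer code, 5 taps hit
bubbled = [1, 1, 0, 1, 1, 1, 0, 0]   # same event with a bubble at tap 2

print(encode_leading_one(clean), encode_leading_one(bubbled))  # 5 vs 2
print(encode_ones_count(clean), encode_ones_count(bubbled))    # 5 vs 5
```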

  13. Easy-DHPSF open-source software for three-dimensional localization of single molecules with precision beyond the optical diffraction limit.

    PubMed

    Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E

    2013-02-25

    Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.

  14. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  15. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  16. Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique.

    PubMed

    Mook, William R; Greenspoon, Joshua A; Millett, Peter J

    2016-01-01

    Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. The senior author's preferred technique for rotator cuff repair was reviewed and described in a step-by-step fashion. The final construct is a knotless double-row transosseous-equivalent construct. The described technique combines the advantages of a double-row construct with self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Arthroscopic knotless double-row rotator cuff repair is a safe and effective method of repairing rotator cuff tears.

  17. Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique

    PubMed Central

    Mook, William R.; Greenspoon, Joshua A.; Millett, Peter J.

    2016-01-01

    Background: Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. Methods: The senior author's preferred technique for rotator cuff repair was reviewed and described in a step-by-step fashion. The final construct is a knotless double-row transosseous-equivalent construct. Results: The described technique combines the advantages of a double-row construct with self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Conclusion: Arthroscopic knotless double-row rotator cuff repair is a safe and effective method of repairing rotator cuff tears. PMID:27733881

  18. Ozone column density determination from direct irradiance measurements in the ultraviolet performed by a four-channel precision filter radiometer.

    PubMed

    Ingold, T; Mätzler, C; Wehrli, C; Heimo, A; Kämpfer, N; Philipona, R

    2001-04-20

    Ultraviolet light was measured at four channels (305, 311, 318, and 332 nm) with a precision filter radiometer (UV-PFR) at Arosa, Switzerland (46.78 degrees, 9.68 degrees, 1850 m above sea level), within the instrument trial phase of a cooperative venture of the Swiss Meteorological Institute (MeteoSwiss) and the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center. We retrieved ozone-column density data from these direct relative irradiance measurements by adapting the Dobson standard method for all possible single-difference wavelength pairs and one double-difference pair (305/311 and 305/318) under conditions of cloud-free sky and of thin clouds (cloud optical depth <2.5 at 500 nm). All UV-PFR retrievals exhibited excellent agreement with those of collocated Dobson and Brewer spectrophotometers for data obtained during two months in 1999. Combining the results of the error analysis and the findings of the validation, we propose to retrieve ozone-column density by using the 305/311 single-difference pair and the double-difference pair. Furthermore, combining both retrievals by building the ratio of ozone-column density yields information that is relevant to data quality control. Estimates of the 305/311 pair agree with measurements by the Dobson and Brewer instruments within 1% for both the mean and the standard deviation of the differences. For the double pair, these values range up to 1.6%. However, this pair is less sensitive to model errors. The retrieval performance is also consistent with satellite-based data from the Earth Probe Total Ozone Mapping Spectrometer (EP-TOMS) and the Global Ozone Monitoring Experiment instrument (GOME).
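
    The retrieval logic can be illustrated with a toy calculation. The differential absorption coefficients, airmass factor, and aerosol term below are invented (not the UV-PFR calibration): a single wavelength pair is biased by extra attenuation it cannot separate from ozone, while differencing two pairs cancels any contribution that is common to both.

```python
# Toy Dobson-style ozone retrieval; all coefficients are illustrative.
mu = 1.5                                      # relative slant-path (airmass) factor
dalpha = {"305/311": 1.20, "305/318": 1.80}   # differential ozone absorption, per atm-cm

def synth_N(pair, X, aerosol=0.0):
    """Synthetic pair signal N = log10(I0/I): ozone term plus an aerosol term."""
    return dalpha[pair] * X * mu + aerosol

def single_pair(N, pair):
    """Ozone column from one pair; silently absorbs any aerosol bias."""
    return N / (dalpha[pair] * mu)

def double_pair(N1, N2):
    """Double difference: a contribution common to both pairs cancels."""
    return (N2 - N1) / ((dalpha["305/318"] - dalpha["305/311"]) * mu)

X_true = 0.320                                # 320 DU expressed in atm-cm
N1 = synth_N("305/311", X_true, aerosol=0.05)
N2 = synth_N("305/318", X_true, aerosol=0.05)
print(round(single_pair(N1, "305/311"), 4))   # biased high by the aerosol term
print(round(double_pair(N1, N2), 4))          # recovers X_true
```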

  19. Ozone Column Density Determination From Direct Irradiance Measurements in the Ultraviolet Performed by a Four-Channel Precision Filter Radiometer

    NASA Astrophysics Data System (ADS)

    Ingold, Thomas; Mätzler, Christian; Wehrli, Christoph; Heimo, Alain; Kämpfer, Niklaus; Philipona, Rolf

    2001-04-01

    Ultraviolet light was measured at four channels (305, 311, 318, and 332 nm) with a precision filter radiometer (UV-PFR) at Arosa, Switzerland (46.78 , 9.68 , 1850 m above sea level), within the instrument trial phase of a cooperative venture of the Swiss Meteorological Institute (MeteoSwiss) and the Physikalisch-Meteorologisches Observatorium Davos /World Radiation Center. We retrieved ozone-column density data from these direct relative irradiance measurements by adapting the Dobson standard method for all possible single-difference wavelength pairs and one double-difference pair (305 /311 and 305 /318) under conditions of cloud-free sky and of thin clouds (cloud optical depth <2.5 at 500 nm). All UV-PFR retrievals exhibited excellent agreement with those of collocated Dobson and Brewer spectrophotometers for data obtained during two months in 1999. Combining the results of the error analysis and the findings of the validation, we propose to retrieve ozone-column density by using the 305 /311 single difference pair and the double-difference pair. Furthermore, combining both retrievals by building the ratio of ozone-column density yields information that is relevant to data quality control. Estimates of the 305 /311 pair agree with measurements by the Dobson and Brewer instruments within 1% for both the mean and the standard deviation of the differences. For the double pair these values are in a range up to 1.6%. However, this pair is less sensitive to model errors. The retrieval performance is also consistent with satellite-based data from the Earth Probe Total Ozone Mapping Spectrometer (EP-TOMS) and the Global Ozone Monitoring Experiment instrument (GOME).

  20. Precise Interval Timer for Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Pozhidaev, Aleksey (Inventor)

    2014-01-01

    A precise digital fractional interval timer for software-defined radios that vary their waveform on a packet-by-packet basis. The timer allows for a variable-length preamble in the RF packet and allows the boundaries of the TDMA (Time Division Multiple Access) slots of an SDR receiver to be adjusted based on the reception of the RF packet of interest.

  1. Analysis on the precision of the dimensions of self-ligating brackets.

    PubMed

    Erduran, Rackel Hatice Milhomens Gualberto; Maeda, Fernando Akio; Ortiz, Sandra Regina Mota; Triviño, Tarcila; Fuziy, Acácio; Carvalho, Paulo Eduardo Guedes

    2016-12-01

    The present study aimed to evaluate the precision of the torque applied by 0.022" self-ligating brackets of different brands, the precision of parallelism between the inner walls of their slots, and precision of their slot height. Eighty brackets for upper central incisors of eight trademarked models were selected: Abzil, GAC, American Orthodontics, Morelli, Orthometric, Ormco, Forestadent, and Ortho Organizers. Images of the brackets were obtained using a scanning electron microscope (SEM) and these were measured using the AutoCAD 2011 software. The tolerance parameters stated in the ISO 27020 standard were used as references. The results showed that only the Orthometric, Morelli, and Ormco groups showed results inconsistent with the ISO standard. Regarding the parallelism of the internal walls of the slots, most of the models studied had results in line with the ISO prescription, except the Morelli group. In assessing bracket slot height, only the Forestadent, GAC, American Orthodontics, and Ormco groups presented results in accordance with the ISO standard. The GAC, Forestadent, and American Orthodontics groups did not differ in relation to the three factors of the ISO 27020 standard. Great variability of results is observed in relation to all the variables. © 2016 Wiley Periodicals, Inc.

  2. The influence of chronotype on making music: circadian fluctuations in pianists' fine motor skills

    PubMed Central

    Van Vugt, Floris T.; Treutler, Katharina; Altenmüller, Eckart; Jabusch, Hans-Christian

    2013-01-01

    Making music on a professional level requires a maximum of sensorimotor precision. Chronotype-dependent fluctuations of sensorimotor precision in the course of the day may prove a challenge for musicians because public performances or recordings are usually scheduled at fixed times of the day. We investigated pianists' sensorimotor timing precision in a scale playing task performed in the morning and in the evening. Participants' chronotype was established through the Munich Chrono-Type Questionnaire, where mid-sleep time served as a marker for the individual chronotypes. Twenty-one piano students were included in the study. Timing precision was decomposed into consistent within-trial variability (irregularity) and residual, between-trial variability (instability). The timing patterns of late chronotype pianists were more stable in the evening than in the morning, whereas early chronotype pianists did not show a difference between the two recording timepoints. In sum, the present results indicate that even highly complex sensorimotor tasks such as music playing are affected by interactions between chronotype and the time of day. Thus, even long-term, massed practice of these expert musicians has not been able to wash out circadian fluctuations in performance. PMID:23847515

  3. Deficits in Coordinative Bimanual Timing Precision in Children With Specific Language Impairment

    PubMed Central

    Goffman, Lisa; Zelaznik, Howard N.

    2017-01-01

    Purpose Our objective was to delineate components of motor performance in specific language impairment (SLI); specifically, whether deficits in timing precision in one effector (unimanual tapping) and in two effectors (bimanual clapping) are observed in young children with SLI. Method Twenty-seven 4- to 5-year-old children with SLI and 21 age-matched peers with typical language development participated. All children engaged in a unimanual tapping and a bimanual clapping timing task. Standard measures of language and motor performance were also obtained. Results No group differences in timing variability were observed in the unimanual tapping task. However, compared with typically developing peers, children with SLI were more variable in their timing precision in the bimanual clapping task. Nine of the children with SLI performed greater than 1 SD below the mean on a standardized motor assessment. The children with low motor performance showed the same profile as observed across all children with SLI, with unaffected unimanual and impaired bimanual timing precision. Conclusions Although unimanual timing is unaffected, children with SLI show a deficit in timing that requires bimanual coordination. We propose that the timing deficits observed in children with SLI are associated with the increased demands inherent in bimanual performance. PMID:28174821

  4. The influence of chronotype on making music: circadian fluctuations in pianists' fine motor skills.

    PubMed

    Van Vugt, Floris T; Treutler, Katharina; Altenmüller, Eckart; Jabusch, Hans-Christian

    2013-01-01

    Making music on a professional level requires a maximum of sensorimotor precision. Chronotype-dependent fluctuations of sensorimotor precision in the course of the day may prove a challenge for musicians because public performances or recordings are usually scheduled at fixed times of the day. We investigated pianists' sensorimotor timing precision in a scale playing task performed in the morning and in the evening. Participants' chronotype was established through the Munich Chrono-Type Questionnaire, where mid-sleep time served as a marker for the individual chronotypes. Twenty-one piano students were included in the study. Timing precision was decomposed into consistent within-trial variability (irregularity) and residual, between-trial variability (instability). The timing patterns of late chronotype pianists were more stable in the evening than in the morning, whereas early chronotype pianists did not show a difference between the two recording timepoints. In sum, the present results indicate that even highly complex sensorimotor tasks such as music playing are affected by interactions between chronotype and the time of day. Thus, even long-term, massed practice of these expert musicians has not been able to wash out circadian fluctuations in performance.

  5. A new numerically stable implementation of the T-matrix method for electromagnetic scattering by spheroidal particles

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2013-07-01

    We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10^-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
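
    The loss-of-precision mechanism can be shown with a deliberately simple toy example, nothing EBCM-specific: when the part of an expression that cancels exactly is removed analytically, full precision returns. Here 1 - cos(x) loses all significant digits for small x, while the mathematically identical form 2*sin(x/2)**2 does not.

```python
import math

def naive(x):
    return 1.0 - math.cos(x)           # catastrophic cancellation for small x

def stable(x):
    return 2.0 * math.sin(x / 2) ** 2  # cancelling part removed analytically

x = 1e-8
exact = x * x / 2                      # leading Taylor term, accurate here
print(naive(x), stable(x), exact)      # naive collapses to 0.0
```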

  6. A 3.9 ps Time-Interval RMS Precision Time-to-Digital Converter Using a Dual-Sampling Method in an UltraScale FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2016-10-01

    Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGA. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that the TDC precision beyond the cell delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using the bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.

  7. [Medical imaging in tumor precision medicine: opportunities and challenges].

    PubMed

    Xu, Jingjing; Tan, Yanbin; Zhang, Minming

    2017-05-25

    Tumor precision medicine is an emerging approach to tumor diagnosis, treatment, and prevention that takes account of individual variability in environment, lifestyle, and genetic information. Tumor precision medicine builds on the medical imaging innovations developed during the past decades, including new hardware, new imaging agents, standardized protocols, image analysis, and multimodal image-fusion technology. The development of automated and reproducible analysis algorithms has also allowed large amounts of information to be extracted from image-based features. With the continuous development and mining of tumor clinical and imaging databases, radiogenomics, radiomics, and artificial intelligence have flourished. These new technological advances bring new opportunities and challenges to the application of imaging in tumor precision medicine.

  8. Multi-Particle Interferometry Based on Double Entangled States

    NASA Technical Reports Server (NTRS)

    Pittman, Todd B.; Shih, Y. H.; Strekalov, D. V.; Sergienko, A. V.; Rubin, M. H.

    1996-01-01

    A method for producing a 4-photon entangled state based on the use of two independent pair sources is discussed. Of particular interest is that each of the pair sources produces a two-photon state which is simultaneously entangled in both polarization and space-time variables. Performing certain measurements which exploit this double entanglement provides an opportunity for verifying the recent demonstration of nonlocality by Greenberger, Horne, and Zeilinger.

  9. Stress and Family Quality of Life in Parents of Children with Autism Spectrum Disorder: Parent Gender and the Double ABCX Model

    ERIC Educational Resources Information Center

    McStay, Rebecca L.; Trembath, David; Dissanayake, Cheryl

    2014-01-01

    Past research has supported the utility of the Double ABCX model of family adaptation for parents raising a child with autism spectrum disorder (ASD). What remains unclear is the impact of family-related variables on outcomes in both mothers and fathers within the same family. We explored the potential predictors of maternal and paternal stress…

  10. Double emulsion solvent evaporation techniques used for drug encapsulation.

    PubMed

    Iqbal, Muhammad; Zafar, Nadiah; Fessi, Hatem; Elaissari, Abdelhamid

    2015-12-30

    Double emulsions are complex systems, also called "emulsions of emulsions", in which the droplets of the dispersed phase contain one or more types of smaller dispersed droplets themselves. Double emulsions have the potential for encapsulation of both hydrophobic as well as hydrophilic drugs, cosmetics, foods and other high value products. Techniques based on double emulsions are commonly used for the encapsulation of hydrophilic molecules, which suffer from low encapsulation efficiency because of rapid drug partitioning into the external aqueous phase when using single emulsions. The main issue when using double emulsions is their production in a well-controlled manner, with homogeneous droplet size by optimizing different process variables. In this review special attention has been paid to the application of double emulsion techniques for the encapsulation of various hydrophilic and hydrophobic anticancer drugs, anti-inflammatory drugs, antibiotic drugs, proteins and amino acids and their applications in theranostics. Moreover, the optimized ratio of the different phases and other process parameters of double emulsions are discussed. Finally, the results published regarding various types of solvents, stabilizers and polymers used for the encapsulation of several active substances via double emulsion processes are reported. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Frequency Characteristics of the MAGLEV Double-layered Propulsion Coil

    NASA Astrophysics Data System (ADS)

    Ema, Satoshi

    The MAGLEV (magnetically levitated vehicle) is now well along in development testing at the Yamanashi Test Line. The MAGLEV power source must supply a variable voltage and variable frequency to the propulsion coils, which are installed on an outdoor guideway. The output voltage of the electric power converter contains many higher harmonics, which cause problems such as inductive interference. Accordingly, it is necessary to clarify the frequency characteristics of the propulsion coils and the power-feeding circuit. In view of this situation, the author had performed experiments and theoretical analysis of the frequency characteristics of the single-layer propulsion coils and the power-feeding circuit at the Miyazaki Test Line. However, the arrangement of the propulsion coils was changed at the Yamanashi Test Line from single-layered to double-layered coils for the stability of the on-board superconducting magnet. Experiments and investigations of the frequency characteristics (resonance characteristics) of the double-layered propulsion coils at the Yamanashi Test Line have been performed, but a sufficient theoretical analysis had not been done. A theoretical analysis is therefore presented in this paper, applying an inverted-L equivalent circuit with mutual inductance and capacitance to the propulsion coil, from which the positive- and zero-phase characteristics of the double-layered propulsion coils are analyzed.

  12. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
    The preconditioned conjugate gradient algorithm, implemented with iteration on data, a diagonal preconditioner, and double precision arithmetic, may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
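    The core iteration evaluated above can be illustrated with a minimal, generic sketch of the preconditioned conjugate gradient method with a diagonal (Jacobi) preconditioner. This is plain dense-matrix Python for illustration only, not the iteration-on-data implementation used for the national data sets:

```python
def pcg_diag(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner.

    A is a dense symmetric positive definite matrix (list of rows),
    b the right-hand side. Convergence is checked on the relative
    residual, mirroring the relative left/right-hand-side criterion
    described in the abstract.
    """
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    m_inv = [1.0 / A[i][i] for i in range(n)]   # diagonal preconditioner
    x = [0.0] * n
    r = b[:]                                    # residual of the zero start vector
    z = [mi * ri for mi, ri in zip(m_inv, r)]   # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    b_norm = dot(b, b) ** 0.5
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 / b_norm < tol:
            break
        z = [mi * ri for mi, ri in zip(m_inv, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

In the animal-breeding setting the matrix-vector product would be formed by iterating over the data records rather than storing A explicitly; the algebra is unchanged.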

  13. Extreme D'Hondt and round-off effects in voting computations

    NASA Astrophysics Data System (ADS)

    Konstantinov, M. M.; Pelova, G. B.

    2015-11-01

    The D'Hondt (or Jefferson) method and the Hare-Niemeyer (or Hamilton) method are widely used worldwide for seat allocation in proportional systems. Everything seems to be well known in this area; however, this is not the case. For example, the D'Hondt method can violate the quota rule from above, but this effect has not been analyzed as a function of the number of parties and/or the threshold used. Also, allocation methods are often implemented automatically as computer codes in machine arithmetic, in the belief that following the IEEE standards for double precision binary arithmetic guarantees correct results. Unfortunately, this may fail not only for double precision arithmetic (usually providing 15-16 true decimal digits) but for any relative precision of the underlying binary machine arithmetic. This paper deals with two new issues: finding conditions (the threshold in particular) under which D'Hondt seat allocation maximally violates the quota rule, and analyzing the possible influence of rounding errors in an automatic implementation of the Hare-Niemeyer method in machine arithmetic. Concerning the first issue, it is known that the maximal deviation of D'Hondt allocation from upper quota for the Bulgarian proportional system (240 MPs and a 4% barrier) is 5; this fact was established in 1991. A classical treatment of voting issues is the monograph [1], while electoral problems specific to Bulgaria have been treated in [2, 4]. The effect of the threshold on extreme seat allocations is also analyzed in [3]. Finally, we would like to stress that voting theory may sometimes be mathematically trivial but always has great political impact, which is a strong motivation for further investigations in this area.
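    Both allocation methods discussed above are easy to state as code. The sketch below is a straightforward Python rendering of the D'Hondt highest-averages rule and the Hare-Niemeyer largest-remainder rule; thresholds, tie-breaking conventions, and the exact-arithmetic issues the paper analyzes are deliberately ignored:

```python
def dhondt(votes, seats):
    """D'Hondt (Jefferson) highest-averages allocation: each seat goes
    to the party with the largest quotient votes/(seats_won + 1)."""
    alloc = [0] * len(votes)
    for _ in range(seats):
        i = max(range(len(votes)), key=lambda k: votes[k] / (alloc[k] + 1))
        alloc[i] += 1
    return alloc

def hare_niemeyer(votes, seats):
    """Hare-Niemeyer (Hamilton) largest-remainder allocation: floors of
    the exact quotas, then leftover seats by largest fractional part."""
    total = sum(votes)
    quotas = [v * seats / total for v in votes]
    alloc = [int(q) for q in quotas]
    rest = seats - sum(alloc)
    order = sorted(range(len(votes)), key=lambda k: quotas[k] - alloc[k], reverse=True)
    for k in order[:rest]:
        alloc[k] += 1
    return alloc
```

Note that `hare_niemeyer` already computes quotas in floating point, which is exactly the kind of machine-arithmetic implementation whose rounding behaviour the paper examines.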

  14. Properties of an eclipsing double white dwarf binary NLTT 11748

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, David L.; Walker, Arielle N.; Marsh, Thomas R.

    2014-01-10

    We present high-quality ULTRACAM photometry of the eclipsing detached double white dwarf binary NLTT 11748. This system consists of a carbon/oxygen white dwarf and an extremely low mass (<0.2 M {sub ☉}) helium-core white dwarf in a 5.6 hr orbit. To date, such extremely low-mass white dwarfs, which can have thin, stably burning outer layers, have been modeled via poorly constrained atmosphere and cooling calculations where uncertainties in the detailed structure can strongly influence the eventual fates of these systems when mass transfer begins. With precise (individual precision ≈1%), high-cadence (≈2 s), multicolor photometry of multiple primary and secondary eclipses spanning >1.5 yr, we constrain the masses and radii of both objects in the NLTT 11748 system to a statistical uncertainty of a few percent. However, we find that overall uncertainty in the thickness of the envelope of the secondary carbon/oxygen white dwarf leads to a larger (≈13%) systematic uncertainty in the primary He WD's mass. Over the full range of possible envelope thicknesses, we find that our primary mass (0.136-0.162 M {sub ☉}) and surface gravity (log (g) = 6.32-6.38; radii are 0.0423-0.0433 R {sub ☉}) constraints do not agree with previous spectroscopic determinations. We use precise eclipse timing to detect the Rømer delay at 7σ significance, providing an additional weak constraint on the masses and limiting the eccentricity to ecos ω = (– 4 ± 5) × 10{sup –5}. Finally, we use multicolor data to constrain the secondary's effective temperature (7600 ± 120 K) and cooling age (1.6-1.7 Gyr).

  15. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theory remains a popular research topic in attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods make use only of the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF in the attitude estimation problem. A novel robust double gain unscented Kalman filter (RDG-UKF) is therefore presented in this paper to satisfy these requirements for small satellite attitude estimation with low precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and reduce the influence of the model errors. Numerical simulations show that the proposed RDG-UKF is more effective and more robust in dealing with model errors and low precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).

  16. Development of a CRISPR/Cas9 genome editing toolbox for Corynebacterium glutamicum.

    PubMed

    Liu, Jiao; Wang, Yu; Lu, Yujiao; Zheng, Ping; Sun, Jibin; Ma, Yanhe

    2017-11-16

    Corynebacterium glutamicum is an important industrial workhorse and advanced genetic engineering tools are urgently demanded. Recently, the clustered regularly interspaced short palindromic repeats (CRISPR) and their CRISPR-associated proteins (Cas) have revolutionized the field of genome engineering. The CRISPR/Cas9 system that utilizes NGG as protospacer adjacent motif (PAM) and has good targeting specificity can be developed into a powerful tool for efficient and precise genome editing of C. glutamicum. Herein, we developed a versatile CRISPR/Cas9 genome editing toolbox for C. glutamicum. Cas9 and gRNA expression cassettes were reconstituted to combat Cas9 toxicity and facilitate effective termination of gRNA transcription. Co-transformation of Cas9 and gRNA expression plasmids was exploited to overcome high-frequency mutation of cas9, allowing not only highly efficient gene deletion and insertion with plasmid-borne editing templates (efficiencies up to 60.0 and 62.5%, respectively) but also simple and time-saving operation. Furthermore, CRISPR/Cas9-mediated ssDNA recombineering was developed to precisely introduce small modifications and single-nucleotide changes into the genome of C. glutamicum with efficiencies over 80.0%. Notably, double-locus editing was also achieved in C. glutamicum. This toolbox works well in several C. glutamicum strains including the widely-used strains ATCC 13032 and ATCC 13869. In this study, we developed a CRISPR/Cas9 toolbox that could facilitate markerless gene deletion, gene insertion, precise base editing, and double-locus editing in C. glutamicum. The CRISPR/Cas9 toolbox holds promise for accelerating the engineering of C. glutamicum and advancing its application in the production of biochemicals and biofuels.

  17. Simultaneous, accurate measurement of the 3D position and orientation of single molecules

    PubMed Central

    Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.

    2012-01-01

    Recently, single molecule-based superresolution fluorescence microscopy has surpassed the diffraction limit to improve resolution to the order of 20 nm or better. These methods typically use image fitting that assumes an isotropic emission pattern from the single emitters as well as control of the emitter concentration. However, anisotropic single-molecule emission patterns arise from the transition dipole when it is rotationally immobile, depending highly on the molecule’s 3D orientation and z position. Failure to account for this fact can lead to significant lateral (x, y) mislocalizations (up to ∼50–200 nm). This systematic error can cause distortions in the reconstructed images, which can translate into degraded resolution. Using parameters uniquely inherent in the double-lobed nature of the Double-Helix Point Spread Function, we account for such mislocalizations and simultaneously measure 3D molecular orientation and 3D position. Mislocalizations during an axial scan of a single molecule manifest themselves as an apparent lateral shift in its position, which causes the standard deviation (SD) of its lateral position to appear larger than the SD expected from photon shot noise. By correcting each localization based on an estimated orientation, we are able to improve SDs in lateral localization from ∼2× worse than photon-limited precision (48 vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation over different depths, we are able to improve from a lateral SD of 116 (∼4× worse than the photon-limited precision; 28 nm) to 34 nm (within 6 nm of the photon limit). PMID:23129640

  18. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
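    As an illustration of the quaternion arithmetic such a package provides (the Ada package itself uses infix operators rather than function calls), here is the Hamilton product in Python; the component order (w, x, y, z) is an assumption for this sketch:

```python
def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples.
    Composing two rotations corresponds to multiplying their quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)
```

The familiar basis identity i·j = k falls out directly, and the identity quaternion (1, 0, 0, 0) leaves any quaternion unchanged.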

  19. Double-bundle anterior cruciate ligament reconstruction is superior to single-bundle reconstruction in terms of revision frequency: a study of 22,460 patients from the Swedish National Knee Ligament Register.

    PubMed

    Svantesson, Eleonor; Sundemo, David; Hamrin Senorski, Eric; Alentorn-Geli, Eduard; Musahl, Volker; Fu, Freddie H; Desai, Neel; Stålman, Anders; Samuelsson, Kristian

    2017-12-01

    Studies comparing single- and double-bundle anterior cruciate ligament (ACL) reconstructions often include a combined analysis of anatomic and non-anatomic techniques. The purpose of this study was to compare the revision rates between single- and double-bundle ACL reconstructions in the Swedish National Knee Ligament Register with regard to surgical variables as determined by the anatomic ACL reconstruction scoring checklist (AARSC). Patients from the Swedish National Knee Ligament Register who underwent either single- or double-bundle ACL reconstruction with hamstring tendon autograft during the period 2007-2014 were included. The follow-up period started with primary ACL reconstruction, and the outcome measure was revision surgery. An online questionnaire based on the items of the AARSC was used to determine the surgical technique implemented in the single-bundle procedures. These were organized into subgroups based on surgical variables, and the revision rates were compared with double-bundle ACL reconstruction. Hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated and adjusted for confounders by Cox regression. A total of 22,460 patients were included in the study, of which 21,846 underwent single-bundle and 614 double-bundle ACL reconstruction. Double-bundle ACL reconstruction had a revision frequency of 2.0% (n = 12) and single-bundle 3.2% (n = 689). Single-bundle reconstruction had an increased risk of revision surgery compared with double-bundle [adjusted HR 1.98 (95% CI 1.12-3.51), p = 0.019]. The subgroup analysis showed a significantly increased risk of revision surgery in patients undergoing single-bundle reconstruction with an anatomic technique using transportal drilling [adjusted HR 2.51 (95% CI 1.39-4.54), p = 0.002] compared with double-bundle ACL reconstruction. 
Utilizing a more complete anatomic technique according to the AARSC lowered the hazard rate considerably when transportal drilling was performed, but still resulted in a significantly increased risk of revision surgery compared with double-bundle ACL reconstruction [adjusted HR 1.87 (95% CI 1.04-3.38), p = 0.037]. Double-bundle ACL reconstruction is associated with a lower risk of revision surgery than single-bundle ACL reconstruction. Single-bundle procedures performed with a transportal femoral drilling technique had a significantly higher risk of revision surgery compared with double-bundle reconstruction. However, a reference reconstruction with transportal drilling, defined as a more complete anatomic reconstruction, reduces the risk of revision surgery considerably. Level of evidence: III.

  20. Measurement of semiochemical release rates with a dedicated environmental control system

    Treesearch

    Heping Zhu; Harold W. Thistle; Christopher M. Ranger; Hongping Zhou; Brian L. Strom

    2015-01-01

    Insect semiochemical dispensers are commonly deployed under variable environmental conditions over a specified period. Predictions of their longevity are hampered by a lack of methods to accurately monitor and predict how primary variables affect semiochemical release rate. A system was constructed to precisely determine semiochemical release rates under...

  1. Confirmation of radial velocity variability in Arcturus

    NASA Technical Reports Server (NTRS)

    Cochran, William D.

    1988-01-01

    The paper presents results of high-precision measurements of radial-velocity variations in Alpha Boo. Significant radial-velocity variability is detected well in excess of the random and systematic measurement errors. The radial velocity varies by an amount greater than 200 m/sec with a period of around 2 days.

  2. Continuous-variable quantum probes for structured environments

    NASA Astrophysics Data System (ADS)

    Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.

    2018-01-01

    We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitably mild conditions. Finally, exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, which drive the probe towards the optimal working regime.

  3. Nonlinear Friction Compensation of Ball Screw Driven Stage Based on Variable Natural Length Spring Model and Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Asaumi, Hiroyoshi; Fujimoto, Hiroshi

    Ball screw driven stages are used in industrial equipment such as machine tools and semiconductor manufacturing equipment. Fast and precise positioning is necessary to enhance the productivity and microfabrication capability of such systems. The rolling friction of the ball screw driven stage deteriorates positioning performance, so a control system based on a friction model is necessary. In this paper, we propose the variable natural length spring (VNLS) model as the friction model. The VNLS model is simple and easy to implement as a friction controller. Next, we propose the multi variable natural length spring (MVNLS) model, which can represent the friction characteristic of the stage precisely. Moreover, a control system based on the MVNLS model and a disturbance observer is proposed. Finally, simulation and experimental results show the advantages of the proposed method.

  4. Shape optimization using a NURBS-based interface-enriched generalized FEM

    DOE PAGES

    Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...

    2016-11-26

    This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. Verification and illustrative problems are solved to demonstrate the precision and capability of the method.

  5. Control system and method for a power delivery system having a continuously variable ratio transmission

    DOEpatents

    Frank, A.A.

    1984-07-10

    A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enable the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine. 4 figs.

  6. Machine Protection System for the Stepper Motor Actuated SyLMAND Mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subramanian, V. R.; Dolton, W.; Wells, G.

    2010-06-23

    SyLMAND, the Synchrotron Laboratory for Micro and Nano Devices at the Canadian Light Source, consists of a dedicated X-ray lithography beamline on a bend magnet port, and process support laboratories in a clean room environment. The beamline includes a double mirror system with flat, chromium-coated silicon mirrors operated at varying grazing angles of incidence (4 mrad to 45 mrad) for spectral adjustment by high energy cut-off. Each mirror can be independently moved by two stepper motors to precisely control the pitch and vertical position. We present in this paper the machine protection system implemented in the double mirror system to allow for safe operation of the two mirrors and to avoid the consequences of potential stepper motor malfunction.

  7. Double sided grating fabrication for high energy X-ray phase contrast imaging

    DOE PAGES

    Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick; ...

    2018-04-19

    State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm, demonstrating a methodology that can be applied to any grating fabrication approach, extending the attainable aspect ratios and allowing higher energy lab based XPCI systems.

  8. Double elementary Goldstone Higgs boson production in future linear colliders

    NASA Astrophysics Data System (ADS)

    Guo, Yu-Chen; Yue, Chong-Xing; Liu, Zhi-Cheng

    2018-03-01

    The Elementary Goldstone Higgs (EGH) model is a perturbative extension of the Standard Model (SM), which identifies the EGH boson as the observed Higgs boson. In this paper, we study pair production of the EGH boson in future linear electron-positron colliders. The cross-sections in the TeV region can be changed by about ‑27%, 163% and ‑34% for the e+e‑→ Zhh, e+e‑→ νν¯hh and e+e‑→ tt¯hh processes, respectively, with respect to the SM predictions. Given the expected measurement precisions, such correction effects might be observed in future linear colliders. In addition, we compare the cross-sections of double SM-like Higgs boson production with the predictions of other new physics models.

  9. Science & Technology Review September 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radousky, H B

    2006-07-18

    This month's issue contains the following articles: (1) Simulations Help Plan for Large Earthquakes--Commentary by Jane C. S. Long; (2) Re-creating the 1906 San Francisco Earthquake--Supercomputer simulations of Bay Area earthquakes are providing insight into the great 1906 quake and future temblors along several faults; (3) Decoding the Origin of a Bioagent--The microstructure of a bacterial organism can be linked to the methods used to formulate the pathogen; (4) A New Look at How Aging Bones Fracture--Livermore scientists find that the increased risk of fracture from osteoporosis may be due to a change in the physical structure of trabecular bone; and (5) Fusion Targets on the Double--Advances in precision manufacturing allow the production of double-shell fusion targets with submicrometer tolerances.

  10. Study on depth profile of heavy ion irradiation effects in poly(tetrafluoroethylene-co-ethylene)

    NASA Astrophysics Data System (ADS)

    Gowa, Tomoko; Shiotsu, Tomoyuki; Urakawa, Tatsuya; Oka, Toshitaka; Murakami, Takeshi; Oshima, Akihiro; Hama, Yoshimasa; Washio, Masakazu

    2011-02-01

    High linear energy transfer (LET) heavy ion beams were used to irradiate poly(tetrafluoroethylene-co-ethylene) (ETFE) under vacuum and in air. The irradiation effects in ETFE as a function of depth were precisely evaluated by analyzing each film of the irradiated samples, which were made of stacked ETFE films. The results indicated that conjugated double bonds were generated by heavy ion beam irradiation, and their amounts showed Bragg-curve-like distributions. It was also suggested that higher-LET beams would induce radical formation at high density, and that longer conjugated C=C double bonds could be generated by second-order reactions. Moreover, for samples irradiated in air, C=O was produced in correlation with the yield of oxygen molecules diffusing from the sample surface.

  11. Double sided grating fabrication for high energy X-ray phase contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick

    State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm, demonstrating a methodology that can be applied to any grating fabrication approach, extending the attainable aspect ratios and allowing higher energy lab based XPCI systems.

  12. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha/h and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h/ha, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
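    The ratio adjustment at the heart of double sampling can be sketched in a few lines; plot selection, stratification, and variance estimation are omitted, and the function names are illustrative:

```python
def double_sampling_density(rapid_counts_all, rapid_counts_sub, true_density_sub):
    """Ratio-adjusted density estimate from double sampling.

    The detection ratio is the mean rapid count on the intensively
    searched subsample divided by the mean actual density found there;
    it is then used to correct the mean rapid count over all plots.
    """
    detection_ratio = (sum(rapid_counts_sub) / len(rapid_counts_sub)) / \
                      (sum(true_density_sub) / len(true_density_sub))
    mean_rapid = sum(rapid_counts_all) / len(rapid_counts_all)
    return mean_rapid / detection_ratio
```

With the abstract's 79% detection ratio, a rapid-method index would be divided by 0.79 to yield the essentially unbiased density estimate.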

  13. Digital sun sensor multi-spot operation.

    PubMed

    Rufino, Giancarlo; Grassi, Michele

    2012-11-28

    The operation and test of a multi-spot digital sun sensor for precise sun-line determination is described. The image forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measures. Nevertheless, sensor operation over a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, as well as a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the wide field of view operation of the sensor using a sensor prototype and a laboratory test facility. Results demonstrate that high measurement precision is maintained even at large off-boresight angles.

  14. Routine Clinical Quantitative Rest Stress Myocardial Perfusion for Managing Coronary Artery Disease: Clinical Relevance of Test-Retest Variability.

    PubMed

    Kitkungvan, Danai; Johnson, Nils P; Roby, Amanda E; Patel, Monika B; Kirkeeide, Richard; Gould, K Lance

    2017-05-01

    Positron emission tomography (PET) quantifies stress myocardial perfusion (in cc/min/g) and coronary flow reserve to guide noninvasively the management of coronary artery disease. This study determined their test-retest precision within minutes and their daily biological variability, essential for bounding clinical decision-making or risk stratification based on low-flow ischemic thresholds or follow-up changes. Randomized trials of fractional flow reserve-guided percutaneous coronary interventions established an objective, quantitative, outcomes-driven standard of physiological stenosis severity. However, pressure-derived fractional flow reserve requires an invasive coronary angiogram and was originally validated by comparison to noninvasive PET. The time course and test-retest precision of serial quantitative rest-rest and stress-stress global myocardial perfusion by PET, minutes and days apart in the same patient, were compared in 120 volunteers undergoing 708 serial quantitative PET perfusion scans using rubidium-82 (Rb-82) and dipyridamole stress with a 2-dimensional PET-computed tomography scanner (GE DST 16) and University of Texas HeartSee software with our validated perfusion model. Test-retest methodological precision (coefficient of variance) for serial quantitative global myocardial perfusion minutes apart is ±10% (mean ΔSD at rest ±0.09, at stress ±0.23 cc/min/g) and for days apart is ±21% (mean ΔSD at rest ±0.2, at stress ±0.46 cc/min/g), reflecting added biological variability. Global myocardial perfusion at 8 min after a 4-min dipyridamole infusion is 10% higher than at the standard 4 min after dipyridamole. Test-retest methodological precision of global PET myocardial perfusion by serial rest or stress PET minutes apart is ±10%. Day-to-different-day biological plus methodological variability is ±21%, thereby establishing boundaries of variability on physiological severity to guide or follow coronary artery disease management. 
Maximum stress increases perfusion and coronary flow reserve, thereby reducing potentially falsely low values mimicking ischemia. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  15. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    PubMed

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
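    The 2 mm and 1 mm criteria above amount to counting the fraction of mesh points whose point-to-point distance variation across repeated segmentations stays under a threshold; a minimal sketch with invented per-point variations:

```python
def fraction_within(variations_mm, threshold_mm):
    """Fraction of mesh points whose point-to-point distance
    variation across repeated segmentations is at most threshold_mm."""
    within = sum(1 for v in variations_mm if v <= threshold_mm)
    return within / len(variations_mm)

# Invented per-point variations (mm) across repeatedly modelled pelves
variations = [0.2, 0.5, 0.8, 1.1, 0.3, 2.4, 0.9, 0.4, 1.6, 0.7]
f2 = fraction_within(variations, 2.0)  # cf. the paper's >97% at 2 mm
f1 = fraction_within(variations, 1.0)  # cf. the paper's >91% at 1 mm
```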

  16. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    PubMed

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  17. The effect of dental artifacts, contrast media, and experience on interobserver contouring variations in head and neck anatomy.

    PubMed

    O'Daniel, Jennifer C; Rosenthal, David I; Garden, Adam S; Barker, Jerry L; Ahamad, Anesa; Ang, K Kian; Asper, Joshua A; Blanco, Angel I; de Crevoisier, Renaud; Holsinger, F Christopher; Patel, Chirag B; Schwartz, David L; Wang, He; Dong, Lei

    2007-04-01

    To investigate interobserver variability in the delineation of head-and-neck (H&N) anatomic structures on CT images, including the effects of image artifacts and observer experience. Nine observers (7 radiation oncologists, 1 surgeon, and 1 physician assistant) with varying levels of H&N delineation experience independently contoured H&N gross tumor volumes and critical structures on radiation therapy treatment planning CT images alongside reference diagnostic CT images for 4 patients with oropharynx cancer. Image artifacts from dental fillings partially obstructed 3 images. Differences in the structure volumes, center-of-volume positions, and boundary positions (1 SD) were measured. In-house software created three-dimensional overlap distributions, including all observers. The effects of dental artifacts and observer experience on contouring precision were investigated, and the need for contrast media was assessed. In the absence of artifacts, all 9 participants achieved reasonable precision (1 SD < or =3 mm all boundaries). The structures obscured by dental image artifacts had larger variations when measured by the 3 metrics (1 SD = 8 mm cranial/caudal boundary). Experience improved the interobserver consistency of contouring for structures obscured by artifacts (1 SD = 2 mm cranial/caudal boundary). Interobserver contouring variability for anatomic H&N structures, specifically oropharyngeal gross tumor volumes and parotid glands, was acceptable in the absence of artifacts. Dental artifacts increased the contouring variability, but experienced participants achieved reasonable precision even with artifacts present. With a staging contrast CT image as a reference, delineation on a noncontrast treatment planning CT image can achieve acceptable precision.

  18. Modelling of Lunar Laser Ranging in the Geocentric Frame and Comparison with the Common-View Double-Difference Lunar Laser Ranging Approach

    NASA Astrophysics Data System (ADS)

    Svehla, D.; Rothacher, M.

    2016-12-01

    Is it possible to process Lunar Laser Ranging (LLR) measurements in the geocentric frame, in a similar way to how SLR measurements are modelled for GPS satellites, and estimate all global reference frame parameters as in the case of GPS? The answer is yes. We managed to process Lunar laser measurements to Apollo and Luna retro-reflectors on the Moon in a similar way to how we process SLR measurements to GPS satellites. We make use of the latest Lunar libration models and DE430 ephemerides given in the Solar system barycentric frame and model uplink and downlink Lunar laser ranges in the geocentric frame as one-way measurements, similar to SLR measurements to GPS satellites. In the first part of this contribution we present the estimation of the Lunar orbit as well as the Earth orientation parameters (including UT1 or UT0) with this new formulation. In the second part, we form common-view double-difference LLR measurements between two Lunar retro-reflectors and two LLR telescopes to show the actual noise of the LLR measurements. Since, by forming double-differences of LLR measurements, all range biases are removed and orbit errors are significantly reduced (the Lunar orbit is much farther away than the GPS orbits), one can consider double-difference LLR as an "orbit-free" and "bias-free" differential approach. In the end, we make a comparison with the SLR double-difference approach with Galileo satellites, where we have already demonstrated submillimeter precision, and discuss a possible combination of LLR and SLR to GNSS satellites using the double-difference approach.
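    The bias-cancelling property of common-view double differences can be shown numerically. The ranges and biases below are invented, but the algebra is the point: single differences between the two reflectors remove each station's range bias, and differencing the two stations then yields a bias-free double difference.

```python
# Invented one-way ranges (m) from two telescopes ("1", "2") to two
# reflectors ("A", "B"); each telescope carries an unknown range bias.
true_range = {("1", "A"): 3.85e8,          ("1", "B"): 3.85e8 + 1200.0,
              ("2", "A"): 3.85e8 + 350.0,  ("2", "B"): 3.85e8 + 1600.0}
bias = {"1": 0.75, "2": -0.40}  # station-dependent range biases (m)

def measured(station, reflector):
    return true_range[(station, reflector)] + bias[station]

# Single differences between reflectors cancel each station's bias;
# differencing the two stations gives the double difference.
sd1 = measured("1", "A") - measured("1", "B")
sd2 = measured("2", "A") - measured("2", "B")
dd = sd1 - sd2
dd_true = (true_range[("1", "A")] - true_range[("1", "B")]) \
        - (true_range[("2", "A")] - true_range[("2", "B")])
```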

  19. The influence of carrier dynamics on double-state lasing in quantum dot lasers at variable temperature

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-12-01

    It is shown in analytical form that the carrier capture from the matrix as well as carrier dynamics in quantum dots plays an important role in double-state lasing phenomenon. In particular, the de-synchronization of hole and electron captures allows one to describe recently observed quenching of ground-state lasing, which takes place in quantum dot lasers operating in double-state lasing regime at high injection. From the other side, the detailed analysis of charge carrier dynamics in the single quantum dot enables one to describe the observed light-current characteristics and key temperature dependences.

  20. Design of control system for optical fiber drawing machine driven by double motor

    NASA Astrophysics Data System (ADS)

    Yu, Yue Chen; Bo, Yu Ming; Wang, Jun

    2018-01-01

    Microchannel plate (MCP) is a kind of large-area array electron multiplier with high two-dimensional spatial resolution, used in high-performance night-vision image intensifiers. High-precision control of the fiber is the key technology of the microchannel plate manufacturing process, and in this paper it was achieved by controlling an optical fiber drawing machine driven by dual motors. First, a servo motor drive and control circuit based on an STM32 chip was designed to realize dual-motor synchronization. Second, a neural network PID control algorithm was designed to control the fiber diameter with high precision. Finally, hexagonal fiber was manufactured by this system, and the results show that the multifilament diameter accuracy of the fiber is +/- 1.5 μm.
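    The paper's controller is a neural-network-tuned PID. As an illustration only, here is a plain discrete PID driving a toy integrator plant toward a diameter setpoint; the gains, time step, and plant model are invented, not taken from the paper.

```python
class PID:
    """Plain discrete PID (a stand-in for the paper's neural-network
    PID; gains and the plant below are illustrative inventions)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: fiber diameter (um) integrates the control signal
target_um = 100.0
diameter = 120.0
pid = PID(kp=0.5, ki=0.8, kd=0.05, dt=0.01)
for _ in range(2000):
    u = pid.update(target_um, diameter)
    diameter += u * 0.01  # d(diameter)/dt = u, forward-Euler step

err_final = abs(diameter - target_um)  # settles close to the target
```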

  1. Dynamical investigations of the multiple stars

    NASA Astrophysics Data System (ADS)

    Kiyaeva, Olga V.; Zhuchkov, Roman Ya.

    2017-11-01

    Two multiple stars, the quadruple star - Bootis (ADS 9173) and the triple star T Tauri, were investigated. The visual double star - Bootis was studied on the basis of Pulkovo 26-inch refractor observations from 1982-2013. An invisible satellite of component A was discovered thanks to the long-term uniform series of observations; its orbital period is 20 ± 2 years. The known invisible satellite of component B, with a period of about 5 years, was confirmed by high-precision CCD observations. The astrometric orbits of both components were calculated. The orbits of the inner and outer pairs of the pre-main-sequence binary T Tauri were calculated on the basis of high-precision observations with the VLT and the Keck II Telescope. This weakly hierarchical triple system is stable with a probability of more than 70%.

  2. Pulsars in binary systems: probing binary stellar evolution and general relativity.

    PubMed

    Stairs, Ingrid H

    2004-04-23

    Radio pulsars in binary orbits often have short millisecond spin periods as a result of mass transfer from their companion stars. They therefore act as very precise, stable, moving clocks that allow us to investigate a large set of otherwise inaccessible astrophysical problems. The orbital parameters derived from high-precision binary pulsar timing provide constraints on binary evolution, characteristics of the binary pulsar population, and the masses of neutron stars with different mass-transfer histories. These binary systems also test gravitational theories, setting strong limits on deviations from general relativity. Surveys for new pulsars yield new binary systems that increase our understanding of all these fields and may open up whole new areas of physics, as most spectacularly evidenced by the recent discovery of an extremely relativistic double-pulsar system.

  3. Item Response Theory Modeling of the Philadelphia Naming Test.

    PubMed

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D

    2015-06-01

    In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
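    For reference, the 1- and 2-parameter logistic models compared above differ only in whether the discrimination parameter is free per item. A sketch of the 2PL response function with illustrative parameters (not actual PNT estimates):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability of a correct naming
    response given ability theta, discrimination a, and difficulty b.
    The 1PL model is the special case where a is common to all items."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative items: a short, early-acquired word (low difficulty)
# vs a long, late-acquired word (high difficulty)
easy = p_correct_2pl(theta=0.0, a=1.0, b=-1.5)
hard = p_correct_2pl(theta=0.0, a=1.0, b=1.5)
```

    The regression finding above says that word length, age of acquisition, and contextual diversity each push the difficulty parameter b upward or downward.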

  4. Big genomics and clinical data analytics strategies for precision cancer prognosis.

    PubMed

    Ow, Ghim Siong; Kuznetsov, Vladimir A

    2016-11-07

    The field of personalized and precise medicine in the era of big data analytics is growing rapidly. Previously, we proposed our model of patient classification termed Prognostic Signature Vector Matching (PSVM) and identified a 37 variable signature comprising 36 let-7b associated prognostic significant mRNAs and the age risk factor that stratified large high-grade serous ovarian cancer patient cohorts into three survival-significant risk groups. Here, we investigated the predictive performance of PSVM via optimization of the prognostic variable weights, which represent the relative importance of one prognostic variable over the others. In addition, we compared several multivariate prognostic models based on PSVM with classical machine learning techniques such as K-nearest-neighbor, support vector machine, random forest, neural networks and logistic regression. Our results revealed that negative log-rank p-values provides more robust weight values as opposed to the use of other quantities such as hazard ratios, fold change, or a combination of those factors. PSVM, together with the classical machine learning classifiers were combined in an ensemble (multi-test) voting system, which collectively provides a more precise and reproducible patient stratification. The use of the multi-test system approach, rather than the search for the ideal classification/prediction method, might help to address limitations of the individual classification algorithm in specific situation.
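    The reported robustness of negative log-rank p-values as weight values can be sketched as follows; the normalisation and the example p-values are illustrative only, not the paper's PSVM implementation.

```python
import math

def weights_from_pvalues(pvals):
    """Prognostic-variable weights as negative log10 of log-rank
    p-values, normalised to sum to 1 (a sketch of the weighting
    idea only; not the paper's PSVM implementation)."""
    raw = [-math.log10(p) for p in pvals]
    total = sum(raw)
    return [r / total for r in raw]

# Invented log-rank p-values for three signature variables:
# the most significant variable receives the largest weight
w = weights_from_pvalues([1e-6, 1e-3, 0.05])
```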

  5. Interlaboratory variability in the quantification of new generation antiepileptic drugs based on external quality assessment data.

    PubMed

    Williams, John; Bialer, Meir; Johannessen, Svein I; Krämer, Günther; Levy, René; Mattson, Richard H; Perucca, Emilio; Patsalos, Philip N; Wilson, John F

    2003-01-01

    To assess interlaboratory variability in the determination of serum levels of new antiepileptic drugs (AEDs). Lyophilised serum samples containing clinically relevant concentrations of felbamate (FBM), gabapentin (GBP), lamotrigine (LTG), the monohydroxy derivative of oxcarbazepine (OCBZ; MHD), tiagabine (TGB), topiramate (TPM), and vigabatrin (VGB) were distributed monthly among 70 laboratories participating in the international Heathcontrol External Quality Assessment Scheme (EQAS). Assay results returned over a 15-month period were evaluated for precision and accuracy. The most frequently measured compound was LTG (65), followed by MHD (39), GBP (19), TPM (18), VGB (15), FBM (16), and TGB (8). High-performance liquid chromatography was the most commonly used assay technique for all drugs except for TPM, for which two thirds of laboratories used a commercial immunoassay. For all assay methods combined, precision was <11% for MHD, FBM, TPM, and LTG, close to 15% for GBP and VGB, and as high as 54% for TGB (p < 0.001). Mean accuracy values were <10% for all drugs other than TGB, for which measured values were on average 13.9% higher than spiked values, with a high variability around the mean (45%). No differences in precision and accuracy were found between methods, except for TPM, for which gas chromatography showed poorer accuracy compared with immunoassay and gas chromatography-mass spectrometry. With the notable exception of TGB, interlaboratory variability in the determination of new AEDs was comparable to that reported with older-generation agents. Poor assay performance is related more to individual operators than to the intrinsic characteristics of the method applied. Participation in an EQAS scheme is recommended to ensure adequate control of assay variability in therapeutic drug monitoring.
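    The precision and accuracy metrics reported are, in essence, the coefficient of variation across laboratories and the mean deviation from the spiked value; a sketch with invented lab results:

```python
import statistics

def assay_performance(measured, spiked):
    """Interlaboratory precision (CV%, SD relative to the mean) and
    accuracy (mean % deviation from the spiked value) for one sample."""
    mean = statistics.mean(measured)
    cv = 100.0 * statistics.stdev(measured) / mean
    bias = 100.0 * (mean - spiked) / spiked
    return cv, bias

# Invented lab results (mg/L) for a single spiked LTG sample
cv, bias = assay_performance([9.8, 10.4, 10.1, 9.6, 10.6], spiked=10.0)
```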

  6. QCDLoop: A comprehensive framework for one-loop scalar integrals

    NASA Astrophysics Data System (ADS)

    Carrazza, Stefano; Ellis, R. Keith; Zanderighi, Giulia

    2016-12-01

    We present a new release of the QCDLoop library based on a modern object-oriented framework. We discuss the available new features such as the extension to the complex masses, the possibility to perform computations in double and quadruple precision simultaneously, and useful caching mechanisms to improve the computational speed. We benchmark the performance of the new library, and provide practical examples of phenomenological implementations by interfacing this new library to Monte Carlo programs.
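    The value of computing in double and quadruple precision simultaneously is easiest to see on a cancellation-prone expression. The sketch below emulates higher precision with Python's decimal module (34 significant digits, the same order as IEEE quadruple precision) rather than using QCDLoop itself.

```python
from decimal import Decimal, getcontext

# Catastrophic cancellation: in 64-bit double precision, (1 + x) - 1
# returns 0 once x drops below the machine epsilon (~2.2e-16).
x = 1e-20
double_result = (1.0 + x) - 1.0

# Working at 34 significant digits recovers the small term.
getcontext().prec = 34
quad_result = (Decimal(1) + Decimal("1e-20")) - Decimal(1)
```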

  7. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a Particle-in-Cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  8. Fusion Imaging: A Novel Staging Modality in Testis Cancer

    DTIC Science & Technology

    2010-01-01

    the anatomic precision of computed tomography. To the best of our knowledge, this represents the first study of the effectiveness using fusion...imaging in evaluation of patients with testis cancer. Methods: A prospective study of 49 patients presenting to Walter Reed Army Medical Center with...incidence of testis cancer has been increasing at an annual rate of 3%, leading to a doubling in cases world-wide over the last 40 years. With the advent

  9. Obtention of low oxidation states of copper from Cu 2+-Al 3+ layered double hydroxides containing organic sulfonates in the interlayer

    NASA Astrophysics Data System (ADS)

    Trujillano, Raquel; Holgado, María Jesús; Rives, Vicente

    2009-03-01

    A series of hydrotalcite-type compounds containing Cu(II) and Al(III) in the layers, and carbonate or different alkylsulfonates in the interlayer, have been prepared and studied. Calcination of these solids gives rise to formation of metallic copper and Cu 2+ and Cu + oxides or sulfates, depending on the calcination temperature and on the precise nature of the interlayer alkylsulfonate.

  10. Evaluation of response variables in computer-simulated virtual cataract surgery

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Laurell, Carl-Gustaf; Simawi, Wamidh; Nordqvist, Per; Skarman, Eva; Nordh, Leif

    2006-02-01

    We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at evaluating the precision in the estimation of response variables identified for measurement of the performance of VR phaco surgery. We identified 31 response variables measuring: the overall procedure, the foot pedal technique, the phacoemulsification technique, erroneous manipulation, and damage to ocular structures. In total, 8 medical or optometry students with a good knowledge of ocular anatomy and physiology but naive to cataract surgery performed three sessions each of VR phaco surgery. For measurement, the surgical procedure was divided into a sculpting phase and an evacuation phase. The 31 response variables were measured for each phase in all three sessions. The variance components for individuals and for iterations of sessions within individuals were estimated with an analysis of variance assuming a hierarchical model. The consequences of the estimated variabilities for sample size requirements were determined. It was found that there was generally more variability for iterated sessions within individuals for measurements of the sculpting phase than for measurements of the evacuation phase. This resulted in larger required sample sizes for detecting differences between independent groups, or changes within a group, for the sculpting phase than for the evacuation phase. It is concluded that several of the identified response variables can be measured with sufficient precision for evaluation of VR phaco surgery.
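    The hierarchical variance-component estimation described can be sketched with a balanced one-way random-effects ANOVA; the data below are invented.

```python
import statistics

def variance_components(data):
    """Balanced one-way random-effects ANOVA: estimate the
    between-individual and within-individual (session-to-session)
    variance components from per-individual session measurements."""
    k = len(data)    # number of individuals
    n = len(data[0]) # sessions per individual
    grand = statistics.mean(v for g in data for v in g)
    means = [statistics.mean(g) for g in data]
    ms_within = sum((v - m) ** 2
                    for g, m in zip(data, means) for v in g) / (k * (n - 1))
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    # Method-of-moments estimates from the expected mean squares
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, var_within

# Invented response-variable scores: 3 students x 3 sessions each
data = [[10.0, 11.0, 12.0], [14.0, 15.0, 16.0], [18.0, 19.0, 20.0]]
var_between, var_within = variance_components(data)
```

    A larger within-individual component, as found for the sculpting phase, directly inflates the sample size needed to detect a given difference.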

  11. Dynamic comparisons of piezoelectric ejecta diagnostics

    NASA Astrophysics Data System (ADS)

    Buttler, W. T.; Zellner, M. B.; Olson, R. T.; Rigg, P. A.; Hixson, R. S.; Hammerberg, J. E.; Obst, A. W.; Payton, J. R.; Iverson, A.; Young, J.

    2007-03-01

    We investigate the quantitative reliability and precision of three different piezoelectric technologies for measuring ejected areal mass from shocked surfaces. Specifically, we performed ejecta measurements on Sn shocked at two pressures, P ≈215 and 235 kbar. The shock in the Sn was created by launching an impactor with a powder gun. We self-compare and cross-compare these measurements to assess the ability of these probes to precisely determine the areal mass ejected from a shocked surface. We demonstrate the precision of each technology to be good, with variabilities on the order of ±10%. We also discuss their relative accuracy.

  12. Design of a self-calibration high precision micro-angle deformation optical monitoring scheme

    NASA Astrophysics Data System (ADS)

    Gu, Yingying; Wang, Li; Guo, Shaogang; Wu, Yun; Liu, Da

    2018-03-01

    In order to meet the requirement of high-precision micro-angle measurement on orbit, a self-calibrated, optical, non-contact real-time monitoring device is designed. Within three meters, the micro-angle variation of the target relative to the measuring basis can be measured in real time. The range of angle measurement is +/-50'', and the angle measurement accuracy is better than 2''. The equipment can realize high-precision real-time monitoring of the micro-angle deformation caused by the high-strength vibration and shock of rocket launch, solar radiation and heat conduction on orbit, and so on.

  13. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of its precision intervals.
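    For the regression part, confidence and prediction intervals at a point x0 follow the standard least-squares formulas; a self-contained sketch with invented data (the neural-network integration is not reproduced here):

```python
import math

def fit_line_with_intervals(xs, ys, x0, t_crit):
    """Least-squares line y = a + b*x plus half-widths of the
    confidence interval (mean response) and prediction interval
    (new observation) at x0; t_crit is the t quantile for the
    chosen coverage with n-2 degrees of freedom."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    leverage = 1.0 / n + (x0 - xbar) ** 2 / sxx
    conf = t_crit * math.sqrt(s2 * leverage)          # mean response
    pred = t_crit * math.sqrt(s2 * (1.0 + leverage))  # new observation
    return a, b, conf, pred

# Invented wind-tunnel-style data; 3.182 is the 95% t quantile for 3 df
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.1, 5.9, 8.1]
a, b, conf, pred = fit_line_with_intervals(xs, ys, x0=2.0, t_crit=3.182)
```

    The prediction interval is always wider than the confidence interval, since it adds the residual scatter of a single new observation.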

  14. What Do Young Science Students Need to Learn about Variables?

    ERIC Educational Resources Information Center

    Kuhn, Deanna

    2016-01-01

    Have the Next Generation Science Standards fulfilled a goal of specifying the objectives of precollege science education in clear and exact enough terms to make them readily implementable? Using students' understanding of the concept of a variable as a case in point, the author suggests that the standards, despite their seeming precision and…

  15. Current status and future directions of precision aerial application for site-specific crop management in the USA

    USDA-ARS?s Scientific Manuscript database

    The first variable-rate aerial application system was developed about a decade ago in the USA and since then, aerial application has benefitted from these technologies. Many areas of the United States rely on readily available agricultural airplanes or helicopters for pest management, and variable-...

  16. Contact lens overrefraction variability in corneal power estimation after refractive surgery.

    PubMed

    Joslin, Charlotte E; Koster, James; Tu, Elmer Y

    2005-12-01

    To evaluate the accuracy and precision of the contact lens overrefraction (CLO) method in determining corneal refractive power in post-refractive-surgery eyes. Refractive Surgery Service and Contact Lens Service, University of Illinois, Chicago, Illinois, USA. Fourteen eyes of 7 subjects who had a single myopic laser in situ keratomileusis procedure within 12 months with refractive stability were included in this prospective case series. The CLO method was compared with the historical method of predicting the corneal power using 4 different lens fitting strategies and 3 refractive pupil scan sizes (3 mm, 5 mm, and total pupil). Rigid lenses included 3 9.0 mm overall diameter lenses fit flat, steep, and an average of the 2, and a 15.0 mm diameter lens fit steep. Cycloplegic CLO was performed using the autorefractor function of the Nidek OPD-Scan ARK-10000. Results with each strategy were compared with the corneal power estimated with the historical method. The bias (mean of the difference), 95% limits of agreement, and difference-versus-mean plots for each strategy are presented. In each subject, the CLO-estimated corneal power varied based on lens fit. On average, the bias between the CLO and historical methods ranged from -0.38 to +2.42 diopters (D) and was significantly different from 0 in all but 3 strategies. Substantial variability in precision existed between fitting strategies, with the range of the 95% limits of agreement approximating 0.50 D in 2 strategies and 2.59 D in the worst-case scenario. The least precise fitting strategy was use of flat-fitting 9.0 mm diameter lenses. The accuracy and precision of the CLO method of estimating corneal power in post-refractive-surgery eyes was highly variable on the basis of how the rigid lenses were fit. One of the most commonly used fitting strategies in clinical practice, flat-fitting a 9.0 mm diameter lens, resulted in the poorest accuracy and precision.
Results also suggest use of large-diameter lenses may improve outcomes.

  17. The effect of sensory uncertainty due to amblyopia (lazy eye) on the planning and execution of visually-guided 3D reaching movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2012-01-01

    Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50-100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R(2)) which correlates the spatial position of the limb during the movement to endpoint position. Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R(2) values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their abnormal stereopsis.
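    The R² index used above to quantify online control correlates limb position partway through the reach with the endpoint; a sketch with hypothetical positions (a high R² late in the movement implies little subsequent online correction):

```python
def r_squared(xs, ys):
    """Coefficient of determination between limb position partway
    through the reach (xs) and the final endpoint position (ys)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical positions (cm) at 70% of movement time vs endpoint
mid = [1.0, 2.0, 3.0, 4.0, 5.0]
end = [1.1, 2.0, 3.2, 3.9, 5.1]
r2 = r_squared(mid, end)
```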

  18. The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Wong, Agnes M. F.

    2012-01-01

    Background Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2) which correlates the spatial position of the limb during the movement to endpoint position. Results Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. 
In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their abnormal stereopsis. PMID:22363549

  19. Glioblastoma adaptation traced through decline of an IDH1 clonal driver and macro-evolution of a double-minute chromosome.

    PubMed

    Favero, F; McGranahan, N; Salm, M; Birkbak, N J; Sanborn, J Z; Benz, S C; Becq, J; Peden, J F; Kingsbury, Z; Grocok, R J; Humphray, S; Bentley, D; Spencer-Dene, B; Gutteridge, A; Brada, M; Roger, S; Dietrich, P-Y; Forshew, T; Gerlinger, M; Rowan, A; Stamp, G; Eklund, A C; Szallasi, Z; Swanton, C

    2015-05-01

Glioblastoma (GBM) is the most common malignant brain cancer occurring in adults, and is associated with dismal outcome and few therapeutic options. GBM has been shown to predominantly disrupt three core pathways through somatic aberrations, rendering it ideal for precision medicine approaches. We describe a 35-year-old female patient with recurrent GBM following surgical removal of the primary tumour, adjuvant treatment with temozolomide and a 3-year disease-free period. Rapid whole-genome sequencing (WGS) of three separate tumour regions at recurrence was carried out and interpreted relative to WGS of two regions of the primary tumour. We found extensive mutational and copy-number heterogeneity within the primary tumour. We identified a TP53 mutation and two focal amplifications involving PDGFRA, KIT and CDK4, on chromosomes 4 and 12. A clonal IDH1 R132H mutation in the primary, a known GBM driver event, was detectable at only very low frequency in the recurrent tumour. After sub-clonal diversification, evidence was found for a whole-genome doubling event and a translocation between the amplified regions of PDGFRA, KIT and CDK4, encoded within a double-minute chromosome also incorporating miR26a-2. The WGS analysis uncovered progressive evolution of the double-minute chromosome converging on the KIT/PDGFRA/PI3K/mTOR axis, superseding the IDH1 mutation in dominance in a mutually exclusive manner at recurrence; consequently, the patient was treated with imatinib. Despite rapid sequencing and cancer genome-guided therapy against amplified oncogenes, the disease progressed, and the patient died shortly after. This case sheds light on the dynamic evolution of a GBM tumour, defining the origins of the lethal sub-clone, the macro-evolutionary genomic events dominating the disease at recurrence and the loss of a clonal driver. Even in the era of rapid WGS analysis, cases such as this illustrate the significant hurdles for precision medicine success. © The Author 2015. 
Published by Oxford University Press on behalf of the European Society for Medical Oncology.

  20. Servo Driven Corotation: Development of an Inertial Clock.

    NASA Astrophysics Data System (ADS)

    Cheung, Wah-Kwan Stephen

    An inertial clock to test non-metricity of gravity is proposed here. A first, room-temperature, servo-corotation-protected, double magnetically suspended precision rotor system is developed for this purpose. The specific goal was to exhibit the properties of such a clock in its entirety at whatever level of precision was achievable. A monolithic system has been completed for these preliminary studies. It includes particular development of individual experimental sub-systems (a hybrid double magnetic suspension; a diffusion pumping system; a microcomputer-controlled eddy-current drive system; and the angular period measuring schemes for the doubly suspended rotors). Double magnetic suspension had been investigated by Beams for other purposes. The upper transducer is optical but parametrized, and the lower transducer employs the frequency-modulation characteristic of an LC tank circuit. The doubly suspended rotors corotate so that the upper rotor is servoed to rotate at the same angular velocity as that of the lower rotor. This creates a "drag free" environment for the lower rotor and effectively eliminates the gas drag on the lower rotor. Consequently, the decay time constant of the lower rotor increases. With other means of protection, the lower rotor will then, with perfect system operation, suffer no drag and therefore become the inertial time keeper. A commercial microcomputer is introduced to execute the servo-corotation. The tests thus far have, with one exception, been run at atmospheric pressure. An idealized analysis for open- and closed-loop corotation is shown. This analysis includes only the viscous drag acting on the corotating rotors, and it suggests that angular position control be added to the present feedback drive, which is of a derivative nature only. Open- and closed-loop corotation runs show that a strong torsional coupling, beyond that of the gas drag, exists between the rotors. 
When misalignment of the support pole pieces is deliberately made significant, a stronger coupling between the rotors results. The coupling is suspected to be magnetic in nature. The complicated geometry of the double magnetic suspension scheme makes it difficult to evaluate the known mechanical cranking effect applied to this situation.
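    The abstract's suggestion of adding angular position control to a derivative-only drive can be illustrated with a toy simulation. All gains, drag coefficients and initial conditions below are hypothetical, and the model (viscous drag only, explicit Euler integration) is a sketch of proportional-derivative tracking rather than the thesis's actual controller:

    ```python
    def simulate(kd, kp, steps=20000, dt=1e-3):
        """Toy servo-corotation loop: the upper rotor is torqued to track the lower.
        kd acts on the angular-velocity error (derivative-only drive); kp acts on
        the angle error (the suggested angular position control)."""
        theta_u = theta_l = 0.0        # rotor angles (rad)
        omega_u, omega_l = 9.0, 10.0   # angular velocities (rad/s); lower rotor is free
        drag = 0.05                    # viscous gas-drag coefficient on the upper rotor
        for _ in range(steps):
            torque = kd * (omega_l - omega_u) + kp * (theta_l - theta_u)
            omega_u += (torque - drag * omega_u) * dt
            theta_u += omega_u * dt
            theta_l += omega_l * dt
        return abs(theta_l - theta_u)  # residual angle error (rad)

    err_d_only = simulate(kd=2.0, kp=0.0)  # derivative feedback alone
    err_pd = simulate(kd=2.0, kp=5.0)      # with angular position control added
    print(err_d_only, err_pd)
    ```

    With derivative feedback alone, the drag torque forces a steady angular-velocity offset, so the angle error between the rotors grows without bound; adding the position term bounds it at a small steady-state value, consistent with the analysis's recommendation.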

Top