Mei, Gang; Xu, Liangliang; Xu, Nengxiong
2017-09-01
This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available. PMID:28989754
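As a rough illustration of the interpolation scheme described in this abstract, the NumPy sketch below implements standard IDW plus a stand-in adaptive power rule; the linear density-to-power ramp in adaptive_power is an assumption for illustration only, not the AIDW formula from the paper (which derives the power from the spatial distribution pattern of the data points).

```python
import numpy as np

def idw(known_xy, known_z, query_xy, power=2.0, eps=1e-12):
    """Standard inverse distance weighting: weights ~ 1 / d^power."""
    d = np.linalg.norm(known_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * known_z[None, :]).sum(axis=1) / w.sum(axis=1)

def adaptive_power(known_xy, query_xy, k=5, p_min=1.0, p_max=4.0):
    """Hypothetical adaptive rule: sparser neighborhoods get a larger power.
    The real AIDW rule differs; this ramp only illustrates the idea."""
    d = np.linalg.norm(known_xy[None, :, :] - query_xy[:, None, :], axis=2)
    mean_knn = np.sort(d, axis=1)[:, :k].mean(axis=1)   # local point density proxy
    t = (mean_knn - mean_knn.min()) / (np.ptp(mean_knn) + 1e-12)
    return p_min + t * (p_max - p_min)

rng = np.random.default_rng(0)
pts = rng.random((200, 2))
vals = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])
queries = rng.random((10, 2))
p = adaptive_power(pts, queries)
z = np.array([idw(pts, vals, q[None, :], power=p_i)[0]
              for q, p_i in zip(queries, p)])
print(z)
```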
NASA Astrophysics Data System (ADS)
Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy
2017-04-01
It has been shown that a mixed-precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed-precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed-precision variation of the OpenIFS model. The mixed-precision variation of OpenIFS is analogous to the IFS variation used in Vana et al. We (1) present results of energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single-core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed-precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.
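A minimal way to observe the single versus double precision trade-off that motivates this kind of study is to time the same kernel in both precisions on commodity hardware; the sketch below measures runtime only (not the RAPL energy counters used in the paper), and the matmul is an assumed stand-in workload.

```python
import time
import numpy as np

def time_matmul(dtype, n=1024, reps=10):
    """Average wall-clock time of an n x n matmul in the given precision."""
    rng = np.random.default_rng(1)
    a = rng.random((n, n)).astype(dtype)
    b = rng.random((n, n)).astype(dtype)
    a @ b  # warm-up pass so the timed loop excludes one-time costs
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    return (time.perf_counter() - t0) / reps

for dt in (np.float32, np.float64):
    print(dt.__name__, f"{time_matmul(dt):.4f} s per matmul")
```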
NASA Astrophysics Data System (ADS)
Masset, Frédéric
2015-09-01
GFARGO is a GPU version of FARGO. It is written in C and CUDA C and runs only on NVIDIA's graphics cards. Though it corresponds to the standard, isothermal version of FARGO, not all functionalities of the CPU version have been translated to CUDA. The code is available in single and double precision versions, the latter compatible with Fermi architectures. GFARGO can run on a graphics card connected to the display, allowing the user to see in real time how the fields evolve.
NASA Astrophysics Data System (ADS)
Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang
2010-06-01
An upgraded (second) version of the package GENXICC (A Generator for Hadronic Production of the Double Heavy Baryons Ξcc, Ξbc and Ξbb by C.H. Chang, J.X. Wang and X.G. Wu [its first version in: Comput. Phys. Comm. 177 (2007) 467]) is presented. With this version implemented in PYTHIA and compiled with a GNU C compiler, users may conveniently simulate full events of these processes in various experimental environments. In comparison with the previous version, in order to implement it in PYTHIA properly, a subprogram for the fragmentation of the produced double heavy diquark into the relevant baryon is supplied, and the interface of the generator to PYTHIA is changed accordingly. In the subprogram, with explanation, certain necessary assumptions (approximations) are made in order to conserve the momenta and the QCD 'color' flow in the fragmentation. New version program summary. Program title: GENXICC2.0 Catalogue identifier: ADZJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 102 482 No. of bytes in distributed program, including test data, etc.: 1 469 519 Distribution format: tar.gz Programming language: Fortran 77/90 Computer: Any Linux-based PC with a FORTRAN 77 or FORTRAN 90 compiler and a GNU C compiler Operating system: Linux RAM: About 2.0 MByte Classification: 11.2 Catalogue identifier of previous version: ADZJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 467 Does the new version supersede the previous version?: No Nature of problem: Hadronic production of the double heavy baryons Ξcc, Ξbc and Ξbb. Solution method: The code is based on the NRQCD framework. With proper options, it can generate weighted and un-weighted events of hadronic double heavy baryon production. When the hadronization of the produced jets and of the double heavy diquark is taken into account, the upgraded version, with a proper interface to PYTHIA, can generate full events. Reasons for new version: Responding to feedback from users, we improve the generator mainly by carefully completing the 'final non-perturbative process', i.e. the formation of the double heavy baryon from the relevant intermediate diquark. In the present version, the information about momentum flow and color flow in the fragmentation, which is necessary for PYTHIA to generate full events, is retained, although reasonable approximations are made. In comparison with the original version, the upgraded one can be implemented in PYTHIA properly to perform full event simulation of double heavy baryon production. Summary of revisions: We try to explain the treatment of the momentum distribution of the process more clearly than in the original version, and show precisely how the final baryon is generated through the typical intermediate diquark. We present the color flow of the involved processes precisely, and the corresponding changes to the program are explained in the paper. Restrictions: The color flow, particularly in the piece of code programming the fragmentation of the produced colorful double heavy diquark into the relevant double heavy baryon, is treated carefully so as to implement it in PYTHIA properly.
Running time: It depends on which option is chosen to configure PYTHIA when generating full events and also on which mechanism is chosen to generate the events. Typically, for the most complicated case, the gluon-gluon fusion mechanism generating mixed events via the intermediate diquark in the (cc)[3S1] and (cc)[1S0] states, generating 1000 events under the option IDWTUP=1 takes about 20 hours on a 1.8 GHz Intel P4 machine, whereas under the option IDWTUP=3, generating even 10^6 events takes about 40 minutes on the same machine.
Solving Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1987-01-01
Initial-value ordinary differential equation solution via variable order Adams method (SIVA/DIVA) package is collection of subroutines for solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. Requires fewer evaluations of derivatives than other variable-order Adams predictor/ corrector methods. Option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
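For orientation, the sketch below shows the predictor/corrector idea underlying such Adams packages at fixed second order (SIVA/DIVA itself is variable-order and far more sophisticated): an Adams-Bashforth predictor followed by an Adams-Moulton corrector.

```python
import numpy as np

def ab2_am2(f, t0, y0, h, steps):
    """Second-order Adams predictor/corrector for y' = f(t, y)."""
    t, y = t0, np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    # Bootstrap one step with Heun's method to get a second history point.
    y1 = y + h * f_prev
    y = y + 0.5 * h * (f_prev + f(t + h, y1))
    t += h
    out = [y0, y.copy()]
    for _ in range(steps - 1):
        f_now = f(t, y)
        y_pred = y + 0.5 * h * (3 * f_now - f_prev)      # AB2 predictor
        y = y + 0.5 * h * (f(t + h, y_pred) + f_now)     # AM2 corrector
        f_prev = f_now
        t += h
        out.append(y.copy())
    return np.array(out)

# Test problem y' = -y with exact solution exp(-t).
ys = ab2_am2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(ys[-1], np.exp(-1.0))
```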
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
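The power method that COMPPAP uses for the buckling eigenproblem is easy to sketch; here is a minimal version for a small symmetric matrix, with the eigenvector emerging as a by-product just as the abstract notes.

```python
import numpy as np

def power_method(a, tol=1e-10, max_iter=10_000):
    """Dominant eigenvalue/eigenvector of a by repeated multiplication."""
    v = np.ones(a.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = a @ v
        lam_new = np.linalg.norm(w)   # magnitude of the dominant eigenvalue
        v = w / lam_new               # normalized iterate -> eigenvector
        if abs(lam_new - lam) < tol * lam_new:
            break
        lam = lam_new
    return lam, v

a = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(a)
print(lam, v)   # largest eigenvalue of a is (7 + sqrt(5)) / 2 ~ 4.618
```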
Calculating Trajectories And Orbits
NASA Technical Reports Server (NTRS)
Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerald R.;
1989-01-01
Double-Precision Trajectory Analysis Program, DPTRAJ, and Orbit Determination Program, ODP, developed and improved over years to provide highly reliable and accurate navigation capability for deep-space missions like Voyager. Each collection of programs working together to provide desired computational results. DPTRAJ, ODP, and supporting utility programs capable of handling massive amounts of data and performing various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. Used extensively in support of NASA's Voyager project. DPTRAJ-ODP available in two machine versions. UNIVAC version, NPO-15586, written in FORTRAN V, SFTRAN, and ASSEMBLER. VAX/VMS version, NPO-17201, written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.
Accuracy of the lattice-Boltzmann method using the Cell processor
NASA Astrophysics Data System (ADS)
Harvey, M. J.; de Fabritiis, G.; Giupponi, G.
2008-11-01
Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show a reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony PlayStation 3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
2013-09-01
including the interaction effects between the fins and canards. 2. Solution Technique 2.1 Computational Aerodynamics The double-precision solver of a ... and overset grids (unified-grid).
• Total variation diminishing discretization based on a new multidimensional interpolation framework.
• Riemann solvers to provide proper signal propagation physics, including versions for preconditioned forms of the governing equations.
• Consistent and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Barnes, Daniel C
2012-01-01
Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 7018 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility, and dramatically improves the solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations, with the aid of an analytical performance model, the roofline model, are employed. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX 580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with a double-precision CPU JFNK field solver, overall performance gains of ~100 vs. the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time scale ion acoustic wave simulation.
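The single precision compute with double-precision correction pattern described here is the same idea as classic mixed-precision iterative refinement; the sketch below applies it to a dense linear solve, which is a stand-in problem for illustration, not the PIC field solve itself.

```python
import numpy as np

def mixed_precision_solve(a64, b64, iters=5):
    """Solve in float32, accumulate residual corrections in float64."""
    a32 = a64.astype(np.float32)
    x = np.linalg.solve(a32, b64.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b64 - a64 @ x                                 # residual in double
        dx = np.linalg.solve(a32, r.astype(np.float32))   # cheap single solve
        x += dx.astype(np.float64)                        # correct in double
    return x

rng = np.random.default_rng(2)
n = 200
a = rng.random((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.random(n)
b = a @ x_true
x = mixed_precision_solve(a, b)
print(np.max(np.abs(x - x_true)))        # near double-precision accuracy
```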
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summary. Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test run results could only be replicated poorly, as a consequence of the simulations being very sensitive to machine background noise. In practice, as the particles are simulated for billions and billions of steps, a small difference in the initial conditions due to the finite precision of double-precision reals can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it, or the hardware architecture where the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The random number generator routine, which is the computational core of the algorithm, has been re-written in C++, so there is no longer any need for cross FORTRAN-C++ compilation.
A quadruple precision version of the code is provided alongside the original double precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the code file system look neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However, it is convenient to replicate each simulation several times with different initialisations of the random sequence.
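The acceptance/rejection collision step that COOL's Direct Simulation Monte Carlo method uses can be sketched in a few lines; the hard-sphere-like collision probability below is an illustrative assumption, not COOL's actual cross-section model.

```python
import numpy as np

rng = np.random.default_rng(3)

def maybe_collide(v1, v2, n_density, dt, sigma=1.0e-4):
    """Accept a collision if a uniform draw falls below its probability."""
    g = np.linalg.norm(v1 - v2)           # relative speed of the pair
    p_coll = n_density * sigma * g * dt   # simple hard-sphere-like model
    return rng.random() < min(p_coll, 1.0)

v1 = rng.normal(size=3)
v2 = rng.normal(size=3)
print(maybe_collide(v1, v2, n_density=1.0e3, dt=1.0e-3))
```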
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that helps scientists and engineers solve very large dense linear systems in parallel. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage secondary memory in order to solve huge linear systems (of order 100,000 equations). The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOPS with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors. New version program summary. Program title: Huge Dense System Solver (HDSS) Catalogue identifier: AEHU_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 062 No. of bytes in distributed program, including test data, etc.: 1 069 110 Distribution format: tar.gz Programming language: Fortran 90, C Computer: Parallel architectures: multiprocessors, computer clusters Operating system: Linux/Unix Has the code been vectorized or parallelized?: Yes, includes MPI primitives. RAM: Tested for up to 190 GB Classification: 6.5 External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution). Catalogue identifier of previous version: AEHU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533 Does the new version supersede the previous version?: Yes Nature of problem: Huge-scale dense systems of linear equations, Ax=B, beyond standard LAPACK capabilities. Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient. Reasons for new version: In many applications we need to guarantee high accuracy in the solution of very large linear systems, and we can do so by using double-precision arithmetic. Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
Time Resolved Precision Differential Photometry with OAFA's Double Astrograph
NASA Astrophysics Data System (ADS)
González, E. P. A.; Podestá, F.; Podestá, R.; Pacheco, A. M.
2018-01-01
For the last 50 years, the Double Astrograph located at the Carlos U. Cesco station of the Observatorio Astronómico Félix Aguilar (OAFA), San Juan province, Argentina, has been used for astrometric observations and research. The main programs involved the study of asteroid positions and proper motions of stars in the Southern hemisphere, the latter being a long-term project that is near completion and whose most recent version is the SPM4 catalog (Girard et al. 2011). In this paper, new scientific applications in the field of photometry that can be accomplished with this telescope are presented. These first attempts show the potential of the instrument for such tasks.
End-of-life decisions and the reinvented Rule of Double Effect: a critical analysis.
Lindblad, Anna; Lynöe, Niels; Juth, Niklas
2014-09-01
The Rule of Double Effect (RDE) holds that it may be permissible to harm an individual while acting for the sake of a proportionate good, given that the harm is not an intended means to the good but merely a foreseen side-effect. Although frequently used in medical ethical reasoning, the rule has been repeatedly questioned in the past few decades. However, Daniel Sulmasy, a proponent who has done a lot of work lately defending the RDE, has recently presented a reformulated and more detailed version of the rule. Thanks to its greater precision, this reinvented RDE avoids several problems thought to plague the traditional RDE. Although an improvement compared with the traditional version, we argue that Sulmasy's reinvented RDE will not stand closer scrutiny. Not only has the range of proper applicability narrowed significantly, but, more importantly, Sulmasy fails to establish that there is a morally relevant distinction between intended and foreseen effects. In particular, he fails to establish that there is any distinction that can account for the alleged moral difference between sedation therapy and euthanasia. © 2012 John Wiley & Sons Ltd.
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program developing system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage os using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
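In double precision, the core of such a regression package reduces to a linear least-squares solve; below is a minimal NumPy analogue of the regression-coefficient step, with hypothetical variable names and synthetic data standing in for a real experiment.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x1, x2 = rng.random(n), rng.random(n)
y = 2.0 + 3.0 * x1 - 1.5 * x2 + 0.01 * rng.normal(size=n)

# Design matrix with an intercept column; lstsq runs in double precision.
X = np.column_stack([np.ones(n), x1, x2])
coef, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
print("coefficients:", coef)                              # ~[2.0, 3.0, -1.5]
print("residual variance:", resid @ resid / (n - X.shape[1]))
```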
JaSTA-2: Second version of the Java Superposition T-matrix Application
NASA Astrophysics Data System (ADS)
Halder, Prithish; Das, Himadri Sekhar
2017-12-01
In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2), to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, which is a Java integrated development environment (IDE). JaSTA uses the double precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). The new version consists of two options as part of the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis which can be used in diverse fields like Planetary Science, Atmospheric Physics, Nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.
In vivo blunt-end cloning through CRISPR/Cas9-facilitated non-homologous end-joining
Geisinger, Jonathan M.; Turan, Sören; Hernandez, Sophia; Spector, Laura P.; Calos, Michele P.
2016-01-01
The CRISPR/Cas9 system facilitates precise DNA modifications by generating RNA-guided blunt-ended double-strand breaks. We demonstrate that guide RNA pairs generate deletions that are repaired with a high level of precision by non-homologous end-joining in mammalian cells. We present a method called knock-in blunt ligation for exploiting these breaks to insert exogenous PCR-generated sequences in a homology-independent manner without loss of additional nucleotides. This method is useful for making precise additions to the genome such as insertions of marker gene cassettes or functional elements, without the need for homology arms. We successfully utilized this method in human and mouse cells to insert fluorescent protein cassettes into various loci, with efficiencies up to 36% in HEK293 cells without selection. We also created versions of Cas9 fused to the FKBP12-L106P destabilization domain in an effort to improve Cas9 performance. Our in vivo blunt-end cloning method and destabilization-domain-fused Cas9 variant increase the repertoire of precision genome engineering approaches. PMID:26762978
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
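For reference, the Level-1 operations listed in this abstract map to one-liners; here they are spelled out in NumPy, with the conventional double-precision BLAS routine names noted in the comments.

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 5.0, -6.0])
a = 0.5

print(x @ y)                   # DDOT:   dot product
print(a * x + y)               # DAXPY:  vector plus a scalar times a vector
print(a * x)                   # DSCAL:  multiplication of a scalar and a vector
print(np.linalg.norm(x))       # DNRM2:  Euclidean norm
print(np.abs(x).sum())         # DASUM:  sum of magnitudes
print(np.argmax(np.abs(x)))    # IDAMAX: location of largest-magnitude element
```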
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orea, Adrian; Betancourt, Minerba
The objective of this project was to use MINERvA data to tune the simulation models in order to obtain the precision needed for current and future neutrino experiments. In order to do this, the current models need to be validated and then improved. Validation was done by recreating figures that have been used in previous publications, comparing data from the detector with the simulation model (GENIE). Additionally, a newer version of GENIE was compared with the GENIE version used for the publications, both to validate the new version and to note any improvements. Another objective was to add new samples into the NUISANCE framework, which was used to compare data from the detector and simulation models. Specifically, the added sample was the two-dimensional histogram of the double differential cross section as a function of the transverse and z-direction momentum for Numu and Numubar; it was also used for validation.
Kendon, Vivien M; Nemoto, Kae; Munro, William J
2010-08-13
We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
NASA Astrophysics Data System (ADS)
Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo
2012-02-01
We have revised a general purpose parallel molecular dynamics simulation program, mm_par, using object-oriented programming. We parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. The benchmark results are presented here. New version program summary. Program title: mm_par2.0 Catalogue identifier: ADXP_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 390 858 No. of bytes in distributed program, including test data, etc.: 25 068 310 Distribution format: tar.gz Programming language: C++ Computer: Any system operated by Linux or Unix Operating system: Linux Classification: 7.7 External routines: We provide wrappers for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] random number generator; the Mersenne Twister [5] random number generator; and a space-filling curve routine. Catalogue identifier of previous version: ADXP_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560 Does the new version supersede the previous version?: Yes Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales. Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles, Langevin dynamics simulation, dissipative particle dynamics simulation. Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification; it is also known to be better for maintenance. Second, version 1.0 was based on the atom decomposition and domain decomposition schemes [6] for parallelization. However, atom decomposition is not popular due to its poor scalability. The domain decomposition scheme scales better, but it is still limited in utilizing the large number of cores on recent petascale computers by the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OpenMP [8]. Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussion on programming and debugging. Running time: Running time depends on system size and methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å × 62.23 Å × 62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, K1, K2, and K3 were set to 64 and the interpolation order was set to 4. The fast Fourier transforms used the Intel MKL library. All bonds involving hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows performance gains from using the CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz and a GeForce GTX 580. Even though mm_par2.0 has not yet been ported to the GPU, these data indicate the performance that could be expected from a GPU port. (Fig. 1 caption: Timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OpenMP threads. Fig. 2 caption: Timing results for 1000 MD steps from double-precision simulation on CPU, single-precision simulation on GPU, and double-precision simulation on GPU.)
MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain replicated and domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
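The root problem, that double-precision addition is not associative and so summation order changes the result, is easy to demonstrate, together with the flavor of integer-tally fix described in [1]; the fixed-point scale below is an illustrative choice, not the scheme from that reference.

```python
import numpy as np

rng = np.random.default_rng(5)
vals = rng.normal(size=1_000_000)

# Summation order matters in double precision: these two sums of the
# same data usually differ in the last digits.
print(f"{vals.sum():.17g}")
print(f"{np.sort(vals).sum():.17g}")

# Integer-tally idea: round each value onto a fixed grid, sum exactly
# as integers, then convert back. The integer sum is order-independent,
# at the cost of the rounding done up front.
SCALE = 2 ** 32                              # illustrative fixed-point scale
ticks = np.round(vals * SCALE).astype(np.int64)
print(ticks.sum() / SCALE)                   # same for any summation order
```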
A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.
Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian
2018-01-19
This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. The air leakage caused by the high pressure in the headspace vial during the headspace sampling process has a great impact on the measurement precision in conventional headspace analysis (i.e., the single sealing technique). The results (using ethanol solution as the model sample) show that the present technique is effective in minimizing such a problem. The double sealing technique has an excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that of earlier HS-GC work using the conventional single sealing technique. The present double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
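The quaternion product that the package exposes as an infix operator can be sketched as follows; a scalar-first component ordering is assumed here, which may differ from the HAL/S heritage convention.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions [w, x, y, z] (scalar-first)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

# A 90-degree rotation about x composed with itself gives 180 degrees about x.
qx90 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
print(qmul(qx90, qx90))   # ~[0, 1, 0, 0]
```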
A finite difference Hartree-Fock program for atoms and diatomic molecules
NASA Astrophysics Data System (ADS)
Kobus, Jacek
2013-03-01
The newest version of the two-dimensional finite difference Hartree-Fock program for atoms and diatomic molecules is presented. This is an updated and extended version of the program published in this journal in 1996. It can be used to obtain reference, Hartree-Fock limit values of total energies and multipole moments for a wide range of diatomic molecules and their ions in order to calibrate existing and develop new basis sets, calculate (hyper)polarizabilities (αzz, βzzz, γzzzz, Az,zz, Bzz,zz) of atoms, homonuclear and heteronuclear diatomic molecules and their ions via the finite field method, perform DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation ones or the self-consistent multiplicative constant method, perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials and take account of finite nucleus models. The program is easy to install and compile (tarball+configure+make) and can be used to perform calculations within double- or quadruple-precision arithmetic. Catalogue identifier: ADEB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 171196 No. of bytes in distributed program, including test data, etc.: 9481802 Distribution format: tar.gz Programming language: Fortran 77, C. Computer: any 32- or 64-bit platform. Operating system: Unix/Linux. RAM: Case dependent, from a few MB to many GB Classification: 16.1. Catalogue identifier of previous version: ADEB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 98 (1996) 346 Does the new version supersede the previous version?: Yes Nature of problem: The program finds virtually exact solutions of the Hartree-Fock and density functional theory type equations for atoms, diatomic molecules and their ions. The lowest energy eigenstates of a given irreducible representation and spin can be obtained. The program can be used to perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials and also DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation ones or the self-consistent multiplicative constant method. Solution method: Single-particle two-dimensional numerical functions (orbitals) are used to construct an antisymmetric many-electron wave function of the restricted open-shell Hartree-Fock model. The orbitals are obtained by solving the Hartree-Fock equations as coupled two-dimensional second-order (elliptic) partial differential equations (PDEs). The Coulomb and exchange potentials are obtained as solutions of the corresponding Poisson equations. The PDEs are discretized by the eighth-order central difference stencil on a two-dimensional single grid, and the resulting large and sparse system of linear equations is solved by the (multicolour) successive overrelaxation ((MC)SOR) method. The self-consistent-field iterations are interwoven with the (MC)SOR ones, and orbital energies and normalization factors are used to monitor the convergence. The accuracy of solutions depends mainly on the grid and the system under consideration, which means that within double precision arithmetic one can obtain orbitals and energies having up to 12 significant figures. If more accurate results are needed, quadruple-precision floating-point arithmetic can be used.
Reasons for new version: Additional features, many modifications and corrections, improved convergence rate, overhauled code and documentation. Summary of revisions: see the ChangeLog found in the tar.gz archive. Restrictions: The present version of the program is restricted to 60 orbitals. The maximum grid size is determined at compilation time. Unusual features: The program uses two C routines for allocating and deallocating memory. Several BLAS (Basic Linear Algebra Subprograms) routines are emulated by the program. When possible they should be replaced by their library equivalents. Additional comments: automake and autoconf tools are required to build and compile the program; checked with the f77, gfortran and ifort compilers. Running time: Very case dependent - from a few CPU seconds for H2 on a small grid up to several weeks for Hartree-Fock-limit calculations for 40-50 electron molecules.
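The successive overrelaxation kernel at the heart of the solution method is compact; below is a minimal 2D Poisson example with a second-order stencil on a uniform grid (the program itself uses an eighth-order stencil and interweaves SOR with the SCF iterations).

```python
import numpy as np

def sor_poisson(rhs, h, omega=1.8, sweeps=500):
    """Solve -laplacian(u) = rhs with u = 0 on the boundary, by SOR."""
    u = np.zeros_like(rhs)
    n, m = rhs.shape
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                             + h * h * rhs[i, j])   # Gauss-Seidel value
                u[i, j] += omega * (gs - u[i, j])   # over-relaxed update
    return u

n = 33
h = 1.0 / (n - 1)
rhs = np.ones((n, n))
u = sor_poisson(rhs, h)
print(u[n // 2, n // 2])   # centre value of the solution
```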
NASA Technical Reports Server (NTRS)
Riley, Gary
1991-01-01
The C Language Integrated Production System (CLIPS) is a forward chaining rule based language developed by NASA. CLIPS was designed specifically to provide high portability, low cost, and easy integration with external systems. The current release of CLIPS, version 4.3, is being used by over 2500 users throughout the public and private community. The primary addition to the next release of CLIPS, version 5.0, will be the CLIPS Object Oriented Language (COOL). The major capabilities of COOL are: class definition with multiple inheritance and no restrictions on the number, types, or cardinality of slots; message passing which allows procedural code bundled with an object to be executed; and query functions which allow groups of instances to be examined and manipulated. In addition to COOL, numerous other enhancements were added to CLIPS including: generic functions (which allow different pieces of procedural code to be executed depending upon the types or classes of the arguments); integer and double precision data type support; multiple conflict resolution strategies; global variables; logical dependencies; type checking on facts; full ANSI compiler support; and incremental reset for rules.
Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born
2012-01-01
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
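A minimal C sketch of the SPDP idea described above — pair contributions evaluated in single precision but accumulated in double — using a toy inverse-square pair term; the data layout and force law are illustrative assumptions, not AMBER's.

    #include <stdio.h>

    /* Each pair term is computed in float, but the running sum is kept
       in double so that rounding errors do not accumulate. */
    static double accumulate_spdp(const float x[], int n) {
        double acc = 0.0;                    /* double-precision accumulator */
        for (int i = 1; i < n; i++) {
            float dx = x[i] - x[0];
            float term = 1.0f / (dx * dx);   /* single-precision pair term */
            acc += (double)term;
        }
        return acc;
    }

    int main(void) {
        enum { N = 100000 };
        static float x[N];
        for (int i = 0; i < N; i++) x[i] = 1.0f + (float)i;
        printf("accumulated sum: %.12f\n", accumulate_spdp(x, N));
        return 0;
    }

Computing the same sum with a float accumulator loses several digits at this length, which is the error-accumulation effect the SPSP results above illustrate on a much larger scale.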
NASA Technical Reports Server (NTRS)
Dietz, J. B.
1973-01-01
The EHFR program reference information presented here consists of the following detailed data for each subprogram: purpose and description of the routine, a list of the calling programs, an argument list description, nomenclature definitions, flow charts, and a compilation listing. Each of the EHFR subprograms was developed specifically for this routine and is not generally applicable. Single-precision accuracy available on the Univac 1108 is used exclusively in all but two of the 31 EHFR subprograms. The double-precision variables required are identified in the nomenclature definitions of the two subprograms that require them. A concise definition of the purpose, function, and capabilities is made in the subprogram description. The description references the appropriate Volume 1 sections of the report, which contain the applicable detailed definitions, governing equations, and assumptions used. The compilation listing of each subprogram defines the program/data storage requirements, identifies the labeled block common data required, and identifies other subprograms called during execution. For Vol. 1, see N73-31842.
Double-slit experiment with single wave-driven particles and its relation to quantum mechanics.
Andersen, Anders; Madsen, Jacob; Reichelt, Christian; Rosenlund Ahl, Sonja; Lautrup, Benny; Ellegaard, Clive; Levinsen, Mogens T; Bohr, Tomas
2015-07-01
In a thought-provoking paper, Couder and Fort [Phys. Rev. Lett. 97, 154101 (2006)] describe a version of the famous double-slit experiment performed with droplets bouncing on a vertically vibrated fluid surface. In the experiment, an interference pattern in the single-particle statistics is found even though it is possible to determine unambiguously which slit the walking droplet passes. Here we argue, however, that the single-particle statistics in such an experiment will be fundamentally different from the single-particle statistics of quantum mechanics. Quantum mechanical interference takes place between different classical paths with precise amplitude and phase relations. In the double-slit experiment with walking droplets, these relations are lost since one of the paths is singled out by the droplet. To support our conclusions, we have carried out our own double-slit experiment, and our results, in particular the long and variable slit passage times of the droplets, cast strong doubt on the feasibility of the interference claimed by Couder and Fort. To understand theoretically the limitations of wave-driven particle systems as analogs to quantum mechanics, we introduce a Schrödinger equation with a source term originating from a localized particle that generates a wave while being simultaneously guided by it. We show that the ensuing particle-wave dynamics can capture some characteristics of quantum mechanics such as orbital quantization. However, the particle-wave dynamics can not reproduce quantum mechanics in general, and we show that the single-particle statistics for our model in a double-slit experiment with an additional splitter plate differs qualitatively from that of quantum mechanics.
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating-point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
An evaluation of Lake States STEMS85.
Margaret R. Holdaway
1986-01-01
An updated version of the Lake States variant of STEMS is evaluated and compared with the previous version. The new version is slightly more accurate and precise. The strengths and weaknesses of this tree growth projection system are also identified.
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard
NASA Astrophysics Data System (ADS)
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine coupled protons. Our carbon fibers exhibit a single, narrow and perfectly Lorentzian shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high precision g factor standard for general purposes. The double resonance experiment for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with very short correlation time.
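As a quick numerical check of the quoted relation (using the free-electron value g_free ≈ 2.0023193, which is not restated in the abstract):

    g = g_{free}\,(1 + 162\times 10^{-6}) \approx 2.0023193 \times 1.000162 \approx 2.002644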
The Los Alamos National Laboratory precision double crystal spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, D.V.; Stevens, C.J.; Liefield, R.J.
1994-03-01
This report discusses the following topics on the LANL precision double crystal X-ray spectrometer: motivation for construction of the instrument; a brief history of the instrument; mechanical systems; motion control systems; computer control system; vacuum system; alignment program; scan programs; observations of the copper Kα lines; and characteristics and specifications.
Jeffery, A.; Elmquist, R. E.; Cage, M. E.
1995-01-01
Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768
Liao, Kai; Fan, Xi-Long; Ding, Xuheng; Biesiada, Marek; Zhu, Zong-Hong
2017-12-12
The original PDF version of this Article inadvertently highlighted the author surnames and omitted the publication date. These have now been corrected in the PDF version of the Article. The HTML version was correct from the time of publication.
MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system
Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron
2011-01-01
We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is of a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent atomically detailed models. Especially for long simulations energy conservation is critical due to the phenomenon known as "energy drift", in which energy errors accumulate linearly as a function of simulation time. To achieve long time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions also improves energy conservation. In our best option, the energy drift, using 1 fs for a time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
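A minimal C sketch of a quadratic-interpolation lookup table of the kind compared above, tabulating a toy r^-6 term on a uniform grid in r^2; the table size, range and tabulated function are illustrative assumptions, not MOIL's actual tables.

    #include <stdio.h>

    #define TABLE_N 256                       /* illustrative table size */
    static double tab[TABLE_N];
    static double r2_min = 0.5, r2_max = 100.0, dr2;

    /* Tabulate g(r2) = r^-6 on a uniform grid in r^2 (toy stand-in). */
    static void build_table(void) {
        dr2 = (r2_max - r2_min) / (TABLE_N - 1);
        for (int i = 0; i < TABLE_N; i++) {
            double r2 = r2_min + i * dr2;
            tab[i] = 1.0 / (r2 * r2 * r2);
        }
    }

    /* Three-point (quadratic) Lagrange interpolation about node i. */
    static double lookup_quadratic(double r2) {
        int i = (int)((r2 - r2_min) / dr2);
        if (i < 1) i = 1;
        if (i > TABLE_N - 2) i = TABLE_N - 2;
        double t = (r2 - (r2_min + i * dr2)) / dr2;  /* offset in grid units */
        return 0.5 * t * (t - 1.0) * tab[i - 1]
             + (1.0 - t * t)       * tab[i]
             + 0.5 * t * (t + 1.0) * tab[i + 1];
    }

    int main(void) {
        build_table();
        double r2 = 7.3;
        printf("interpolated %.10e   exact %.10e\n",
               lookup_quadratic(r2), 1.0 / (r2 * r2 * r2));
        return 0;
    }

The design trade-off is the one the abstract states: the quadratic formula costs a few more flops per lookup but tolerates a much coarser, more cache-friendly table than linear interpolation at the same accuracy.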
High precision calcium isotope analysis using 42Ca-48Ca double-spike TIMS technique
NASA Astrophysics Data System (ADS)
Feng, L.; Zhou, L.; Gao, S.; Tong, S. Y.; Zhou, M. L.
2014-12-01
Double spike techniques are widely used for determining the calcium isotopic compositions of natural samples. The most important factors controlling the precision of the double spike technique are the choice of an appropriate spike isotope pair, the composition of the double spike, and the ratio of spike to sample (C_Sp/C_N). We propose an optimal 42Ca-48Ca double spike protocol which yields the best internal precision for calcium isotopic composition determinations among all spike pairs and various spike compositions and spike-to-sample ratios, as predicted by the linear error propagation method. It is suggested to use a spike composition of 42Ca/(42Ca+48Ca) = 0.44 mol/mol and C_Sp/(C_N + C_Sp) = 0.12 mol/mol, because this takes advantage of both the largest mass dispersion between 42Ca and 48Ca (14%) and the lowest spike cost. Spiked samples were purified by passing them through a homemade micro-column filled with Ca-specific resin. K, Ti and other interfering elements were completely separated, while 100% of the calcium was recovered with negligible blank. Data collection parameters, including integration time, idle time, and focus and peak-center frequency, were all carefully designed for the highest internal precision and lowest analysis time. All beams were automatically measured in a sequence by the Triton TIMS so as to eliminate differences in analytical conditions between samples and standards, and also to increase the analytical throughput. The typical internal precision of 100 duty cycles for one beam is 0.012‒0.015 ‰ (2σSEM), which agrees well with the predicted internal precision of 0.0124 ‰ (2σSEM). Our method improves internal precision by a factor of 2‒10 compared to previous methods for the determination of calcium isotopic compositions by double spike TIMS. We analyzed NIST SRM 915a, NIST SRM 915b and Pacific Seawater, as well as interspersed geological samples, during two months. The obtained average δ44/40Ca (all relative to NIST SRM 915a) is 0.02 ± 0.02 ‰ (n=28), 0.72 ± 0.04 ‰ (n=10) and 1.93 ± 0.03 ‰ (n=21) for NIST SRM 915a, NIST SRM 915b and Pacific Seawater, respectively. The long-term reproducibility is 0.10 ‰ (2σSD), which is comparable to the best external precision of 0.04 ‰ (2σSD) of previous methods, but our sample throughput is doubled, with a significant reduction in the amount of spike used for single samples.
Improving Weather Forecasts Through Reduced Precision Data Assimilation
NASA Astrophysics Data System (ADS)
Hatfield, Samuel; Düben, Peter; Palmer, Tim
2017-04-01
We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
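A minimal C sketch of how reduced precision can be emulated in such experiments: the model state is passed through a lower-precision representation after every step while the arithmetic itself stays in double. The helper below simply truncates the float significand to a chosen number of bits (10 bits mimics the half-precision significand; exponent-range effects of real half precision are ignored, and truncation is cruder than round-to-nearest), so it is an illustration rather than the emulation tool used in the study; the toy tendency equation is likewise a stand-in for Lorenz '96.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Keep only the top `bits` fraction bits of a float (half keeps 10). */
    static float truncate_significand(float x, int bits) {
        uint32_t u;
        memcpy(&u, &x, sizeof u);
        u &= ~((1u << (23 - bits)) - 1u);   /* zero the low fraction bits */
        memcpy(&x, &u, sizeof u);
        return x;
    }

    int main(void) {
        double x = 1.2345678;               /* toy model state */
        for (int step = 0; step < 5; step++) {
            x = x + 0.05 * (1.0 - x);                    /* toy tendency */
            x = truncate_significand((float)x, 10);      /* "half" round */
            printf("step %d: x = %.7f\n", step, x);
        }
        return 0;
    }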
Reply to Boyle's "Who is entitled to double effect?
Quinn, Warren
1991-10-01
I have only minor quibbles with Boyle's presentation of my version of the Doctrine of Double Effect (DDE) (Boyle, 1991). On my view, the extra morally problematic element in cases of direct intention is the subordination of a victim to purposes that he or she either rightfully rejects or (and this is something that I should now wish to add in light of Boyle's criticisms) cannot rightfully accept. In cases of indirect intention the victim is incidentally affected by an agent's strategy, but in cases of direct intention the victim is made part of the strategy. Boyle suggests at one point that this amounts to using the person.... But I do not think the "using" metaphor is always apt in these cases, although it is perhaps helpful in pointing to the objectionable element in direct intention, which in its perfectly general form can be put only more abstractly.... My only other concern with Boyle's exposition of my view involves his use of the expression "intentionally harming"....The discussion there tends to suggest, in contrast to what Boyle has said earlier, that I represent DDE as discriminating between cases of incidental and intentional harming. But this is precisely what I tried to avoid....
Testing and Validating Gadget2 for GPUs
NASA Astrophysics Data System (ADS)
Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.
2013-01-01
We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with an overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.
NASA Astrophysics Data System (ADS)
Huang, Kuo-Ting; Chen, Hsi-Chao; Lin, Ssu-Fan; Lin, Ke-Ming; Syue, Hong-Ye
2012-09-01
While tin-doped indium oxide (ITO) has been extensively applied in flexible electronics, the problem of residual stress presents many obstacles to overcome. This study investigated the residual stress of flexible electronics with a double beam shadow moiré interferometer, and focused on the precision improvement obtained with phase shifting interferometry (PSI). According to the out-of-plane displacement equation, the theoretical error depends on the grating pitch and the angle between the incident light and the CCD. The angle error could be reduced to 0.03% by an angle shift of 10°, because the double beam interferometer is a symmetrical system. However, the experimental error of the double beam moiré interferometer still reached 2.2% owing to vibration noise in the interferograms. In order to improve the measurement precision, PSI was introduced into the double beam shadow moiré interferometer. The wavefront phase was reconstructed from five interferograms with the Hariharan algorithm. Measurement results on a standard cylinder indicate that the error could be reduced from 2.2% to less than 1% with PSI. The deformation of flexible electronics could then be reconstructed quickly and the residual stress calculated with the Stoney correction formula. This shadow moiré interferometer with PSI can improve the precision of residual stress measurements for flexible electronics.
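A minimal C sketch of the five-frame Hariharan phase reconstruction mentioned above: with interferograms recorded at phase shifts of -pi, -pi/2, 0, pi/2 and pi, the wrapped wavefront phase at each pixel follows from a single arctangent. The synthetic one-pixel test is an illustrative assumption.

    #include <math.h>
    #include <stdio.h>

    /* Hariharan five-step algorithm: for frames I1..I5 with successive
       pi/2 phase shifts, phi = atan2(2*(I2 - I4), 2*I3 - I1 - I5). */
    static void hariharan_phase(const double *I1, const double *I2,
                                const double *I3, const double *I4,
                                const double *I5, double *phi, int npix) {
        for (int p = 0; p < npix; p++)
            phi[p] = atan2(2.0 * (I2[p] - I4[p]),
                           2.0 * I3[p] - I1[p] - I5[p]);
    }

    int main(void) {
        const double PI = 3.14159265358979323846;
        double truth = 0.7, I[5], phi;       /* one synthetic pixel */
        for (int k = 0; k < 5; k++)          /* shifts -pi .. +pi   */
            I[k] = 1.0 + 0.5 * cos(truth + (k - 2) * PI / 2.0);
        hariharan_phase(&I[0], &I[1], &I[2], &I[3], &I[4], &phi, 1);
        printf("recovered phase %.6f (expected %.6f)\n", phi, truth);
        return 0;
    }

Because the five-frame combination cancels the background and modulation terms, slowly varying noise in the interferograms largely drops out, which is the mechanism behind the error reduction from 2.2% to below 1% reported above.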
2014-05-01
Netbook version. Both versions were presented to subjects, with about half receiving the CRT-based test first and with a separate test between...versions of the CCT; however, only the Netbook results (version 11) will be reported here because it is the commercially-available version. The CCT...told that the test was short and fast-paced. The Netbook version allowed the participants to record their own responses, using the mouse to mark
Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.
Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J
2016-08-01
To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT) and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged 35-59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussion for its measurement performance. © 2016, American College of Rheumatology.
Kernel optimization for short-range molecular dynamics
NASA Astrophysics Data System (ADS)
Hu, Changjun; Wang, Xianmeng; Li, Jianjiang; He, Xinfu; Li, Shigang; Feng, Yangde; Yang, Shaofeng; Bai, He
2017-02-01
To optimize short-range force computations in Molecular Dynamics (MD) simulations, multi-threading and SIMD optimizations are presented in this paper. With respect to multi-threading optimization, a Partition-and-Separate-Calculation (PSC) method is designed to avoid the write conflicts caused by using Newton's third law. Serial bottlenecks are eliminated with no additional memory usage. The method is implemented using the OpenMP model. Furthermore, the PSC method is employed on Intel Xeon Phi coprocessors in both native and offload models. We also evaluate the performance of the PSC method under different thread affinities on the MIC architecture. For SIMD execution, we analyze the performance impact of the "if-clause" of the cutoff radius check in the PSC method. The experimental results show that our PSC method is more efficient than some traditional methods. In double precision, our 256-bit SIMD implementation is about 3 times faster than the scalar version.
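The PSC method itself partitions the pair computation so that write conflicts cannot occur; as a hedged illustration of the underlying problem, the C/OpenMP sketch below shows the race that Newton's third law creates in a threaded pair loop and removes it with an OpenMP 4.5 array reduction (one private force buffer per thread, summed at the end) — the traditional remedy whose extra memory the PSC method avoids. The toy pair force is an assumption.

    #include <stdio.h>

    int main(void) {
        enum { N = 2000 };
        static double x[N], f[N];            /* f zero-initialized */
        for (int i = 0; i < N; i++) x[i] = (double)i + 1.0;

        /* Updating both f[i] and f[j] from different threads would race;
           the array reduction gives each thread a private copy of f. */
    #pragma omp parallel for schedule(dynamic) reduction(+ : f[:N])
        for (int i = 0; i < N - 1; i++)
            for (int j = i + 1; j < N; j++) {
                double dx  = x[j] - x[i];
                double fij = 1.0 / (dx * dx);   /* toy pair force   */
                f[i] += fij;                    /* action ...       */
                f[j] -= fij;                    /* ... and reaction */
            }

        printf("f[0] = %g, f[N-1] = %g\n", f[0], f[N - 1]);
        return 0;
    }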
Microfluidic approach for encapsulation via double emulsions.
Wang, Wei; Zhang, Mao-Jie; Chu, Liang-Yin
2014-10-01
Double emulsions, with inner drops well protected by the outer shells, show great potential as compartmentalized systems to encapsulate multiple components for protecting active ingredients, masking flavor, and delivering and releasing drugs in a targeted, controlled manner. Precise control of the encapsulation characteristics of each component is critical to achieve an optimal therapeutic efficacy for pharmaceutical applications. Such controllable encapsulation can be realized by using microfluidic approaches to produce monodisperse double emulsions with versatile and controllable structures as the encapsulation system. The size, number and composition of the emulsion drops can be accurately manipulated to optimize the encapsulation of each component for pharmaceutical applications. In this review, we highlight the outstanding advantages of controllable microfluidic double emulsions for highly efficient and precisely controllable encapsulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Radiographic evaluation of BFX acetabular component position in dogs.
Renwick, Alasdair; Gemmill, Toby; Pink, Jonathan; Brodbelt, David; McKee, Malcolm
2011-07-01
To assess the reliability of radiographic measurement of angle of lateral opening (ALO) and angle of version of BFX acetabular cups. In vitro radiographic study. BFX cups (24, 28, and 32 mm). Total hip replacement constructs (cups, 17 mm femoral head and a #7 CFX stem) were mounted on an inclinometer. Ventrodorsal radiographs were obtained with ALO varying between 21° and 70° and inclination set at 0°, 10°, 20°, and 30°. Radiographs were randomized using a random sequence generator. Three observers blinded to the radiograph order assessed ALO using 3 methods: (1) an ellipse method based on trigonometry; (2) using a measurement from the center of the femoral head to the truncated surface of the cup; (3) by visual estimation using a reference chart. Version was measured by assessing the ventral edge of the truncated surface. ALO methods 2 and 3 were accurate and precise to within 10° and were significantly more accurate and precise than method 1 (P < .001). All methods were significantly less accurate with increasing inclination. Version measurement was accurate and precise to within 7° with 0-20° of inclination, but significantly less accurate with 30° of inclination. Methods 2 and 3, but not method 1, were sufficiently accurate and precise to be clinically useful. Version measurement was clinically useful when inclination was ≤ 20°. © Copyright 2011 by The American College of Veterinary Surgeons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufmann, Ralph M., E-mail: rkaufman@math.purdue.edu; Khlebnikov, Sergei, E-mail: skhleb@physics.purdue.edu; Wehefritz-Kaufmann, Birgit, E-mail: ebkaufma@math.purdue.edu
2012-11-15
Motivated by the Double Gyroid nanowire network we develop methods to detect Dirac points and classify level crossings, a.k.a. singularities, in the spectrum of a family of Hamiltonians. The approach we use is singularity theory. Using this language, we obtain a characterization of Dirac points and also show that the branching behavior of the level crossings is given by an unfolding of A_n type singularities. Which type of singularity occurs can be read off a characteristic region inside the miniversal unfolding of an A_k singularity. We then apply these methods in the setting of families of graph Hamiltonians, such as those for wire networks. In the particular case of the Double Gyroid we analytically classify its singularities and show that it has Dirac points. This indicates that nanowire systems of this type should have very special physical properties. Highlights: New method for analytically finding Dirac points. Novel relation of level crossings to singularity theory. More precise version of the von Neumann-Wigner theorem for arbitrary smooth families of Hamiltonians of fixed size. Analytical proof of the existence of Dirac points for the Gyroid wire network.
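For reference, the miniversal unfolding referred to above can be written down explicitly. In standard singularity-theory notation (not quoted from the paper), the A_k singularity f(x) = x^{k+1} has the k-parameter miniversal unfolding

    F(x; u_0, \dots, u_{k-1}) \;=\; x^{k+1} + u_{k-1}\,x^{k-1} + \cdots + u_1\,x + u_0 ,

and the branching behavior of the level crossings is read off from where the parameters (u_0, ..., u_{k-1}) land relative to the discriminant of F — the "characteristic region" mentioned in the abstract.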
Double-trap measurement of the proton magnetic moment at 0.3 parts per billion precision.
Schneider, Georg; Mooser, Andreas; Bohman, Matthew; Schön, Natalie; Harrington, James; Higuchi, Takashi; Nagahama, Hiroki; Sellner, Stefan; Smorra, Christian; Blaum, Klaus; Matsuda, Yasuyuki; Quint, Wolfgang; Walz, Jochen; Ulmer, Stefan
2017-11-24
Precise knowledge of the fundamental properties of the proton is essential for our understanding of atomic structure as well as for precise tests of fundamental symmetries. We report on a direct high-precision measurement of the magnetic moment μ_p of the proton in units of the nuclear magneton μ_N. The result, μ_p = 2.79284734462 (±0.00000000082) μ_N, has a fractional precision of 0.3 parts per billion, improves the previous best measurement by a factor of 11, and is consistent with the currently accepted value. This was achieved with the use of an optimized double-Penning trap technique. Provided a similar measurement of the antiproton magnetic moment can be performed, this result will enable a test of the fundamental symmetry between matter and antimatter in the baryonic sector at the 10^-10 level. Copyright © 2017, American Association for the Advancement of Science.
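The quoted fractional precision follows directly from the stated numbers:

    \frac{0.00000000082}{2.79284734462} \;\approx\; 2.9\times 10^{-10} \;\approx\; 0.3\ \text{parts per billion}.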
Solving lattice QCD systems of equations using mixed precision solvers on GPUs
NASA Astrophysics Data System (ADS)
Clark, M. A.; Babich, R.; Barros, K.; Brower, R. C.; Rebbi, C.
2010-09-01
Modern graphics hardware is designed for highly parallel numerical tasks and promises significant cost and performance benefits for many scientific applications. One such application is lattice quantum chromodynamics (lattice QCD), where the main computational challenge is to efficiently solve the discretized Dirac equation in the presence of an SU(3) gauge field. Using NVIDIA's CUDA platform we have implemented a Wilson-Dirac sparse matrix-vector product that performs at up to 40, 135 and 212 Gflops for double, single and half precision respectively on NVIDIA's GeForce GTX 280 GPU. We have developed a new mixed precision approach for Krylov solvers using reliable updates which allows for full double precision accuracy while using only single or half precision arithmetic for the bulk of the computation. The resulting BiCGstab and CG solvers run in excess of 100 Gflops and, in terms of iterations until convergence, perform better than the usual defect-correction approach for mixed precision.
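The reliable-update scheme itself is more involved; the C sketch below shows only the plain mixed-precision defect-correction idea that the abstract contrasts it with: an approximate solve in single precision, with residuals and solution updates kept in double. The tridiagonal model system and the Jacobi inner solver are illustrative assumptions, not the lattice QCD operators.

    #include <stdio.h>

    enum { N = 16 };   /* toy SPD system: tridiag(-1, 2, -1) */

    static void residual(const double x[N], const double b[N], double r[N]) {
        for (int i = 0; i < N; i++) {
            double ax = 2.0 * x[i];
            if (i > 0)     ax -= x[i - 1];
            if (i < N - 1) ax -= x[i + 1];
            r[i] = b[i] - ax;
        }
    }

    /* Approximate inner solve of A e = r, entirely in single precision. */
    static void jacobi_sp(const double r[N], double e[N]) {
        float ef[N] = {0}, en[N];
        for (int it = 0; it < 50; it++) {
            for (int i = 0; i < N; i++) {
                float s = (float)r[i];
                if (i > 0)     s += ef[i - 1];
                if (i < N - 1) s += ef[i + 1];
                en[i] = 0.5f * s;
            }
            for (int i = 0; i < N; i++) ef[i] = en[i];
        }
        for (int i = 0; i < N; i++) e[i] = ef[i];
    }

    int main(void) {
        double x[N] = {0}, b[N], r[N], e[N];
        for (int i = 0; i < N; i++) b[i] = 1.0;
        for (int cycle = 0; cycle < 30; cycle++) {
            residual(x, b, r);               /* double-precision residual */
            jacobi_sp(r, e);                 /* single-precision solve    */
            for (int i = 0; i < N; i++) x[i] += e[i];
        }
        residual(x, b, r);
        double rn = 0.0;
        for (int i = 0; i < N; i++) rn += r[i] * r[i];
        printf("squared residual norm: %.3e\n", rn);
        return 0;
    }

The final accuracy is set by the double-precision residual, not by the single-precision inner solve — the property that lets the bulk of the arithmetic run in the cheaper precision.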
Simplifying HL7 Version 3 messages.
Worden, Robert; Scott, Philip
2011-01-01
HL7 Version 3 offers a semantically robust method for healthcare interoperability but has been criticized as overly complex to implement. This paper reviews initiatives to simplify HL7 Version 3 messaging and presents a novel approach based on semantic mapping. From user-defined mappings, precise transforms between simple and full messages are automatically generated. Systems can be interfaced with the simple messages and achieve interoperability with full Version 3 messages through the transforms. This reduces the costs of HL7 interfacing and will encourage better uptake of HL7 Version 3 and CDA.
Shilov, V N; Borkovskaja, Y B; Dukhin, A S
2004-09-15
Existing theories of electroacoustic phenomena in concentrated colloids neglect the possibility of double layer overlap and are valid mostly for the "thin double layer", when the double layer thickness is much less than the particle size. In this paper we present a new electroacoustic theory which removes this restriction. This makes the new theory applicable to characterizing a variety of aqueous nanocolloids and of nonaqueous dispersions. There are two versions of the theory leading to analytical solutions. The first version corresponds to strongly overlapped diffuse layers (the so-called quasi-homogeneous model). It yields a simple analytical formula for the colloid vibration current (CVI), which is valid for arbitrary ultrasound frequency but for a restricted κa range. This version of the theory, like the Smoluchowski theory for microelectrophoresis, is independent of particle shape and polydispersity. This makes it very attractive for practical use, with the hope that it might be as useful as the classical Smoluchowski theory. In order to determine the κa range over which the quasi-homogeneous model is valid, we develop a second version that limits the ultrasound frequency but places no restriction on κa. The ultrasound frequency should substantially exceed the Maxwell-Wagner relaxation frequency. This limitation makes the active, conductivity-related current negligible compared to the passive dielectric displacement current. It is possible to derive an expression for the CVI in the concentrated dispersion as formulae involving definite integrals whose integrands depend on the equilibrium potential distribution. This second version allowed us to estimate the range of applicability of the first, quasi-homogeneous version. It turns out that the quasi-homogeneous model works for κa values up to almost 1. For instance, at volume fraction 30%, the highest κa limit of the quasi-homogeneous model is 0.65. Therefore, this version of the electroacoustic theory is valid for almost all nonaqueous dispersions and a wide variety of nanocolloids, especially with sizes under 100 nm.
DFMSPH14: A C-code for the double folding interaction potential of two spherical nuclei
NASA Astrophysics Data System (ADS)
Gontchar, I. I.; Chushnyakova, M. V.
2016-09-01
This is a new version of the DFMSPH code designed to obtain the nucleus-nucleus potential by using the double folding model (DFM) and in particular to find the Coulomb barrier. The new version uses the charge, proton, and neutron density distributions provided by the user. We have also added an option for fitting the DFM potential by the Gross-Kalinowski profile. The main functionalities of the original code (e.g. the nucleus-nucleus potential as a function of the distance between the centers of mass of the colliding nuclei, the Coulomb barrier characteristics, etc.) have not been modified. Catalog identifier: AEFH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 114404 Distribution format: tar.gz Programming language: C Computer: PC and Mac Operating system: Windows XP and higher, MacOS, Unix/Linux Memory required to execute with typical data: below 10 Mbyte Classification: 17.9 Catalog identifier of previous version: AEFH_v1_0 Journal reference of previous version: Comp. Phys. Comm. 181 (2010) 168 Does the new version supersede the previous version?: Yes Nature of physical problem: The code calculates in a semimicroscopic way the bare interaction potential between two colliding spherical nuclei as a function of the center of mass distance. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by an analytical profile (Woods-Saxon or Gross-Kalinowski) near the barrier. Dependence of the barrier parameters upon the characteristics of the effective NN forces (like, e.g. the range of the exchange part of the nuclear term) can be investigated. Method of solution: The nucleus-nucleus potential is calculated using the double folding model with the Coulomb and the effective M3Y NN interactions. For the direct parts of the Coulomb and the nuclear terms, the Fourier transform method is used. In order to calculate the exchange parts, the density matrix expansion method is applied. Typical running time: less than 1 minute. Reasons for new version: Many users asked us how to implement their own density distributions in the DFMSPH; this option has now been added. We also found that the calculated double-folding potential (DFP) is approximated more accurately by the Gross-Kalinowski (GK) profile, so this option has been added as well.
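For orientation, the direct part of the double-folding potential that the code computes has the standard DFM form (textbook notation, not copied from the program documentation):

    V_{DF}(R) \;=\; \int \mathrm{d}^3 r_1 \int \mathrm{d}^3 r_2 \;\rho_1(\mathbf{r}_1)\,\rho_2(\mathbf{r}_2)\, v_{NN}\!\bigl(\lvert \mathbf{R} + \mathbf{r}_2 - \mathbf{r}_1 \rvert\bigr),

where ρ1 and ρ2 are the density distributions of the two nuclei, R is the vector between their centers of mass, and v_NN is the effective (M3Y) nucleon-nucleon interaction; the exchange part adds a density-dependent term that the code handles via the density matrix expansion, as stated above.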
An Application of Response Surface Methodology to a Macroeconomic Model.
1985-12-01
The predictive information obtained by testing multiple software versions
NASA Technical Reports Server (NTRS)
Lee, Larry D.
1987-01-01
Multiversion programming is a redundancy approach to developing highly reliable software. In applications of this method, two or more versions of a program are developed independently by different programmers and the versions are combined to form a redundant system. One variation of this approach consists of developing a set of n program versions and testing the versions to predict the failure probability of a particular program or a system formed from a subset of the programs. The precision that might be obtained, and also the effect of programmer variability if predictions are made over repetitions of the process of generating different program versions, are examined.
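As a concrete instance of the kind of prediction at stake (an illustrative textbook calculation, not one from the paper): if three independently developed versions each fail on a random input with probability p and the system output is a majority vote, the system failure probability is

    P_{fail} \;=\; p^3 + 3p^2(1-p) \;=\; 3p^2 - 2p^3 ,

so p = 0.01 gives P_fail ≈ 2.98×10^-4; the independence assumption behind this calculation is exactly what programmer variability, examined in the paper, puts into question.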
Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji
2008-04-28
The reduced density matrix (RDM) method, which is a variational calculation based on the second-order reduced density matrix, is applied to the ground state energies and the dipole moments for 57 different states of atoms and molecules, and to the ground state energies and the elements of the 2-RDM for the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2′ conditions. The T2′ condition was recently rederived, and it implies the T2 condition. Using these N-representability conditions, we can usually recover from 100% to 101% of the correlation energy, an accuracy similar to that of CCSD(T) and even better for high-spin states or anionic systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or developing multiple precision arithmetic in the semidefinite programming (SDP) solver. Results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree. Additionally, improvements from replacing the T2 condition with the T2′ condition are typically 0.1-0.5 mhartree. The newly developed multiple precision arithmetic version of the SDP solver calculates extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom. It gives at least 16 significant digits for energies, where double precision calculations give only two to eight digits. It also provides physically meaningful results for the Hubbard model in the high correlation limit.
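Schematically, the variational problem solved here is a semidefinite program of the generic form (generic SDP notation, not the paper's):

    \min_{\Gamma}\ \mathrm{Tr}(H\,\Gamma) \quad\text{subject to}\quad \mathcal{A}(\Gamma) = b,\quad \Gamma \succeq 0 ,

where Γ collects the 2-RDM together with the matrices entering the P, Q, G, T1 and T2′ conditions, the linear map A encodes the equality constraints discussed above, and the semidefiniteness constraints express N-representability.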
How GNSS Enables Precision Farming
DOT National Transportation Integrated Search
2014-12-01
Precision farming: feeding a growing population; enabling those who feed the world. Immediate and ongoing needs: population growth (more to feed); urbanization (decrease in arable land). Double food production by 2050 to meet world demand. To meet thi...
Guan, Xuewei; Hou, Likai; Ren, Yukun; Deng, Xiaokang; Lang, Qi; Jia, Yankai; Hu, Qingming; Tao, Ye; Liu, Jiangwei; Jiang, Hongyuan
2016-05-01
Droplet-based microfluidics has provided a means to generate multi-core double emulsions, which are versatile platforms for microreactors in materials science, synthetic biology, and chemical engineering. To provide new opportunities for double emulsion platforms, here, we report a glass capillary microfluidic approach to first fabricate osmolarity-responsive Water-in-Oil-in-Water (W/O/W) double emulsion containing two different inner droplets/cores and to then trigger the coalescence between the encapsulated droplets precisely. To achieve this, we independently control the swelling speed and size of each droplet in the dual-core double emulsion by controlling the osmotic pressure between the inner droplets and the collection solutions. When the inner two droplets in one W/O/W double emulsion swell to the same size and reach the instability of the oil film interface between the inner droplets, core-coalescence happens and this coalescence process can be controlled precisely. This microfluidic methodology enables the generation of highly monodisperse dual-core double emulsions and the osmolarity-controlled swelling behavior provides new stimuli to trigger the coalescence between the encapsulated droplets. Such swelling-caused core-coalescence behavior in dual-core double emulsion establishes a novel microreactor for nanoliter-scale reactions, which can protect reaction materials and products from being contaminated or released.
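The driving force for the swelling stage can be summarized by the textbook van 't Hoff relation (not stated in the abstract): an osmotic pressure difference of roughly

    \Pi \;\approx\; RT\,(c_{in} - c_{out})

across the oil shell draws water into (or out of) each inner droplet, with R the gas constant, T the temperature and c the solute concentrations, until the two cores reach the sizes at which the oil film between them becomes unstable and coalescence is triggered.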
Asymptotic One-Point Functions in Gauge-String Duality with Defects.
Buhl-Mortensen, Isak; de Leeuw, Marius; Ipsen, Asger C; Kristjansen, Charlotte; Wilhelm, Matthias
2017-12-29
We take the first step in extending the integrability approach to one-point functions in AdS/dCFT to higher loop orders. More precisely, we argue that the formula encoding all tree-level one-point functions of SU(2) operators in the defect version of N=4 supersymmetric Yang-Mills theory, dual to the D5-D3 probe-brane system with flux, has a natural asymptotic generalization to higher loop orders. The asymptotic formula correctly encodes the information about the one-loop correction to the one-point functions of nonprotected operators once dressed by a simple flux-dependent factor, as we demonstrate by an explicit computation involving a novel object denoted as an amputated matrix product state. Furthermore, when applied to the Berenstein-Maldacena-Nastase vacuum state, the asymptotic formula gives a result for the one-point function which in a certain double-scaling limit agrees with that obtained in the dual string theory up to wrapping order.
Documentation for the machine-readable version of the Henry Draper Catalogue (edition 1985)
NASA Technical Reports Server (NTRS)
Roman, N. G.; Warren, W. H., Jr.
1985-01-01
An updated, corrected and extended machine-readable version of the catalog is described. Published and unpublished errors discovered in the previous version were corrected; letters indicating supplemental stars in the BD have been moved to a new byte to distinguish them from double-star components; and the machine-readable portion of The Henry Draper Extension (HDE) (HA 100) was converted to the same format as the main catalog, with additional data added as necessary.
Thickenings and conformal gravity
NASA Astrophysics Data System (ADS)
Lebrun, Claude
1991-07-01
A twistor correspondence is given for complex conformal space-times with vanishing Bach and Eastwood-Dighton tensors; when the Weyl curvature is algebraically general, these equations are precisely the conformal version of Einstein's vacuum equations with cosmological constant. This gives a fully curved version of the linearized correspondence of Baston and Mason [B-M].
Airborne Double Pulsed 2-Micron IPDA Lidar for Atmospheric CO2 Measurement
NASA Technical Reports Server (NTRS)
Yu, Jirong; Petros, Mulugeta; Refaat, Tamer; Singh, Upendra
2015-01-01
We have developed an airborne 2-micron Integrated Path Differential Absorption (IPDA) lidar for atmospheric CO2 measurements. The double pulsed, high pulse energy lidar instrument can provide high-precision CO2 column density measurements.
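As background, the quantity an IPDA lidar retrieves is the two-way differential absorption optical depth; in generic notation (an illustration, not the instrument's calibrated retrieval equation), with E the received and E_0 the transmitted pulse energies at the on-line and off-line wavelengths,

    \Delta\tau \;=\; \frac{1}{2}\,\ln\!\left(\frac{E_{off}/E_{0,off}}{E_{on}/E_{0,on}}\right).

Dividing Δτ by the integrated differential absorption cross-section along the path yields the CO2 column density, which is why the pulse-energy precision of the double-pulsed transmitter translates directly into measurement precision.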
Who is entitled to double effect?
Boyle, J
1991-10-01
The doctrine of double effect continues to be an important tool in bioethical casuistry. Its role within the Catholic moral tradition continues, and there is considerable interest in it by contemporary moral philosophers. But problems of justification and correct application remain. I argue that if the traditional Catholic conviction that there are exceptionless norms prohibiting inflicting some kinds of harms on people is correct, then double effect is justified and necessary. The objection that double effect is superfluous is a rejection of that normative conviction, not a refutation of double effect itself. This justification suggests the correct way of applying double effect to controversial cases. But versions of double effect which dispense with the absolutism of the Catholic tradition lack justification and fall to the objection that double effect is an unnecessary complication.
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single-precision and double-precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication, where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8-bit bytes. This program was developed in 1983 and last updated in 1987.
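A minimal sketch of the reverse-communication pattern described above, in C rather than the package's FORTRAN 77: the stepper returns a request code whenever it needs derivatives, instead of calling user code itself. The names, the explicit-Euler step and the test equation y' = -y are illustrative assumptions, not SIVA/DIVA's actual interface.

    #include <stdio.h>

    enum { NEED_DERIVS = 1, STEP_DONE = 2, FINISHED = 3 };

    typedef struct { double t, y, dy, t_end, h; int state; } Stepper;

    /* Advance by reverse communication: ask the caller for f(t,y),
       then take one explicit-Euler step on the following call. */
    static int stepper_advance(Stepper *s) {
        if (s->state == 0) { s->state = 1; return NEED_DERIVS; }
        s->y += s->h * s->dy;                /* Euler update */
        s->t += s->h;
        s->state = 0;
        return (s->t >= s->t_end - 1e-12) ? FINISHED : STEP_DONE;
    }

    int main(void) {
        Stepper s = { 0.0, 1.0, 0.0, 1.0, 1e-3, 0 };
        for (;;) {
            int req = stepper_advance(&s);
            if (req == NEED_DERIVS) s.dy = -s.y;   /* caller supplies y' */
            else if (req == FINISHED) break;
        }
        printf("y(1) = %.6f  (exact exp(-1) = 0.367879)\n", s.y);
        return 0;
    }

The benefit, as the summary notes, is that the caller keeps full control over output and derivative evaluation instead of handing those tasks to the integrator.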
Parameter estimation by decoherence in the double-slit experiment
NASA Astrophysics Data System (ADS)
Matsumura, Akira; Ikeda, Taishi; Kukita, Shingo
2018-06-01
We discuss a parameter estimation problem using quantum decoherence in the double-slit interferometer. We consider a particle coupled to a massive scalar field after the particle passes through the double slit, and we solve the dynamics non-perturbatively in the coupling by the WKB approximation. This allows us to analyze an estimation problem which cannot be treated with the master-equation approach used in quantum-probe research. In this model, the scalar field reduces the interference fringes of the particle, and the fringe pattern depends on the field mass and coupling. To evaluate the contrast and the estimation precision obtained from the pattern, we introduce the interferometric visibility and the Fisher information matrix of the field mass and coupling. For the fringe pattern observed on the distant screen, we derive a simple relation between the visibility and the Fisher matrix. Also, focusing on the estimation precision of the mass, we find that the Fisher information characterizes the wave-particle duality in the double-slit interferometer.
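For reference, the Fisher information matrix invoked above has the standard form (generic definition; here p(x|θ) is the single-particle detection density on the screen and θ = (field mass, coupling)):

    F_{ij} \;=\; \int \mathrm{d}x\; p(x\,|\,\theta)\, \partial_{\theta_i} \ln p(x\,|\,\theta)\, \partial_{\theta_j} \ln p(x\,|\,\theta),

and the Cramér-Rao bound cov(θ̂) ⪰ F^{-1} converts it into the best achievable estimation precision, which is how the information content of the fringe pattern is quantified.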
Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank
2016-01-01
Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) the concentrations of the analyte element (Hg) and the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of the mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect the CVG-ICP-MS results. Neither with PN nor with CVG was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation affecting the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the latest generations of some biological RMs.
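The mass-bias correction mentioned above has, in its generic exponential (Russell-law) form (standard notation, not reproduced from the paper):

    R_{true} \;=\; R_{meas}\left(\frac{m_1}{m_2}\right)^{\!f},

where R is the ratio of two isotopes with masses m1 and m2 and f is the mass-bias factor; in the Baxter approach referred to, f is obtained from the simultaneously measured Tl internal standard before the residual bias is removed by sample-standard bracketing.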
Effect of Software Version on the Accuracy of an Intraoral Scanning Device.
Haddadi, Yasser; Bahrami, Golnosh; Isidor, Flemming
2018-04-06
To investigate the impact of software version on the accuracy of an intraoral scanning device. A master tooth was scanned with a high-precision optical scanner and then 10 times with a CEREC Omnicam scanner with software versions 4.4.0 and 4.4.4. Discrepancies were measured using quality control software. Mean deviation for 4.4.0 was 36.2 ± 35 μm and for 4.4.4 was 20.7 ± 14.2 μm (P ≤ .001). Software version has a significant impact on the accuracy of an intraoral scanner. It is important that researchers also publish the software version of scanners when publishing their findings.
ERIC Educational Resources Information Center
Hula, William D.; Kellough, Stacey; Fergadiotis, Gerasimos
2015-01-01
Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996), to reduce test length while maximizing measurement precision. This article is a direct extension of a companion article (Fergadiotis, Kellough, & Hula, 2015),…
Pulse intensity characterization of the LCLS nanosecond double-bunch mode of operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yanwen; Decker, Franz-Josef; Turner, James
The recent demonstration of the 'nanosecond double-bunch' operation mode, i.e. two X-ray pulses separated in time by between 0.35 and hundreds of nanoseconds in increments of 0.35 ns, offers new opportunities to investigate ultrafast dynamics in diverse systems of interest. However, in order to reach its full potential, this mode of operation requires the precise characterization of the intensity of each X-ray pulse within each pulse pair for any time separation. Here, a transmissive single-shot diagnostic that achieves this goal for time separations larger than 0.7 ns with a precision better than 5% is presented. It also provides real-time monitoring feedback to help tune the accelerator parameters to deliver double-pulse intensity distributions optimized for specific experimental goals.
Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results
NASA Astrophysics Data System (ADS)
Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono
2017-04-01
The Sulawesi area is located in a complex tectonic setting. High seismicity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method, with the aim of investigating the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked the P- and S-wave arrival times of the PKF events to determine initial hypocenter locations using the Hypoellipse method with an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events cluster around the PKF and have smaller residual times than the initial locations. We will further improve the hypocenter locations by updating the arrival times with a waveform cross-correlation method as input for the double-difference relocation.
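For reference, the double-difference method relocates event pairs by minimizing residuals of the standard form (textbook notation, not from the abstract):

    dr_k^{ij} \;=\; \left(t_k^i - t_k^j\right)^{\mathrm{obs}} - \left(t_k^i - t_k^j\right)^{\mathrm{calc}},

for events i and j observed at a common station k; travel-time errors along the shared path outside the source region largely cancel, which is what sharpens the relative locations along the fault.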
NASA Astrophysics Data System (ADS)
Schunck, N.; Dobaczewski, J.; McDonnell, J.; Satuła, W.; Sheikh, J. A.; Staszczak, A.; Stoitsov, M.; Toivanen, P.
2012-01-01
We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF + BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special care has been paid to using the code on massively parallel leadership-class computers. For this purpose, the following features are now available in this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors of the previously published version were corrected.
New version program summary
Program title: HFODD (v2.49t)
Catalogue identifier: ADFL_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADFL_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence v3
No. of lines in distributed program, including test data, etc.: 190 614
No. of bytes in distributed program, including test data, etc.: 985 898
Distribution format: tar.gz
Programming language: FORTRAN-90
Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT4, Cray XT5
Operating system: UNIX, LINUX, Windows XP
Has the code been vectorized or parallelized?: Yes, parallelized using MPI
RAM: 10 Mwords
Word size: The code is written in single precision for use on 64-bit processors. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine.
Classification: 17.22
Catalogue identifier of previous version: ADFL_v2_2
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2361
External routines: The user must have access to the NAGLIB subroutine f02axe, or the LAPACK subroutines zhpev, zhpevx, zheevr, or zheevd, which diagonalize complex Hermitian matrices; the LAPACK subroutines dgetri and dgetrf, which invert arbitrary real matrices; the LAPACK subroutines dsyevd, dsytrf and dsytri, which compute eigenvalues and eigenfunctions of real symmetric matrices; the LINPACK subroutines zgedi and zgeco, which invert arbitrary complex matrices and calculate determinants; the BLAS routines dcopy, dscal, dgemm and dgemv for double-precision linear algebra and zcopy, zdscal, zgemm and zgemv for complex linear algebra; or provide another set of subroutines that can perform such tasks. The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
Does the new version supersede the previous version?: Yes
Nature of problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic (n-particle-n-hole) configurations, deformations, excitation energies, or angular momenta. Similarly, the Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method.
Solution method: The program uses the Cartesian harmonic-oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and a zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians, which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: [J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166].
Reasons for new version: Version 2.49s of HFODD provides a number of new options such as the isospin mixing and projection of the Skyrme functional, the finite-temperature HF and HFB formalism, and optimized methods to perform multi-constrained calculations. It is also the first version of HFODD to contain threading and parallel capabilities.
Summary of revisions: Isospin mixing and projection of the HF states have been implemented. The finite-temperature formalism for the HFB equations has been implemented. The Lipkin translational energy correction method has been implemented. Calculation of the shell correction has been implemented. The two-basis method for the solution of the HFB equations has been implemented. The Augmented Lagrangian Method (ALM) for calculations with multiple constraints has been implemented. The linear constraint method based on the cranking approximation of the RPA matrix has been implemented. An interface between HFODD and the axially symmetric and parity-conserving code HFBTHO has been implemented. The mixing of the matrix elements of the HF or HFB matrix has been implemented. A parallel interface using the MPI library has been implemented. A scalable model for reading input data has been implemented. OpenMP pragmas have been implemented in three subroutines. The diagonalization of the HFB matrix in the simplex-breaking case has been parallelized using the ScaLAPACK library. Several minor errors of the previously published version were corrected.
Running time: In serial mode, running 6 HFB iterations for 152Dy with conserved parity and signature symmetries in a full spherical basis of N = 14 shells takes approximately 8 min on an AMD Opteron processor at 2.6 GHz, assuming standard BLAS and LAPACK libraries. As a rule of thumb, the runtime of such HFB calculations increases steeply with the number N of full HO shells.
Using custom-built optimized BLAS and LAPACK libraries (such as the ATLAS implementation) can bring down the execution time by 60%. Using the threaded version of the code with 12 threads and threaded BLAS libraries brings an additional factor-of-2 speed-up, so that the same 6 HFB iterations take on the order of 2 min 30 s.
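For readers without NAG or LAPACK bindings at hand, the role of the diagonalization routines listed above (zhpev, zheevd and kin) is easy to illustrate: they return the real eigenvalues and unitary eigenvectors of a complex Hermitian matrix. A minimal Python sketch, assuming a toy matrix rather than HFODD's actual HF/HFB matrix (numpy.linalg.eigh wraps the same LAPACK family):

```python
import numpy as np

# Toy complex Hermitian matrix standing in for an HF/HFB matrix block.
rng = np.random.default_rng(0)
a = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
h = (a + a.conj().T) / 2  # Hermitian by construction

# eigh is the numpy analogue of LAPACK's zheevd for Hermitian matrices:
# real eigenvalues in ascending order, plus the unitary eigenvector matrix.
eigvals, eigvecs = np.linalg.eigh(h)

# Verify the decomposition h = V diag(w) V^dagger to double precision.
assert np.allclose(h, eigvecs @ np.diag(eigvals) @ eigvecs.conj().T)
print(eigvals)
```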
NASA Astrophysics Data System (ADS)
Johnson, Kendall B.; Hopkins, Greg
2017-08-01
The Double Arm Linkage precision Linear motion (DALL) carriage has been developed as a simplified, rugged, high-performance linear motion stage. Initially conceived as the moving-mirror stage of a Fourier Transform Spectrometer (FTS), it is applicable to any system requiring high-performance linear motion. It is based on rigid double-arm linkages connecting a base to a moving carriage through flexures. It is a monolithic design: the system, including the flexural elements, is fabricated from one piece of material using high-precision machining. The monolithic design has many advantages. There are no joints to slip or creep and there are no CTE (coefficient of thermal expansion) issues. This provides a design that is stable and robust both mechanically and thermally, and is expected to provide a wide operating temperature range, including cryogenic temperatures, and high tolerance to vibration and shock. Furthermore, it provides simplicity and ease of implementation, as there is no assembly or alignment of the mechanism: it comes out of the machining operation aligned and there are no adjustments. A prototype has been fabricated and tested, showing superb shear performance and very promising tilt performance, making it applicable to corner-cube and flat-mirror FTS systems, respectively.
Methodological study of computational approaches to address the problem of strong correlations
NASA Astrophysics Data System (ADS)
Lee, Juho
The main focus of this thesis is the detailed investigation of computational methods to tackle strongly correlated materials, in which a rich variety of exotic phenomena are found. A many-body problem with sizable electronic correlations can no longer be explained by independent-particle approximations such as density functional theory (DFT) or tight-binding approaches. The influence of each electron on the others is too strong for each electron to be treated as an independent quasiparticle, and consequently those standard band-structure methods fail even at a qualitative level. One of the most powerful approaches for strong correlations is the dynamical mean-field theory (DMFT), which has enlightened the understanding of the Mott transition based on the Hubbard model. For realistic applications, the dynamical mean-field theory is combined with various independent-particle approaches. The most widely used is DMFT combined with DFT in the local density approximation (LDA), so-called LDA+DMFT. In this approach, the electrons in the weakly correlated orbitals are calculated by LDA while those in the strongly correlated orbitals are treated by DMFT. Recently, a method combining DMFT with Hedin's GW approximation was also developed, in which the momentum-dependent self-energy is added as well. In this thesis, we discuss the application of these DMFT-based methodologies. First, we apply the dynamical mean-field theory to solve the 3-dimensional Hubbard model in Chap. 3. In this application, we model the interface between the thermodynamically coexisting metal and Mott insulator. We show how to model the required slab geometry and extract the electronic spectra. We construct an effective Landau free energy and compute the variation of its parameters across the phase diagram. Finally, using a linear mixture of the density and double-occupancy, we identify a natural Ising order parameter which unifies the treatment of the bandwidth- and filling-controlled Mott transitions. Secondly, we study the double-counting problem, a subtle issue that arises in LDA+DMFT. We propose a highly precise double-counting functional, in which the intersection of LDA and DMFT is calculated exactly, and implement a parameter-free version of LDA+DMFT that is tested on one of the simplest strongly correlated systems, the H2 molecule. We show that the exact double-counting treatment along with a good DMFT projector leads to a very accurate total energy and excitation spectrum for the H2 molecule. Finally, we implement various versions of GW+DMFT (fully self-consistent, one-shot GW, and the quasiparticle self-consistent scheme) and study how well these combined methods perform on the H2 molecule as compared to more established methods such as LDA+DMFT. We found that most flavors of GW+DMFT break down in the strongly correlated regime due to causality violation. Among GW+DMFT methods, only the self-consistent quasiparticle GW+DMFT with static double-counting, and a new method with causal double-counting, correctly recover the atomic limit at large H-atom separation. While some flavors of GW+DMFT improve the single-electron spectra of LDA+DMFT, the total energy is best predicted by LDA+DMFT, for which the exact double-counting is known, and is static.
In trans paired nicking triggers seamless genome editing without double-stranded DNA cutting.
Chen, Xiaoyu; Janssen, Josephine M; Liu, Jin; Maggio, Ignazio; 't Jong, Anke E J; Mikkers, Harald M M; Gonçalves, Manuel A F V
2017-09-22
Precise genome editing involves homologous recombination between donor DNA and chromosomal sequences subjected to double-stranded DNA breaks made by programmable nucleases. Ideally, genome editing should be efficient, specific, and accurate. However, besides constituting potential translocation-initiating lesions, double-stranded DNA breaks (targeted or otherwise) are mostly repaired through unpredictable and mutagenic non-homologous recombination processes. Here, we report that the coordinated formation of paired single-stranded DNA breaks, or nicks, at donor plasmids and chromosomal target sites by RNA-guided nucleases based on CRISPR-Cas9 components triggers seamless homology-directed gene targeting of large genetic payloads in human cells, including pluripotent stem cells. Importantly, in addition to significantly reducing the mutagenicity of the genome modification procedure, this in trans paired nicking strategy achieves multiplexed, single-step gene targeting, and yields higher frequencies of accurately edited cells when compared to the standard double-stranded DNA break-dependent approach.
CRISPR-Cas9-based gene editing involves double-strand breaks at target sequences, which are often repaired by mutagenic non-homologous end-joining. Here the authors use Cas9 nickases to generate coordinated single-strand breaks in donor and target DNA for precise homology-directed gene editing.
Torres, Elisa R
2013-09-25
Few studies have examined differences in disability and comorbidity among major depressive disorder (MDD), dysthymia, and double depression in African-Americans (AA). A secondary analysis was performed on AA in the National Survey of American Life. Interviews occurred in 2001-2003. A four-stage national area probability sampling was performed. DSM-IV-TR diagnoses were obtained with a modified version of the World Health Organization's expanded version of the Composite International Diagnostic Interview. Disability was measured by interview with the World Health Organization's Disability Assessment Schedule II. Compared to non-depressed AA, AA endorsing MDD (t=19.0, p=0.0001) and double depression (t=18.7, p=0.0001) reported more global disability; AA endorsing MDD (t=8.5, p=0.0063) reported more disability in the getting-around domain; AA endorsing MDD (t=19.1, p=0.0001) and double depression (t=12.1, p=0.0014) reported more disability in the life-activities domain. AA who endorsed double depression reported disability and comorbidities similar to those of AA who endorsed MDD. Few AA endorsed dysthymia. This was a cross-sectional study subject to recall bias. The NSAL did not measure minor depression. The current study supports the idea of deleting distinct chronic subtypes of depression and consolidating them into a single category termed chronic depression. © 2013 Elsevier B.V. All rights reserved.
Creating a Computer Adaptive Test Version of the Late-Life Function & Disability Instrument
Jette, Alan M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Moed, Richard
2009-01-01
Background This study applied Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies to develop a prototype function and disability assessment instrument for use in aging research. Herein, we report on the development of the CAT version of the Late-Life Function & Disability instrument (Late-Life FDI) and evaluate its psychometric properties. Methods We employed confirmatory factor analysis, IRT methods, validation, and computer simulation analyses of data collected from 671 older adults residing in residential care facilities. We compared accuracy, precision, and sensitivity to change of scores from CAT versions of two Late-Life FDI scales with scores from the fixed-form instrument. Score estimates from the prototype CAT versus the original instrument were compared in a sample of 40 older adults. Results Distinct function and disability domains were identified within the Late-Life FDI item bank and used to construct two prototype CAT scales. Using retrospective data, scores from computer simulations of the prototype CAT scales were highly correlated with scores from the original instrument. The results of computer simulation, accuracy, precision, and sensitivity to change of the CATs closely approximated those of the fixed-form scales, especially for the 10- or 15-item CAT versions. In the prospective study each CAT was administered in less than 3 minutes and CAT scores were highly correlated with scores generated from the original instrument. Conclusions CAT scores of the Late-Life FDI were highly comparable to those obtained from the full-length instrument with a small loss in accuracy, precision, and sensitivity to change. PMID:19038841
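The core CAT loop described in this and the companion naming-test record is compact: under an item response theory model such as the two-parameter logistic (2PL), each next item is chosen to maximize Fisher information at the current ability estimate, and the estimate is updated after each response. A minimal Python sketch with made-up item parameters (not the Late-Life FDI item bank):

```python
import numpy as np

# Hypothetical 2PL item bank: discrimination a, difficulty b.
a = np.array([1.2, 0.8, 1.5, 1.0, 1.7])
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)  # 2PL item information at ability theta

theta = 0.0                       # provisional ability estimate
administered, responses = [], []
for _ in range(3):                # 3-item CAT for illustration
    info = fisher_info(theta, a, b)
    info[administered] = -np.inf  # never repeat an item
    item = int(np.argmax(info))   # most informative item at current theta
    administered.append(item)
    responses.append(1)           # stub: a real CAT records the examinee's answer
    # One Newton step on the log-likelihood to update theta.
    aa, bb = a[administered], b[administered]
    p = p_correct(theta, aa, bb)
    grad = np.sum(aa * (np.array(responses) - p))
    hess = -np.sum(aa**2 * p * (1 - p))
    theta -= grad / hess
print(administered, theta)
```

Stopping when the information (hence score precision) reaches a target is what lets a 10- or 15-item CAT approximate the full fixed-form instrument.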
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Verrière, M.; Schunck, N.
2018-04-01
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present the version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
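The switch from implicit Crank-Nicolson to an explicit Krylov approximation of the propagator is a standard numerical technique: instead of solving a linear system at each step, one applies exp(-iHΔt) to the current state within a small Krylov subspace. A toy Python sketch under stated assumptions (SciPy's expm_multiply plays the role of the Krylov propagator; the Hamiltonian below is a made-up 1D finite-difference operator, not a TDGCM+GOA collective Hamiltonian):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Toy 1D Hamiltonian: kinetic term by central differences (hbar = m = 1).
n, dx, dt = 200, 0.1, 0.005
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
H = (-0.5 * lap).tocsc()  # a collective potential/metric term would be added here

# Gaussian wave packet as the initial state.
x = dx * np.arange(n)
psi = np.exp(-0.5 * ((x - 10.0) / 0.5) ** 2 + 2j * x)
psi /= np.linalg.norm(psi)

# Each step applies psi(t + dt) = exp(-i H dt) psi(t) in a Krylov subspace,
# without ever forming the dense matrix exponential.
for _ in range(100):
    psi = expm_multiply(-1j * dt * H, psi)

print(np.linalg.norm(psi))  # evolution is unitary: the norm stays ~1
```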
Documentation for the machine-readable version of the SAO-HD-GC-DM cross index version 1983
NASA Technical Reports Server (NTRS)
Roman, N. G.; Warren, W. H., Jr.; Schofield, N., Jr.
1983-01-01
An updated and extended machine-readable version of the Smithsonian Astrophysical Observatory star catalog (SAO) is described. Corrections are included for all errors found since preparation of the original catalog, which resulted from misidentifications, omissions of components in multiple star systems, and missing Durchmusterung numbers (the common identifier) in the SAO Catalog. Component identifications from the Index of Visual Double Stars (IDS) are appended to all multiple SAO entries with the same DM numbers, and lower-case letter identifiers for supplemental BD stars are added. A total of 11,398 individual corrections and data additions is incorporated into the present version of the cross index.
A 1D radiative transfer benchmark with polarization via doubling and adding
NASA Astrophysics Data System (ADS)
Ganapol, B. D.
2017-11-01
Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes-vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding, enhanced through convergence acceleration. We update benchmark solutions found in the literature to seven places for reflectance and transmittance, as well as for angular flux. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.
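The doubling idea is easy to state for the scalar (unpolarized) analogue: given the reflection r and transmission t of an optically thin layer, the values for a layer of twice the thickness follow from summing the geometric series of inter-layer bounces; repeating the step n times yields a slab 2^n times thicker. A toy scalar Python sketch (the paper's benchmark uses 2-Stokes vector operators and convergence acceleration, which this does not attempt):

```python
# Scalar doubling-and-adding toy: combine two identical homogeneous layers.
def double_layer(r, t):
    """Reflection/transmission of two stacked identical layers.

    The 1/(1 - r*r) factor sums the infinite series of multiple
    reflections trapped between the two layers."""
    denom = 1.0 - r * r
    r2 = r + t * r * t / denom
    t2 = t * t / denom
    return r2, t2

# Start from a very thin layer (single-scattering values, made up here).
r, t = 1.0e-4, 1.0 - 2.0e-4
for _ in range(20):          # doubling 20 times -> a 2**20 times thicker slab
    r, t = double_layer(r, t)
print(r, t, r + t)           # with absorption present, r + t stays below 1
```

The full polarized version replaces r and t by matrices acting on discretized Stokes vectors, with matrix inverses in place of the scalar denominator.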
Elliptic Solvers for Mediterranean Sea Ocean Modeling,
1984-05-01
[Abstract not recoverable: the OCR text contains only fragments of FORTRAN source, including PARAMETER statements sizing work arrays (KWSP, NX, NY, KQ), DOUBLE PRECISION declarations (AX, AY, AC, WQ), COMMON blocks, and comments on maximum array sizes for restart data.]
Rakhman, A.; Hafez, Mohamed A.; Nanda, Sirish K.; ...
2016-03-31
Here, a high-finesse Fabry-Perot cavity with a frequency-doubled continuous-wave green laser (532 nm) has been built and installed in Hall A of Jefferson Lab for high-precision Compton polarimetry. The infrared (1064 nm) beam from a ytterbium-doped fiber amplifier seeded by a Nd:YAG nonplanar ring oscillator laser is frequency doubled in a single-pass periodically poled MgO:LiNbO3 crystal. The maximum achieved green power at 5 W infrared pump power is 1.74 W, with a total conversion efficiency of 34.8%. The green beam is injected into the optical resonant cavity and enhanced up to 3.7 kW, with a corresponding enhancement factor of 3800. The polarization transfer function has been measured in order to determine the intra-cavity circular laser polarization within a measurement uncertainty of 0.7%. The PREx experiment at Jefferson Lab used this system for the first time and achieved 1.0% precision in polarization measurements of an electron beam with energy and current of 1.0 GeV and 50 μA.
Wearable Platform for Real-time Monitoring of Sodium in Sweat.
McCaul, Margaret; Porter, Adam; Barrett, Ruairi; White, Paddy; Stroiescu, Florien; Wallace, Gordon; Diamond, Dermot
2018-06-19
A fully integrated and wearable platform for harvesting and analysing sweat sodium concentration in real time during exercise has been developed and tested. The platform was largely produced using 3D printing, which greatly simplifies fabrication and operation compared to previous versions generated with traditional production techniques. The 3D-printed platform doubles the capacity of the sample storage reservoir to about 1.3 ml, reduces the assembly time, and provides simple and precise component alignment and contact of the integrated solid-state ion-selective and reference electrodes with the sorbent material. The sampling flowrate in the device can be controlled by introducing threads to enhance wicking of sweat from the skin, across the electrodes, to the storage area. The platform was characterised in the lab and in exercise trials over a period of about 60 minutes of continuous monitoring. Sweat sodium concentration was found to rise initially to approximately 17 mM and decline gradually over the period of the trial to about 11-12 mM. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Miao, Yipu; Merz, Kenneth M
2015-04-14
We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphics processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivatives of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and generation of NVIDIA GPUs with a previously described algorithm [Miao and Merz, J. Chem. Theory Comput. 2013, 9, 965-976]. Hence, we developed an algorithm to compute f-type ERIs and d-type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI-derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications.
A new technique for radiographic measurement of acetabular cup orientation.
Derbyshire, Brian; Diggle, Peter J; Ingham, Christopher J; Macnair, Rory; Wimhurst, James; Jones, Henry Wynn
2014-02-01
Accurate radiographic measurement of acetabular cup orientation is required in order to assess susceptibility to impingement, dislocation, and edge loading wear. In this study, the accuracy and precision of a new radiographic cup orientation measurement system were assessed and compared to those of two commercially available systems. Two types of resurfacing hip prostheses and an uncemented prosthesis were assessed. Radiographic images of each prosthesis were created with the cup set at different, known angles of version and inclination in a measurement jig. The new system was the most accurate and precise and could repeatedly measure version and inclination to within a fraction of a degree. In addition it has a facility to distinguish cup retroversion from anteversion on anteroposterior radiographs.
A precision device needs precise simulation: Software description of the CBM Silicon Tracking System
NASA Astrophysics Data System (ADS)
Malygina, Hanna; Friese, Volker;
2017-10-01
Precise modelling of detectors in simulations is the key to the understanding of their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report, we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS makes use of double-sided silicon micro-strip sensors with double metal layers. We present a description of transport and detector response simulation, including all relevant physical effects like charge creation and drift, charge collection, cross-talk and digitization. Of particular importance and novelty is the description of the time behaviour of the detector, since its readout will not be externally triggered but continuous. We also cover some aspects of local reconstruction, which in the CBM case has to be performed in real time and thus requires high-speed algorithms.
1983-11-01
compound operations, with status. (h) Pre-programmed CRC and double-precision multiply/divide algorithms. (i) Double-length accumulator with full... [The remainder of this record is microfilm resolution test-chart residue; no further text is recoverable.]
1982-10-27
are buried within a much larger, special-purpose package. We regret such omissions, but to have reached the practitioners in each of the diverse... sparse matrix (form PAQ). 4. Method of solution: Distribution count sort. 5. Programming language: FORTRAN. 6. Precision: Single and double precision. 7...
Measurement of theta13 in the double Chooz experiment
NASA Astrophysics Data System (ADS)
Yang, Guang
Neutrino oscillation has been established for over a decade. The mixing angle theta13 is one of the parameters that is most difficult to measure due to its small value. Currently, reactor antineutrino experiments provide the best knowledge of theta13, using the electron antineutrino disappearance phenomenon. Their most compelling advantage is the high intensity of the reactor antineutrino rate. The Double Chooz experiment, located on the border of France and Belgium, is such an experiment, which aims to make one of the most precise theta13 measurements in the world. Double Chooz has a single-detector phase and a double-detector phase. For the single-detector phase, the limit on the theta13 sensitivity comes mostly from the reactor flux. However, the uncertainty on the reactor flux is highly suppressed in the double-detector phase. Oscillation analyses for the two phases have different strategies but need similar inputs, including background estimation, detection systematics evaluation, energy reconstruction and so on. The Double Chooz detectors are filled with gadolinium (Gd) doped liquid scintillator and use the inverse beta decay (IBD) signal, so that for each phase there are two independent theta13 measurements based on different neutron-capture channels (Gd or hydrogen). Multiple oscillation analyses are performed to provide the best theta13 results. In addition to the theta13 measurement, Double Chooz is also an excellent "playground" for diverse physics research. For example, a 252Cf calibration source study has been done to understand the spontaneous decay of this radioactive source. Further, Double Chooz also has the ability to perform a sterile neutrino search in a certain mass region. Moreover, some new physics ideas can be tested in Double Chooz. In this thesis, the detailed methods used to provide a precise theta13 measurement will be described and the other physics topics will be introduced.
NASA Astrophysics Data System (ADS)
Hełminiak, K. G.; Konacki, M.; Muterspaugh, M. W.; Browne, S. E.; Howard, A. W.; Kulkarni, S. R.
2012-01-01
We present the most precise orbital and physical parameters to date of the well-known short-period (P = 5.975 d), eccentric (e = 0.3) double-lined spectroscopic binary BY Draconis (BY Dra), a prototype of a class of late-type, active, spotted flare stars. We calculate the full spectroscopic/astrometric orbital solution by combining our precise radial velocities (RVs) and the archival astrometric measurements from the Palomar Testbed Interferometer (PTI). The RVs were derived from high-resolution echelle spectra taken between 2004 and 2008 with the Keck I/high-resolution echelle spectrograph, Shane/CAT/HamSpec and TNG/SARG telescopes/spectrographs, using our novel iodine-cell technique for double-lined binary stars. The RVs and available PTI astrometric data spanning over eight years allow us to reach the 0.2-0.5 per cent level of precision in M sin³i and the parallax, but the geometry of the orbit (i ≃ 154°) limits the absolute mass precision to 3.3 per cent, which is still an order of magnitude better than in previous studies. We compare our results with a set of Yonsei-Yale theoretical stellar isochrones and conclude that BY Dra is probably a main-sequence system more metal-rich than the Sun. Using the orbital inclination and the available rotational velocities of the components, we also conclude that the rotational axes of the components are likely misaligned with the orbital angular momentum. Given BY Dra's main-sequence status, late spectral type and relatively short orbital period, its high orbital eccentricity and probable spin-orbit misalignment are not in agreement with tidal theory. This disagreement may possibly be explained by smaller rotational velocities of the components and the presence of a substellar-mass companion to BY Dra AB.
Precision measurements of the RSA method using a phantom model of hip prosthesis.
Mäkinen, Tatu J; Koort, Jyri K; Mattila, Kimmo T; Aro, Hannu T
2004-04-01
Radiostereometric analysis (RSA) has become one of the recommended techniques for pre-market evaluation of new joint implant designs. In this study we evaluated the effect of repositioning of the X-ray tubes and the phantom model on the precision of the RSA method. In the precision measurements, we utilized mean error of rigid-body fitting (ME) values as an internal control for the examinations. The ME value characterizes relative motion among the markers within each rigid body and is conventionally used to detect loosening of a bone marker. Three experiments, each consisting of 10 double examinations, were performed. In the first experiment, the X-ray tubes and the phantom model were not repositioned between the two exposures of a double examination. In experiments two and three, the X-ray tubes were repositioned between the two exposures of each double examination. In addition, the position of the phantom model was changed in experiment three. Results showed significant differences in 2 of 12 comparisons when evaluating the translation and rotation of the prosthetic components. Repositioning procedures increased ME values, mimicking deformation of the rigid-body segments. Thus, the ME value seemed to be a more sensitive parameter than the migration values in this study design. These results confirm the importance of a standardized radiographic technique and accurate patient positioning for RSA measurements. Standardization and calibration procedures should be performed with phantom models in order to avoid unnecessary radiation dose to the patients. The present model provides the means to establish and follow the intra-laboratory precision of the RSA method. The model is easily applicable in any research unit and allows comparison of precision values among the laboratories of multi-center trials.
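The ME statistic is essentially the root-mean-square residual left after the best rigid-body (rotation plus translation) fit between the marker coordinates of two examinations; rising ME under repositioning mimics apparent deformation of a rigid segment. A minimal Kabsch-style Python sketch with synthetic marker data (not the RSA software's exact estimator):

```python
import numpy as np

def rigid_body_me(X, Y):
    """RMS residual after the best rigid-body fit mapping X onto Y (Kabsch)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # forbid reflections
    R = (U @ D @ Vt).T
    resid = Yc - Xc @ R.T
    return np.sqrt((resid**2).sum(axis=1).mean())

# Synthetic markers: the same rigid body seen twice, with 0.05 mm marker noise.
rng = np.random.default_rng(1)
X = rng.uniform(-20, 20, size=(6, 3))
th = np.deg2rad(3.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0],
               [np.sin(th),  np.cos(th), 0],
               [0, 0, 1]])
Y = X @ Rz.T + np.array([1.0, 0.5, -0.2]) + rng.normal(0, 0.05, X.shape)
print(rigid_body_me(X, Y))  # ~0.05: measurement noise, not true deformation
```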
Development of a Japanese version of the emotional skills and competence questionnaire.
Toyota, Hiroshi; Morita, Taisuke; Taksic, Vladimir
2007-10-01
The present study describes the development of a Japanese version of the Emotional Skills and Competence Questionnaire and examines the relations of its scores with those on Big Five personality scales and self-esteem scales. The participants were 615 undergraduates. Factor analysis led to a shortened version of 24 items in three subscales. Although Cronbach alpha was low for the subscale Manage and Regulate Emotion, values were satisfactory for the other two subscales, Express and Label Emotion and Perceive and Understand Emotion. Total scores on this version were positively correlated with scores for self-esteem, Extraversion, and Openness but negatively correlated with scores on Neuroticism. This shortened Japanese version shows suitable internal consistency and content validity, but other aspects of reliability and validity remain to be examined more precisely.
Methods for semi-automated indexing for high precision information retrieval.
Berrios, Daniel C; Cucina, Russell J; Fagan, Lawrence M
2002-01-01
To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. Compared with manual methods, ISAID decreased indexing times greatly. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65%, with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in the three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.
STT Doubles with Large Delta M - Part VII: Andromeda, Pisces, Auriga
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2017-01-01
The results of visual double star observing sessions suggested that STT doubles with large delta M tend to be harder to resolve than the WDS catalog data would lead one to expect. It was felt this might be a problem with expectations on one hand, and on the other might indicate a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. As with the other objects covered so far, several of the components show parameters quite different from the current WDS data.
Influence of double stimulation on sound-localization behavior in barn owls.
Kettler, Lutz; Wagner, Hermann
2014-12-01
Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial NVIDIA GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
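The heart of such an evaluator, Newton iteration on Kepler's equation E - e sin E = M, is straightforward to sketch, as is the compensated (Kahan) summation used to retain accuracy in single-precision reductions. A NumPy sketch with float32/float64 standing in for GPU single/double arithmetic (an illustration of the techniques named in the abstract, not the authors' CUDA code):

```python
import numpy as np

def kepler_E(M, e, n_iter=10):
    """Solve E - e*sin(E) = M by Newton iteration, vectorized over M."""
    E = M + e * np.sin(M)               # decent starting guess for moderate e
    for _ in range(n_iter):
        E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def kahan_sum(values):
    """Compensated summation in float32: recovers much of double's accuracy."""
    s = np.float32(0.0); c = np.float32(0.0)
    for v in values.astype(np.float32):
        y = v - c
        t = s + y
        c = (t - s) - y                  # low-order bits lost when forming t
        s = t
    return s

M = np.linspace(0.0, 2.0 * np.pi, 10_000)
e = 0.3
E64 = kepler_E(M.astype(np.float64), e)
E32 = kepler_E(M.astype(np.float32), np.float32(e))
print(np.abs(E64 - E32).max())          # single vs double discrepancy

resid = (E64 - e * np.sin(E64) - M) ** 2
# Compensated float32 reduction vs naive float32 vs float64 reference.
print(kahan_sum(resid), resid.astype(np.float32).sum(), resid.sum())
```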
The Double Star Orbit Initial Value Problem
NASA Astrophysics Data System (ADS)
Hensley, Hagan
2018-04-01
Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator with extensive support for animations and user-defined functions. It can provide a useful visual means of analyzing double star data to arrive at a best-guess approximation of the orbital solution. This is a necessary prerequisite to using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Mike; Schlachter, Jeremy; Yoshii, Kazutomo
Abstract: Energy and power consumption are major limitations to continued scaling of computing systems. Inexactness, where the quality of the solution can be traded for energy savings, has been proposed as a counterintuitive approach to overcoming those limitations. In the past, however, inexactness necessitated highly customized or specialized hardware. In order to move away from customization, earlier work [4] showed that by interpreting precision in the computation as the parameter to trade for inexactness, weather prediction and page rank could both yield energy savings through reduced precision while preserving the quality of the application. However, this required representations of numbers that were not readily available on commercial off-the-shelf (COTS) processors. In this paper, we provide opportunities for extending the notion of trading precision for energy savings into the COTS world. We provide a model and analyze the opportunities and behavior of all three IEEE-compliant precision values available on COTS processors: (i) double, (ii) single, and (iii) half. Through measurements, we show in a limit study that the energy savings in going from double to half precision can potentially exceed a factor of four, largely due to memory and cache effects.
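At the programming level, the precision-for-energy trade on COTS hardware reduces to choosing among the IEEE half, single, and double types; each narrower type halves the memory traffic, which is where the memory and cache effects come from. A small NumPy sketch of the accuracy side of that trade (energy itself must be measured with RAPL or similar hardware counters, which this sketch does not do):

```python
import numpy as np

# One representative kernel: a large dot product in three IEEE precisions.
rng = np.random.default_rng(42)
x64 = rng.standard_normal(1_000_000)
y64 = rng.standard_normal(1_000_000)

reference = np.dot(x64, y64)  # float64 result taken as ground truth

for dtype in (np.float16, np.float32, np.float64):
    x, y = x64.astype(dtype), y64.astype(dtype)
    err = abs(float(np.dot(x, y)) - reference)
    # nbytes shows the memory-traffic side of the trade: half the bytes
    # per step each time the precision is narrowed.
    print(f"{np.dtype(dtype).name}: bytes={x.nbytes:>8}, |error|={err:.3e}")
```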
High Precision Material Study at Near Millimeter Wavelengths.
1983-08-30
[The abstract text is OCR residue mixing fragments of the report body and figure captions: beams propagating through circular plexiglass tubes are expanded in free space and combined by a mylar-film beam splitter; the attenuation of the low-loss EH mode is measured in tubes of 0.95 cm I.D. and various lengths; detection uses pyroelectric detectors (Laser Precision Rkp-5200 and Rkp-545), TPX lenses, a wire-mesh beam splitter, a mylar-film beam splitter, and a double-prism coupler. No further text is recoverable.]
McPherson, Malcolm J.; Bellman, Robert A.
1984-01-01
A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.
NASA Astrophysics Data System (ADS)
Goerigk, Lars; Grimme, Stefan
2010-05-01
We present an extension of our previously published benchmark set for low-lying valence transitions of large organic dyes [L. Goerigk et al., Phys. Chem. Chem. Phys. 11, 4611 (2009)]. The new set comprises in total 12 molecules, including two charged species and one with a clear charge-transfer transition. Our previous study on TD-DFT methods is repeated for the new test set with a larger basis set. Additionally, we want to shed light on different spin-scaled variants of the configuration interaction singles with perturbative doubles correction [CIS(D)] and the approximate coupled cluster singles and doubles method (CC2). Particularly for CIS(D) we want to clarify which of the proposed versions can be recommended. Our results indicate that an unpublished SCS-CIS(D) variant, which is implemented in the TURBOMOLE program package, shows worse results than the original CIS(D) method, while other modified versions perform better. An SCS-CIS(D) version with a parameterization that has already been used in an application by us recently [L. Goerigk and S. Grimme, ChemPhysChem 9, 2467 (2008)] yields the best results. Another SCS-CIS(D) version and the SOS-CIS(D) method [Y. M. Rhee and M. Head-Gordon, J. Phys. Chem. A 111, 5314 (2007)] perform very similarly, though. For the electronic transitions considered herein, there is no improvement observed when going from the original CC2 to the SCS-CC2 method, but further adjustment of the latter seems to be beneficial. Double-hybrid density functionals are among the best methods tested here. Particularly B2GP-PLYP provides uniformly good results for the complete set and is considered to be close to chemical accuracy within an ab initio theory of color. For conventional hybrid functionals, a Fock-exchange mixing parameter of about 0.4 seems to be optimal in TD-DFT treatments of large chromophores. A range-separated functional such as, e.g., CAM-B3LYP also seems promising.
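The spin-scaled variants compared here all share one structural idea: the doubles-type correlation contribution (the (D) correction in CIS(D), the doubles amplitudes in CC2) is split into opposite-spin (OS) and same-spin (SS) parts that are scaled separately, and SOS-type methods drop the same-spin part entirely. Schematically (the coefficients shown are the familiar SCS-MP2 values, given for orientation only; the CIS(D) and CC2 variants in the paper use their own parameterizations):

```latex
E_{\mathrm{corr}}^{\mathrm{SCS}}
  \;=\; c_{\mathrm{OS}}\, E_{\mathrm{OS}} \;+\; c_{\mathrm{SS}}\, E_{\mathrm{SS}},
\qquad
\text{e.g. } c_{\mathrm{OS}} = \tfrac{6}{5},\;\; c_{\mathrm{SS}} = \tfrac{1}{3}
\;\;\text{(SCS-MP2)},
\qquad
\text{SOS variants: } c_{\mathrm{SS}} = 0 .
```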
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, J.C.
1994-08-01
The Type B drum packages (TBD) are conceptualized as a family of containers in which a single 208 L or 114 L (55 gal or 30 gal) drum containing Type B quantities of radioactive material (RAM) can be packaged for shipment. The TBD containers are being developed to fill a void in the packaging and transportation capabilities of the U.S. Department of Energy, as no existing container for packaging single drums of Type B RAM offers double containment. Several multiple-drum containers currently exist, as well as a number of shielded casks, but the size and weight of these containers present many operational challenges for single-drum shipments. As an alternative, the TBD containers will offer up to three shielded versions (light, medium, and heavy) and one unshielded version, each offering single or optional double containment for a single drum. To reduce operational complexity, all versions will share similar design and operational features where possible. The primary users of the TBD containers are envisioned to be any organizations desiring to ship single drums of Type B RAM, such as laboratories, waste retrieval activities, emergency response teams, etc. Currently, the TBD conceptual design is being developed, with the final design and analysis to be completed in 1995 to 1996. Testing and certification of the unshielded version are planned to be completed in 1996 to 1997, with production to begin in 1997 to 1998.
Washington Double Star Catalog Cross Index (1950 position sort)
NASA Technical Reports Server (NTRS)
1993-01-01
A machine-readable version of the Washington Catalog of Visual Double Stars (WDS) was prepared in 1984 on the basis of a data file that was collected and maintained for more than a century by a succession of double-star observers. Although this catalog is being continually updated, a new copy for distribution is not expected to be available for a few years. The WDS contains DM numbers, but many of these are listed only in the notes, which makes it difficult to search for double-star information, except by position. Hence, a cross index that provides complete DM identifications is desirable, and it appears useful to add HD numbers for systems in that catalog. Aitken Double Star (ADS) numbers were retained from the WDS, but no attempt was made to correct these except for obvious errors.
A FORTRAN version implementation of block adjustment of CCD frames and its preliminary application
NASA Astrophysics Data System (ADS)
Yu, Y.; Tang, Z.-H.; Li, J.-L.; Zhao, M.
2005-09-01
A FORTRAN implementation of the block adjustment (BA) of overlapping CCD frames is developed and its flowchart is shown. The program is preliminarily applied to obtain the optical positions of four extragalactic radio sources. The results show that, because of the increase in the number and sky coverage of reference stars, the precision of optical positions with BA is improved compared with single-CCD-frame adjustment.
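The gain from block adjustment comes from solving the plate solutions of all overlapping frames in one least-squares system, so that stars measured on several frames tie the frames together. A toy Python sketch of the stacked solve for two frames with linear plate constants (hypothetical coordinates; in this reduced toy each frame's block is still independent, whereas full BA also treats the positions of non-catalog field stars as shared unknowns, which couples the blocks):

```python
import numpy as np

# Hypothetical measured pixel coords of 3 reference stars on two frames,
# plus their catalog tangent-plane positions (arbitrary units).
xy_f1 = np.array([[10.0, 12.0], [40.0, 55.0], [80.0, 20.0]])
xy_f2 = np.array([[ 5.0, 30.0], [35.0, 73.0], [75.0, 38.0]])
cat   = np.array([[ 1.0,  1.2], [ 4.0,  5.5], [ 8.0,  2.0]])

def plate_rows(xy):
    """Design-matrix rows for a linear plate model: xi = a*x + b*y + c."""
    return np.hstack([xy, np.ones((len(xy), 1))])

# Stacked system: 6 unknowns (a, b, c for each frame), all stars at once.
A = np.block([
    [plate_rows(xy_f1), np.zeros((3, 3))],
    [np.zeros((3, 3)), plate_rows(xy_f2)],
])
rhs = np.concatenate([cat[:, 0], cat[:, 0]])   # xi equations; eta is analogous
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(sol[:3], sol[3:])  # plate constants (a, b, c) for frame 1 and frame 2
```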
Kim, Soon Hee; Yeon, Yeung Kyu; Lee, Jung Min; Chao, Janet Ren; Lee, Young Jin; Seo, Ye Been; Sultan, Md Tipu; Lee, Ok Joo; Lee, Ji Seung; Yoon, Sung-Il; Hong, In-Sun; Khang, Gilson; Lee, Sang Jin; Yoo, James J; Park, Chan Hum
2018-06-11
The original version of this Article contained errors in Figs. 5 and 6. In Fig. 5b, the second panel on the bottom row was stretched out of proportion. In Fig. 6d, the first panel was also stretched out of proportion. In Fig. 6f, the fifth panel inadvertently repeated the fourth. This has been corrected in both the PDF and HTML versions of the Article.
STT Doubles with Large Delta_M - Part VIII: Tau Per Ori Cam Mon Cnc Peg
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2017-04-01
The results of visual double star observing sessions suggested a pattern for STT doubles with large delta_M of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. Again, as for the other STT objects covered so far, several of the components show parameters quite different from the current WDS data.
Fabrication of large diffractive optical elements in thick film on a concave lens surface.
Xie, Yongjun; Lu, Zhenwu; Li, Fengyou
2003-05-05
We demonstrate experimentally the technique of fabricating large diffractive optical elements (DOEs) in thick film on a concave lens surface (mirrors) with precise alignment by using the strategy of double exposure. We adopt the method of double exposure to overcome the difficulty of processing thick photoresist on a large curved substrate. A uniform thick film with arbitrary thickness on a concave lens can be obtained with this technique. We fabricate a large concentric circular grating with a 10-μm period on a concave lens surface in film with a thickness of 2.0 μm after development. It is believed that this technique can also be used to fabricate larger DOEs in thicker film on the concave or convex lens surface with precise alignment. There are other potential applications of this technique, such as fabrication of micro-optoelectromechanical systems (MOEMS) or microelectromechanical systems (MEMS) and fabrication of microlens arrays on a large concave lens surface or convex lens surface with precise alignment.
Precision Measurements of A₁ⁿ in the Deep Inelastic Regime
Parno, Diana; Flay, David; Posik, Matthew; ...
2015-04-07
We have performed precision measurements of the double-spin virtual-photon asymmetry A₁ on the neutron in the deep inelastic scattering regime, using an open-geometry, large-acceptance spectrometer and a longitudinally and transversely polarized ³He target. Our data cover a wide kinematic range 0.277 ≤ x ≤ 0.548 at an average Q² value of 3.078 (GeV/c)², doubling the available high-precision neutron data in this x range. We have combined our results with world data on proton targets to make a leading-order extraction of the ratio of polarized-to-unpolarized parton distribution functions for up quarks and for down quarks in the same kinematic range. Our data are consistent with a previous observation of an A₁ⁿ zero crossing near x = 0.5. We find no evidence of a transition to a positive slope in (Δd+Δd̄)/(d+d̄) up to x = 0.548.
NASA Astrophysics Data System (ADS)
Shui, Tao; Yang, Wen-Xing; Chen, Ai-Xi; Liu, Shaopeng; Li, Ling; Zhu, Zhonghu
2018-03-01
We propose a scheme for high-precision two-dimensional (2D) atom localization via four-wave mixing (FWM) in a four-level double-Λ atomic system. Due to the position-dependent atom-field interaction, the 2D position information of the atoms can be directly determined by measuring the normalized light intensity of the output FWM-generated field. We further show that, when the position-dependent generated FWM field becomes sufficiently intense, efficient back-coupling to the FWM generating state becomes important. This back-coupling pathway leads to competitive multiphoton destructive interference of the FWM generating state by three supplied and one internally generated fields. We find that the precision of 2D atom localization can be improved significantly by the multiphoton destructive interference and depends sensitively on the frequency detunings and the pump field intensity. Interestingly, we show that adjusting the frequency detunings and the pump field intensity can significantly modify the FWM efficiency and consequently lead to a redistribution of the atoms. As a result, the atom can be localized in one of the four quadrants while maintaining the precision of atom localization.
SU(2) lattice gauge theory simulations on Fermi GPUs
NASA Astrophysics Data System (ADS)
Cardoso, Nuno; Bicudo, Pedro
2011-05-01
In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, NVIDIA's Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent performance of a 200× speed-up over one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.
2018-05-01
Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.
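The Bayesian software the authors rely on is not reproduced here; as a rough illustration of why enforcing chronological ordering matters, the Python sketch below Monte Carlo samples a set of dated levels (all depths, ages, and errors are invented) and interpolates them linearly, with and without rejecting age reversals. The constrained ensemble yields narrower, more realistic intervals.

    import numpy as np

    rng = np.random.default_rng(0)
    depth = np.array([20.0, 80.0, 150.0, 240.0])      # cm, dated levels
    age = np.array([1100.0, 1500.0, 1900.0, 2600.0])  # cal yr BP (invented)
    err = np.array([300.0, 350.0, 300.0, 400.0])      # 1-sigma dating errors
    query = np.linspace(20.0, 240.0, 45)              # depths to be dated

    def simulate(enforce_order, n=5000):
        draws = []
        while len(draws) < n:
            a = rng.normal(age, err)                  # sample the dates
            if enforce_order and np.any(np.diff(a) <= 0.0):
                continue                              # reject age reversals
            draws.append(np.interp(query, depth, a))  # linear age-depth model
        return np.array(draws)

    for label, flag in [("unconstrained", False), ("ordering enforced", True)]:
        ens = simulate(flag)
        width = np.percentile(ens, 97.5, 0) - np.percentile(ens, 2.5, 0)
        print(f"{label}: mean 95% interval width = {width.mean():.0f} yr")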
[The Confusion Assessment Method: Transcultural adaptation of a French version].
Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B
2018-05-01
The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs to diagnose delirium and assess its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was performed to achieve equivalence between the original version and a French adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians, and methodologists) and pretesting. A consensual French version of the CAM was achieved. Implementation of the CAM French version in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries.
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
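The three-tier system described above is not reproduced here; as a minimal sketch of the reduced-precision idea, the following Python snippet integrates the standard single-tier Lorenz '96 model in float64 and again with states and tendencies cast to float16 each step. The forcing, step size, and run length are illustrative only.

    import numpy as np

    def l96_tendency(x, forcing=8.0):
        # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def integrate(x0, steps, dt=0.005, dtype=np.float64):
        x = x0.astype(dtype)
        for _ in range(steps):
            # forward Euler, tendency evaluated in the chosen precision
            x = (x + dt * l96_tendency(x).astype(dtype)).astype(dtype)
        return x.astype(np.float64)

    rng = np.random.default_rng(1)
    x0 = 8.0 + rng.standard_normal(40)
    truth = integrate(x0, 400)                      # float64 reference
    half = integrate(x0, 400, dtype=np.float16)     # emulated half precision
    # in this chaotic system the precision-induced difference grows with lead time
    print("RMS difference (fp16 vs fp64):", np.sqrt(np.mean((half - truth)**2)))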
Salomon-Ferrer, Romelia; Götz, Andreas W; Poole, Duncan; Le Grand, Scott; Walker, Ross C
2013-09-10
We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Götz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.
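The SPFP model itself is specified in the cited Le Grand et al. paper; the toy Python sketch below illustrates only the underlying idea of fixed-point accumulation: terms computed in single precision are summed as scaled 64-bit integers, so the total is exact and independent of summation order. The scale factor here is an assumption for this sketch, not the one AMBER uses.

    import numpy as np

    SCALE = 2**24  # fixed-point scale (illustrative assumption)

    def fixed_point_sum(values_fp32):
        acc = np.int64(0)
        for v in values_fp32:
            acc += np.int64(np.rint(float(v) * SCALE))  # exact integer adds
        return acc / SCALE

    rng = np.random.default_rng(2)
    forces = rng.standard_normal(100000).astype(np.float32) * 1e-3
    a = fixed_point_sum(forces)
    b = fixed_point_sum(forces[::-1])      # different summation order
    print(a == b)                          # integer accumulation: identical
    print(abs(a - float(np.sum(forces, dtype=np.float64))))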
Fabrication and Assembly of High-Precision Hinge and Latch Joints for Deployable Optical Instruments
NASA Technical Reports Server (NTRS)
Phelps, James E.
1999-01-01
Descriptions are presented of high-precision hinge and latch joints that have been co-developed, for application to deployable optical instruments, by NASA Langley Research Center and Nyma/ADF. Page-sized versions of engineering drawings are included in two appendices to describe all mechanical components of both joints. Procedures for assembling the mechanical components of both joints are also presented. The information herein is intended to facilitate the fabrication and assembly of the high-precision hinge and latch joints, and enable the incorporation of these joints into the design of deployable optical instrument systems.
A neural-network-based approach to the double traveling salesman problem.
Plebe, Alessio; Anile, Angelo Marcello
2002-02-01
The double traveling salesman problem is a variation of the basic traveling salesman problem where targets can be reached by two salespersons operating in parallel. The real problem addressed by this work concerns the optimization of the harvest sequence for the two independent arms of a fruit-harvesting robot. This application poses further constraints, like a collision-avoidance function. The proposed solution is based on a self-organizing map structure, initialized with as many artificial neurons as the number of targets to be reached. One of the key components of the process is the combination of competitive relaxation with a mechanism for deleting and creating artificial neurons. Moreover, in the competitive relaxation process, information about the trajectory connecting the neurons is combined with the distance of neurons from the target. This strategy prevents tangles in the trajectory and collisions between the two tours. Results of tests indicate that the proposed approach is efficient and reliable for harvest sequence planning. Moreover, the enhancements added to the pure self-organizing map concept are of wider importance, as proved by a traveling salesman problem version of the program, simplified from the double version for comparison.
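The authors' double-salesman network, with collision avoidance and neuron creation/deletion, is more elaborate than anything shown here; the following Python sketch implements only the basic single-ring self-organizing map for a plain TSP, with invented cities and decay schedules, to make the competitive-relaxation mechanism concrete.

    import numpy as np

    rng = np.random.default_rng(3)
    cities = rng.random((30, 2))
    n_neurons = 60                               # ring of neurons
    ring = np.linspace(0, 2*np.pi, n_neurons, endpoint=False)
    neurons = 0.5 + 0.3*np.column_stack([np.cos(ring), np.sin(ring)])

    lr, radius = 0.8, n_neurons / 4.0
    for it in range(8000):
        city = cities[rng.integers(len(cities))]
        winner = np.argmin(np.linalg.norm(neurons - city, axis=1))
        dist = np.abs(np.arange(n_neurons) - winner)
        dist = np.minimum(dist, n_neurons - dist)         # ring topology
        h = np.exp(-(dist**2) / (2*radius**2))            # neighborhood
        neurons += lr * h[:, None] * (city - neurons)     # competitive update
        lr *= 0.9997
        radius = max(radius * 0.9997, 1.0)                # decay schedules

    # read the tour off the ring: order cities by their winning neuron
    order = np.argsort([np.argmin(np.linalg.norm(neurons - c, axis=1))
                        for c in cities])
    tour = cities[order]
    length = np.sum(np.linalg.norm(
        np.diff(np.vstack([tour, tour[:1]]), axis=0), axis=1))
    print("tour length:", round(length, 3))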
Machining of Silicon-Ribbon-Forming Dies
NASA Technical Reports Server (NTRS)
Menna, A. A.
1985-01-01
Carbon extension for dies used in forming silicon ribbon crystals machined precisely with help of special tool. Die extension has edges beveled toward narrow flats at top, with slot precisely oriented and centered between flats and bevels. Cutting tool assembled from standard angle cutter and circular saw or saws. Angle cutter cuts bevels while slot saw cuts slot between them. In alternative version, custom-ground edges or additional circular saws also cut flats simultaneously.
NASA Astrophysics Data System (ADS)
Dobaczewski, J.; Olbratowski, P.
2005-05-01
We describe the new version (v2.08k) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock or Skyrme-Hartree-Fock-Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. Similarly as in the previous version (v2.08i), all symmetries can be broken, which allows for calculations with angular frequency and angular momentum tilted with respect to the mass distribution. In the new version, three minor errors have been corrected. New Version Program Summary: Title of program: HFODD; version: 2.08k Catalogue number: ADVA Catalogue number of previous version: ADTO (Comput. Phys. Comm. 158 (2004) 158) Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVA Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Does the new version supersede the previous one: yes Computers on which this or another recent version has been tested: SG Power Challenge L, Pentium-II, Pentium-III, AMD-Athlon Operating systems under which the program has been tested: UNIX, LINUX, Windows-2000 Programming language used: Fortran Memory required to execute with typical data: 10M words No. of bits in a word: 64 No. of lines in distributed program, including test data, etc.: 52 631 No. of bytes in distributed program, including test data, etc.: 266 885 Distribution format: tar.gz Nature of physical problem: The nuclear mean-field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean-field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic (n-particle n-hole) configurations, deformations, excitation energies, or angular momenta. A similar Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic-oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in [J. Dobaczewski, J. Dudek, Comput. Phys. Comm. 102 (1997) 166]. Summary of revisions: 1. Incorrect value of the "t" force parameter for SLY5 has been corrected. 2. Opening of an empty file "FILREC" for IWRIRE=-1 has been removed. 3. Call to subroutine "OLSTOR" has been moved before that to "SPZERO". In this way, correct data transferred to "FLISIG", "FLISIM", "FLISIQ" or "FLISIZ" allow for a correct determination of the candidate states for diabatic blocking. These corrections pertain to the user interface of the code and do not affect results obtained for forces other than SLY5. Restrictions on the complexity of the problem: The main restriction is the CPU time required for calculations of heavy deformed nuclei and for a given precision required.
Pairing correlations are only included for even-even nuclei and conserved simplex symmetry. Unusual features: The user must have access to the NAGLIB subroutine F02AXE or to the LAPACK subroutines ZHPEV or ZHPEVX, which diagonalize complex Hermitian matrices, or provide another subroutine which can perform such a task. The LAPACK subroutines ZHPEV and ZHPEVX can be obtained from the Netlib Repository at University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/cgi-bin/netlibfiles.pl?filename=/lapack/complex16/zhpev.f and http://netlib2.cs.utk.edu/cgi-bin/netlibfiles.pl?filename=/lapack/complex16/zhpevx.f, respectively. The code is written in single-precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Typical running time: One Hartree-Fock iteration for the superdeformed, rotating, parity conserving state of ¹⁵²Dy takes about six seconds on the AMD-Athlon 1600+ processor. Starting from the Woods-Saxon wave functions, about fifty iterations are required to obtain the energy converged within the precision of about 0.1 keV. In the case when every value of the angular velocity is converged separately, the complete superdeformed band with precisely determined dynamical moments J(2) can be obtained within forty minutes of CPU on the AMD-Athlon 1600+ processor. This time can often be reduced by a factor of three when a self-consistent solution for a given rotational frequency is used as a starting point for a neighboring rotational frequency. Additional comments: The actual output files obtained during user's test runs may differ from those provided in the distribution file. The differences may occur because various compilers may produce different results in the following aspects: The initial Nilsson spectrum (the starting point of each run) is Kramers degenerate, and thus the diagonalization routine may return the degenerate states in arbitrary order and in arbitrary mixture. For an odd number of particles, one of these states becomes occupied, and the other one is left empty. Therefore, starting points of such runs can widely vary from compiler to compiler, and these differences cannot be controlled. For axial shapes, two quadrupole moments (with respect to two different axes) become very small and their values reflect only numerical noise. However, depending on which of these two moments is smaller, the intrinsic-frame Euler axes will differ, most often by 180 degrees. Hence, signs of some moments and angular momenta may vary from compiler to compiler, and these differences cannot be controlled. These differences are insignificant. The final energies do not depend on them, although the intermediate results can.
2015-09-01
Red Hat Enterprise Linux, version 6.5 • Android Development Tools (ADT), version 22.3.0-887826 • Saferoot • Samsung Galaxy S3 • Dell Precision T7400... The rooting method used for the Samsung Galaxy S3 is called Saferoot—a well-known, open-source software. According to the Saferoot website, the process... is applicable for the Samsung Galaxy S3 as well as many other Android devices, but there are several steps involved in rooting an Android device (as
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
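As a hedged analogue (not the SDC implementation from this report), the Python sketch below runs Picard-style correction sweeps for y' = -y on [0, 1], evaluating the right-hand side in float32 for the early sweeps and in float64 afterwards. Both variants reach essentially the same final accuracy, which is the effect the abstract describes.

    import numpy as np

    t = np.linspace(0.0, 1.0, 33)

    def rhs(y, dtype):
        return (-y).astype(dtype)

    def picard_sweeps(n_sweeps, low_precision_sweeps):
        y = np.ones_like(t)
        for k in range(n_sweeps):
            dtype = np.float32 if k < low_precision_sweeps else np.float64
            f = rhs(y, dtype).astype(np.float64)
            # y_{k+1}(t) = y(0) + integral_0^t f(y_k) via cumulative trapezoid
            integral = np.concatenate(
                [[0.0], np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(t))])
            y = 1.0 + integral
        return y

    exact = np.exp(-t)
    for lp in (0, 6):
        err = np.max(np.abs(picard_sweeps(12, lp) - exact))
        print(f"{lp} low-precision sweeps: max error = {err:.2e}")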
Linearized lattice Boltzmann method for micro- and nanoscale flow and heat transfer.
Shi, Yong; Yap, Ying Wan; Sader, John E
2015-07-01
Ability to characterize the heat transfer in flowing gases is important for a wide range of applications involving micro- and nanoscale devices. Gas flows away from the continuum limit can be captured using the Boltzmann equation, whose analytical solution poses a formidable challenge. An efficient and accurate numerical simulation of the Boltzmann equation is thus highly desirable. In this article, the linearized Boltzmann Bhatnagar-Gross-Krook equation is used to develop a hierarchy of thermal lattice Boltzmann (LB) models based on half-space Gaussian-Hermite (GH) quadrature ranging from low to high algebraic precision, using double distribution functions. Simplified versions of the LB models in the continuum limit are also derived, and are shown to be consistent with existing thermal LB models for noncontinuum heat transfer reported in the literature. Accuracy of the proposed LB hierarchy is assessed by simulating thermal Couette flows for a wide range of Knudsen numbers. Effects of the underlying quadrature schemes (half-space GH vs full-space GH) and continuum-limit simplifications on computational accuracy are also elaborated. The numerical findings in this article provide direct evidence of improved computational capability of the proposed LB models for modeling noncontinuum flows and heat transfer at small length scales.
STT Doubles with Large DM - Part IV: Ophiuchus and Hercules
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2016-04-01
The results of visual double star observing sessions suggested a pattern for STT doubles with large DM of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. We found that, as in the other constellations covered so far (Gem, Leo, UMa, etc.), at least several of the selected objects in Ophiuchus and Hercules show parameters quite different from the current WDS data.
STT Doubles with Large DM - Part V: Aquila, Delphinus, Cygnus, Aquarius
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2016-07-01
The results of visual double star observing sessions suggested a pattern for STT doubles with large DM of being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on one hand, and on the other might be an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. We found that, as in the other constellations covered so far (Gem, Leo, UMa etc.), at least several of the selected objects in Aql, Del, Cyg and Aqr show parameters quite different from the current WDS data.
NASA Astrophysics Data System (ADS)
Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.; McMillan, Robert S.; Murison, Marc; Meade, Jeff; Hindsley, Robert
2011-07-01
We present orbital parameters for six double-lined spectroscopic binaries (ι Pegasi, ω Draconis, 12 Boötis, V1143 Cygni, β Aurigae, and Mizar A) and two double-lined triple star systems (κ Pegasi and η Virginis). The orbital fits are based upon high-precision radial velocity (RV) observations made with a dispersed Fourier Transform Spectrograph, or dFTS, a new instrument that combines interferometric and dispersive elements. For some of the double-lined binaries with known inclination angles, the quality of our RV data permits us to determine the masses M1 and M2 of the stellar components with relative errors as small as 0.2%.
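The dFTS instrument and fitting pipeline are not reproduced here; the sketch below is only the standard Keplerian radial-velocity model that such orbital fits build on, with invented parameters and Kepler's equation solved by Newton iteration.

    import numpy as np

    def radial_velocity(t, period, K, e, omega, t_peri, gamma=0.0):
        M = 2*np.pi*(t - t_peri)/period           # mean anomaly
        E = M.copy()                              # eccentric anomaly, Newton
        for _ in range(20):
            E -= (E - e*np.sin(E) - M) / (1 - e*np.cos(E))
        nu = 2*np.arctan2(np.sqrt(1+e)*np.sin(E/2),
                          np.sqrt(1-e)*np.cos(E/2))   # true anomaly
        return gamma + K*(np.cos(omega + nu) + e*np.cos(omega))

    t = np.linspace(0.0, 30.0, 200)               # days
    v = radial_velocity(t, period=10.0, K=25.0, e=0.3, omega=0.8, t_peri=2.0)
    print(v[:5])   # RV curve for an invented binary, km/s scale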
A novel double fine guide sensor design on space telescope
NASA Astrophysics Data System (ADS)
Zhang, Xu-xu; Yin, Da-yi
2018-02-01
To obtain high-precision attitude for a space telescope, a double marginal FOV (field of view) FGS (Fine Guide Sensor) is proposed. It is composed of two large-area APS CMOS sensors, both sharing the same lens in the main line of sight. More star vectors can be obtained by the two FGSs and used for high-precision attitude determination. To improve star identification speed, the vector cross product in inter-star angles for the small marginal FOV, which differs from the traditional approach, is elaborated, and a parallel processing method is applied to the pyramid algorithm. The star vectors from the two sensors are then used for attitude fusion with the traditional QUEST algorithm. The simulation results show that the system can obtain high-accuracy three-axis attitudes and that the scheme is feasible.
Airborne 2-Micron Double Pulsed Direct Detection IPDA Lidar for Atmospheric CO2 Measurement
NASA Technical Reports Server (NTRS)
Yu, Jirong; Petros, Mulugeta; Refaat, Tamer F.; Reithmaier, Karl; Remus, Ruben; Singh, Upendra; Johnson, Will; Boyer, Charlie; Fay, James; Johnston, Susan;
2015-01-01
An airborne 2-micron double-pulsed Integrated Path Differential Absorption (IPDA) lidar has been developed for atmospheric CO2 measurements. This new 2-micron pulsed IPDA lidar was flown in the spring of 2014 for a total of ten flights with 27 flight hours. It provides high-precision measurement capability by unambiguously eliminating contamination from aerosols and clouds that can bias the IPDA measurement.
Neural-Network-Development Program
NASA Technical Reports Server (NTRS)
Phillips, Todd A.
1993-01-01
NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.
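NETS itself is written in C; the numpy sketch below shows only the back-propagation rule the tool implements, on a minimal 2-4-1 network trained on XOR. The architecture, learning rate, and iteration count are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
    W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated backward
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())               # should approach [0, 1, 1, 0]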
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
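The library itself is Fortran/C with OpenMP and MPI versions; the Python sketch below mimics two of its features, an O(h²) central-difference gradient and the fall-back to one-sided formulas when a bound constraint would otherwise be violated. The step-size rule is a common heuristic, not necessarily the library's.

    import numpy as np

    def gradient(f, x, lower, upper):
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        for i in range(x.size):
            h = np.finfo(float).eps**(1/3) * max(1.0, abs(x[i]))  # step choice
            e = np.zeros_like(x)
            e[i] = h
            if x[i] + h <= upper[i] and x[i] - h >= lower[i]:
                g[i] = (f(x + e) - f(x - e)) / (2*h)    # central, O(h^2)
            elif x[i] + h <= upper[i]:
                g[i] = (f(x + e) - f(x)) / h            # forward, O(h)
            else:
                g[i] = (f(x) - f(x - e)) / h            # backward, O(h)
        return g

    rosen = lambda v: (1 - v[0])**2 + 100*(v[1] - v[0]**2)**2
    # upper bound active in the first coordinate forces a one-sided formula
    print(gradient(rosen, [1.0, 1.0], lower=[-2, -2], upper=[1.0, 2.0]))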
Double soft limit of the graviton amplitude from the Cachazo-He-Yuan formalism
NASA Astrophysics Data System (ADS)
Saha, Arnab Priya
2017-08-01
We present a complete analysis of the double soft limit of the graviton scattering amplitude using the formalism proposed by Cachazo, He, and Yuan. Our results agree with those obtained via Britto-Cachazo-Feng-Witten (BCFW) recursion relations in [T. Klose, T. McLoughlin, D. Nandan, J. Plefka, and G. Travaglini, Double-soft limits of gluons and gravitons, J. High Energy Phys. 07 (2015) 135, 10.1007/JHEP07(2015)135]. In addition, we find precise relations between degenerate and nondegenerate solutions of the scattering equations with local and nonlocal terms in the soft factor.
Lew, Matthew D.; Thompson, Michael A.; Badieirostami, Majid; Moerner, W. E.
2010-01-01
The point spread function (PSF) of a widefield fluorescence microscope is not suitable for three-dimensional super-resolution imaging. We characterize the localization precision of a unique method for 3D superresolution imaging featuring a double-helix point spread function (DH-PSF). The DH-PSF is designed to have two lobes that rotate about their midpoint in any transverse plane as a function of the axial position of the emitter. In effect, the PSF appears as a double helix in three dimensions. By comparing the Cramer-Rao bound of the DH-PSF with the standard PSF as a function of the axial position, we show that the DH-PSF has a higher and more uniform localization precision than the standard PSF throughout a 2 μm depth of field. Comparisons between the DH-PSF and other methods for 3D super-resolution are briefly discussed. We also illustrate the applicability of the DH-PSF for imaging weak emitters in biological systems by tracking the movement of quantum dots in glycerol and in live cells. PMID:20563317
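A minimal sketch of the readout step, assuming an idealized linear angle-to-z calibration (real DH-PSF systems are calibrated empirically and are not exactly linear); the lobe centroids and calibration slope below are invented.

    import numpy as np

    DEG_PER_NM = 180.0 / 2000.0   # assumed: 180 degrees of rotation over 2 um

    def z_from_lobes(lobe1, lobe2):
        (x1, y1), (x2, y2) = lobe1, lobe2
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # lobe-pair angle
        return angle / DEG_PER_NM                         # nm, relative to focus

    # lobe centroids from a hypothetical two-Gaussian fit of one emitter image
    print(z_from_lobes((120.0, 95.0), (135.0, 110.0)))    # 45 deg -> 500 nm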
Documentation for the machine-readable version of the Bright Star Catalogue
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1982-01-01
The machine-readable version of The Bright Star Catalogue, 4th edition, is described. In addition to the large number of newly determined fundamental data, such as photoelectric magnitudes, MK spectral types, parallaxes, and radial velocities, the new version contains data and information not included in the third edition, such as the identification of IR sources, U-B and R-I colors, radial velocity comments (indication and identification of spectroscopic and occultation binaries), and projected rotational velocities. The equatorial coordinates for equinoxes 1900 and 2000 are recorded to greater precision, and details concerning variability, spectral characteristics, duplicity, and group membership are included. Data compiled through 1979, together with some information and variable-star designations found through 1981, are included.
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% better than that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.
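A rough way to probe the same effect with whatever BLAS numpy was built against (absolute numbers are machine- and library-dependent):

    import time
    import numpy as np

    def gflops(dtype, n=2000, repeats=3):
        a = np.random.rand(n, n).astype(dtype)
        b = np.random.rand(n, n).astype(dtype)
        a @ b                                   # warm-up call
        t0 = time.perf_counter()
        for _ in range(repeats):
            a @ b
        dt = (time.perf_counter() - t0) / repeats
        return 2.0 * n**3 / dt / 1e9            # 2n^3 flops per multiply

    print("float32:", round(gflops(np.float32), 1), "GFLOP/s")
    print("float64:", round(gflops(np.float64), 1), "GFLOP/s")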
Estimation of satellite position, clock and phase bias corrections
NASA Astrophysics Data System (ADS)
Henkel, Patrick; Psychas, Dimitrios; Günther, Christoph; Hugentobler, Urs
2018-05-01
Precise point positioning with integer ambiguity resolution requires precise knowledge of satellite position, clock and phase bias corrections. In this paper, a method for the estimation of these parameters with a global network of reference stations is presented. The method processes uncombined and undifferenced measurements of an arbitrary number of frequencies such that the obtained satellite position, clock and bias corrections can be used for any type of differenced and/or combined measurements. We perform a clustering of reference stations. The clustering enables a common satellite visibility within each cluster and an efficient fixing of the double difference ambiguities within each cluster. Additionally, the double difference ambiguities between the reference stations of different clusters are fixed. We use an integer decorrelation for ambiguity fixing in dense global networks. The performance of the proposed method is analysed with both simulated Galileo measurements on E1 and E5a and real GPS measurements of the IGS network. We defined 16 clusters and obtained satellite position, clock and phase bias corrections with a precision of better than 2 cm.
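A minimal sketch of the double-difference construction on synthetic carrier-phase numbers, showing the algebra by which receiver and satellite clock terms (and, likewise, biases) cancel:

    def double_difference(phi_A, phi_B, ref_sat, other_sat):
        sd_ref = phi_A[ref_sat] - phi_B[ref_sat]       # between-receiver, ref sat
        sd_other = phi_A[other_sat] - phi_B[other_sat]
        return sd_other - sd_ref                       # between-satellite

    # synthetic phases (cycles): geometry + receiver clock + satellite clock
    geom = {("A", 1): 100.2, ("A", 2): 205.7, ("B", 1): 101.9, ("B", 2): 204.1}
    rx_clk = {"A": 3.4, "B": -1.2}
    sat_clk = {1: 0.8, 2: -0.5}
    phi = {r: {s: geom[(r, s)] + rx_clk[r] + sat_clk[s] for s in (1, 2)}
           for r in ("A", "B")}

    dd = double_difference(phi["A"], phi["B"], ref_sat=1, other_sat=2)
    dd_geom = (geom[("A", 2)] - geom[("B", 2)]) - (geom[("A", 1)] - geom[("B", 1)])
    print(dd, dd_geom)   # identical: clock terms cancelled exactly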
Selections From the AIDSinfo Glossary | NIH MedlinePlus the Magazine
... is responsible for most HIV infections throughout the world, whereas HIV-2 is found primarily in West Africa. Retrovirus A type of virus that stores its genetic information in a single-stranded RNA molecule, and constructs a double-stranded DNA version ...
70 Years of Making the World Safer: Extended
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Extended version with narration. This video shows our roles in making the world safer — working to end World War II, providing stable isotopes for research, providing unique precision manufacturing capabilities, and meeting nonproliferation and global security missions.
Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2014-12-01
A double-moment cloud microphysical scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011) has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
Atomically Precise Interfaces from Non-stoichiometric Deposition
NASA Astrophysics Data System (ADS)
Nie, Yuefeng; Zhu, Ye; Lee, Che-Hui; Kourkoutis, Lena; Mundy, Julia; Junquera, Javier; Ghosez, Philippe; Baek, David; Sung, Suk Hyun; Xi, Xiaoxing; Shen, Kyle; Muller, David; Schlom, Darrell
2015-03-01
Complex oxide heterostructures display some of the most chemically abrupt, atomically precise interfaces, which is advantageous when constructing new interface phases with emergent properties by juxtaposing incompatible ground states. One might assume that atomically precise interfaces result from stoichiometric growth. Here we show that the most precise control is, however, obtained by using deliberate and specific non-stoichiometric growth conditions. For the precise growth of Sr_{n+1}Ti_nO_{3n+1} Ruddlesden-Popper (RP) phases, stoichiometric deposition leads to the loss of the first RP rock-salt double layer, but growing with a strontium-rich surface layer restores the bulk stoichiometry and ordering of the subsurface RP structure. Our results dramatically expand the materials that can be prepared in epitaxial heterostructures with precise interface control--from just the n = ∞ end members (perovskites) to the entire RP homologous series--enabling the exploration of novel quantum phenomena at a richer variety of oxide interfaces.
Atomically precise interfaces from non-stoichiometric deposition
NASA Astrophysics Data System (ADS)
Nie, Y. F.; Zhu, Y.; Lee, C.-H.; Kourkoutis, L. F.; Mundy, J. A.; Junquera, J.; Ghosez, Ph.; Baek, D. J.; Sung, S.; Xi, X. X.; Shen, K. M.; Muller, D. A.; Schlom, D. G.
2014-08-01
Complex oxide heterostructures display some of the most chemically abrupt, atomically precise interfaces, which is advantageous when constructing new interface phases with emergent properties by juxtaposing incompatible ground states. One might assume that atomically precise interfaces result from stoichiometric growth. Here we show that the most precise control is, however, obtained by using deliberate and specific non-stoichiometric growth conditions. For the precise growth of Sr_{n+1}Ti_nO_{3n+1} Ruddlesden-Popper (RP) phases, stoichiometric deposition leads to the loss of the first RP rock-salt double layer, but growing with a strontium-rich surface layer restores the bulk stoichiometry and ordering of the subsurface RP structure. Our results dramatically expand the materials that can be prepared in epitaxial heterostructures with precise interface control—from just the n=∞ end members (perovskites) to the entire RP homologous series—enabling the exploration of novel quantum phenomena at a richer variety of oxide interfaces.
Corpus and Method for Identifying Citations in Non-Academic Text (Open Access, Publisher’s Version)
2014-05-31
patents, train a CRF classifier to find new citations, and apply a reranker to incorporate non-local information. Our best system achieves 0.83 F-score on... report precision, recall, and F-scores on the chunk level. CRF training and decoding is performed with the CRF++ package (http://crfpp.sourceforge.net) using its default setting. ...only obtain a very small number of training examples for statistical rerankers. Fragment of the results table: TEXT, precision 0.7997, recall 0.7805.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utama, Muhammad Reza July, E-mail: muhammad.reza@bmkg.go.id; Indonesian Meteorological, Climatological and Geophysical Agency; Nugraha, Andri Dian
The precise hypocenter locations were determined using the double-difference method around the subduction zone in the Moluccas area, eastern part of Indonesia. The initial hypocenter locations come from the MCGA data catalogue of 1,945 earthquake events. The principle of the double-difference algorithm is that if the distance between two earthquake hypocenters is very small compared to the distance between the station and the earthquake sources, the ray paths of the two earthquakes can be considered nearly identical. The results show that the initial earthquakes with a fixed depth (10 km) were relocated and can be interpreted more reliably in terms of seismicity and geological setting. The relocation of the intra-slab earthquakes beneath the Banda Arc is also clearly observed down to a depth of about 400 km. The precisely relocated hypocenters will give invaluable seismicity information for other seismological and tectonic studies, especially for seismic hazard analysis in this region.
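A minimal sketch of the double-difference principle for one station and two trial hypocenters in a uniform-velocity medium (all values invented); the relocation adjusts hypocenters to drive such residuals, summed over many station and event pairs, toward zero.

    import numpy as np

    V = 6.0  # km/s, assumed uniform P-wave velocity

    def travel_time(event, station):
        return np.linalg.norm(np.asarray(event) - np.asarray(station)) / V

    ev1, ev2 = (10.0, 5.0, 12.0), (10.5, 5.2, 13.0)   # trial hypocenters (km)
    station = (0.0, 0.0, 0.0)
    t1_obs, t2_obs = 2.90, 3.05                       # observed arrivals (s)

    dd_residual = (t1_obs - t2_obs) - (travel_time(ev1, station)
                                       - travel_time(ev2, station))
    print(round(dd_residual, 4))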
Accurate computation of gravitational field of a tesseroid
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2018-02-01
We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. Then, it evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching the central and the single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method is extended to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, namely whether it is outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The correct digits are roughly doubled when employing quadruple precision computation. The new method provides a reliable procedure to compute the topographic gravitational field, especially near, on, and below the surface. Also, it could potentially serve as a reference to complement and elaborate the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
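The paper's full tesseroid integrand is not reproduced here; the sketch below shows only the double exponential (tanh-sinh) quadrature rule itself, on a generic endpoint-singular test integral, with truncation parameters chosen for illustration.

    import numpy as np

    def tanh_sinh(f, h=0.062, kmax=50):
        k = np.arange(-kmax, kmax + 1)
        u = 0.5 * np.pi * np.sinh(k * h)
        x = np.tanh(u)                       # nodes cluster at the endpoints
        w = h * 0.5 * np.pi * np.cosh(k * h) / np.cosh(u)**2
        return np.sum(w * f(x))

    # endpoint-singular test: integral of 1/sqrt(1 - x^2) over (-1, 1) is pi
    approx = tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x**2))
    print(approx, np.pi)   # cut-offs chosen so tanh() does not saturate to 1.0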
Derivatized versions of ligase enzymes for constructing DNA sequences
Mariella, Jr., Raymond P.; Christian, Allen T [Tracy, CA; Tucker, James D [Novi, MN; Dzenitis, John M [Livermore, CA; Papavasiliou, Alexandros P [Oakland, CA
2006-08-15
A method of making very long, double-stranded synthetic poly-nucleotides. A multiplicity of short oligonucleotides is provided. The short oligonucleotides are sequentially hybridized to each other. Enzymatic ligation of the oligonucleotides provides a contiguous piece of PCR-ready DNA of predetermined sequence.
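A toy sketch of the sequential-overlap idea (not the patented protocol; sequences and overlap length are invented): each oligonucleotide hybridizes to the suffix of the growing strand before ligation joins the backbone.

    def assemble(oligos, overlap=6):
        strand = oligos[0]
        for oligo in oligos[1:]:
            # hybridization requires a matching overlap region
            assert strand[-overlap:] == oligo[:overlap], "overlap mismatch"
            strand += oligo[overlap:]        # ligation extends the strand
        return strand

    oligos = ["ATGGCTAGCTGA", "AGCTGACCTTAG", "CCTTAGGGAATC"]
    print(assemble(oligos))   # ATGGCTAGCTGACCTTAGGGAATC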
An automatic multigrid method for the solution of sparse linear systems
NASA Technical Reports Server (NTRS)
Shapira, Yair; Israeli, Moshe; Sidi, Avram
1993-01-01
An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDEs is presented. This version is based solely on the structure of the algebraic system, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method equals that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is preserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics), and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
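The paper's automatic, purely algebraic construction is not reproduced here; the sketch below is the classical geometric two-grid cycle for the 1-D Poisson problem -u'' = f, which shows the smoothing plus coarse-correction structure such methods build on.

    import numpy as np

    def jacobi(u, f, h, sweeps, w=2/3):
        for _ in range(sweeps):
            u[1:-1] = (1-w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
        return u

    def coarse_solve(rc, H):
        # direct solve of the coarse-grid Poisson system
        m = len(rc) - 2
        A = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / H**2
        ec = np.zeros(len(rc))
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        return ec

    def two_grid(u, f, h):
        u = jacobi(u, f, h, 3)                                    # pre-smooth
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / h**2   # residual
        ec = coarse_solve(r[::2].copy(), 2*h)                     # restrict, solve
        e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
        return jacobi(u + e, f, h, 3)                             # post-smooth

    n = 129
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi*x)       # -u'' = f has solution u = sin(pi x)
    u = np.zeros(n)
    for cycle in range(6):
        u = two_grid(u, f, h)
        print(f"cycle {cycle+1}: max error = "
              f"{np.max(np.abs(u - np.sin(np.pi*x))):.2e}")
    # the error drops each cycle until it stalls at the O(h^2) discretization level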
Using VizieR/Aladin to Measure Neglected Double Stars
NASA Astrophysics Data System (ADS)
Harshaw, Richard
2013-04-01
The VizieR service of the Centre de Données astronomiques de Strasbourg (France) offers amateur astronomers a treasure trove of resources, including access to the most current version of the Washington Double Star Catalog (WDS) and links to tens of thousands of digitized sky survey plates via the Aladin Java applet. These plates allow the amateur to make accurate measurements of position angle and separation for many neglected pairs that fall within reasonable tolerances for the use of Aladin. This paper presents 428 measurements of 251 neglected pairs from the WDS.
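For measurements like these, position angle and separation follow from the two sets of equatorial coordinates; a small-angle sketch (adequate for close pairs; coordinates invented, RA wrap-around ignored):

    import numpy as np

    def pa_sep(ra1, dec1, ra2, dec2):
        """Coordinates in degrees; returns PA in degrees (north through east)
        and separation in arcseconds, secondary relative to primary."""
        d_ra = np.radians(ra2 - ra1) * np.cos(np.radians((dec1 + dec2)/2))
        d_dec = np.radians(dec2 - dec1)
        pa = np.degrees(np.arctan2(d_ra, d_dec)) % 360.0
        sep = np.degrees(np.hypot(d_ra, d_dec)) * 3600.0
        return pa, sep

    print(pa_sep(180.0000, 30.0000, 180.0020, 30.0010))  # ~60 deg, ~7.2 arcsec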
Measurement-induced decoherence and information in double-slit interference.
Kincaid, Joshua; McLelland, Kyle; Zwolak, Michael
2016-07-01
The double slit experiment provides a classic example of both interference and the effect of observation in quantum physics. When particles are sent individually through a pair of slits, a wave-like interference pattern develops, but no such interference is found when one observes which "path" the particles take. We present a model of interference, dephasing, and measurement-induced decoherence in a one-dimensional version of the double-slit experiment. Using this model, we demonstrate how the loss of interference in the system is correlated with the information gain by the measuring apparatus/observer. In doing so, we give a modern account of measurement in this paradigmatic example of quantum physics that is accessible to students taking quantum mechanics at the graduate or senior undergraduate levels.
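A minimal numerical sketch in the spirit of the model: two equal-amplitude paths with a visibility factor V standing in for the measurement-induced decoherence, so V = 1 gives full fringes and V = 0 the fully "which-path" limit (geometry values invented).

    import numpy as np

    x = np.linspace(-20e-6, 20e-6, 2001)        # detection coordinate (m)
    wavelength, d, L = 500e-9, 10e-6, 0.1       # slit separation d, screen at L

    def intensity(V):
        phase = 2*np.pi * d * x / (wavelength * L)   # far-field path difference
        return 1.0 + V*np.cos(phase)                 # two equal-amplitude paths

    for V in (1.0, 0.5, 0.0):
        I = intensity(V)
        contrast = (I.max() - I.min()) / (I.max() + I.min())
        print(f"V = {V}: fringe contrast = {contrast:.2f}")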
NASA Astrophysics Data System (ADS)
Dobaczewski, J.; Olbratowski, P.
2004-04-01
We describe the new version (v2.08i) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock or Skyrme-Hartree-Fock-Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, all symmetries can be broken, which allows for calculations with angular frequency and angular momentum tilted with respect to the mass distribution. The new version contains an interface to the LAPACK subroutine ZHPEVX. Program summary: Title of the program: HFODD (v2.08i) Catalogue number: ADTO Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTO Reference in CPC for earlier version of program: J. Dobaczewski and J. Dudek, Comput. Phys. Commun. 131 (2000) 164 (v1.75r) Catalogue number of previous version: ADML Licensing provisions: none Does the new version supersede the previous one: yes Computers on which the program has been tested: SG Power Challenge L, Pentium-II, Pentium-III, AMD-Athlon Operating systems: UNIX, LINUX, Windows-2000 Programming language used: FORTRAN-77 and FORTRAN-90 Memory required to execute with typical data: 10 Mwords No. of bits in a word: The code is written in single-precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Has the code been vectorised?: Yes No. of bytes in distributed program, including test data, etc.: 265352 No. of lines in distributed program: 52656 Distribution format: tar gzip file Nature of physical problem: The nuclear mean-field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean-field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic (n-particle n-hole) configurations, deformations, excitation energies, or angular momenta. A similar Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Method of solution: The program uses the Cartesian harmonic oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166. Summary of revisions: Two insignificant errors have been corrected. Breaking of all the three plane-reflection symmetries has been implemented. Breaking of all the three time-reversal×plane-reflection symmetries has been implemented. Conservation of parity with simultaneously broken simplex has been implemented. Tilted-axis cranking has been implemented. Cranking with isovector angular frequency has been implemented.
Quadratic constraint on tilted angular momentum has been added. Constraint on the vector product of angular frequency and angular momentum has been added. Calculation of surface multipole moments has been added. Constraints on surface multipole moments have been added. Calculation of magnetic moments has been added. Calculation of multipole and surface multipole moments in the center-of-mass reference frame has been added. Calculation of multipole, surface multipole, and magnetic moments in the principal-axes (intrinsic) reference frame has been added. Calculation of angular momenta in the center-of-mass and principal-axes reference frames has been added. New single-particle observables for a diabatic blocking have been added. Solution of the Hartree-Fock-Bogolyubov equations has been implemented. Non-standard spin-orbit energy density has been implemented. Non-standard center-of-mass corrections have been implemented. Definition of the time-odd terms through the Landau parameters has been implemented. Definition of Skyrme forces taken from the literature now includes the force parameters as well as the value of the nucleon mass and the treatment of tensor, spin-orbit, and center-of-mass terms specific to the given force. Interface to the LAPACK subroutine ZHPEVX has been implemented. Computer memory management has been improved by implementing the memory-allocation features available within FORTRAN-90. Restrictions on the complexity of the problem: The main restriction is the CPU time required for calculations of heavy deformed nuclei and for a given precision required. Pairing correlations are only included for even-even nuclei and conserved simplex symmetry. Typical running time: One Hartree-Fock iteration for the superdeformed, rotating, parity conserving state of ¹⁵²Dy takes about six seconds on the AMD-Athlon 1600+ processor. Starting from the Woods-Saxon wave functions, about fifty iterations are required to obtain the energy converged within the precision of about 0.1 keV. In the case when every value of the angular velocity is converged separately, the complete superdeformed band with precisely determined dynamical moments J(2) can be obtained within forty minutes of CPU on the AMD-Athlon 1600+ processor. This time can often be reduced by a factor of three when a self-consistent solution for a given rotational frequency is used as a starting point for a neighboring rotational frequency. Unusual features of the program: The user must have access to the NAGLIB subroutine F02AXE, or to the LAPACK subroutines ZHPEV or ZHPEVX, which diagonalize complex hermitian matrices, or provide another subroutine which can perform such a task. The LAPACK subroutines ZHPEV and ZHPEVX can be obtained from the Netlib Repository at University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/cgi-bin/netlibfiles.pl?filename=/lapack/complex16/zhpev.f and http://netlib2.cs.utk.edu/cgi-bin/netlibfiles.pl?filename=/lapack/complex16/zhpevx.f respectively.
Tellurium Stable Isotope Fractionation in Chondritic Meteorites
NASA Astrophysics Data System (ADS)
Fehr, M. A.; Hammond, S. J.; Parkinson, I. J.
2014-09-01
New Te double spike procedures were set up to obtain high-precision accurate Te stable isotope data. Tellurium stable isotope data for 16 chondrite falls are presented, providing evidence for significant Te stable isotope fractionation.
NULL convention floating point multiplier.
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part of high-dynamic-range and computationally intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using the asynchronous NULL convention logic paradigm. Rounding has not been implemented, to suit high precision applications. The novelty of the research is that it is the first NULL convention logic multiplier designed to perform floating point multiplication. The proposed multiplier offers a substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
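A minimal Python sketch of the datapath such a multiplier implements (sign XOR, exponent addition, 24x24-bit significand multiply, normalization, and truncation in place of rounding); subnormals, infinities, and NaNs are ignored for brevity, and all names are illustrative:

    import struct

    def fields(x):
        b = struct.unpack('>I', struct.pack('>f', x))[0]
        return (b >> 31) & 1, (b >> 23) & 0xFF, b & 0x7FFFFF

    def fp32_mul_truncate(a, b):
        sa, ea, ma = fields(a)
        sb, eb, mb = fields(b)
        sign = sa ^ sb
        sig = (ma | 1 << 23) * (mb | 1 << 23)    # 24b x 24b -> 48b product
        exp = ea + eb - 127                      # remove one copy of the bias
        if sig & (1 << 47):                      # normalize: product was in [2, 4)
            sig >>= 1
            exp += 1
        mant = (sig >> 23) & 0x7FFFFF            # truncate, drop low bits (no rounding)
        bits = sign << 31 | (exp & 0xFF) << 23 | mant
        return struct.unpack('>f', struct.pack('>I', bits))[0]

    print(fp32_mul_truncate(1.5, -2.25))   # -3.375 (exact for this pair)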
Sterile Neutrino Search with the Double Chooz Experiment
NASA Astrophysics Data System (ADS)
Hellwig, D.; Matsubara, T.;
2017-09-01
Double Chooz is a reactor antineutrino disappearance experiment located in Chooz, France. A far detector at a distance of about 1 km from the reactor cores has been operating since 2011; a near detector of identical design at a distance of about 400 m has been operating since the beginning of 2015. Beyond the precise measurement of θ13, Double Chooz has a strong sensitivity to so-called light sterile neutrinos. Sterile neutrinos are neutrino mass states that do not take part in weak interactions but may mix with the known neutrino states. In this paper, we present an analysis method to search for sterile neutrinos and the expected sensitivity with the baselines of our detectors.
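For background, the disappearance signal such a search targets is commonly written in a "3+1" framework; the following survival probability is the generic textbook form (a sketch, not the collaboration's exact fit model), with θ14 and Δm²41 parametrizing mixing with the sterile state:

    P(\bar{\nu}_e \to \bar{\nu}_e) \;\approx\; 1
      - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{\Delta m^{2}_{31} L}{4E}\right)
      - \sin^2 2\theta_{14}\,\sin^2\!\left(\frac{\Delta m^{2}_{41} L}{4E}\right)

With two baselines (about 400 m and about 1 km), the distinct L/E dependence of the last term is what the near/far comparison can probe.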
Double Star Measurements at the Northern Sky with a 10 inch Newtonian in 2014 and 2015
NASA Astrophysics Data System (ADS)
Anton, Rainer
2017-07-01
A 10 inch Newtonian was used for recordings of double stars with a CCD webcam, and measurements of 120 pairs were done with the technique of “lucky imaging”. A rather accurate value of the image scale was obtained with reference systems from the recently published Gaia catalogue of very precise position data. For several pairs, deviations from currently assumed orbits were found. Some images of noteworthy systems are also presented.
Double metric, generalized metric, and α' -deformed double field theory
NASA Astrophysics Data System (ADS)
Hohm, Olaf; Zwiebach, Barton
2016-03-01
We relate the unconstrained "double metric" of the "α' -geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b -field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.
Testing Precision of Movement of Curiosity Robotic Arm
2012-02-22
A NASA Mars Science Laboratory test rover called the Vehicle System Test Bed, or VSTB, at NASA's Jet Propulsion Laboratory, Pasadena, CA, serves as the closest double for Curiosity in evaluations of the mission hardware and software.
3D Printing in Surgical Management of Double Outlet Right Ventricle.
Yoo, Shi-Joon; van Arsdell, Glen S
2017-01-01
Double outlet right ventricle (DORV) is a heterogeneous group of congenital heart diseases that require an individualized surgical approach based on precise understanding of the complex cardiovascular anatomy. Physical 3-dimensional (3D) print models not only allow fast and unequivocal perception of the complex anatomy but also eliminate misunderstanding or miscommunication among imagers and surgeons. Except for those cases showing well-recognized classic surgical anatomy of DORV, such as cases with a typical subaortic or subpulmonary ventricular septal defect, 3D print models are of enormous value in surgical decision-making and planning. Furthermore, 3D print models can also be used for rehearsal of the intended procedure before the actual surgery on the patient, so that the outcome of the procedure is precisely predicted and the procedure can be optimally tailored to the patient's specific anatomy. 3D print models are an invaluable resource for hands-on surgical training of congenital heart surgeons.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
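A minimal Python sketch of the quantity described above, assuming the simple sums-of-squared-dissimilarities form (the paper's R functions and its double resampling scheme are more elaborate):

    import numpy as np

    def mult_se(d):
        """d: (n, n) symmetric matrix of dissimilarities among n samples."""
        n = d.shape[0]
        ss = (d[np.triu_indices(n, k=1)] ** 2).sum()  # sum of squared dissimilarities
        v = ss / (n * (n - 1))       # pseudo variance about the centroid
        return np.sqrt(v / n)        # pseudo standard error of the centroid

    # Toy usage: Euclidean dissimilarities among random "community" samples.
    rng = np.random.default_rng(1)
    x = rng.random((30, 12))
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    print(f"MultSE = {mult_se(d):.4f}")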
Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code
NASA Astrophysics Data System (ADS)
Payne, J.; McCune, D.; Prater, R.
2010-11-01
NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.
NASA Astrophysics Data System (ADS)
Aghasyan, M.; Alexeev, M. G.; Alexeev, G. D.; Amoroso, A.; Andrieux, V.; Anfimov, N. V.; Anosov, V.; Antoshkin, A.; Augsten, K.; Augustyniak, W.; Austregesilo, A.; Azevedo, C. D. R.; Badełek, B.; Balestra, F.; Ball, M.; Barth, J.; Beck, R.; Bedfer, Y.; Bernhard, J.; Bicker, K.; Bielert, E. R.; Birsa, R.; Bodlak, M.; Bordalo, P.; Bradamante, F.; Bressan, A.; Büchele, M.; Burtsev, V. E.; Capozza, L.; Chang, W.-C.; Chatterjee, C.; Chiosso, M.; Choi, I.; Chumakov, A. G.; Chung, S.-U.; Cicuttin, A.; Crespo, M. L.; Dalla Torre, S.; Dasgupta, S. S.; Dasgupta, S.; Denisov, O. Yu.; Dhara, L.; Donskov, S. V.; Doshita, N.; Dreisbach, Ch.; Dünnweber, W.; Dusaev, R. R.; Dziewiecki, M.; Efremov, A.; Eversheim, P. D.; Faessler, M.; Ferrero, A.; Finger, M.; Finger, M.; Fischer, H.; Franco, C.; Du Fresne von Hohenesche, N.; Friedrich, J. M.; Frolov, V.; Fuchey, E.; Gautheron, F.; Gavrichtchouk, O. P.; Gerassimov, S.; Giarra, J.; Giordano, F.; Gnesi, I.; Gorzellik, M.; Grasso, A.; Gridin, A.; Grosse Perdekamp, M.; Grube, B.; Grussenmeyer, T.; Guskov, A.; Hahne, D.; Hamar, G.; von Harrach, D.; Heinsius, F. H.; Heitz, R.; Herrmann, F.; Horikawa, N.; D'Hose, N.; Hsieh, C.-Y.; Huber, S.; Ishimoto, S.; Ivanov, A.; Iwata, T.; Jary, V.; Joosten, R.; Jörg, P.; Kabuß, E.; Kerbizi, A.; Ketzer, B.; Khaustov, G. V.; Khokhlov, Yu. A.; Kisselev, Yu.; Klein, F.; Koivuniemi, J. H.; Kolosov, V. N.; Kondo, K.; Königsmann, K.; Konorov, I.; Konstantinov, V. F.; Kotzinian, A. M.; Kouznetsov, O. M.; Kral, Z.; Krämer, M.; Kremser, P.; Krinner, F.; Kroumchtein, Z. V.; Kulinich, Y.; Kunne, F.; Kurek, K.; Kurjata, R. P.; Kuznetsov, I. I.; Kveton, A.; Lednev, A. A.; Levchenko, E. A.; Levillain, M.; Levorato, S.; Lian, Y.-S.; Lichtenstadt, J.; Longo, R.; Lyubovitskij, V. E.; Maggiora, A.; Magnon, A.; Makins, N.; Makke, N.; Mallot, G. K.; Mamon, S. A.; Marianski, B.; Martin, A.; Marzec, J.; Matoušek, J.; Matsuda, H.; Matsuda, T.; Meshcheryakov, G. V.; Meyer, M.; Meyer, W.; Mikhailov, Yu. V.; Mikhasenko, M.; Mitrofanov, E.; Mitrofanov, N.; Miyachi, Y.; Moretti, A.; Nagaytsev, A.; Nerling, F.; Neyret, D.; Nový, J.; Nowak, W.-D.; Nukazuka, G.; Nunes, A. S.; Olshevsky, A. G.; Orlov, I.; Ostrick, M.; Panzieri, D.; Parsamyan, B.; Paul, S.; Peng, J.-C.; Pereira, F.; Pešek, M.; Pešková, M.; Peshekhonov, D. V.; Pierre, N.; Platchkov, S.; Pochodzalla, J.; Polyakov, V. A.; Pretz, J.; Quaresma, M.; Quintans, C.; Ramos, S.; Regali, C.; Reicherz, G.; Riedl, C.; Rogacheva, N. S.; Ryabchikov, D. I.; Rybnikov, A.; Rychter, A.; Salac, R.; Samoylenko, V. D.; Sandacz, A.; Santos, C.; Sarkar, S.; Savin, I. A.; Sawada, T.; Sbrizzai, G.; Schiavon, P.; Schmidt, K.; Schmieden, H.; Schönning, K.; Seder, E.; Selyunin, A.; Silva, L.; Sinha, L.; Sirtl, S.; Slunecka, M.; Smolik, J.; Srnka, A.; Steffen, D.; Stolarski, M.; Subrt, O.; Sulc, M.; Suzuki, H.; Szabelski, A.; Szameitat, T.; Sznajder, P.; Tasevsky, M.; Tessaro, S.; Tessarotto, F.; Thiel, A.; Tomsa, J.; Tosello, F.; Tskhay, V.; Uhl, S.; Vasilishin, B. I.; Vauth, A.; Veloso, J.; Vidon, A.; Virius, M.; Wallner, S.; Weisrock, T.; Wilfert, M.; Windmolders, R.; Ter Wolbeek, J.; Zaremba, K.; Zavada, P.; Zavertyaev, M.; Zemlyanichkina, E.; Ziembicki, M.; Compass Collaboration
2018-06-01
We present a precise measurement of the proton longitudinal double-spin asymmetry A1p and the proton spin-dependent structure function g1p at photon virtualities in the range 0.006 (GeV/c)^2 < Q^2 < 1 (GeV/c)^2.
Developing a New Wireless Sensor Network Platform and Its Application in Precision Agriculture
Aquino-Santos, Raúl; González-Potes, Apolinar; Edwards-Block, Arthur; Virgen-Ortiz, Raúl Alejandro
2011-01-01
Wireless sensor networks are gaining greater attention from the research community and industrial professionals because these small pieces of “smart dust” offer great advantages due to their small size, low power consumption, easy integration and support for “green” applications. Green applications are considered a hot topic in intelligent environments, ubiquitous and pervasive computing. This work evaluates a new wireless sensor network platform and its application in precision agriculture, including its embedded operating system and its routing algorithm. To validate the technological platform and the embedded operating system, two different routing strategies were compared: hierarchical and flat. Both of these routing algorithms were tested in a small-scale network applied to a watermelon field. However, we strongly believe that this technological platform can also be applied to precision agriculture because it incorporates a modified version of LORA-CBF, a wireless location-based routing algorithm that uses cluster-based flooding. Cluster-based flooding addresses the scalability concerns of wireless sensor networks, while the modified LORA-CBF routing algorithm includes a metric to monitor residual battery energy. Furthermore, results show that the modified version of LORA-CBF functions well with both the flat and hierarchical algorithms, although it functions better with the flat algorithm in a small-scale agricultural network. PMID:22346622
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Jovancevic, N., E-mail: nikola.jovancevic@uns.ac.rs
The intensities of two-step cascades in 43 nuclei with mass numbers in the range 28 ≤ A ≤ 200 were approximated to a high degree of precision within a modified version of the practical cascade gamma-decay model introduced earlier. In this version, the rate of the decrease in the model-dependent density of vibrational levels has the same value for any Cooper pair undergoing breakdown. The most probable values of radiative strength functions both for E1 and for M1 transitions are determined by using one or two peaks against a smooth model dependence on the gamma-transition energy. The statement that the thresholds for the breaking of Cooper pairs are higher for spherical than for deformed nuclei is a basic result of the respective analysis. The parameters of the cascade-decay process are now determined to a precision that makes it possible to observe the systematic distinctions between them for nuclei characterized by different parities of neutrons and protons.
NASA Astrophysics Data System (ADS)
Merzlaya, Anastasia;
2017-01-01
The heavy-ion programme of the NA61/SHINE experiment at the CERN SPS is expanding to allow precise measurements of exotic particles with decay lengths of a few hundred microns. A Vertex Detector for open charm measurements at the SPS is being constructed by the NA61/SHINE Collaboration to meet the challenges of high spatial resolution for secondary vertices and high efficiency of track registration. This task is solved by the application of coordinate-sensitive CMOS Monolithic Active Pixel Sensors with extremely low material budget in the new Vertex Detector. A small-acceptance version of the Vertex Detector is being tested this year; later it will be expanded to a large-acceptance version. Simulation studies will be presented. A method of track reconstruction in the inhomogeneous magnetic field of the Vertex Detector was developed and implemented. Numerical calculations show the possibility of high-precision measurements in heavy-ion collisions of strange and multi-strange particles, as well as heavy flavours, like charmed particles.
Schoenecker, Kathryn A.; Lubow, Bruce C.
2016-01-01
Accurately estimating the size of wildlife populations is critical to wildlife management and the conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio-collar-based mark-resight, and simultaneous double-count (double-observer) modeling to estimate the population size of elk in a high-elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio-collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how various components of our method contribute to improving the final estimate and demonstrates why each is necessary.
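A minimal Python sketch of the double-observer ingredient described above, using the simple Lincoln-Petersen/Chapman estimator with a bootstrap for precision; the authors' customized mark-resight models with sighting covariates are more elaborate, and all numbers here are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)

    def chapman(n1, n2, m):
        """n1, n2: groups seen by each observer; m: groups seen by both."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Hypothetical detection histories for 250 elk groups (True = detected).
    p1, p2, true_n = 0.7, 0.6, 250
    obs1 = rng.random(true_n) < p1
    obs2 = rng.random(true_n) < p2
    seen = obs1 | obs2                       # groups detected at least once
    h = np.stack([obs1[seen], obs2[seen]], axis=1)

    est = chapman(h[:, 0].sum(), h[:, 1].sum(), (h[:, 0] & h[:, 1]).sum())
    boot = [chapman(b[:, 0].sum(), b[:, 1].sum(), (b[:, 0] & b[:, 1]).sum())
            for b in (h[rng.integers(0, len(h), len(h))] for _ in range(1000))]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"N-hat = {est:.0f}, 95% bootstrap CI = ({lo:.0f}, {hi:.0f})")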
Precision controllability of the F-15 airplane
NASA Technical Reports Server (NTRS)
Sisk, T. R.; Matheny, N. W.
1979-01-01
A flying qualities evaluation conducted on a preproduction F-15 airplane permitted an assessment of its precision controllability in the high subsonic and low transonic flight regime over the allowable angle-of-attack range. Precision controllability, or gunsight tracking, studies were conducted in windup turn maneuvers with the gunsight in the caged pipper mode and depressed 70 mils. This evaluation showed the F-15 airplane to experience severe buffet and mild-to-moderate wing rock at the higher angles of attack. The F-15 airplane's radial tracking precision varied from approximately 6 to 20 mils over the load factor range tested. Tracking in the presence of wing rock essentially doubled the radial tracking error generated at the lower angles of attack. The stability augmentation system affected the tracking precision of the F-15 airplane more than it did that of previous aircraft studied.
The new version 2.12 of BKG Ntrip Client (BNC)
NASA Astrophysics Data System (ADS)
Stürze, Andrea; Mervart, Leos; Weber, Georg; Rülke, Axel; Wiesensarter, Erwin; Neumaier, Peter
2016-04-01
A new version of the BKG Ntrip Client (BNC) has been released. Originally developed in a cooperation of the Federal Agency for Cartography and Geodesy (BKG) and the Czech Technical University (CTU) with a focus on multi-stream real-time access to GPS observations, the software has once again been substantially extended. Promoting Open Standards as recommended by the Radio Technical Commission for Maritime Services (RTCM) remains the prime subject. Besides its Graphical User Interface (GUI), the real-time software for Windows, Linux, and Mac platforms now comes with a complete Command Line Interface (CLI) and considerable post-processing functionality. RINEX Version 3 file editing & Quality Check (QC) with full support of Galileo, BeiDou, and SBAS - besides GPS and GLONASS - is among the new features. Comparison of satellite orbit/clock files in SP3 format is another fresh ability of BNC. Simultaneous multi-station Precise Point Positioning (PPP) for real-time displacement monitoring of entire reference station networks is one more recent addition to BNC. Implemented RTCM messages for PPP (under development) comprise satellite orbit and clock corrections, code and phase observation biases, and the Vertical Total Electron Content (VTEC) of the ionosphere. The well-established, mature codebase is mostly written in the C++ language. Its publication under the GNU GPL is thought to be well-suited for test, validation, and demonstration of new approaches in precise real-time satellite navigation when IP streaming is involved. The poster highlights BNC features which are new in version 2.12 and beneficial to IAG institutions and services such as IGS/RT-IGS and to the interested public in general.
Data Identifiers, Versioning, and Micro-citation
NASA Astrophysics Data System (ADS)
Parsons, M. A.; Duerr, R. E.
2012-12-01
Data citation, especially using Digital Object Identifiers (DOIs), is an increasingly accepted scientific practice. For example, the AGU Council asserts that data "publications" should "be credited and cited like the products of any other scientific activity," and Thomson Reuters has recently announced a data citation index built from DOIs assigned to data sets. Correspondingly, formal guidelines for how to cite a data set (using DOIs or similar identifiers/locators) have recently emerged, notably those from the international DataCite consortium, the UK Digital Curation Centre, and the US Federation of Earth Science Information Partners. These different data citation guidelines are largely congruent. They agree on the basic practice and elements of data citation, especially for relatively static, whole data collections. There is less agreement on some of the more subtle nuances of data citation. They define different methods for handling different data set versions, especially for the very dynamic, growing data sets that are common in Earth Sciences. They also differ in how people should cite specific, arbitrarily large elements, "passages," or subsets of a larger data collection, i.e., the precise data records actually used in a study. This detailed "micro-citation", and careful reference to exact versions of data are essential to ensure scientific reproducibility. Identifiers such as DOIs are necessary but not sufficient for the precise, detailed, references necessary. Careful practice must be coupled with the use of curated identifiers. In this paper we review the pros and cons of different approaches to versioning and micro-citation. We suggest a workable solution for most existing Earth science data and suggest a more rigorous path forward for the future.
Double Star Measurements at the Southern Sky with a 50 cm Reflector in 2016
NASA Astrophysics Data System (ADS)
Anton, Rainer
2017-10-01
A 50 cm Ritchey-Chrétien reflector was used for recordings of double stars with a CCD webcam, and measurements of 95 pairs were mostly obtained from “lucky images”, and in some cases by speckle interferometry. The image scale was calibrated with reference systems from the recently published Gaia catalogue of precise position data. For several pairs, deviations from currently assumed orbits were found. Some images of noteworthy systems are also presented.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei
2015-01-13
A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to perform accurate, precise, and efficient calculations of this interaction, and such calculations are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), the local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functionals). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For the 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta-GGA functionals for the present test set.
CHEMKIN2. General Gas-Phase Chemical Kinetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupley, F.M.
1992-01-24
CHEMKIN is a high-level tool for chemists to use to describe arbitrary gas-phase chemical reaction mechanisms and systems of governing equations. It remains, however, for the user to select and implement a solution method; this is not provided. It consists of two major components: the Interpreter and the Gas-phase Subroutine Library. The Interpreter reads a symbolic description of an arbitrary, user-specified chemical reaction mechanism. A data file is generated which forms a link to the Gas-phase Subroutine Library, a collection of about 200 modular subroutines which may be called to return thermodynamic properties, chemical production rates, derivatives of thermodynamic properties, derivatives of chemical production rates, or sensitivity parameters. Both single and double precision versions of CHEMKIN are included. Also provided is a set of FORTRAN subroutines for evaluating gas-phase transport properties such as thermal conductivities, viscosities, and diffusion coefficients. These properties are an important part of any computational simulation of a chemically reacting flow. The transport properties subroutines are designed to be used in conjunction with the CHEMKIN Subroutine Library. The transport properties depend on the state of the gas and on certain molecular parameters. The parameters considered are the Lennard-Jones potential well depth and collision diameter, the dipole moment, the polarizability, and the rotational relaxation collision number.
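A minimal Python sketch of the kind of quantity the Gas-phase Subroutine Library returns, here a forward rate constant from modified-Arrhenius parameters of the sort a mechanism file lists per reaction; the A, beta, and Ea values below are illustrative placeholders, not data from any real mechanism:

    import numpy as np

    R = 1.987  # gas constant in cal/(mol K)

    def arrhenius(T, A, beta, Ea):
        """k(T) = A * T**beta * exp(-Ea / (R*T)), the modified Arrhenius form."""
        return A * T**beta * np.exp(-Ea / (R * T))

    for T in (800.0, 1200.0, 1600.0):
        print(T, arrhenius(T, A=1.0e13, beta=0.0, Ea=40000.0))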
Precise Penning trap measurements of double β-decay Q-values
NASA Astrophysics Data System (ADS)
Redshaw, M.; Brodeur, M.; Bollen, G.; Bustabad, S.; Eibach, M.; Gulyuz, K.; Izzo, C.; Lincoln, D. L.; Novario, S. J.; Ringle, R.; Sandler, R.; Schwarz, S.; Valverde, A. A.
2015-10-01
The double β-decay (ββ-decay) Q-value, defined as the mass difference between parent and daughter atoms, is an important parameter for both two-neutrino ββ-decay (2νββ) and neutrinoless ββ-decay (0νββ) experiments. The Q-value enters into the calculation of the phase space factors, which relate the measured ββ-decay half-life to the nuclear matrix element and, in the case of 0νββ, the effective Majorana mass of the neutrino. In addition, the Q-value defines the total kinetic energy of the two electrons emitted in 0νββ, corresponding to the location of the single peak that is the sought-after signature of 0νββ. Hence, it is essential to have a precise and accurate Q-value determination. Over the last decade, the Penning trap mass spectrometry community has made a significant effort to provide precise ββ-decay Q-value determinations. Here we report on recent measurements with the Low Energy Beam and Ion Trap (LEBIT) facility at the National Superconducting Cyclotron Laboratory (NSCL) of the 48Ca, 82Se, and 96Zr Q-values. These measurements complete the determination of ββ-decay Q-values for the 11 “best” candidates (those with Q > 2 MeV). We also report on a measurement of the 78Kr double electron capture (2EC) Q-value and discuss ongoing Penning trap measurements relating to ββ-decay and 2EC. Support from NSF Contract No. PHY-1102511, and DOE Grant No. 03ER-41268.
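For reference, a sketch of the relations behind such a determination (for ions in the same charge state q, neglecting electron binding energies): the trap compares cyclotron frequencies, whose ratio gives the parent-to-daughter mass ratio and hence the Q-value,

    \nu_c = \frac{qB}{2\pi m}, \qquad
    R \equiv \frac{\nu_c^{\mathrm{daughter}}}{\nu_c^{\mathrm{parent}}}
      = \frac{M_{\mathrm{parent}}}{M_{\mathrm{daughter}}}, \qquad
    Q_{\beta\beta} = \left(M_{\mathrm{parent}} - M_{\mathrm{daughter}}\right)c^2
      = M_{\mathrm{daughter}}\,(R - 1)\,c^2 .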
Quantum treatment of the capture of an atom by a fast nucleus incident on a molecule
NASA Astrophysics Data System (ADS)
Shakeshaft, Robin; Spruch, Larry
1980-04-01
The classical double-scattering model of Thomas for the capture of electrons from atoms by fast ions yields a cross section σ which dominates over the single-scattering contribution for sufficiently fast ions. The magnitude of the classical double-scattering σ differs, however, from its quantum-mechanical (second-Born) analog by an order of magnitude. Further, a "fast ion" means an ion of some MeV, and at those energies the cross sections are very low. On the other hand, as noted by Bates, Cook, and Smith, the double-scattering cross section for the capture of atoms from molecules by fast ions dominates over the single-scattering contribution for incident ions of very much lower energy; roughly, one must have the velocity of the incident projectile much larger than a characteristic internal velocity of the particles in the target. It follows that we are in the asymptotic domain not at about 10 MeV but at about 100 eV. For the reaction H+ + CH4 --> H2+ + CH3 with incident proton energies of 70 to 150 eV, the peak in the angular distribution as determined experimentally is at almost precisely the value predicted by the classical model, but the theoretical total cross section is about 30 times too large. Using a quantum version of the classical model, which involves the same kinematics and therefore preserves the agreement with the angular distribution, we obtain somewhat better agreement with the experimental total cross section, by a factor of about 5. (To obtain very good agreement, one may have to perform a really accurate calculation of large-angle elastic scattering of protons and H atoms by CH3, and take into account interference effects.) In the center-of-mass frame, for sufficiently high incident energy, the first of the two scatterings involves the scattering of H+ by H through an angle of very close to 90°, and it follows that the nuclei of the emergent H2+ ion will almost all be in the singlet state. We have also calculated the cross section for the reaction D+ + CH4 --> (HD)+ + CH3.
High precision locating control system based on VCM for Talbot lithography
NASA Astrophysics Data System (ADS)
Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song
2016-10-01
Aiming at the high-precision and high-efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we build a mathematical model of the VCM and analyze its motion characteristics. A double-closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop was implemented in the driver in order to achieve rapid following of the system current. The position loop was closed by a digital signal processor (DSP), with position feedback provided by high-precision linear scales. Feed-forward control and position-feedback Proportion-Integration-Differentiation (PID) control were applied in order to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrated that the performance of the Z-direction gantry was obviously improved, with high precision, quick response, strong real-time behavior, and easy extension to higher precision.
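A minimal Python sketch of the position-loop structure described above: feed-forward plus PID around a toy VCM modeled as a double integrator, with the current loop idealized as a pure gain. All gains and plant constants are illustrative placeholders, not the authors' tuned values:

    dt, kf = 1e-4, 0.8                    # control period [s], force constant (placeholder)
    kp, ki, kd = 400.0, 50.0, 8.0         # PID gains (placeholder)
    m = 0.05                              # moving mass [kg] (placeholder)

    x, v, integ, e_prev = 0.0, 0.0, 0.0, 0.0
    target = 1e-3                         # 1 mm step in Z
    for k in range(20000):
        e = target - x                    # position error from the linear scale
        integ += e * dt
        deriv = (e - e_prev) / dt
        e_prev = e
        u_ff = 0.0                        # feed-forward term (zero for a pure step)
        u = u_ff + kp * e + ki * integ + kd * deriv
        a = kf * u / m                    # current loop idealized as gain kf
        v += a * dt                       # integrate the toy double-integrator plant
        x += v * dt
    print(f"final position error = {target - x:.2e} m")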
The double-layer of penetrable ions: an alternative route to charge reversal.
Frydel, Derek; Levin, Yan
2013-05-07
We investigate a double-layer of penetrable ions near a charged wall. We find a new mechanism for charge reversal that occurs in the weak-coupling regime and, accordingly, the system is suitable for the mean-field analysis. The penetrability is achieved by smearing-out the ionic charge inside a sphere, so there is no need to introduce non-electrostatic forces and the system in the low coupling limit can be described by a modified version of the Poisson-Boltzmann equation. The predictions of the theory are compared with the Monte Carlo simulations.
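Schematically (our notation, not necessarily the paper's exact equation), smearing each ionic charge over a sphere with a normalized profile ω modifies the mean-field Poisson-Boltzmann equation so that ions both respond to and deposit convolved quantities (* denotes convolution):

    \nabla^2 \psi(\mathbf{r}) \;=\; -\frac{4\pi}{\varepsilon}
      \sum_i q_i\, c_i \left[\omega * e^{-\beta q_i\,(\omega * \psi)}\right](\mathbf{r})

Each penetrable ion feels the smeared potential ω*ψ, and its Boltzmann-weighted density is smeared again before acting as a source.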
Floating point arithmetic in future supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.
1989-01-01
Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
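A minimal Python sketch decomposing the recommended 64-bit format (1 sign bit, 11 exponent bits biased by 1023, 52 mantissa bits with an implicit leading 1):

    import struct

    def fields64(x):
        b = struct.unpack('>Q', struct.pack('>d', x))[0]
        sign = b >> 63
        exponent = (b >> 52) & 0x7FF          # biased by 1023
        mantissa = b & ((1 << 52) - 1)        # fraction bits; leading 1 is implicit
        return sign, exponent, mantissa

    s, e, m = fields64(-6.25)
    print(s, e - 1023, hex(m))    # 1 2 0x9000000000000 -> -1.5625 * 2**2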
Measurement-induced decoherence and information in double-slit interference
Kincaid, Joshua; McLelland, Kyle; Zwolak, Michael
2016-01-01
The double slit experiment provides a classic example of both interference and the effect of observation in quantum physics. When particles are sent individually through a pair of slits, a wave-like interference pattern develops, but no such interference is found when one observes which “path” the particles take. We present a model of interference, dephasing, and measurement-induced decoherence in a one-dimensional version of the double-slit experiment. Using this model, we demonstrate how the loss of interference in the system is correlated with the information gain by the measuring apparatus/observer. In doing so, we give a modern account of measurement in this paradigmatic example of quantum physics that is accessible to students taking quantum mechanics at the graduate or senior undergraduate levels. PMID:27807373
Tocolysis in term breech external cephalic version.
Nor Azlin, M I; Haliza, H; Mahdy, Z A; Anson, I; Fahya, M N; Jamil, M A
2005-01-01
To study the effect of ritodrine tocolysis on the success of external cephalic version (ECV) and to assess the role of ECV in breech presentation at our centre. A prospective randomized double-blind placebo-controlled trial comparing ritodrine and placebo in ECV of singleton term breech pregnancy at a tertiary hospital. Among the 60 patients who were recruited, the overall success rate was 36.7%. Ritodrine tocolysis significantly improved the success rate of ECV (50% vs. 23%; P=0.032). There was a marked effect of ritodrine tocolysis on ECV success in nulliparae (36.4% vs. 13.0%) and multiparae (87.5% vs. 57.1%). External cephalic version was shown to reduce the rate of cesarean section for breech presentation by 33.5% in our unit. External cephalic version significantly reduced the rate of cesarean section in breech presentation, and ritodrine tocolysis improved the success of ECV and should be offered to both nulliparous and parous women in the case of term breech presentation.
Intravenous nitroglycerin for external cephalic version: a randomized controlled trial.
Hilton, Jennifer; Allan, Bruce; Swaby, Cheryl; Wahba, Raouf; Jarrell, John; Wood, Stephen; Ross, Sue; Tran, Quynh
2009-09-01
To estimate whether treatment with intravenous nitroglycerin for uterine relaxation increases the chance of successful external cephalic version. Two double-blind, randomized clinical trials were undertaken: one in nulliparous women and a second in multiparous women. Women presenting for external cephalic version at term were eligible to participate. The primary outcome was immediate success of external cephalic version. Other outcomes were presentation at delivery, cesarean delivery rate, and side effects and complications. Sample size calculations were based on a 100% increase in success of external cephalic version with a one-sided analysis and alpha=0.05 (80% power). In total, 126 women were recruited: 82 in the nulliparous trial and 44 in the multiparous trial. Seven patients did not have external cephalic version before delivery but were included in the analysis of success of external cephalic version. One patient was lost to follow-up. The external cephalic version success rate for nulliparous patients was 24% (10 of 42) in patients who received nitroglycerin compared with 8% (3 of 40) in those who received placebo (P=.04, one-sided Fisher exact test, odds ratio 3.85, lower bound 1.22). In multiparous patients, the external cephalic version success rate did not differ significantly between groups: 44% (10 of 23) in the nitroglycerin group compared with 43% (9 of 21) in the placebo group (P=.60). Treatment with intravenous nitroglycerin increased the rate of successful external cephalic version in nulliparous, but not in multiparous, women. Treatment with intravenous nitroglycerin appeared to be safe, but our numbers were too small to rule out rare serious adverse effects. Level of evidence: I.
Using the CoRE Requirements Method with ADARTS. Version 01.00.05
1994-03-01
requirements; combining ADARTS processes and objects derived from CoRE requirements into an ADARTS software architecture design; and taking advantage of CoRE's precision in the ADARTS process structuring, class structuring, and software architecture design activities. Object-oriented requirements and
High Precision Prediction of Functional Sites in Protein Structures
Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin
2014-01-01
We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
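A minimal Python sketch of this classification setup: an SVM scoring fixed-length vectors of physicochemical features around candidate sites. The data below are synthetic stand-ins, not FEATURE's real microenvironment descriptors:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(3)
    n, d = 2000, 80                        # candidate sites, feature count
    y = rng.random(n) < 0.1                # ~10% true functional sites
    X = rng.standard_normal((n, d)) + 1.5 * y[:, None]  # add signal for positives

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel='rbf', class_weight='balanced').fit(Xtr, ytr)
    pred = clf.predict(Xte)
    # Precision (positive predictive value) is the key deployment metric here.
    print(f"precision = {precision_score(yte, pred):.2f}, "
          f"recall = {recall_score(yte, pred):.2f}")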
Conceptual modeling of coincident failures in multiversion software
NASA Technical Reports Server (NTRS)
Littlewood, Bev; Miller, Douglas R.
1989-01-01
Recent work by Eckhardt and Lee (1985) shows that independently developed program versions fail dependently (specifically, the probability of simultaneous failure of several versions is greater than it would be under true independence). The present authors show there is a precise duality between input choice and program choice in this model and consider a generalization in which different versions can be developed using diverse methodologies. The use of diverse methodologies is shown to decrease the probability of the simultaneous failure of several versions. Indeed, it is theoretically possible to obtain versions which exhibit better than independent failure behavior. The authors try to formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology. They show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity. For certain one-out-of-n systems the authors obtain an optimal method for allocating diversity between versions. For two-out-of-three systems there seem to be no simple optimality results which do not depend on constraints which cannot be verified in practice.
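A minimal Python simulation of the Eckhardt-Lee-style mechanism referenced above: each input has a "difficulty" theta, versions fail independently given theta, yet coincident failures exceed the naive independence prediction. The difficulty distribution and all numbers are illustrative:

    import numpy as np

    rng = np.random.default_rng(4)
    n_inputs = 200000
    theta = rng.beta(0.5, 20.0, n_inputs)      # per-input failure intensity
    v1 = rng.random(n_inputs) < theta          # version 1 failures
    v2 = rng.random(n_inputs) < theta          # version 2 failures (independent given theta)

    p1, p2 = v1.mean(), v2.mean()
    p_both = (v1 & v2).mean()
    print(f"P(fail1)={p1:.4f}  P(fail2)={p2:.4f}")
    print(f"P(both)={p_both:.5f}  vs independence p1*p2={p1 * p2:.5f}")

Because both versions share the same theta, P(both) is E[theta^2] > (E[theta])^2, reproducing the dependent-failure effect.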
Quasi-Speckle Measurements of Close Double Stars With a CCD Camera
NASA Astrophysics Data System (ADS)
Harshaw, Richard
2017-01-01
CCD measurements of visual double stars have been an active area of amateur observing for several years now. However, most CCD measurements rely on “lucky imaging” (selecting a very small percentage of the best frames of a larger frame set so as to get the best “frozen” atmosphere for the image), a technique that has limitations with regard to how close the stars can be and still be cleanly resolved in the lucky image. In this paper, the author reports how using deconvolution stars in the analysis of close double stars can greatly enhance the quality of the autocorrelogram, leading to a more precise solution using speckle reduction software rather than lucky imaging.
Prediction of radial breathing-like modes of double-walled carbon nanotubes with arbitrary chirality
NASA Astrophysics Data System (ADS)
Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad
2014-10-01
The radial breathing-like modes (RBLMs) of double-walled carbon nanotubes (DWCNTs) with arbitrary chirality are investigated by a simple analytical model. For this purpose, the DWCNT is modeled as two concentric elastic thin cylindrical shells coupled through van der Waals (vdW) forces between the adjacent tubes. The Lennard-Jones potential and a molecular mechanics model are used to calculate the vdW forces and to predict the mechanical properties, respectively. The validity of the theoretical results is confirmed through comparison with experimental results. Finally, a new approach is proposed to determine the diameters and the chiral indices of the inner and outer tubes of DWCNTs with high precision.
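A minimal Python sketch of the coupled-shell picture described above: two radial oscillators (inner and outer tube) coupled by a vdW interlayer stiffness, giving in-phase and counter-phase breathing-like modes. All stiffness and mass values are illustrative placeholders, not outputs of the molecular mechanics model:

    import numpy as np

    k1, k2 = 1.0, 0.8          # radial stiffness of inner/outer tube (placeholder)
    m1, m2 = 1.0, 1.3          # mass per unit area of inner/outer tube (placeholder)
    c = 0.1                    # vdW interlayer coupling stiffness (placeholder)

    K = np.array([[k1 + c, -c],
                  [-c, k2 + c]])
    M = np.diag([m1, m2])
    # Generalized eigenproblem K u = w^2 M u: the two roots are the
    # in-phase and counter-phase radial breathing-like modes.
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
    print("RBLM frequencies (arb. units):", np.sqrt(np.sort(w2.real)))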
NASA Technical Reports Server (NTRS)
Kuan, Gary M.; Dekens, Frank G.
2006-01-01
The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer-level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared-vertex double corner cube introduces micrometer-level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm the modeling capability and the viability of calibration techniques.
Doubling down on naturalness with a supersymmetric twin Higgs
NASA Astrophysics Data System (ADS)
Craig, Nathaniel; Howe, Kiel
2014-03-01
We show that naturalness of the weak scale can be comfortably reconciled with both LHC null results and observed Higgs properties provided the double protection of supersymmetry and the twin Higgs mechanism. This double protection radically alters conventional signs of naturalness at the LHC while respecting gauge coupling unification and precision electroweak limits. We find the measured Higgs mass, couplings, and percent-level naturalness of the weak scale are compatible with stops at ~ 3.5 TeV and higgsinos at ~ 1 TeV. The primary signs of naturalness in this scenario include modifications of Higgs couplings, a modest invisible Higgs width, resonant Higgs pair production, and an invisibly-decaying heavy Higgs.
STT Doubles with Large δM - Part VI: Cygnus Multiples
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2016-10-01
The results of visual double star observing sessions suggested that STT doubles with large delta_M tend to be harder to resolve than would be expected from the WDS catalog data. This might be a problem of expectations on the one hand, and on the other an indication of the need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. Among these objects we found three rather complex multiples in Cygnus of special interest, so we decided to write a separate report to have more room to include the non-STT components as well. As with the other objects covered so far, several of the components show parameters quite different from the current WDS data.
Control of DC gas flow in a single-stage double-inlet pulse tube cooler
NASA Astrophysics Data System (ADS)
Wang, C.; Thummes, G.; Heiden, C.
The use of double-inlet mode in the pulse tube cooler opens up the possibility of a DC gas flow circulating around the regenerator and pulse tube. Numerical analysis shows that the effects of DC flow in a single-stage pulse tube cooler differ in some respects from those in a 4 K pulse tube cooler. For highest cooler efficiency, the DC flow should be compensated to a small value, i.e. the ratio of DC flow to average AC flow at the regenerator inlet should be in the range -0.0013 to +0.00016. Dual valves with reversed asymmetric geometries were used for the double-inlet bypass to control the DC flow in this paper. The experiment, performed on a single-stage double-inlet pulse tube cooler, verified that the cooler performance can be significantly improved by precisely controlling the DC flow.
Reducing questionnaire length did not improve physician response rate: a randomized trial.
Bolt, Eva E; van der Heide, Agnes; Onwuteaka-Philipsen, Bregje D
2014-04-01
To examine the effect of reducing questionnaire length on the response rate in a physician survey. A postal questionnaire of four double pages on end-of-life decision making was sent to a random sample of 1,100 general practitioners, 400 elderly care physicians, and 500 medical specialists. Another random sample of 500 medical specialists received a shorter questionnaire of two double pages. After 3 months and one reminder, all nonresponding physicians received an even shorter questionnaire of one double page. The total response was 64% (1,456 of 2,269 eligible respondents). The response rate of medical specialists for the four double-page questionnaire was equal to that for the two double-page questionnaire (190 and 191 questionnaires were returned, respectively). The total response rate increased from 53% to 64% after sending the short one double-page questionnaire (1,203 to 1,456 respondents). The results of our study suggest that reducing the length of a long questionnaire in a physician survey does not necessarily improve the response rate. To improve the response rate and gather more information, researchers could decide to send a drastically shortened version of the questionnaire to nonresponders.
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
The Digital Signal Processor for Thermocouples (DART) computer program implements a precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. It is written using LabVIEW software. DART is available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and on IBM PC-series and compatible computers running Microsoft Windows 3.1. The Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. The IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. The LabVIEW software is a product of National Instruments and is not included with the program.
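A minimal Python sketch of the kind of conversion DART performs: evaluating an inverse polynomial T(V) over the voltage segment that brackets the reading. The segment bounds and coefficients below are illustrative placeholders, not NIST values for any real thermocouple type:

    SEGMENTS = [
        # (v_min_mV, v_max_mV, coefficients c0, c1, c2, ... for T in deg C)
        (-5.0, 0.0, (0.0, 25.0, 0.5)),
        (0.0, 20.0, (0.0, 24.0, -0.2, 0.01)),
    ]

    def volts_to_temp(v_mv):
        # Pick the calibrated segment, then evaluate the polynomial in v.
        for lo, hi, coeffs in SEGMENTS:
            if lo <= v_mv <= hi:
                return sum(c * v_mv**i for i, c in enumerate(coeffs))
        raise ValueError("reading outside calibrated range")

    print(volts_to_temp(4.1))   # temperature in deg C (toy coefficients)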
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Jing-Jy; Flood, Paul E.; LePoire, David
In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. At first, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. The results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.
NASA Astrophysics Data System (ADS)
Brant Dodson, J.; Taylor, Patrick C.; Branson, Mark
2018-05-01
Recently launched cloud observing satellites provide information about the vertical structure of deep convection and its microphysical characteristics. In this study, CloudSat reflectivity data is stratified by cloud type, and the contoured frequency by altitude diagrams reveal a double-arc structure in deep convective cores (DCCs) above 8 km. This suggests two distinct hydrometeor modes (snow versus hail/graupel) controlling variability in reflectivity profiles. The day-night contrast in the double arcs is about four times larger than the wet-dry season contrast. Using QuickBeam, the vertical reflectivity structure of DCCs is analyzed in two versions of the Superparameterized Community Atmospheric Model (SP-CAM) with single-moment (no graupel) and double-moment (with graupel) microphysics. Double-moment microphysics shows better agreement with observed reflectivity profiles; however, neither model variant captures the double-arc structure. Ultimately, the results show that simulating realistic DCC vertical structure and its variability requires accurate representation of ice microphysics, in particular the hail/graupel modes, though this alone is insufficient.
GACD: Integrated Software for Genetic Analysis in Clonal F1 and Double Cross Populations.
Zhang, Luyan; Meng, Lei; Wu, Wencheng; Wang, Jiankang
2015-01-01
Clonal species are common among plants. Clonal F1 progenies are derived from the hybridization between 2 heterozygous clones. In self- and cross-pollinated species, double crosses can be made from 4 inbred lines. A clonal F1 population can be viewed as a double cross population when the linkage phase is determined. The software package GACD (Genetic Analysis of Clonal F1 and Double cross) is freely available public software, capable of building high-density linkage maps and mapping quantitative trait loci (QTL) in clonal F1 and double cross populations. Three functionalities are integrated in GACD version 1.0: binning of redundant markers (BIN); linkage map construction (CDM); and QTL mapping (CDQ). Output of BIN can be directly used as input of CDM. After adding the phenotypic data, the output of CDM can be used as input of CDQ. Thus, GACD acts as a pipeline for genetic analysis. GACD and example datasets are freely available from www.isbreeding.net.
Magic angle for barrier-controlled double quantum dots
NASA Astrophysics Data System (ADS)
Yang, Xu-Chen; Wang, Xin
2018-01-01
We show that the exchange interaction of a singlet-triplet spin qubit confined in double quantum dots, when controlled by the barrier method, is insensitive to a charged impurity lying along certain directions away from the center of the double-dot system. These directions differ from the polar axis of the double dots by the magic angle, equaling arccos(1/√3) ≈ 54.7°, a value previously found in atomic physics and nuclear magnetic resonance. This phenomenon can be understood from an expansion of the additional Coulomb interaction created by the impurity, but also relies on the fact that the exchange interaction solely depends on the tunnel coupling in the barrier-control scheme. Our results suggest that for a scaled-up qubit array, when all pairs of double dots rotate their respective polar axes from the same reference line by the magic angle, crosstalk between qubits can be eliminated, allowing clean single-qubit operations. While our model is a rather simplified version of actual experiments, our results suggest that it is possible to minimize unwanted couplings by judiciously designing the layout of the qubits.
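The value follows from the expansion argument mentioned in the abstract: the leading anisotropic (quadrupolar) term of the impurity's potential expanded about the double-dot center is proportional to the second Legendre polynomial, which vanishes at the magic angle,

    P_2(\cos\theta) = \tfrac{1}{2}\left(3\cos^2\theta - 1\right) = 0
    \quad\Longrightarrow\quad
    \cos\theta = \frac{1}{\sqrt{3}}, \qquad \theta = \arccos\!\frac{1}{\sqrt{3}} \approx 54.7^\circ ,

the same condition that underlies magic-angle spinning in NMR.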
Design and study on optic fiber sensor detection system
NASA Astrophysics Data System (ADS)
Jiang, Xuemei; Liu, Quan; Liang, Xiaoyu; Lin, Haiyan
2005-11-01
With the development of industry and agriculture, environmental pollution has become more and more serious, and poisonous gases are among the important pollution sources. Poisonous gases such as carbon monoxide, hydrogen sulfide, sulfur dioxide, methane, and acetylene seriously threaten normal human life and production, especially today when industry and the various manufacturing sectors are developing at full speed. Acetylene is a gas with very lively chemical properties, extremely apt to burn, decompose, and explode, and it is the most destructive among these poisonous gases. Compared with other inflammable and explosive gases, the explosion range of acetylene is wider. Therefore, monitoring acetylene pollution sources on site in real time, and grasping the occurrence and development of pollution in good time, is of great importance. Aiming at the above problems, this paper presents an optical fiber detection system for acetylene gas based on the spectral absorption characteristics of acetylene, equipped with a reference channel for on-line, real-time detection. In order to eliminate the effect of other factors on measurement precision, double light sources, double light paths, and double gas cells are used in this system. Because of the use of the double-wavelength compensation method, this system can eliminate disturbances in the optical paths; the problem of instability is solved and the measurement precision is greatly enhanced. Some experimental results are presented at the end of this paper.
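A minimal Python sketch of the double-wavelength compensation idea: one channel sits on an acetylene absorption line and a reference channel sits off the line, so source drift and common path losses cancel in the ratio (Beer-Lambert form; all symbols and numbers are illustrative placeholders):

    import numpy as np

    alpha = 1.2          # absorption coefficient at the on-line wavelength (placeholder)
    L = 0.3              # gas cell length [m] (placeholder)

    def concentration(i_meas, i_ref, i0_meas, i0_ref):
        # The double ratio removes drift and loss common to both channels.
        ratio = (i_meas / i_ref) / (i0_meas / i0_ref)
        return -np.log(ratio) / (alpha * L)

    # Toy usage: a 10% source drift affects both channels equally and cancels.
    c_true, drift = 0.05, 0.9
    i0m, i0r = 1.00, 1.00                       # zero-gas calibration intensities
    im = drift * i0m * np.exp(-alpha * L * c_true)
    ir = drift * i0r
    print(f"recovered concentration = {concentration(im, ir, i0m, i0r):.4f}")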
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halverson, Samuel; Roy, Arpita; Mahadevan, Suvrath
2015-06-10
We present the design and test results of a compact optical fiber double-scrambler for high-resolution Doppler radial velocity instruments. This device consists of a single optic: a high-index n ∼ 2 ball lens that exchanges the near and far fields between two fibers. When used in conjunction with octagonal fibers, this device yields very high scrambling gains (SGs) and greatly desensitizes the fiber output from any input illumination variations, thereby stabilizing the instrument profile of the spectrograph and improving the Doppler measurement precision. The system is also highly insensitive to input pupil variations, isolating the spectrograph from telescope illumination variations and seeing changes. By selecting the appropriate glass and lens diameter the highest efficiency is achieved when the fibers are practically in contact with the lens surface, greatly simplifying the alignment process when compared to classical double-scrambler systems. This prototype double-scrambler has demonstrated significant performance gains over previous systems, achieving SGs in excess of 10,000 with a throughput of ∼87% using uncoated Polymicro octagonal fibers. Adding a circular fiber to the fiber train further increases the SG to >20,000, limited by laboratory measurement error. While this fiber system is designed for the Habitable-zone Planet Finder spectrograph, it is more generally applicable to other instruments in the visible and near-infrared. Given the simplicity and low cost, this fiber scrambler could also easily be multiplexed for large multi-object instruments.
Caffeine Does Not Modulate Inhibitory Control
ERIC Educational Resources Information Center
Tieges, Zoe; Snel, Jan; Kok, Albert; Ridderinkhof, K. Richard
2009-01-01
The effects of a 3 mg/kg body weight (BW) dose of caffeine were assessed on behavioral indices of response inhibition. To meet these aims, we selected a modified AX version of the Continuous Performance Test (CPT), the stop task, and the flanker task. In three double-blind, placebo-controlled, within-subjects experiments, these tasks were…
ERIC Educational Resources Information Center
Geller, Daniel; Donnelly, Craig; Lopez, Frank; Rubin, Richard; Newcorn, Jeffrey; Sutton, Virginia; Bakken, Rosalie; Paczkowski, Martin; Kelsey, Douglas; Sumner, Calvin
2007-01-01
Objective: Research suggests 25% to 35% of children with attention-deficit/hyperactivity disorder (ADHD) have comorbid anxiety disorders. This double-blind study compared atomoxetine with placebo for treating pediatric ADHD with comorbid anxiety, as measured by the ADHD Rating Scale-IV-Parent Version: Investigator Administered and Scored…
Multi-Attribute Sequential Search
ERIC Educational Resources Information Center
Bearden, J. Neil; Connolly, Terry
2007-01-01
This article describes empirical and theoretical results from two multi-attribute sequential search tasks. In both tasks, the DM sequentially encounters options described by two attributes and must pay to learn the values of the attributes. In the "continuous" version of the task the DM learns the precise numerical value of an attribute when she…
A Rasch Analysis of the Junior Metacognitive Awareness Inventory with Singapore Students
ERIC Educational Resources Information Center
Ning, Hoi Kwan
2018-01-01
The psychometric properties of the 2 versions of the Junior Metacognitive Awareness Inventory were examined with Singapore student samples. Other than 2 misfitting items and an underutilized response scale, Rasch analysis demonstrated that the instruments have good measurement precision, and no differential item functioning was detected across…
The Astronomical Almanac Online - Welcome
The online version contains precise ephemerides of the Sun, Moon, planets, and satellites, and data for eclipses, organized into sections: Phenomena (incl. eclipses); Time-Scales and Coordinate Systems; Sun; Moon; Planets; Natural Satellites; Dwarf Planets.
NASA Technical Reports Server (NTRS)
Lewandowski, W.
1994-01-01
The introduction of the GPS common-view method at the beginning of the 1980's led to an immediate and dramatic improvement of international time comparisons. Since then, further progress brought the precision and accuracy of GPS common-view intercontinental time transfer from tens of nanoseconds to a few nanoseconds, even with SA activated. This achievement was made possible by the use of the following: ultra-precise ground antenna coordinates, post-processed precise ephemerides, double-frequency measurements of ionosphere, and appropriate international coordination and standardization. This paper reviews developments and applications of the GPS common-view method during the last decade and comments on possible future improvements whose objective is to attain sub-nanosecond uncertainty.
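The algebra behind common-view cancellation is simple enough to show directly; this sketch with made-up clock offsets is illustrative and not part of the original paper.

```python
# Common-view time transfer: at an agreed epoch, stations A and B each measure
# (local clock - satellite clock) for the same GPS satellite. Differencing the
# two measurements cancels the satellite clock entirely, so even SA-degraded
# broadcast time drops out; what remains is the A-B clock difference plus
# residual path errors. All numbers below are made up for illustration.

sat_clock_error = 250e-9   # satellite clock offset, incl. SA dither [s]
offset_A = 10e-9           # true offset of station A's clock [s]
offset_B = 35e-9           # true offset of station B's clock [s]

meas_A = offset_A - sat_clock_error   # what A records against the satellite
meas_B = offset_B - sat_clock_error   # what B records at the same epoch

print(meas_A - meas_B)                # -25 ns = offset_A - offset_B
```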
Method for measuring retardation of infrared wave-plate by modulated-polarized visible light
NASA Astrophysics Data System (ADS)
Zhang, Ying; Song, Feijun
2012-11-01
A new method for precisely measuring the optical phase retardation of wave-plates in the infrared spectral region, using modulated-polarized visible light, is presented. An electro-optic modulator is used to accurately determine the zero point by means of the frequency-doubled signal of the modulated-polarized light. A Babinet-Soleil compensator is employed to provide the phase-delay compensation. Based on this method, an instrument is set up to measure the retardations of infrared wave-plates with a visible-region laser. Measurement results with high accuracy and good repeatability are obtained by simple calculation; the measurement precision is high, and the repeatability is within 0.3%.
NASA Astrophysics Data System (ADS)
Guo, H.; Zhang, H.
2016-12-01
Relocating earthquakes with high precision is a central task for monitoring seismicity and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations, to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, and applied it to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To utilize the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin-time and station-correction terms from the inversion system and cancels out the effects of velocity uncertainties both near and outside the source region. We tested and applied the new method to regular earthquakes in northern California to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method determines more accurate relative locations, while the station-pair DD method better improves absolute locations. We therefore further propose a location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick first arrivals and derive WCC event-pair differential times, so the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered because of their low-precision relative locations. Because double-pair data can be constructed directly from station-pair data, the double-pair DD method can be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared to previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
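A minimal sketch of the double-pair differential-time construction described above, assuming a simple arrival-time table (variable names are ours). It makes explicit why origin times and station terms both cancel, and why double-pair data can be built directly from station-pair differences.

```python
import numpy as np

def double_pair_residuals(t, ev_pairs, st_pairs):
    """Build double-pair differential times from an arrival-time table.

    t[e, s] : arrival time of event e at station s
    For events (a, b) and stations (i, j):
        dt = (t[a,i] - t[a,j]) - (t[b,i] - t[b,j])
    Event origin times and station static corrections both cancel, which is
    the key property of the double-pair DD method described above.
    """
    out = []
    for a, b in ev_pairs:
        for i, j in st_pairs:
            station_pair_a = t[a, i] - t[a, j]   # origin time of a cancels
            station_pair_b = t[b, i] - t[b, j]   # origin time of b cancels
            out.append(station_pair_a - station_pair_b)  # station terms cancel
    return np.array(out)

# Tiny synthetic check: adding any origin-time or station term to t would
# leave dt unchanged.
t = np.array([[1.00, 2.10],
              [1.05, 2.20]])
print(double_pair_residuals(t, [(0, 1)], [(0, 1)]))   # [0.05]
```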
Nifedipine as a uterine relaxant for external cephalic version: a randomized controlled trial.
Kok, Marjolein; Bais, Joke M; van Lith, Jan M; Papatsonis, Dimitri M; Kleiverda, Gunilla; Hanny, Dahrs; Doornbos, Johannes P; Mol, Ben W; van der Post, Joris A
2008-08-01
To estimate the effectiveness of nifedipine as a uterine relaxant during external cephalic version to correct breech presentation. In this randomized, double-blind, placebo-controlled trial, women with a singleton fetus in breech presentation and a gestational age of 36 weeks or more were eligible for enrollment. Participating women received two doses of either nifedipine 10 mg or placebo, 30 and 15 minutes before the external cephalic version attempt. The primary outcome was a cephalic-presenting fetus immediately after the procedure. Secondary outcome measures were cephalic presentation at delivery, mode of delivery, and adverse events. A sample size of 292 was calculated to provide 80% power to detect a 17% improvement of the external cephalic version success rate, assuming a placebo group rate of 40% and alpha of .05. Outcome data for 310 of 320 randomly assigned participants revealed no significant difference in external cephalic version success rates between treatment (42%) and control group (37%) (relative risk 1.1, 95% confidence interval 0.85-1.5). The cesarean delivery rate was 51% in the treatment group and 46% in the control group (relative risk 1.1, 95% confidence interval 0.88-1.4). Nifedipine did not significantly improve the success of external cephalic version. Future use of nifedipine to improve the outcome of external cephalic version should be limited to large clinical trials.
Rosales, Roberto S; Martin-Hidalgo, Yolanda; Reboso-Morales, Luis; Atroshi, Isam
2016-03-03
The purpose of this study was to assess the reliability and construct validity of the Spanish version of the 6-item carpal tunnel syndrome (CTS) symptoms scale (CTS-6). In this cross-sectional study, 40 patients diagnosed with CTS based on clinical and neurophysiologic criteria completed the standard Spanish versions of the CTS-6 and the disabilities of the arm, shoulder and hand (QuickDASH) scales on two occasions with a 1-week interval. Internal-consistency reliability was assessed with the Cronbach alpha coefficient, and test-retest reliability with the intraclass correlation coefficient, using a two-way random-effects model and the absolute-agreement definition (ICC2,1). Cross-sectional precision was analyzed with the Standard Error of Measurement (SEM). Longitudinal precision for the test-retest reliability coefficient was assessed with the Standard Error of Measurement difference (SEMdiff) and the Minimal Detectable Change at the 95% confidence level (MDC95). For assessing construct validity, it was hypothesized that the CTS-6 would have a strong positive correlation with the QuickDASH, analyzed with the Pearson correlation coefficient (r). The standard Spanish version of the CTS-6 presented a Cronbach alpha of 0.81 with a SEM of 0.3. Test-retest reliability showed an ICC of 0.85 with a SEMdiff of 0.36 and a MDC95 of 0.7. The correlation between the CTS-6 and the QuickDASH was concordant with the a priori construct hypothesis (r = 0.69). CONCLUSIONS: The standard Spanish version of the 6-item CTS symptoms scale showed good internal consistency, test-retest reliability and construct validity for outcomes assessment in CTS. The CTS-6 will be useful to clinicians and researchers in Spanish-speaking parts of the world. The use of standardized outcome measures across countries will also facilitate comparison of research results in carpal tunnel syndrome.
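For readers unfamiliar with the precision quantities used here, the textbook relations among SEM, SEMdiff, and MDC95 can be sketched as follows; the inputs are hypothetical and the study's exact computations may differ in detail.

```python
import math

def reliability_precision(sd, icc):
    """Standard psychometric precision quantities (textbook formulas):

    SEM     = sd * sqrt(1 - icc)    cross-sectional precision
    SEMdiff = SEM * sqrt(2)         SE of a test-retest difference score
    MDC95   = 1.96 * SEMdiff        minimal detectable change, 95% level
    """
    sem = sd * math.sqrt(1.0 - icc)
    sem_diff = sem * math.sqrt(2.0)
    mdc95 = 1.96 * sem_diff
    return sem, sem_diff, mdc95

# Hypothetical inputs (not the study's data): scale SD = 0.8, ICC = 0.85.
print(reliability_precision(0.8, 0.85))
```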
SARAH 4: A tool for (not only SUSY) model builders
NASA Astrophysics Data System (ADS)
Staub, Florian
2014-06-01
We present the new version of the Mathematica package SARAH, which provides the same features for a non-supersymmetric model as previous versions do for supersymmetric models. This includes an easy and straightforward definition of the model, the calculation of all vertices, mass matrices, tadpole equations, and self-energies. Also the two-loop renormalization group equations for a general gauge theory are now included and have been validated with the independent Python code PyR@TE. Model files for FeynArts, CalcHep/CompHep, WHIZARD and in the UFO format can be written, and source code for SPheno for the calculation of the mass spectrum, a set of precision observables, and the decay widths and branching ratios of all states can be generated. Furthermore, the new version includes routines to output model files for Vevacious for both supersymmetric and non-supersymmetric models. Global symmetries are also supported in this version, and by linking Susyno the handling of Lie groups has been improved and extended.
Flens, Gerard; Smits, Niels; Terwee, Caroline B; Dekker, Joost; Huijbrechts, Irma; Spinhoven, Philip; de Beurs, Edwin
2017-12-01
We used the Dutch-Flemish version of the USA PROMIS adult V1.0 item bank for Anxiety as input for developing a computerized adaptive test (CAT) to measure the entire latent anxiety continuum. First, psychometric analysis of a combined clinical and general population sample ( N = 2,010) showed that the 29-item bank has psychometric properties that are required for a CAT administration. Second, a post hoc CAT simulation showed efficient and highly precise measurement, with an average number of 8.64 items for the clinical sample, and 9.48 items for the general population sample. Furthermore, the accuracy of our CAT version was highly similar to that of the full item bank administration, both in final score estimates and in distinguishing clinical subjects from persons without a mental health disorder. We discuss the future directions and limitations of CAT development with the Dutch-Flemish version of the PROMIS Anxiety item bank.
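A minimal illustration of how a CAT like the one described above proceeds, assuming a simple 2PL item bank and grid-based posterior updating; this is a generic sketch, not the PROMIS scoring engine, and all parameters below are invented.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def run_cat(a, b, answer, se_stop=0.32, max_items=12):
    """Pick the most informative unused item at the current theta estimate,
    update theta as a grid posterior mean with a N(0,1) prior, and stop
    once the posterior SD (the score's SE) is small enough."""
    grid = np.linspace(-4, 4, 161)
    post = np.exp(-0.5 * grid**2)                 # N(0,1) prior on the grid
    used, mean, se = [], 0.0, np.inf
    for _ in range(max_items):
        theta = np.sum(grid * post) / np.sum(post)
        free = [i for i in range(len(a)) if i not in used]
        item = max(free, key=lambda i: info_2pl(theta, a[i], b[i]))
        used.append(item)
        x = answer(item)                          # observe a 0/1 response
        p = 1.0 / (1.0 + np.exp(-a[item] * (grid - b[item])))
        post = post * (p if x == 1 else (1.0 - p))  # Bayes update
        mean = np.sum(grid * post) / np.sum(post)
        se = np.sqrt(np.sum((grid - mean)**2 * post) / np.sum(post))
        if se < se_stop:
            break
    return mean, se, used

# Simulated respondent with true theta = 1.0 on a hypothetical 29-item bank.
rng = np.random.default_rng(0)
a_par, b_par = rng.uniform(1.0, 2.5, 29), rng.normal(0, 1, 29)
sim = lambda i: int(rng.random() < 1 / (1 + np.exp(-a_par[i] * (1.0 - b_par[i]))))
print(run_cat(a_par, b_par, sim))
```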
Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo
2018-01-17
Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detection at sub-second exposure times. We developed a method to perform the interference experiment using an asymmetric double-slit fabricated by a focused ion beam instrument, operating the microscope under a "pre-Fraunhofer" condition different from the Fraunhofer condition of conventional double-slit experiments. Here, the pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high dose for calculating the electron probability distribution, and low dose for the distribution of individual electrons. Finally, we presented the distributions of single electrons as a composite image, color-coded according to the three types of experiments above.
Hart, Alister J; Skinner, John A; Henckel, Johann; Sampson, Barry; Gordon, Fabiana
2011-09-01
Many factors affect the blood metal ion levels after metal-on-metal (MOM) hip arthroplasty. The main surgically adjustable variable is the amount of coverage of the head provided by the cup which is a function of the inclination and version angles. However, most studies have used plain radiographs which have questionable precision and accuracy, particularly for version and large diameter metal heads; further, these studies do not simultaneously assess version and inclination. Thus the relationship between version and blood metal ions levels has not been resolved. We determined whether cup inclination and version influence blood metal ion levels while adjusting for age at assessment, gender, body mass index, horizontal femoral offset, head size, manufacturer hip type, and Oxford hip score. We prospectively followed 100 individuals (51 females, 49 males) with unilateral MOM hip resurfacing who underwent clinical assessment, CT scanning, and blood metal ion measurement. Multiple regression analysis was used to determine which variables were predictors of blood metal ion levels and to model the effect of these variables. Only cup inclination, version angles, and gender influenced blood cobalt or chromium levels. Cobalt and chromium levels positively correlated with inclination angle and negatively correlated with version angle. The effect of changes in version angle was less than for inclination angle. Based on our observations, we developed a formula to predict the effect of these parameters on metal ion levels. Our data suggest insufficient cup version can cause high blood metal ions after MOM hip arthroplasty. We were unable to show that excessive version caused high levels. Level II, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
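The core top-event calculation for the gate types listed above is easy to sketch for independent basic events; this is illustrative only, since FTC's actual solver additionally supplies rigorous error bounds.

```python
from math import prod
from itertools import combinations

def gate(kind, probs, m=None):
    """Evaluate one fault-tree gate from independent input probabilities
    (toy illustration of the top-event calculation, not FTC itself)."""
    if kind == "AND":
        return prod(probs)
    if kind == "OR":
        return 1.0 - prod(1.0 - p for p in probs)
    if kind == "XOR":                      # exactly one of two inputs
        p, q = probs
        return p * (1 - q) + q * (1 - p)
    if kind == "INVERT":
        return 1.0 - probs[0]
    if kind == "M_OF_N":                   # at least m of n inputs fail
        n = len(probs)
        total = 0.0
        for k in range(m, n + 1):
            for idx in combinations(range(n), k):
                total += prod(probs[i] if i in idx else 1 - probs[i]
                              for i in range(n))
        return total
    raise ValueError(kind)

# Top event = OR(AND(p1, p2), p3) with hypothetical component probabilities.
p_top = gate("OR", [gate("AND", [1e-3, 2e-3]), 5e-5])
print(p_top)
```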
Double axis, two-crystal x-ray spectrometer.
Erez, G; Kimhi, D; Livnat, A
1978-05-01
A two-crystal double axis x-ray spectrometer, capable of goniometric accuracy on the order of 0.1", has been developed. Some of its unique design features are presented. These include (1) a modified commercial thrust bearing which furnishes a precise, full circle theta:2theta coupling, (2) a new tangent drive system design in which a considerable reduction of the lead screw effective pitch is achieved, and (3) an automatic step scanning control which eliminates most of the mechanical deficiencies of the tangent drive by directly reading the tangent arm displacement.
Thermomechanical CSM analysis of a superheater tube in transient state
NASA Astrophysics Data System (ADS)
Taler, Dawid; Madejski, Paweł
2011-12-01
The paper presents a thermomechanical computational solid mechanics (CSM) analysis of a "double omega" tube used in the steam superheaters of circulating fluidized bed (CFB) boilers. The complex cross-sectional shape of the "double omega" tubes requires more precise analysis in order to prevent failure resulting from excessive temperature and thermal stresses. The results have been obtained using the finite volume method for the transient state of the superheater. The calculation was carried out for a section of tube made of low-alloy steel.
Longitudinal Double-Spin Asymmetry for Inclusive Jet Production in p⃗+p⃗ Collisions at √s = 200 GeV
NASA Astrophysics Data System (ADS)
Abelev, B. I.; Aggarwal, M. M.; Ahammed, Z.; Anderson, B. D.; Arkhipkin, D.; Averichev, G. S.; Bai, Y.; Balewski, J.; Barannikova, O.; Barnby, L. S.; Baudot, J.; Baumgart, S.; Belaga, V. V.; Bellingeri-Laurikainen, A.; Bellwied, R.; Benedosso, F.; Betts, R. R.; Bhardwaj, S.; Bhasin, A.; Bhati, A. K.; Bichsel, H.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Blyth, S.-L.; Bombara, M.; Bonner, B. E.; Botje, M.; Bouchet, J.; Brandin, A. V.; Burton, T. P.; Bystersky, M.; Cai, X. Z.; Caines, H.; Calderón de La Barca Sánchez, M.; Callner, J.; Catu, O.; Cebra, D.; Cervantes, M. C.; Chajecki, Z.; Chaloupka, P.; Chattopadhyay, S.; Chen, H. F.; Chen, J. H.; Chen, J. Y.; Cheng, J.; Cherney, M.; Chikanian, A.; Christie, W.; Chung, S. U.; Clarke, R. F.; Codrington, M. J. M.; Coffin, J. P.; Cormier, T. M.; Cosentino, M. R.; Cramer, J. G.; Crawford, H. J.; Das, D.; Dash, S.; Daugherity, M.; de Moura, M. M.; Dedovich, T. G.; Dephillips, M.; Derevschikov, A. A.; Didenko, L.; Dietel, T.; Djawotho, P.; Dogra, S. M.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, F.; Dunin, V. B.; Dunlop, J. C.; Dutta Mazumdar, M. R.; Edwards, W. R.; Efimov, L. G.; Elhalhuli, E.; Emelianov, V.; Engelage, J.; Eppley, G.; Erazmus, B.; Estienne, M.; Fachini, P.; Fatemi, R.; Fedorisin, J.; Feng, A.; Filip, P.; Finch, E.; Fine, V.; Fisyak, Y.; Fu, J.; Gagliardi, C. A.; Gaillard, L.; Ganti, M. S.; Garcia-Solis, E.; Ghazikhanian, V.; Ghosh, P.; Gorbunov, Y. N.; Gos, H.; Grebenyuk, O.; Grosnick, D.; Grube, B.; Guertin, S. M.; Guimaraes, K. S. F. F.; Gupta, A.; Gupta, N.; Haag, B.; Hallman, T. J.; Hamed, A.; Harris, J. W.; He, W.; Heinz, M.; Henry, T. W.; Heppelmann, S.; Hippolyte, B.; Hirsch, A.; Hjort, E.; Hoffman, A. M.; Hoffmann, G. W.; Hofman, D. J.; Hollis, R. S.; Horner, M. J.; Huang, H. Z.; Hughes, E. W.; Humanic, T. J.; Igo, G.; Iordanova, A.; Jacobs, P.; Jacobs, W. W.; Jakl, P.; Jones, P. G.; Judd, E. G.; Kabana, S.; Kang, K.; Kapitan, J.; Kaplan, M.; Keane, D.; Kechechyan, A.; Kettler, D.; Khodyrev, V. Yu.; Kiryluk, J.; Kisiel, A.; Kislov, E. M.; Klein, S. R.; Knospe, A. G.; Kocoloski, A.; Koetke, D. D.; Kollegger, T.; Kopytine, M.; Kotchenda, L.; Kouchpil, V.; Kowalik, K. L.; Kravtsov, P.; Kravtsov, V. I.; Krueger, K.; Kuhn, C.; Kulikov, A. I.; Kumar, A.; Kurnadi, P.; Kuznetsov, A. A.; Lamont, M. A. C.; Landgraf, J. M.; Lange, S.; Lapointe, S.; Laue, F.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, C.-H.; Lehocka, S.; Levine, M. J.; Li, C.; Li, Q.; Li, Y.; Lin, G.; Lin, X.; Lindenbaum, S. J.; Lisa, M. A.; Liu, F.; Liu, H.; Liu, J.; Liu, L.; Ljubicic, T.; Llope, W. J.; Longacre, R. S.; Love, W. A.; Lu, Y.; Ludlam, T.; Lynn, D.; Ma, G. L.; Ma, J. G.; Ma, Y. G.; Mahapatra, D. P.; Majka, R.; Mangotra, L. K.; Manweiler, R.; Margetis, S.; Markert, C.; Martin, L.; Matis, H. S.; Matulenko, Yu. A.; McShane, T. S.; Meschanin, A.; Millane, J.; Miller, M. L.; Minaev, N. G.; Mioduszewski, S.; Mischke, A.; Mitchell, J.; Mohanty, B.; Morozov, D. A.; Munhoz, M. G.; Nandi, B. K.; Nattrass, C.; Nayak, T. K.; Nelson, J. M.; Nepali, C.; Netrakanti, P. K.; Nogach, L. V.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Okorokov, V.; Olson, D.; Pachr, M.; Pal, S. K.; Panebratsev, Y.; Pavlinov, A. I.; Pawlak, T.; Peitzmann, T.; Perevoztchikov, V.; Perkins, C.; Peryt, W.; Phatak, S. C.; Planinic, M.; Pluta, J.; Poljak, N.; Porile, N.; Poskanzer, A. M.; Potekhin, M.; Potrebenikova, E.; Potukuchi, B. V. K. S.; Prindle, D.; Pruneau, C.; Pruthi, N. K.; Putschke, J.; Qattan, I. A.; Raniwala, R.; Raniwala, S.; Ray, R. L.; Relyea, D.; Ridiger, A.; Ritter, H. 
G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Rose, A.; Roy, C.; Ruan, L.; Russcher, M. J.; Sahoo, R.; Sakrejda, I.; Sakuma, T.; Salur, S.; Sandweiss, J.; Sarsour, M.; Sazhin, P. S.; Schambach, J.; Scharenberg, R. P.; Schmitz, N.; Seger, J.; Selyuzhenkov, I.; Seyboth, P.; Shabetai, A.; Shahaliev, E.; Shao, M.; Sharma, M.; Shen, W. Q.; Shimanskiy, S. S.; Sichtermann, E. P.; Simon, F.; Singaraju, R. N.; Skoby, M. J.; Smirnov, N.; Snellings, R.; Sorensen, P.; Sowinski, J.; Speltz, J.; Spinka, H. M.; Srivastava, B.; Stadnik, A.; Stanislaus, T. D. S.; Staszak, D.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Suaide, A. A. P.; Suarez, M. C.; Subba, N. L.; Sumbera, M.; Sun, X. M.; Sun, Z.; Surrow, B.; Symons, T. J. M.; Szanto de Toledo, A.; Takahashi, J.; Tang, A. H.; Tarnowsky, T.; Thomas, J. H.; Timmins, A. R.; Timoshenko, S.; Tokarev, M.; Trainor, T. A.; Tram, V. N.; Trentalange, S.; Tribble, R. E.; Tsai, O. D.; Ulery, J.; Ullrich, T.; Underwood, D. G.; van Buren, G.; van der Kolk, N.; van Leeuwen, M.; Vander Molen, A. M.; Varma, R.; Vasilevski, I. M.; Vasiliev, A. N.; Vernet, R.; Vigdor, S. E.; Viyogi, Y. P.; Vokal, S.; Voloshin, S. A.; Wada, M.; Waggoner, W. T.; Wang, F.; Wang, G.; Wang, J. S.; Wang, X. L.; Wang, Y.; Webb, J. C.; Westfall, G. D.; Whitten, C., Jr.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, J.; Wu, Y.; Xu, N.; Xu, Q. H.; Xu, Z.; Yepes, P.; Yoo, I.-K.; Yue, Q.; Yurevich, V. I.; Zawisza, M.; Zhan, W.; Zhang, H.; Zhang, W. M.; Zhang, Y.; Zhang, Z. P.; Zhao, Y.; Zhong, C.; Zhou, J.; Zoulkarneev, R.; Zoulkarneeva, Y.; Zubarev, A. N.; Zuo, J. X.
2008-06-01
We report a new STAR measurement of the longitudinal double-spin asymmetry A_LL for inclusive jet production at midrapidity in polarized p+p collisions at a center-of-mass energy of √s = 200 GeV. The data, which cover jet transverse momenta 5
Automatic alignment of double optical paths in excimer laser amplifier
NASA Astrophysics Data System (ADS)
Wang, Dahui; Zhao, Xueqing; Hua, Hengqi; Zhang, Yongsheng; Hu, Yun; Yi, Aiping; Zhao, Jun
2013-05-01
A beam automatic alignment method for double-path amplification in an electron-beam-pumped excimer laser system is demonstrated. With it, the beams from the amplifiers can be steered along the designated direction and thus irradiate the target with high stability and accuracy. Because excimer laser amplifiers contain no natural alignment references, a two-cross-hair structure is used to align the beams: one cross-hair, placed in the input beam, serves as the near-field reference, while the other, placed in the output beam, serves as the far-field reference. The two cross-hairs are imaged onto charge-coupled devices (CCDs) by separate image-relaying structures. The errors between the intersection points of the two cross-hair images and the centroid coordinates of the actual beam are recorded automatically and sent to a closed-loop feedback control system; the negative feedback keeps running until a preset accuracy is reached. On the basis of this design, the alignment optical path was built and the software was compiled, after which the double-path automatic alignment experiment on the electron-beam-pumped excimer laser amplifier was carried out, and the relevant influencing factors and the alignment precision were analyzed. Experimental results indicate that the alignment system can bring the beams to the aiming direction automatically in a short time. The analysis shows that the accuracy of the alignment system is 0.63 μrad and the maximum beam restoration error is 13.75 μm; furthermore, the larger the distance between the two cross-hairs, the higher the precision of the system. The automatic alignment system has been used in an angular-multiplexing excimer Master Oscillator Power Amplifier (MOPA) system and satisfies the overall requirement on beam alignment precision.
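The measurement step of such a loop, computing the beam centroid to compare against the cross-hair intersection, can be sketched as follows (synthetic data; not the authors' software).

```python
import numpy as np

def beam_centroid(img):
    """Intensity-weighted centroid of a beam spot on a CCD frame; the
    alignment loop compares this with the cross-hair intersection and
    drives the steering optics until the error is below tolerance
    (sketch of the measurement step only, not the control law)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian spot centered near (12.3, 20.7) on a 64x64 frame.
ys, xs = np.indices((64, 64))
spot = np.exp(-((xs - 12.3)**2 + (ys - 20.7)**2) / (2 * 3.0**2))
cx, cy = beam_centroid(spot)
error = np.hypot(cx - 12.3, cy - 20.7)   # pointing error in pixels
print(cx, cy, error)
```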
NASA Astrophysics Data System (ADS)
Ghosh, Shreya; Lawless, Matthew J.; Rule, Gordon S.; Saxena, Sunil
2018-01-01
Site-directed spin labeling using two strategically placed natural histidine residues allows for the rigid attachment of paramagnetic Cu2+. This double histidine (dHis) motif enables extremely precise, narrow distance distributions resolved by Cu2+-based pulsed ESR. Furthermore, the distance measurements are easily relatable to the protein backbone structure. The Cu2+ ion has until now been introduced as a complex with the chelating agent iminodiacetic acid (IDA) to prevent nonspecific binding. Recently, this method was found to have two limiting concerns: poor selectivity toward α-helices and incomplete Cu2+-IDA complexation. Herein, we introduce an alternative method of dHis-Cu2+ loading using the nitrilotriacetic acid (NTA)-Cu2+ complex. We find that the Cu2+-NTA complex shows a four-fold increase in selectivity toward α-helical dHis sites. Furthermore, we show that 100% Cu2+-NTA complexation is achievable, enabling precise dHis loading and leaving no free Cu2+ in solution. We analyze the optimum dHis loading conditions using both continuous-wave and pulsed ESR. We implement these findings to show increased sensitivity of the double electron-electron resonance (DEER) experiment in two different protein systems. The DEER signal is increased within the immunoglobulin-binding domain of protein G (called GB1): we measure distances between a dHis site on an α-helix and a dHis site either on a mid-strand or on a non-hydrogen-bonded edge-strand β-sheet. Finally, the DEER signal is increased twofold between two α-helix dHis sites in the enzymatic dimer glutathione S-transferase, exemplifying the enhanced α-helical selectivity of Cu2+-NTA.
Tests of general relativity from timing the double pulsar.
Kramer, M; Stairs, I H; Manchester, R N; McLaughlin, M A; Lyne, A G; Ferdman, R D; Burgay, M; Lorimer, D R; Possenti, A; D'Amico, N; Sarkissian, J M; Hobbs, G B; Reynolds, J E; Freire, P C C; Camilo, F
2006-10-06
The double pulsar system PSR J0737-3039A/B is unique in that both neutron stars are detectable as radio pulsars. They are also known to have much higher mean orbital velocities and accelerations than those of other binary pulsars. The system is therefore a good candidate for testing Einstein's theory of general relativity and alternative theories of gravity in the strong-field regime. We report on precision timing observations taken over the 2.5 years since its discovery and present four independent strong-field tests of general relativity. These tests use the theory-independent mass ratio of the two stars. By measuring relativistic corrections to the Keplerian description of the orbital motion, we find that the "post-Keplerian" parameter s agrees with the value predicted by general relativity within an uncertainty of 0.05%, the most precise test yet obtained. We also show that the transverse velocity of the system's center of mass is extremely small. Combined with the system's location near the Sun, this result suggests that future tests of gravitational theories with the double pulsar will supersede the best current solar system tests. It also implies that the second-born pulsar may not have formed through the core collapse of a helium star, as is usually assumed.
How to measure separations and angles between intra-molecular fluorescent markers
NASA Astrophysics Data System (ADS)
Flyvbjerg, Henrik; Mortensen, Kim I.; Sung, Jongmin; Spudich, James A.
We demonstrate a novel, yet simple tool for the study of structure and function of biomolecules by extending two-colour co-localization microscopy to fluorescent molecules with fixed orientations and in intra-molecular proximity. From each color-separated microscope image in a time-lapse movie and using only simple means, we simultaneously determine both the relative (x,y)-separation of the fluorophores and their individual orientations in space with accuracy and precision. The positions and orientations of two domains of the same molecule are thus time-resolved. Using short double-stranded DNA molecules internally labelled with two fixed fluorophores, we demonstrate the accuracy and precision of our method using the known structure of double-stranded DNA as a benchmark, resolve 10-base-pair differences in fluorophore separations, and determine the unique 3D orientation of each DNA molecule, thereby establishing short, double-labelled DNA molecules as probes of 3D orientation of anything to which one can attach them firmly. This work was supported by a Lundbeck fellowship to K.I.M; a Stanford Bio-X fellowship to J.S. and Grants from the NIH (GM33289) to J.A.S. and the Human Frontier Science Program (GP0054/2009-C) to J.A.S. and H.F.
Herkert, Nicholas J; Hornbuckle, Keri C
2018-05-23
Accurate and precise interpretation of concentrations from polyurethane passive samplers (PUF-PAS) is important as more studies show elevated concentrations of PCBs and other semivolatile air toxics in indoor air of schools and homes. If sufficiently reliable, these samplers may be used to identify local sources and human health risks. Here we report indoor air sampling rates (Rs) for polychlorinated biphenyl congeners (PCBs) predicted for a frequently used double-dome and a half-dome PUF-PAS design. Both our experimentally calibrated (1.10 ± 0.23 m3 d-1) and modeled (1.08 ± 0.04 m3 d-1) Rs for the double-dome samplers compare well with literature reports for similar rooms. We determined that variability of wind speeds throughout the room significantly (P < 0.001) affected uptake rates. We examined this effect using computational fluid dynamics modeling and 3-D sonic anemometer measurements and found the airflow dynamics to have a significant but small impact on the precision of calculated airborne concentrations. The PUF-PAS concentration measurements were within 27% and 10% of the active sampling concentration measurements for the double-dome and half-dome designs, respectively. While the half-dome samplers produced more consistent concentration measurements, we find both designs to perform well indoors.
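Once Rs is calibrated, converting sampler mass to an air concentration is a one-line calculation, assuming uptake remained in the linear regime over the deployment; the deployment numbers below are hypothetical.

```python
def puf_pas_concentration(mass_ng, sampling_rate_m3_per_day, days):
    """Convert the mass of a PCB congener accumulated on a PUF-PAS into an
    airborne concentration, assuming linear uptake:
        C_air = m / (Rs * t)
    with Rs the sampling rate calibrated above (~1.1 m3/day for the
    double-dome design)."""
    return mass_ng / (sampling_rate_m3_per_day * days)

# 25 ng accumulated over a 30-day indoor deployment at Rs = 1.10 m3/day:
print(puf_pas_concentration(25.0, 1.10, 30.0), "ng/m3")
```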
Exact solution and precise asymptotics of a Fisher-KPP type front
NASA Astrophysics Data System (ADS)
Berestycki, Julien; Brunet, Éric; Derrida, Bernard
2018-01-01
The present work concerns a version of the Fisher-KPP equation where the nonlinear term is replaced by a saturation mechanism, yielding a free boundary problem with mixed conditions. Following an idea proposed in Brunet and Derrida (2015 J. Stat. Phys. 161 801), we show that the Laplace transform of the initial condition is directly related to some functional of the front position μ_t. We then obtain precise asymptotics of the front position by means of singularity analysis. In particular, we recover the so-called Ebert and van Saarloos correction (Ebert and van Saarloos 2000 Physica D 146 1), we obtain an additional term of order log t/t in this expansion, and we give precise conditions on the initial condition for those terms to be present.
A hidden analytic structure of the Rabi model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moroz, Alexander, E-mail: wavescattering@yahoo.com
2014-01-15
The Rabi model describes the simplest interaction between a cavity mode with a frequency ω_c and a two-level system with a resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ε, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials φ_k(ε) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of φ_N(ε) of at least the degree N = n + n_t. The value of n_t > 0, which is slowly increasing with n, depends on the required precision. For instance, n_t ≈ 26 for n = 1000 and dimensionless interaction constant κ = 0.2, if double precision is required. Given that the sequence of the l-th zeros x_nl of the φ_n(ε) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1. Highlights: • A significantly simplified analytic solution of the Rabi model. • The spectrum is the lattice of discrete orthogonal polynomials. • Up to 1350 levels in double precision can be obtained for a given parity. • Omission of any level can be easily detected.
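For the exactly solvable Δ = 0 limit mentioned above, the zeros of the Charlier polynomials can be computed from their three-term recurrence via the standard Golub-Welsch (Jacobi-matrix) trick. This generic sketch is ours, not the paper's stepping algorithm, and the Δ ≠ 0 case requires the deformed polynomials φ_N(ε) instead.

```python
import numpy as np

def charlier_zeros(N, a):
    """Zeros of the degree-N monic Charlier polynomial: the eigenvalues of
    the symmetric tridiagonal Jacobi matrix built from the recurrence
    coefficients alpha_n = n + a (diagonal) and beta_n = sqrt(n * a)
    (off-diagonal). This is the Delta = 0 displaced-oscillator limit
    discussed above."""
    diag = a + np.arange(N, dtype=float)              # alpha_0..alpha_{N-1}
    off = np.sqrt(a * np.arange(1, N, dtype=float))   # beta_1..beta_{N-1}
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(J))

# First few zeros for a small, hypothetical coupling parameter a:
print(charlier_zeros(6, a=0.04))
```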
Boulyga, Sergei F; Klötzli, Urs; Stingeder, Gerhard; Prohaska, Thomas
2007-10-15
An inductively coupled plasma mass spectrometer with dynamic reaction cell (ICP-DRC-MS) was optimized for determining (44)Ca/(40)Ca isotope ratios in aqueous solutions with respect to (i) repeatability, (ii) robustness, and (iii) stability. Ammonia as reaction gas allowed both the removal of (40)Ar+ interference on (40)Ca+ and collisional damping of ion density fluctuations of an ion beam extracted from an ICP. The effect of laboratory conditions as well as ICP-DRC-MS parameters such as nebulizer gas flow rate, rf power, lens potential, dwell time, or DRC parameters on precision and mass bias was studied. Precision (calculated using the "unbiased" or "n - 1" method) of a single isotope ratio measurement of a 60 ng g(-1) calcium solution (analysis time of 6 min) is routinely achievable in the range of 0.03-0.05%, which corresponds to a standard error of the mean value (n = 6) of 0.012-0.020%. These experimentally observed RSDs were close to theoretical precision values given by counting statistics. Accuracy of measured isotope ratios was assessed by comparative measurements of the same samples by ICP-DRC-MS and thermal ionization mass spectrometry (TIMS) using isotope dilution with a (43)Ca-(48)Ca double spike. The analysis time in both cases was 1 h per analysis (10 blocks, each 6 min). The delta(44)Ca values measured by TIMS and ICP-DRC-MS with double-spike calibration in two samples (Ca ICP standard solution and digested NIST 1486 bone meal) coincided within the obtained precision. Although the applied isotope dilution with the (43)Ca-(48)Ca double spike compensates for time-dependent deviations of mass bias and allows achieving accurate results, this approach makes it necessary to measure an additional isotope pair, reducing the overall analysis time per isotope or increasing the total analysis time. Further development of external calibration using a bracketing method would allow a wider use of ICP-DRC-MS for routine calcium isotopic measurements, but it still requires particular software or hardware improvements aimed at reliable control of environmental effects, which might influence signal stability in ICP-DRC-MS and serve as potential uncertainty sources in isotope ratio measurements.
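The counting-statistics floor quoted above follows from Poisson statistics of the two ion beams; a one-function sketch with hypothetical count rates shows that it lands in the reported 0.03-0.05% range.

```python
import math

def ratio_rsd_counting(count_rate_44, count_rate_40, t):
    """Theoretical relative SD of a 44Ca/40Ca ratio from Poisson counting
    statistics alone: RSD = sqrt(1/N44 + 1/N40), with N = rate * time.
    Real measurements add plasma flicker, detector noise, etc., so this
    is a lower bound (count rates below are hypothetical)."""
    n44 = count_rate_44 * t
    n40 = count_rate_40 * t
    return math.sqrt(1.0 / n44 + 1.0 / n40)

# e.g. 2e4 cps on 44Ca and 1e6 cps on 40Ca for 360 s of accumulation:
print(100 * ratio_rsd_counting(2e4, 1e6, 360.0))   # RSD in percent, ~0.04%
```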
NASA Astrophysics Data System (ADS)
Kaufman, Lisa; EXO-200 Collaboration
2017-09-01
The EXO-200 experiment has made both the first observation of two-neutrino double beta decay in Xe-136 and the most precise measurement of any two-neutrino double beta decay half-life to date. Consisting of an extremely low-background time projection chamber filled with 150 kg of enriched liquid Xe-136, it has provided one of the most sensitive searches for neutrinoless double beta decay using its first two years of data. After a hiatus in operations during a temporary shutdown of its host facility, the Waste Isolation Pilot Plant, the experiment has restarted data taking with upgrades to its front-end electronics and a radon suppression system. This talk will cover the latest results of the collaboration, including new data with improved energy resolution.
Double-survey estimates of bald eagle populations in Oregon
Anthony, R.G.; Garrett, Monte G.; Isaacs, F.B.
1999-01-01
The literature on abundance of birds of prey is almost devoid of population estimates with statistical rigor. Therefore, we surveyed bald eagle (Haliaeetus leucocephalus) populations on the Crooked and lower Columbia rivers of Oregon and used the double-survey method to estimate populations and sighting probabilities for different survey methods (aerial, boat, vehicle) and bald eagle ages (adults vs. subadults). Sighting probabilities were consistently 20%. The results revealed variable and negative bias (percent relative bias = -9 to -70%) of direct counts and emphasized the importance of estimating populations where some measure of precision and ability to conduct inference tests are available. We recommend use of the double-survey method to estimate abundance of bald eagle populations and other raptors in open habitats.
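The simplest independent-observer form of the double-survey estimator can be sketched as follows; the models fitted in the paper are more elaborate, and the counts below are hypothetical.

```python
def double_survey(n1, n2, both):
    """Two-observer (Lincoln-Petersen style) estimates for a double survey:
    n1, n2 = animals seen by platform 1 and platform 2, both = seen by both.
    Sighting probabilities and abundance follow from the overlap; this is
    the textbook independent-observer form."""
    p1 = both / n2          # P(platform 1 sees it | platform 2 saw it)
    p2 = both / n1          # P(platform 2 sees it | platform 1 saw it)
    n_hat = n1 * n2 / both  # Lincoln-Petersen abundance estimate
    return p1, p2, n_hat

# Hypothetical counts: aerial sees 84, boat sees 90, 70 eagles seen by both.
print(double_survey(84, 90, 70))
```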
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits over a realistic range of the errors. Finally, we discuss the physical constraints on implementing the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
NASA Astrophysics Data System (ADS)
Mao, Zhangwen; Guo, Wei; Ji, Dianxiang; Zhang, Tianwei; Gu, Chenyi; Tang, Chao; Gu, Zhengbin; Nie*, Yuefeng; Pan, Xiaoqing
In situ reflection high-energy electron diffraction (RHEED) and its intensity oscillations are extremely important for the growth of epitaxial thin films with atomic precision. The RHEED intensity oscillations of complex oxides are, however, rather complicated, and a general model is still lacking. Here, we report the unusual phase inversion and frequency doubling of RHEED intensity oscillations observed in the layer-by-layer growth of SrTiO3 using oxide molecular beam epitaxy. In contrast to the common understanding that the maximum (minimum) intensity occurs at SrO (TiO2) termination, respectively, we found that both maximum and minimum intensities can occur at SrO, TiO2, or even incomplete terminations, depending on the incident angle of the electron beam, which raises the fundamental question of whether one can rely on RHEED intensity oscillations to precisely control the growth of thin films. A general model including surface roughness and termination-dependent mean inner potential qualitatively explains the observed phenomena and answers the question of how to prepare atomically and chemically precise surfaces/interfaces using RHEED oscillations for complex oxides. We thank the National Basic Research Program of China (No. 11574135, 2015CB654901) and the National Thousand-Young-Talents Program.
Development and Evaluation of Math Library Routines for a 1750A Airborne Microcomputer.
1985-12-04
Since each iteration doubles the number of correct significant digits in the square root, this assures an accuracy of 63.32 bits. (4:23) ... X, C1 + C2 represents ln(C) to more than working precision. This method gives extra digits of precision equivalent to the number of extra digits in ... will not underflow for |x| < eps. Cody and Waite have suggested that eps = 2^(-t/2), where there are t base-2 digits in the significand. The next step ...
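The digit-doubling claim is the quadratic convergence of Newton's iteration for the square root, easily demonstrated with a generic sketch (ours, not the 1750A library code).

```python
import math

def newton_sqrt(a, x0, iterations):
    """Newton's iteration x <- (x + a/x)/2 for sqrt(a). Convergence is
    quadratic, so each pass roughly doubles the number of correct
    significant digits -- the property the report uses to bound the
    accuracy of its square-root routine."""
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

# Watch the error shrink quadratically from a deliberately poor start:
for k in range(1, 6):
    err = abs(newton_sqrt(2.0, 1.0, k) - math.sqrt(2.0))
    print(k, err)
```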
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure, and decay data sets. The first-digit distribution trend for ββ-decay T_1/2^2ν values is consistent with large nuclear reaction and structure data sets and provides a validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and the incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
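Benford's first-digit law, P(d) = log10(1 + 1/d), and a toy comparison against half-life-like data can be sketched as follows; this is illustrative only, not the paper's statistical analysis, and the sample values are invented.

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10.0:
        x /= 10.0
    while x < 1.0:
        x *= 10.0
    return int(x)

def benford_check(values):
    """Compare a data set's first-digit frequencies with Benford's law,
    P(d) = log10(1 + 1/d) -- the validation idea applied above to
    beta-beta-decay half-lives."""
    counts = Counter(leading_digit(v) for v in values if v > 0)
    n = sum(counts.values())
    for d in range(1, 10):
        print(d, counts.get(d, 0) / n, round(math.log10(1 + 1 / d), 3))

# Half-life-like spread of magnitudes (hypothetical numbers, in years):
benford_check([2.165e21, 7.1e18, 9.2e19, 1.926e19, 6.8e20, 2.38e24, 8.2e15])
```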
A Tutorial for the Student Edition (Release 1.1) of Minitab.
ERIC Educational Resources Information Center
MacFarland, Thomas W.; Hou, Cheng-I
This guide for using Minitab requires DOS version 2.0 or greater, 512K RAM memory, two double-sided diskette drives, and a graphics monitor. Topics covered in the tutorial are Getting started; Installation; Making a data diskette; Entering data; Central tendency and dispersion; t-test; Chi-square test; Oneway ANOVA test; Twoway ANOVA test; and…
HUCKLEBERRY FINN. DR. JEKYLL AND MR. HYDE. SHORT STORIES. LITERATURE CURRICULUM IV, STUDENT VERSION.
ERIC Educational Resources Information Center
KITZHABER, ALBERT R.
A STUDENT'S CURRICULUM GUIDE FOR THE STUDY OF "HUCKLEBERRY FINN,""DR. JEKYLL AND MR. HYDE," AND THREE SHORT STORIES WAS PRESENTED. THE SHORT STORIES INCLUDED WERE (1) "THE COUNTRY OF THE BLIND" BY H.G. WELLS (COMPLETE TEXT), (2) "A DOUBLE-DYED DECEIVER" BY O. HENRY, AND (3) "A MYSTERY OF HEROISM"…
High-precision double-frequency interferometric measurement of the cornea shape
NASA Astrophysics Data System (ADS)
Molebny, Vasyl V.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.; Smirnov, Eugene M.; Ilchenko, Leonid M.; Goncharov, Vadym O.
1996-11-01
To measure the shape of the cornea and its deviations from the required values before and after a PRK operation, as well as the shape of other spherical objects such as an artificial pupil, a double-frequency dual-beam interferometric technique was used. The technique is based on determining the optical path difference between two neighboring laser beams reflected from the cornea or other surface under investigation. Knowing the distance between the beams, the local slope of the investigated shape is obtained, and the shape itself is reconstructed by integration along the scan line. To adjust the wavefront orientation of the laser beam to the spherical shape of the cornea or artificial pupil in the course of scanning, an additional lens is used. The signal-to-noise ratio is improved by excluding losses in the acousto-optic deflectors, and polarization selection is employed to pick out the signal needed for the measurement. The 2D image presentation is accompanied by convenient PC tools permitting precise cross-section measurements along selected directions. A sensitivity on the order of 10^-2 μm is achieved.
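The "integration along the line" step can be sketched in a few lines, assuming the scan step equals the beam separation (our simplification, with hypothetical numbers).

```python
import numpy as np

def reconstruct_profile(opd, beam_separation):
    """Surface reconstruction from dual-beam interferometry: the optical
    path difference between two beams a distance s apart gives the local
    slope opd/s; cumulative integration along the scan line recovers the
    height profile up to an overall constant. Assumes the scan step equals
    the beam separation (our simplification)."""
    slope = np.asarray(opd, dtype=float) / beam_separation   # local dz/dx
    return np.concatenate(([0.0], np.cumsum(slope) * beam_separation))

# Hypothetical scan: a constant OPD of 0.05 um with 10 um beam separation
# corresponds to a straight ramp of slope 0.005 (heights in um).
print(reconstruct_profile([0.05] * 5, 10.0))
```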
Measures of precision for dissimilarity-based multivariate analysis of ecological communities.
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided.
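Reading the definitions above, a single-group MultSE can be sketched as follows; the formula follows the usual PERMANOVA sums-of-squares identity as we read it, and the authors' published R functions remain the definitive implementation.

```python
import numpy as np

def mult_se(d):
    """Pseudo multivariate dissimilarity-based standard error for one group:
        SS = (1/n) * sum_{i<j} d_ij^2,  V = SS/(n-1),  MultSE = sqrt(V/n)
    where d is a symmetric n x n dissimilarity matrix."""
    d = np.asarray(d, dtype=float)
    n = d.shape[0]
    ss = np.sum(np.triu(d, 1) ** 2) / n   # sum over i<j of squared d_ij
    v = ss / (n - 1)                      # pseudo variance
    return np.sqrt(v / n)

# Toy example: Bray-Curtis-like dissimilarities among 4 samples.
D = np.array([[0.00, 0.30, 0.50, 0.40],
              [0.30, 0.00, 0.60, 0.20],
              [0.50, 0.60, 0.00, 0.35],
              [0.40, 0.20, 0.35, 0.00]])
print(mult_se(D))
```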
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhman, A.; Hafez, Mohamed A.; Nanda, Sirish K.
Here, a high-finesse Fabry-Perot cavity with a frequency-doubled continuous wave green laser (532 nm) has been built and installed in Hall A of Jefferson Lab for high precision Compton polarimetry. The infrared (1064 nm) beam from a ytterbium-doped fiber amplifier seeded by a Nd:YAG nonplanar ring oscillator laser is frequency doubled in a single-pass periodically poled MgO:LiNbO3 crystal. The maximum achieved green power at 5 W infrared pump power is 1.74 W with a total conversion efficiency of 34.8%. The green beam is injected into the optical resonant cavity and enhanced up to 3.7 kW with a corresponding enhancement of 3800. The polarization transfer function has been measured in order to determine the intra-cavity circular laser polarization within a measurement uncertainty of 0.7%. The PREx experiment at Jefferson Lab used this system for the first time and achieved 1.0% precision in polarization measurements of an electron beam with energy and current of 1.0 GeV and 50 μA.
The precision measurement and assembly for miniature parts based on double machine vision systems
NASA Astrophysics Data System (ADS)
Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.
2015-02-01
In the process of miniature parts' assembly, the structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrating a single vertical, downward-looking machine vision system cannot satisfy this requirement, so precision automatic assembly equipment integrating two machine vision systems was developed. In this system, a horizontal vision system is employed to measure the position of feature structures in the parts' side view, which cannot be seen by the vertical system. The position measured by the horizontal camera is converted into the vertical vision system's coordinate frame using calibration information; with careful calibration, the parts' alignment and positioning during assembly can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.
2007-09-27
... the spatial and spectral resolution ... variety of geological and vegetation mapping efforts, the Hymap sensor offered the best available combination of spectral and spatial resolution, signal ... The limitations of the technology currently relate to spatial and spectral resolution and geo-correction accuracy. Secondly, HSI datasets ...
Conservation of Mechanical and Electric Energy: Simple Experimental Verification
ERIC Educational Resources Information Center
Ponikvar, D.; Planinsic, G.
2009-01-01
Two similar experiments on conservation of energy and transformation of mechanical into electrical energy are presented. Both can be used in classes, as they offer numerous possibilities for discussion with students and are simple to perform. Results are presented and are precise within 20% for the version of the experiment where measured values…
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.
2017-09-01
A global digital surface model dataset named the ALOS Global Digital Surface Model (AW3D30), with a horizontal resolution of approximately 30 m (1 arcsec), has been released by the Japan Aerospace Exploration Agency (JAXA). The dataset was compiled from images acquired by the Advanced Land Observing Satellite "DAICHI" (ALOS) and is published based on the DSM dataset (5-meter mesh version) of the "World 3D Topographic Data", the most precise global-scale elevation data at this time; its elevation precision is also at a world-leading level for a 30-meter mesh product. In this study the accuracy of ALOS AW3D30 was examined. For an area with complex geomorphological characteristics, DSMs were created from ALOS stereo pairs with classical photogrammetric techniques and compared with ALOS AW3D30. Points of certified elevation collected with DGPS were used to estimate the accuracy of the DSMs; the elevation difference between the two DSMs was calculated, the 2D RMSE, correlation, and percentile values were also computed, and the results are presented.
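The comparison statistics mentioned (elevation difference, RMSE, percentiles, correlation) are straightforward to compute per pixel; a generic sketch with synthetic tiles, not the study's data:

```python
import numpy as np

def dem_accuracy(dem_test, dem_ref):
    """Simple per-pixel accuracy metrics between a tested DSM (e.g. AW3D30)
    and a reference surface: mean error, RMSE, 95th-percentile absolute
    error, and linear correlation."""
    diff = np.asarray(dem_test, float) - np.asarray(dem_ref, float)
    rmse = np.sqrt(np.mean(diff ** 2))
    p95 = np.percentile(np.abs(diff), 95)
    corr = np.corrcoef(np.ravel(dem_test), np.ravel(dem_ref))[0, 1]
    return diff.mean(), rmse, p95, corr

# Synthetic 100x100 tiles: reference plus ~3 m of noise and a 1 m bias.
rng = np.random.default_rng(1)
ref = rng.uniform(200, 800, (100, 100))
test = ref + 1.0 + rng.normal(0, 3.0, ref.shape)
print(dem_accuracy(test, ref))
```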
High precision silicon piezo resistive SMART pressure sensor
NASA Astrophysics Data System (ADS)
Brown, Rod
2005-01-01
Instruments for test and calibration require a pressure sensor that is precise and stable. Market forces also dictate a move away from single-measurand test equipment and, certainly in the case of pressure, away from single-range equipment. A pressure 'module' is required which excels in pressure measurement but is interchangeable with sensors for other measurands. A communications interface for such a sensor has been specified: the Instrument Digital Output Sensor (IDOS), which permits this interchangeability and allows the sensor to be inside or outside the measuring instrument. This paper covers the design and specification of a silicon-diaphragm piezoresistive SMART sensor using this interface. A brief history of instrument sensors is given to establish the background to this development. Design choices regarding silicon doping, bridge energisation method, temperature sensing, signal conversion, data processing, compensation method and communications interface are discussed. The physical format of the 'in-instrument' version is shown and then extended to the packaging design for the external version. Test results show that the accuracy achieved exceeds the target of 0.01% FS over a range of temperatures.
Metric freeness and projectivity for classical and quantum normed modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helemskii, A Ya
2013-07-31
In functional analysis, there are several diverse approaches to the notion of projective module. We show that a certain general categorical scheme contains all basic versions as special cases. In this scheme, the notion of free object comes to the foreground, and, in the best categories, projective objects are precisely retracts of free ones. We are especially interested in the so-called metric version of projectivity and characterize the metrically free classical and quantum (= operator) normed modules. Informally speaking, so-called extremal projectivity, which was known earlier, is interpreted as a kind of 'asymptotical metric projectivity'. In addition, we answer the following specific question in the geometry of normed spaces: what is the structure of metrically projective modules in the simplest case of normed spaces? We prove that metrically projective normed spaces are precisely the subspaces of l_1(M) (where M is a set) that are denoted by l_1^0(M) and consist of finitely supported functions. Thus, in this case, projectivity coincides with freeness. Bibliography: 28 titles.
A trick to improve the efficiency of generating unweighted Bc events from BCVEGPY
NASA Astrophysics Data System (ADS)
Wang, Xian-You; Wu, Xing-Gang
2012-02-01
In the present paper, we provide an addendum to improve the efficiency of generating unweighted events within the PYTHIA environment for the generator BCVEGPY2.1 [C.H. Chang, J.X. Wang, X.G. Wu, Comput. Phys. Commun. 174 (2006) 241]. This trick is helpful for experimental simulation. Moreover, the BCVEGPY output has also been improved: a Les Houches Event common block has been added so as to generate a standard Les Houches Event file that contains the information of the generated Bc meson and the accompanying partons, which can be used more conveniently for further simulation.
New version program summary
Title of program: BCVEGPY2.1a
Catalogue identifier: ADTJ_v2_2
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_2.html
Program obtained from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 166 133
No. of bytes in distributed program, including test data, etc.: 1 655 390
Distribution format: tar.gz
Programming language used: FORTRAN 77/90
Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and the GNU C compiler
Operating systems: LINUX
RAM: About 2.0 MB
Classification: 11.2, 11.5
Catalogue identifier of previous version: ADTJ_v2_1
Reference in CPC: Comput. Phys. Commun. 175 (2006) 624
Does the new version supersede the old program: No
Nature of physical problem: Hadronic production of the Bc meson and its excited states.
Method of solution: To generate weighted and unweighted Bc events within the PYTHIA environment efficiently.
Restrictions on the complexity of the problem: Hadronic production of (cb̄)-quarkonium via the gluon-gluon fusion mechanism is given by the 'complete calculation approach'. The simulation of Bc events is done within the PYTHIA environment.
Reasons for new version: More and more data are being accumulated at the Large Hadron Collider, making precise studies of Bc meson properties possible, such as its lifetime and mass spectrum. BCVEGPY has been adopted by several experimental groups due to its high efficiency in comparison to that of PYTHIA. However, generating unweighted events with the PYTHIA inner mechanism, as programmed in the previous version, is still time-consuming, so it is helpful to improve the efficiency of generating unweighted events within PYTHIA. Moreover, it is better to use a uniform and standard output format for further detector simulation.
Typical running time: Machine and user-parameter dependent. I) To generate 10^6 weighted S-wave (cb̄)-quarkonium events (IDWTUP = 3), it takes about 40 minutes on a 1.8 GHz Intel P4 machine. II) To generate unweighted S-wave (cb̄)-quarkonium events with the PYTHIA inner structure (IDWTUP = 1), it takes about 20 hours on a 1.8 GHz Intel P4 machine to generate 1000 events. III) To generate 10^6 unweighted S-wave (cb̄)-quarkonium events with the present trick (IDWTUP = 1), it takes about 17 hours on a 3.16 GHz Intel E8500 machine. The running time for P-wave (cb̄)-quarkonium production is about two times longer than for S-wave production under the same conditions.
Keywords: Event generator; Hadronic production; Bc meson; Unweighted events
Summary of revisions: (1) The generator BCVEGPY [1-3] has been programmed to generate Bc events under the PYTHIA environment [4] and has been frequently adopted for theoretical and experimental studies, e.g. Refs. [5-18]. Each experimental group has its own simulation software architecture, and users must spend a lot of time writing an interface to implement BCVEGPY into their own software, so it is better to supply a standard output. The LHE format [19] has become a standard format, proposed to store process and event information from matrix-element-based generators; users can pass this parton-level information to general event generators like PYTHIA and HERWIG [20] for further simulation. For this purpose, we add two common blocks in genevent.F: one called bcvegpy_pyupin and the other write_lhe. bcvegpy_pyupin, which is similar to the PYUPIN subroutine in PYTHIA, stores the initialization information in the HEPRUP common block:

    INTEGER MAXPUP
    PARAMETER (MAXPUP = 100)
    INTEGER IDBMUP,PDFGUP,PDFSUP,IDWTUP,NPRUP,LPRUP
    DOUBLE PRECISION EBMUP,XSECUP,XERRUP,XMAXUP
    COMMON/HEPRUP/IDBMUP(2),EBMUP(2),PDFGUP(2),PDFSUP(2),
   &    IDWTUP,NPRUP,XSECUP(MAXPUP),XERRUP(MAXPUP),
   &    XMAXUP(MAXPUP),LPRUP(MAXPUP)

write_lhe, which is similar to the PYUPEV subroutine in PYTHIA, stores the information of each separate event in the HEPEUP common block:

    INTEGER MAXNUP
    PARAMETER (MAXNUP = 500)
    INTEGER NUP,IDPRUP,IDUP,ISTUP,MOTHUP,ICOLUP
    DOUBLE PRECISION XWGTUP,SCALUP,AQEDUP,AQCDUP,PUP,VTIMUP,SPINUP
    COMMON/HEPEUP/NUP,IDPRUP,XWGTUP,SCALUP,AQEDUP,AQCDUP,
   &    IDUP(MAXNUP),ISTUP(MAXNUP),MOTHUP(2,MAXNUP),
   &    ICOLUP(2,MAXNUP),PUP(5,MAXNUP),VTIMUP(MAXNUP),
   &    SPINUP(MAXNUP)
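For orientation, a Les Houches Event file is a plain-text, XML-like container: the HEPRUP block above fills the <init> record once per run, and the HEPEUP block fills one <event> record per event. Schematically (field order follows the LHE accord; this layout is an illustration, not actual BCVEGPY output):

    <LesHouchesEvents version="1.0">
    <init>
      IDBMUP(1) IDBMUP(2) EBMUP(1) EBMUP(2) PDFGUP(1) PDFGUP(2)
        PDFSUP(1) PDFSUP(2) IDWTUP NPRUP
      XSECUP(1) XERRUP(1) XMAXUP(1) LPRUP(1)      (one line per process)
    </init>
    <event>
      NUP IDPRUP XWGTUP SCALUP AQEDUP AQCDUP
      IDUP ISTUP MOTHUP(1,i) MOTHUP(2,i) ICOLUP(1,i) ICOLUP(2,i)
        PUP(1..5,i) VTIMUP SPINUP                 (one line per particle i)
    </event>
    </LesHouchesEvents>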
NASA Astrophysics Data System (ADS)
Dobaczewski, J.; Satuła, W.; Carlsson, B. G.; Engel, J.; Olbratowski, P.; Powałowski, P.; Sadziak, M.; Sarich, J.; Schunck, N.; Staszczak, A.; Stoitsov, M.; Zalewski, M.; Zduńczuk, H.
2009-11-01
We describe the new version (v2.40h) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock or Skyrme-Hartree-Fock-Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented: (i) projection on good angular momentum (for the Hartree-Fock states), (ii) calculation of the GCM kernels, (iii) calculation of matrix elements of the Yukawa interaction, (iv) the BCS solutions for state-dependent pairing gaps, (v) the HFB solutions for broken simplex symmetry, (vi) calculation of Bohr deformation parameters, (vii) constraints on the Schiff moments and scalar multipole moments, (viii) the DT2h transformations and rotations of wave functions, (ix) quasiparticle blocking for the HFB solutions in odd and odd-odd nuclei, (x) the Broyden method to accelerate the convergence, (xi) the Lipkin-Nogami method to treat pairing correlations, (xii) the exact Coulomb exchange term, and (xiii) several utility options; we have also corrected three insignificant errors. New version program summary: Program title: HFODD (v2.40h). Catalogue identifier: ADFL_v2_2. Program summary URL:
Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas
2013-11-01
The stability of implants is vital to ensure long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure, and can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and of CAD and RE model-based RSA on an HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results for the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that marker-based RSA analysis was more precise (SDdiff) than model-based RSA analysis in TT (p_CAD < 0.001; p_RE = 0.04) and TR (p_CAD = 0.01; p_RE < 0.001). The clinical precision (double examination in 8 patients), compared by SDdiff, was better for TT using marker-based RSA analysis (p = 0.002), but showed no difference between marker- and CAD model-based RSA analysis for TR (p = 0.91). Comparing the mean signed values for TT and TR at the 5-year follow-up in 13 patients, TT was lower (p = 0.03) and TR higher (p = 0.04) in marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than that of model-based RSA. However, problems with occluded markers led to the exclusion of many patients, which was not a problem with model-based RSA. The HRA implants were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.
Reinforcement learning in complementarity game and population dynamics
NASA Astrophysics Data System (ADS)
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005), 10.1016/j.physa.2004.07.005] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
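To make the modification concrete, here is a minimal Python sketch of a Roth-Erev-style scheme in which, as one plausible reading of the abstract, the power exponent enters through choice probabilities proportional to propensity**lam (names and details are ours, not the authors'):

    import numpy as np

    def roth_erev_choice(propensities, lam=1.5, rng=np.random.default_rng()):
        # Choice probabilities proportional to propensity**lam;
        # lam = 1.0 recovers the standard Roth-Erev scheme.
        weights = propensities ** lam
        return rng.choice(len(propensities), p=weights / weights.sum())

    def roth_erev_update(propensities, action, payoff, forgetting=0.0):
        # Reinforce the chosen action by its payoff, with optional forgetting.
        propensities *= (1.0 - forgetting)
        propensities[action] += payoff
        return propensities

Starting from propensities = np.ones(n_actions), repeated choose/update rounds reproduce the qualitative behaviour being compared.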
NASA Astrophysics Data System (ADS)
Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana
2010-06-01
This research deals with the optimization of the control of chaos by means of evolutionary algorithms. The work aims to explain how to use evolutionary algorithms (EAs) and how to properly define the advanced targeting cost function (CF) securing very fast and precise stabilization of the desired state for any initial conditions. As a model of a deterministic chaotic system, the one-dimensional logistic equation was used. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the method and the targeting CF.
NASA Astrophysics Data System (ADS)
Osborn, T. J.; Jones, P. D.
2014-02-01
The CRUTEM4 (Climatic Research Unit Temperature, version 4) land-surface air temperature data set is one of the most widely used records of the climate system. Here we provide an important additional dissemination route for this data set: online access to monthly, seasonal and annual data values and time series graphs via Google Earth. This is achieved via an interface written in Keyhole Markup Language (KML) and also provides access to the underlying weather station data used to construct the CRUTEM4 data set. A mathematical description of the construction of the CRUTEM4 data set (and its predecessor versions) is also provided, together with an archive of some previous versions and a recommendation for identifying the precise version of the data set used in a particular study. The CRUTEM4 data set used here is available from doi:10.5285/EECBA94F-62F9-4B7C-88D3-482F2C93C468.
Belle II SVD ladder assembly procedure and electrical qualification
NASA Astrophysics Data System (ADS)
Adamczyk, K.; Aihara, H.; Angelini, C.; Aziz, T.; Babu, Varghese; Bacher, S.; Bahinipati, S.; Barberio, E.; Baroncelli, T.; Basith, A. K.; Batignani, G.; Bauer, A.; Behera, P. K.; Bergauer, T.; Bettarini, S.; Bhuyan, B.; Bilka, T.; Bosi, F.; Bosisio, L.; Bozek, A.; Buchsteiner, F.; Casarosa, G.; Ceccanti, M.; Červenkov, D.; Chendvankar, S. R.; Dash, N.; Divekar, S. T.; Doležal, Z.; Dutta, D.; Forti, F.; Friedl, M.; Hara, K.; Higuchi, T.; Horiguchi, T.; Irmler, C.; Ishikawa, A.; Jeon, H. B.; Joo, C.; Kandra, J.; Kang, K. H.; Kato, E.; Kawasaki, T.; Kodyš, P.; Kohriki, T.; Koike, S.; Kolwalkar, M. M.; Kvasnička, P.; Lanceri, L.; Lettenbicher, J.; Mammini, P.; Mayekar, S. N.; Mohanty, G. B.; Mohanty, S.; Morii, T.; Nakamura, K. R.; Natkaniec, Z.; Negishi, K.; Nisar, N. K.; Onuki, Y.; Ostrowicz, W.; Paladino, A.; Paoloni, E.; Park, H.; Pilo, F.; Profeti, A.; Rao, K. K.; Rashevskaya, I.; Rizzo, G.; Rozanska, M.; Sandilya, S.; Sasaki, J.; Sato, N.; Schultschik, S.; Schwanda, C.; Seino, Y.; Shimizu, N.; Stypula, J.; Tanaka, S.; Tanida, K.; Taylor, G. N.; Thalmeier, R.; Thomas, R.; Tsuboyama, T.; Uozumi, S.; Urquijo, P.; Vitale, L.; Volpi, M.; Watanuki, S.; Watson, I. J.; Webb, J.; Wiechczynski, J.; Williams, S.; Würkner, B.; Yamamoto, H.; Yin, H.; Yoshinobu, T.; Belle II SVD Collaboration
2016-07-01
The Belle II experiment at the SuperKEKB asymmetric e+e- collider in Japan will operate at a luminosity approximately 50 times larger than its predecessor (Belle). At its heart lies a six-layer vertex detector comprising two layers of pixelated silicon detectors (PXD) and four layers of double-sided silicon microstrip detectors (SVD). One of the key measurements for Belle II is time-dependent CP violation asymmetry, which hinges on a precise charged-track vertex determination. Towards this goal, a proper assembly of the SVD components with precise alignment ought to be performed and the geometrical tolerances should be checked to fall within the design limits. We present an overview of the assembly procedure that is being followed, which includes the precision gluing of the SVD module components, wire-bonding of the various electrical components, and precision three dimensional coordinate measurements of the jigs used in assembly as well as of the final SVD modules.
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too.
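The clustering role of the Markov-modulated Poisson process (MMPP) is easy to see in simulation: availability events arrive at a high rate in one hidden state and a low rate in the other, producing bursts that a plain Poisson model cannot mimic. A minimal Python sketch (a generic two-state MMPP, not the authors' fitted model):

    import numpy as np

    def simulate_mmpp(q01, q10, rate0, rate1, t_end, rng=None):
        # Two-state MMPP: events occur at rate0 in state 0 and rate1 in
        # state 1; q01 and q10 are the state-switching rates.
        rng = rng or np.random.default_rng()
        t, state, events = 0.0, 0, []
        while True:
            switch = q01 if state == 0 else q10
            rate = rate0 if state == 0 else rate1
            t += rng.exponential(1.0 / (switch + rate))  # next transition
            if t >= t_end:
                return events
            if rng.random() < rate / (switch + rate):
                events.append(t)        # an availability event (e.g. a surfacing)
            else:
                state = 1 - state       # the hidden Markov chain switches state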
Palmer, Cameron S; Niggemeyer, Louise E; Charman, Debra
2010-09-01
The 2005 version of the Abbreviated Injury Scale (AIS05) potentially represents a significant change in injury spectrum classification, due to a substantial increase in the codeset size and alterations to the agreed severity of many injuries compared to the previous version (AIS98). Whilst many trauma registries around the world are moving to adopt AIS05 or its 2008 update (AIS08), its effect on patient classification in existing registries, and the optimum method of comparing existing data collections with new AIS05 collections, are unknown. The present study aimed to assess the potential impact of adopting the AIS05 codeset in an established trauma system, and to identify issues associated with this change. A subset of consecutive major trauma patients admitted to two large hospitals in the Australian state of Victoria was double-coded in AIS98 and AIS05. Assigned codesets were also mapped to the other AIS version using code lists supplied in the AIS05 manual, giving up to four AIS codes per injury sustained. The resulting codesets were assessed for agreement in codes used, injury severity and calculated severity scores. 602 injuries sustained by 109 patients were compared. Adopting AIS05 would lead to a decrease in the number of designated major trauma patients in Victoria, estimated at 22% (95% confidence interval, 15-31%). Differences in AIS level between versions were significantly more likely to occur amongst head and chest injuries. Data mapped to a different codeset performed better in paired comparisons than raw AIS98 and AIS05 codesets, with mapping of AIS05 codes back to AIS98 giving significantly higher levels of agreement in AIS level, ISS and NISS than other potential comparisons, and resulting in significantly fewer conversion problems than attempting to map AIS98 codes to AIS05. This study provides new insights into the impact of an AIS codeset change. Adoption of AIS05 or AIS08 in established registries will decrease major trauma patient numbers. Code mapping between AIS versions can improve comparisons between datasets in different AIS versions, although the injury profile of a trauma population will affect the degree of comparability. At present, mapping AIS05 data back to AIS98 is recommended.
Pribil, M.J.; Wanty, R.B.; Ridley, W.I.; Borrok, D.M.
2010-01-01
An increased interest in high-precision Cu isotope ratio measurements using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) has developed recently for various natural geologic systems and environmental applications; these typically contain high concentrations of sulfur, particularly in the form of sulfate (SO4(2-)) and sulfide (S). For example, Cu, Fe, and Zn concentrations in acid mine drainage (AMD) can range from 100 µg/L to greater than 50 mg/L, with sulfur species concentrations reaching greater than 1000 mg/L. Routine separation of Cu, Fe and Zn from AMD, Cu-sulfide minerals and other geological matrices usually incorporates single anion exchange resin column chromatography for metal separation. During chromatographic separation, variable breakthrough of SO4(2-) into the Cu fractions was observed as a function of the initial sulfur-to-Cu ratio, column properties, and the sample matrix. SO4(2-) present in the Cu fraction can form a polyatomic 32S-14N-16O-1H species causing a direct mass interference with 63Cu and producing artificially light δ65Cu values. Here we report the extent of the mass interference caused by SO4(2-) breakthrough when measuring δ65Cu on natural samples and on NIST SRM 976 Cu isotope standard spiked with SO4(2-), after both single and double anion column chromatography. A set of five 100 µg/L Cu SRM 976 samples spiked with 500 mg/L SO4(2-) resulted in an average δ65Cu of -3.50 ± 5.42‰ following single anion column separation with variable SO4(2-) breakthrough, at an average concentration of 770 µg/L. Following double anion column separation, the average SO4(2-) concentration of 13 µg/L resulted in better precision and accuracy, with a measured δ65Cu value of 0.01 ± 0.02‰ relative to the expected 0‰ for SRM 976. We conclude that attention to SO4(2-) breakthrough on sulfur-rich samples is necessary for accurate and precise measurements of δ65Cu and may require the use of a double ion exchange column procedure.
Analysis of the heat transfer in double and triple concentric tube heat exchangers
NASA Astrophysics Data System (ADS)
Rădulescu, S.; Negoiţă, L. I.; Onuţu, I.
2016-08-01
Tubular heat exchangers (shell-and-tube and concentric tube heat exchangers) represent an important category of equipment in petroleum refineries and are used for heating, pre-heating, cooling, condensation and evaporation purposes. The paper presents results of an analysis of the heat transfer involved in cooling a petroleum product in two types of concentric tube heat exchangers: double and triple concentric tube heat exchangers. The cooling agent is water. The triple concentric tube heat exchanger is a modified constructive version of the double concentric tube heat exchanger, obtained by adding an intermediate tube; this intermediate tube improves the heat transfer by increasing the heat-transfer area per unit length. The analysis of the heat transfer uses experimental data obtained during tests on double and triple concentric tube heat exchangers. The flow rates of the fluids and the inlet and outlet temperatures of the water and the petroleum product are used in determining the performance of both heat exchangers. Principally, the overall heat transfer coefficients and the heat exchange surfaces are calculated for both apparatuses. The presented results show that triple concentric tube heat exchangers provide better heat transfer efficiencies than double concentric tube heat exchangers.
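As a pointer to the calculation mentioned, the overall coefficient typically follows from the measured duty and the log-mean temperature difference via Q = U·A·ΔT_lm. A minimal Python sketch of that textbook route (illustrative only; the paper's exact procedure may differ):

    import math

    def lmtd_counterflow(th_in, th_out, tc_in, tc_out):
        # Log-mean temperature difference for counter-current flow.
        d1 = th_in - tc_out
        d2 = th_out - tc_in
        return d1 if d1 == d2 else (d1 - d2) / math.log(d1 / d2)

    def overall_u(duty_w, area_m2, lmtd_k):
        # Q = U * A * LMTD  =>  U in W/(m^2 K)
        return duty_w / (area_m2 * lmtd_k)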
Development and evaluation of a biomedical search engine using a predicate-based vector space model.
Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey
2013-10-01
Although the biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf weighting and a boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and a keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p<.001) for the predicate-based approach (80%) than for the keyword-based approach (71%). Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p<.001) and 1.34 versus 0.98 with rank order adjustment (p<.001), for the predicate- versus keyword-based approach respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search.
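A toy version of the underlying vector space machinery helps fix ideas: replace keywords with predicate triples as the vector dimensions and weight them by tf-idf. The sketch below (Python; our names, and without the paper's adjusted tf-idf and boost function) shows the document side and the cosine similarity:

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        # docs: list of documents, each a list of predicate triples,
        # e.g. ("geneX", "inhibits", "proteinY"); triples play the role
        # that keywords play in a classical vector space model.
        n = len(docs)
        df = Counter(t for doc in docs for t in set(doc))
        return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
                for doc in docs]

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0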
Study on dynamic deformation synchronized measurement technology of double-layer liquid surfaces
NASA Astrophysics Data System (ADS)
Tang, Huiying; Dong, Huimin; Liu, Zhanwei
2017-11-01
Accurate measurement of the dynamic deformation of double-layer liquid surfaces plays an important role in many fields, such as fluid mechanics, biomechanics, the petrochemical industry and aerospace engineering. It is difficult for traditional methods to measure the dynamic deformation of double-layer liquid surfaces synchronously. In this paper, a novel and effective method for full-field static and dynamic deformation measurement of double-layer liquid surfaces has been developed: analysis of the wavefront distortion of double-wavelength transmitted light with the geometric phase analysis (GPA) method. The double-wavelength lattice patterns used here are produced by two techniques: one by a double-wavelength laser and the other by a liquid crystal display (LCD). The techniques exploit characteristics of liquids such as high transparency, low reflectivity and fluidity. Two colour lattice patterns produced by the laser and the LCD were passed at a certain angle through the tested double-layer liquid surfaces simultaneously. On the basis of the difference between the refractive indexes of the two transmitted lights, the double-layer liquid surfaces were decoupled with the GPA method. Combined with the derived relationship between the phase variation of the transmission-lattice patterns and the out-of-plane heights of the two surfaces, as well as the height curves of the liquid level, the double-layer liquid surfaces can be reconstructed successfully. Compared with traditional measurement methods, the developed method not only has the common advantages of optical measurement methods, such as high precision, full-field coverage and non-contact operation, but is also simple, low cost and easy to set up.
An experimental version of the MZT (speech-from-text) system with external F0 control
NASA Astrophysics Data System (ADS)
Nowak, Ignacy
1994-12-01
The version of the Polish speech-from-text (MZT) system described in this article has additional functions which make it possible to enter commands in edited orthographic text to control the phrase component and accentuation parameters. This makes it possible to generate a series of modified intonation contours in the texts spoken by the system. The effects obtained are made easier to control by a graphic illustration of the base-frequency pattern in the phrases last 'spoken' by the system. This version of the system was designed as a test prototype which will help us expand and refine our set of rules for the automatic generation of intonation contours, which in turn will enable the fully automated speech-from-text system to generate speech with a more varied and precisely formed fundamental frequency pattern.
Nimbus-7 TOMS Version 7 Calibration
NASA Technical Reports Server (NTRS)
Wellemeyer, C. G.; Taylor, S. L.; Jaross, G.; DeLand, M. T.; Seftor, C. J.; Labow, G.; Swissler, T. J.; Cebula, R. P.
1996-01-01
This report describes an improved instrument characterization used for the Version 7 processing of the Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) data record. An improved internal calibration technique referred to as spectral discrimination is used to provide long-term calibration precision of +/- 1%/decade in total column ozone amount. A revised wavelength scale results in a day one calibration that agrees with other satellite and ground-based measurements of total ozone, while a wavelength independent adjustment of the initial radiometric calibration constants provides good agreement with surface reflectivity measured by other satellite-borne ultraviolet measurements. The impact of other aspects of the Nimbus-7 TOMS instrument performance are also discussed. The Version 7 data should be used in all future studies involving the Nimbus-7 TOMS measurements of ozone. The data are available through the NASA Goddard Space Flight Center's Distributive Active Archive Center (DAAC).
Scale of attitudes toward alcohol - Spanish version: evidence of validity and reliability
Ramírez, Erika Gisseth León; de Vargas, Divane
2017-01-01
ABSTRACT Objective: to validate the Scale of attitudes toward alcohol, alcoholism and individuals with alcohol use disorders in its Spanish version. Method: methodological study involving 300 Colombian nurses. Adopting the classical theory, confirmatory factor analysis was applied without prior examination, based on the strong historical evidence of the factorial structure of the original scale, to determine the construct validity of this Spanish version. To assess reliability, Cronbach's Alpha and McDonald's Omega coefficients were used. Results: the confirmatory factor analysis indicated a good fit of the scale model in a four-factor distribution, with a cut-off point at 3.2, demonstrating 66.7% sensitivity. Conclusions: the Scale of attitudes toward alcohol, alcoholism and individuals with alcohol use disorders in Spanish presented robust psychometric qualities, affirming that the instrument possesses a solid factorial structure and reliability and is capable of precisely measuring nurses' attitudes towards the phenomenon proposed. PMID:28793126
[Bioimpedometry and its utilization in dialysis therapy].
Lopot, František
2016-01-01
Measurement of living tissue impedance - bioimpedometry - started to be used in medicine some 50 years ago, at first exclusively for estimation of extracellular and intracellular compartment volumes. Its simplest single-frequency (50 kHz) version works directly with the measured impedance vector. Technically more sophisticated versions convert the measured impedance into values of the volumes of different body fluid compartments and also calculate principal markers of nutritional status (lean body mass, adipose tissue mass). The latest version, specifically developed for application in dialysis patients, includes body composition modelling and provides even an absolute value of overhydration (excess fluid). Still in the experimental phase is the exploitation of bioimpedance for more precise estimation of residual glomerular filtration. Not yet standardized is segmental bioimpedance measurement, which should enable separate assessment of the hydration status of the trunk segment and of the ultrafiltration capacity of the peritoneum in peritoneal dialysis patients. Key words: assessment - bioimpedance - excess fluid - fluid status - glomerular filtration - haemodialysis - nutritional status - peritoneal dialysis.
Derbyshire, Brian; Raut, Videshnandan V.
2013-01-01
Historically, wire markers were attached to cemented all-plastic acetabular cups to demarcate the periphery and to measure socket wear. The wire shape was either a semi-circle passing over the pole of the cup, or a circle around the cup equator. More recently, “double-D” shaped markers were introduced with a part-circular aspect passing over the pole and a semi-circular aspect parallel to the equatorial plane. This configuration enabled cup retroversion to be distinguished from anteversion. In this study, the accuracy of radiographic measurement of cup orientation and wear was assessed for cups with “double-D” and circular markers. Each cup was attached to a measurement jig which could vary the anteversion/retroversion and internal/external rotation of the cup. A metal femoral head was fixed within the socket and radiographic images were created for all combinations of cup orientation settings. The images were measured using software with automatic edge detection, and cup orientation and zero-wear accuracies were determined for each setting. The median error for cup version measurements was similar for both types of wire marker (0.2° double-D marker, −0.24° circular marker), but measurements of the circular marker were more repeatable. The median inclination errors were 2.05° (double-D marker) and 0.23° (circular marker). The median overall “zero wear” errors were 0.19 mm (double-D marker) and 0.03 mm (circular marker). Measurements of the circular wire marker were much more repeatable. PMID:23813165
NASA Technical Reports Server (NTRS)
Schroeder, J. A.; Merrick, V. K.
1990-01-01
Several control and display concepts were evaluated on a variable-stability helicopter prior to future evaluations on a modified Harrier. The control and display concepts had been developed to enable precise hover maneuvers, station keeping, and vertical landings in simulated zero-visibility conditions and had been evaluated extensively in previous piloted simulations. Flight evaluations early in the program revealed several inadequacies in the display drive laws that were later corrected using an alternative design approach that integrated the control and display characteristics with the desired guidance law. While hooded, three pilots performed landing-pad captures followed by vertical landings with attitude-rate, attitude, and translation-velocity-command control systems. The latter control system incorporated a modified version of state-rate-feedback implicit-model following. Precise landings within 2 ft of the desired touchdown point were achieved.
Goñi-Moreno, Ángel; Kim, Juhyun; de Lorenzo, Víctor
2017-02-01
Visualization of the intracellular constituents of individual bacteria while they perform as live biocatalysts is in principle doable through more or less sophisticated fluorescence microscopy. Unfortunately, rigorous quantitation of the wealth of data embodied in the resulting images requires bioinformatic tools that are not widely extended within the community, let alone that they are often subject to licensing that impedes software reuse. In this context we have developed CellShape, a user-friendly platform for image analysis with subpixel precision and a double-threshold segmentation system for quantification of fluorescent signals stemming from single cells. CellShape is entirely coded in Python, a free, open-source programming language with widespread community support. For a developer, CellShape enhances extensibility (ease of software improvements) by acting as an interface to access and use existing Python modules; for an end-user, CellShape presents standalone executable files ready to open without installation. We have adopted this platform to analyse in unprecedented detail the tridimensional distribution of the constituents of the gene expression flow (DNA, RNA polymerase, mRNA and ribosomal proteins) in individual cells of the industrial platform strain Pseudomonas putida KT2440. While the first release of CellShape (v0.8) is readily operational, users and developers can expand the platform further.
NASA Astrophysics Data System (ADS)
Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin
2016-06-01
CPU/GPU computing allows scientists to accelerate their numerical codes tremendously. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for the three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software to a heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely "one-thread-one-point" and "one-thread-one-line", to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern, MPI-OpenMP-CUDA, that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on a heterogeneous platform.
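The granularity trade-off is easiest to see against the serial kernel: each ADI sweep decomposes into independent tridiagonal solves along grid lines, so a GPU can assign one thread per line ("one-thread-one-line") or cooperate at a finer grain ("one-thread-one-point"). A NumPy sketch of the per-line Thomas solve (the serial building block, not the authors' CUDA code):

    import numpy as np

    def thomas(a, b, c, d):
        # Solve a tridiagonal system: sub-diagonal a, diagonal b,
        # super-diagonal c, right-hand side d; O(n) forward/backward sweep.
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x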
A microlens-array based pupil slicer and double scrambler for MAROON-X
NASA Astrophysics Data System (ADS)
Seifahrt, Andreas; Stürmer, Julian; Bean, Jacob L.
2016-07-01
We report on the design and construction of a microlens-array (MLA)-based pupil slicer and double scrambler for MAROON-X, a new fiber-fed, red-optical, high-precision radial-velocity spectrograph for one of the twin 6.5 m Magellan Telescopes in Chile. We have constructed a 3x slicer based on a single cylindrical MLA and show that geometric efficiencies of >=85% can be achieved, limited by the fill factor and optical surface quality of the MLA. We present here the final design of the 3x pupil slicer and double scrambler for MAROON-X, based on a dual-MLA design with (a)spherical lenslets. We also discuss the techniques used to create a pseudo-slit of rectangular-core fibers with low FRD levels.
First direct determination of the 48Ca double-β decay Q value
NASA Astrophysics Data System (ADS)
Bustabad, S.; Bollen, G.; Brodeur, M.; Lincoln, D. L.; Novario, S. J.; Redshaw, M.; Ringle, R.; Schwarz, S.; Valverde, A. A.
2013-08-01
The low-energy beam and ion trap (LEBIT) Penning trap mass spectrometer was used for an improved determination of the 48Ca double-β decay Q value: Qββ = 4268.121(79) keV. The new value is 1.2 keV greater than the value in the 2012 atomic mass evaluation [Chin. Phys. C 36, 1603 (2012)], a shift of three σ, and is a factor of 5 more precise. Accurate knowledge of this Q value is important for experimental searches for neutrinoless double-β decay (0νββ) in 48Ca and is essential for extracting the effective mass of the electron neutrino if the 0νββ half-life of 48Ca is experimentally determined.
Double stranded replicative form (RFI) DNA of bacteriophage M13mp10 has been modified in vitro to various extents with N-hydroxy-2-aminofluorene (N-OH-AF) and then transfected into E. coli cells. HPLC analysis of the modified DNA shows that only dG-C8-AF adducts are formed. Appro...
Study of the one-way speed of light anisotropy with particle beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojtsekhowski, Bogdan
2017-04-01
Concepts of high precision studies of the one-way speed of light anisotropy are discussed. The high energy particle beam allows measurement of a one-way speed of light anisotropy (SOLA) via analysis of the beam momentum variation with sidereal phase without the use of synchronized clocks. High precision beam position monitors could provide accurate monitoring of the beam orbit and determination of the particle beam momentum with relative accuracy on the level of 10^-10, which corresponds to a limit on SOLA of 10^-18 with existing storage rings. A few additional versions of the experiment are also presented.
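The analysis described reduces, at its core, to bounding a sidereal modulation of the relative beam momentum. A hedged Python sketch of such a fit (our construction, not the paper's code), modelling p(φ) = p0 + a·cos φ + b·sin φ over sidereal phase φ:

    import numpy as np

    def sidereal_amplitude(phase, momentum):
        # Least-squares fit of a first-harmonic sidereal modulation;
        # the fractional amplitude sqrt(a^2 + b^2)/p0 bounds the anisotropy.
        basis = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
        (p0, a, b), *_ = np.linalg.lstsq(basis, momentum, rcond=None)
        return np.hypot(a, b) / p0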
Testing the accuracy of growth and yield models for southern hardwood forests
H. Michael Rauscher; Michael J. Young; Charles D. Webb; Daniel J. Robison
2000-01-01
The accuracy of ten growth and yield models for Southern Appalachian upland hardwood forests and southern bottomland forests was evaluated. In technical applications, accuracy is the composite of both bias (average error) and precision. Results indicate that GHAT, NATPIS, and a locally calibrated version of NETWIGS may be regarded as being operationally valid...
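Since accuracy here is the composite of bias and precision, the evaluation implicitly uses the standard error decomposition MSE = bias^2 + variance. A minimal Python sketch of the three quantities (our names, not the report's code):

    import numpy as np

    def accuracy_components(predicted, observed):
        # Bias is the mean error, precision the spread of errors, and the
        # two combine as mean squared error = bias**2 + error variance.
        err = np.asarray(predicted, float) - np.asarray(observed, float)
        bias = err.mean()
        precision_sd = err.std()              # population SD of the errors
        rmse = np.sqrt((err ** 2).mean())     # rmse**2 == bias**2 + precision_sd**2
        return bias, precision_sd, rmse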
Stimulus-Response Theory of Finite Automata, Technical Report No. 133.
ERIC Educational Resources Information Center
Suppes, Patrick
The central aim of this paper and its projected successors is to prove in detail that stimulus-response theory, or at least a mathematically precise version, can give an account of the learning of many phrase-structure grammars. Section 2 is concerned with standard notions of finite and probabilistic automata. An automaton is defined as a device…
ERIC Educational Resources Information Center
Unlu, Fatih; Layzer, Carolyn; Clements, Douglas; Sarama, Julie; Cook, David
2013-01-01
Many educational Randomized Controlled Trials (RCTs) collect baseline versions of outcome measures (pretests) to be used in the estimation of impacts at posttest. Although pretest measures are not necessary for unbiased impact estimates in well executed experimental studies, using them increases the precision of impact estimates and reduces sample…
Mass and Double-Beta-Decay Q Value of 136Xe
NASA Astrophysics Data System (ADS)
Redshaw, Matthew; Wingfield, Elizabeth; McDaniel, Joseph; Myers, Edmund G.
2007-02-01
The atomic mass of 136Xe has been measured by comparing cyclotron frequencies of single ions in a Penning trap. The result, with 1 standard deviation uncertainty, is M(136Xe) = 135.907 214 484(11) u. Combined with previous results for the mass of 136Ba [Audi, Wapstra, and Thibault, Nucl. Phys. A 729, 337 (2003)], this gives a Q value (M[136Xe] - M[136Ba])c^2 = 2457.83(37) keV, sufficiently precise for ongoing searches for the neutrinoless double-beta decay of 136Xe.
NASA Astrophysics Data System (ADS)
Yakut, Kadri
2015-08-01
We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component. Archival Kepler satellite data were combined with newly obtained spectroscopic data from the 4.2 m William Herschel Telescope (WHT). This allowed us to determine rather precise orbital and physical parameters of this long-period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals. We modelled the detached binary system assuming non-conservative evolution with the Cambridge STARS (TWIN) code.
Adamczyk, L.
2015-08-26
We report a new measurement of the midrapidity inclusive jet longitudinal double-spin asymmetry, A_LL, in polarized pp collisions at center-of-mass energy √s = 200 GeV. The STAR data place stringent constraints on polarized parton distribution functions extracted at next-to-leading order from global analyses of inclusive deep-inelastic scattering (DIS), semi-inclusive DIS, and RHIC pp data. The measured asymmetries provide evidence at the 3σ level for positive gluon polarization in the Bjorken-x region x > 0.05.
Implosion Dynamics and Mix in Double-Shell ICF Capsule Designs
NASA Astrophysics Data System (ADS)
Gunderson, Mark; Daughton, William; Simakov, Andrei; Wilson, Douglas; Watt, Robert; Delamater, Norman; Montgomery, David
2015-11-01
From an implosion dynamics perspective, double-shell ICF capsule designs have several advantages over the single-shell NIF ICF capsule point design. Double-shell designs do not require precise shock sequencing, do not rely on hot-spot ignition, have lower peak implosion speed requirements, and have lower convergence ratio requirements. However, there are still hurdles to overcome. The timing of the two main shocks in these designs is important in achieving sufficient compression of the DT fuel. Instability of the inner gold shell due to preheat from the hohlraum environment can disrupt the implosion of the inner pill. Mix, in addition to quenching burn in the DT fuel, also decreases the transfer of energy between the beryllium ablator and the inner gold shell during collision, thus decreasing the implosion speed of the inner shell along with compression of the DT fuel. Herein, we discuss the practical implications of these effects for the double-shell designs we are carrying out in preparation for the NIF double-shell campaign. Work performed under the auspices of DOE by LANL under contract DE-AC52-06NA25396.
EDDIX--a database of ionisation double differential cross sections.
MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H
2011-02-01
The use of Monte Carlo track structure is a method of choice in biophysical modelling and calculations. To model 3D and 4D tracks precisely, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present date, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of an eight-parameter function fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, and electron energy and deflection angle. It is planned to freely distribute EDDIX and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.
New Cerec software version 4.3 for Omnicam and Bluecam.
Fritzsche, G; Schenk, O
2014-01-01
The introduction of the Cerec Omnicam acquisition unit in September 2012 presented Sirona with a challenge: configuring the existing software version 4 for both the existing Bluecam, which uses still images, and the video-based Omnicam. Sirona has succeeded in making all the features introduced in version 4.2 (such as the virtual articulator or implant-supported single-tooth restorations, both monolithic and two-part designs) work with both camera types, without compromising the uniform, homogeneous look and feel of the software. The virtual articulator (Figs 1a to 1c) now has even more individual configuration options and allows the setting of almost all angles derived from the individual transfer bow, based on precalculated average values. The new software version 4.3, presented in July 2014, fixes some minor bugs, such as the time-consuming "empty grinding" after necessary water changes during the grinding process, but also includes many features that noticeably ease the workflow. For example, the important scanning precision in the region of the anterior incisal edges has been improved, which makes the scanning process more reliable, faster, and far more comfortable.
Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko
2014-04-01
Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3) O₁O₂SV double-scrambled order. First pass reading times indicated that the third noun phrase just before the verb in both single- and double-scrambled sentences required longer reading times compared to canonical sentences. Re-reading times (the sum of all fixations minus the first pass reading) showed that all noun phrases including the crucial phrase before the verb in double-scrambled sentences required longer re-reading times than those required for single-scrambled sentences; single-scrambled sentences had no difference from canonical ones. Therefore, a single filler-gap dependency can be resolved in pre-head anticipatory processing whereas two filler-gap dependencies require much greater cognitive loading than a single case. These two dependencies can be resolved in post-head processing using verb agreement information.
Implementing NLO DGLAP evolution in parton showers
Hoche, Stefan; Krauss, Frank; Prestel, Stefan
2017-10-13
Here, we present a parton shower which implements the DGLAP evolution of parton densities and fragmentation functions at next-to-leading order precision up to effects stemming from local four-momentum conservation. The Monte-Carlo simulation is based on including next-to-leading order collinear splitting functions in an existing parton shower and combining their soft enhanced contributions with the corresponding terms at leading order. Soft double counting is avoided by matching to the soft eikonal. Example results from two independent realizations of the algorithm, implemented in the two event generation frameworks Pythia and Sherpa, illustrate the improved precision of the new formalism.
Implementing NLO DGLAP evolution in parton showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Höche, Stefan; Krauss, Frank; Prestel, Stefan
2017-10-01
We present a parton shower which implements the DGLAP evolution of parton densities and fragmentation functions at next-to-leading order precision up to effects stemming from local four-momentum conservation. The Monte-Carlo simulation is based on including next-to-leading order collinear splitting functions in an existing parton shower and combining their soft enhanced contributions with the corresponding terms at leading order. Soft double counting is avoided by matching to the soft eikonal. Example results from two independent realizations of the algorithm, implemented in the two event generation frameworks Pythia and Sherpa, illustrate the improved precision of the new formalism.
Three-dimensional orbit and physical parameters of HD 6840
NASA Astrophysics Data System (ADS)
Wang, Xiao-Li; Ren, Shu-Lin; Fu, Yan-Ning
2016-02-01
HD 6840 is a double-lined visual binary with an orbital period of ~7.5 years. By fitting the speckle interferometric measurements made by the 6 m BTA telescope and the 3.5 m WIYN telescope, Balega et al. gave a preliminary astrometric orbital solution for the system in 2006. Recently, Griffin derived a precise spectroscopic orbital solution from radial velocities observed with OPH and the Cambridge Coravel. However, due to the low precision of the determined orbital inclination, the derived component masses are not satisfactory. By adding newly collected astrometric data from the Fourth Catalog of Interferometric Measurements of Binary Stars, we give a three-dimensional orbital solution with high precision and derive preliminary physical parameters of HD 6840 via a simultaneous fit to both the astrometric and the radial velocity measurements.
New instrumentation for precise (n,γ) measurements at ILL Grenoble
NASA Astrophysics Data System (ADS)
Urban, W.; Jentschel, M.; Märkisch, B.; Materna, Th; Bernards, Ch; Drescher, C.; Fransen, Ch; Jolie, J.; Köster, U.; Mutti, P.; Rzaca-Urban, T.; Simpson, G. S.
2013-03-01
An array of eight Ge detectors for coincidence measurements of γ rays from neutron-capture reactions has been constructed at the PF1B cold-neutron facility of the Institut Laue-Langevin. The detectors, arranged in one plane every 45°, can be used for angular correlation measurements. The neutron collimation line of the setup provides a neutron beam 12 mm in diameter and a capture flux of about 10^8/(s cm^2) at the target position, with a negligible neutron halo. With the setup, up to 10^9 γγ and up to 10^8 triple-γ coincidence events have been collected in a one-day measurement. Precise energy and efficiency calibrations up to 10 MeV are easily performed with the 27Al(n,γ)28Al and 35Cl(n,γ)36Cl reactions. Test measurements have shown that neutron binding energies can be determined with an accuracy down to a few eV and angular correlation coefficients measured with a precision down to the percent level. The triggerless data collected with digital electronics and acquisition allow the determination of half-lives of excited levels in the nano- to microsecond range. The high resolving power of double- and triple-γ time coincidences allows significant improvements of the excitation schemes reported in previous (n,γ) works and complements high-resolution γ-energy measurements at the double-crystal Bragg spectrometer GAMS of the ILL.
Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers
NASA Astrophysics Data System (ADS)
Hayal, Adem G.; Ugur Sanli, D.
2014-05-01
In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revising the accuracy of static GPS from the latest version of well-established research software, the first among its kind. Although the results do not differ significantly from the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modelling introduced with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among the solutions from shorter observation sessions and degrade the quality of the accuracy modelling. In our previous analysis from version 6.1.1, we argued that the elimination of outliers was cumbersome with the traditional method, since repeated trials were needed and subjectivity that could affect the statistical significance of the solutions might have existed among the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability, and at the same time it might be considered the most efficient one, with the highest breakdown point. In our analysis, we used a slightly different version of the median method, as introduced in Tut et al. (2013). Hence, we were able to remove suspected outliers in one run; with traditional methods, these were more problematic to remove from the solutions produced using the latest version of the software. References: Hayal, A.G., Sanli, D.U., Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42 (2013). Tut, İ., Sanli, D.U., Erdogan, B., Hekimoglu, S., Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331, pp. 296-304 (2013).
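For readers who want the flavour of such a one-pass robust rule, a generic median/MAD rejection looks as follows (Python; this is the textbook rule, not necessarily the exact variant of Tut et al. (2013)):

    import numpy as np

    def mad_outliers(x, k=3.0):
        # Flag values whose distance from the median exceeds k times the
        # normal-consistent MAD (1.4826 * median absolute deviation), in
        # a single pass with no repeated trials.
        x = np.asarray(x, dtype=float)
        med = np.median(x)
        mad = np.median(np.abs(x - med))
        return np.abs(x - med) > k * 1.4826 * mad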
The validity of the 4-Skills Scan: A double validation study.
van Kernebeek, W G; de Kroon, M L A; Savelsbergh, G J P; Toussaint, H M
2018-06-01
Adequate gross motor skills are an essential aspect of a child's healthy development. Where physical education (PE) is part of the primary school curriculum, a strong curriculum-based emphasis on evaluation and support of motor skill development in PE is apparent. Monitoring motor development is then a task for the PE teacher, and in order to fulfil this task, teachers need adequate tools. The 4-Skills Scan is a quick and easily manageable gross motor skill instrument; however, its validity has never been assessed. Therefore, the purpose of this study is to assess the construct and concurrent validity of both 4-Skills Scans (version 2007 and version 2015). A total of 212 primary school children (6-12 years old) were requested to participate in both versions of the 4-Skills Scan. For assessing construct validity, children completed an obstacle course that was video-recorded for observation by an expert panel. For concurrent validity, a comparison was made with the MABC-2 by calculating Pearson correlations. Multivariable linear regression analyses were performed to determine the contribution of each subscale to the construct of gross motor skills, according to the MABC-2 and the expert panel. Correlations between the 4-Skills Scans and expert valuations were moderate, with coefficients of .47 (version 2007) and .46 (version 2015). Correlations between the 4-Skills Scans and the MABC-2 (gross) were moderate (.56) for version 2007 and high (.64) for version 2015. It is concluded that both versions of the 4-Skills Scan are satisfactorily valid instruments for assessing gross motor skills during PE lessons.
NASA Astrophysics Data System (ADS)
Hassan, Said A.; Elzanfaly, Eman S.; Salem, Maissa Y.; El-Zeany, Badr A.
2016-01-01
A novel spectrophotometric method was developed for the determination of ternary mixtures without previous separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra, and the mathematical basis of the procedure is illustrated. The method was evaluated by the determination of a model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory-prepared mixtures and in a commercial pharmaceutical preparation. For proper presentation of the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) method and two similar methods used for the analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported one for the analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines, and accuracy, precision, repeatability and robustness were found to be within the acceptable limits.
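A minimal numpy sketch of the mean-centered double divisor idea as we read it from the abstract: the mixture spectrum is divided by the sum of two divisor spectra, and the resulting ratio spectrum is mean-centered. All spectra and coefficients below are synthetic assumptions, and the published procedure also involves calibration against standards, which is omitted here.

```python
import numpy as np

def mcdd_signal(mixture, divisor1, divisor2):
    """Mean-centered double divisor ratio spectrum (illustrative sketch)."""
    ratio = mixture / (divisor1 + divisor2)  # double divisor ratio spectrum
    return ratio - ratio.mean()              # mean centering

# Synthetic absorbance spectra of three components on a common wavelength grid
wl = np.linspace(220, 320, 201)
a = np.exp(-(wl - 240) ** 2 / 50)
b = np.exp(-(wl - 260) ** 2 / 80)
c = np.exp(-(wl - 285) ** 2 / 60)
# With equal concentrations of the two interfering components, their
# contribution to the ratio is a constant that mean centering removes exactly.
mix = 0.5 * a + 0.3 * (b + c)
signal = mcdd_signal(mix, b, c)  # amplitude tracks the 0.5 coefficient of "a"
```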
Observations, Analysis, and Orbital Calculation of the Visual Double Star STTA 123 AB
NASA Astrophysics Data System (ADS)
Brashear, Nicholas; Camama, Angel; Drake, Miles; Smith, Miranda; Johnson, Jolyon; Arnold, Dave; Chamberlain, Rebecca
2012-04-01
As part of a research workshop at Pine Mountain Observatory, four students from Evergreen State College met with an instructor and an experienced double star observer to learn the methods used to measure double stars and to contribute observations to the Washington Double Star (WDS) Catalog. The students then observed and analyzed the visual double star STTA 123 AB, which has few past observations in the WDS Catalog, to determine whether it is optical or binary in nature. The separation of this double star was found to be 69.9" and its position angle 148.0°. Using the spectral types, stellar parallaxes, and proper motion vectors of the two stars, the students determined that this double star is likely physically bound by gravity in a binary system. Johnson calculated a preliminary circular orbit for the system using Newton's version of Kepler's third law. The masses of the two stars were estimated from their spectral types (F0) to be 1.4 Msun each. Their separation was estimated to be 316 AU based on their distance from Earth (about 216.5 light years), and their orbital period was estimated to be 3357 years. Arnold compared the observations made by the students to what would be predicted by the orbit calculation. A discrepancy of 14° was found in the position angle. The authors suggest that the orbit is both eccentric and inclined to our line of sight, making the observed position angle change less than predicted.
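The quoted period follows directly from Newton's version of Kepler's third law; a one-line check using the values given in the abstract:

```python
import math

def orbital_period_years(a_au, total_mass_msun):
    # Newton's form of Kepler's third law: P^2 = a^3 / (M1 + M2),
    # with P in years, a in AU, and masses in solar masses.
    return math.sqrt(a_au ** 3 / total_mass_msun)

# Two F0 stars of ~1.4 Msun each, separated by ~316 AU
print(orbital_period_years(316, 1.4 + 1.4))  # ~3357 years, as in the abstract
```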
Modelling the balance between quiescence and cell death in normal and tumour cell populations.
Spinelli, Lorenzo; Torricelli, Alessandro; Ubezio, Paolo; Basse, Britta
2006-08-01
When considering either human adult tissues (in vivo) or cell cultures (in vitro), cell number is regulated by the relationship between quiescent cells, proliferating cells, cell death and other controls of cell cycle duration. By formulating a mathematical description we see that even small alterations of this relationship may cause a non-growing population to start growing with doubling times characteristic of human tumours. Our model consists of two age structured partial differential equations for the proliferating and quiescent cell compartments. Model parameters are death rates from and transition rates between these compartments. The partial differential equations can be solved for the steady-age distributions, giving the distribution of the cells through the cell cycle, dependent on specific model parameter values. Appropriate formulas can then be derived for various population characteristic quantities such as labelling index, proliferation fraction, doubling time and potential doubling time of the cell population. Such characteristic quantities can be estimated experimentally, although with decreasing precision from in vitro, to in vivo experimental systems and to the clinic. The model can be used to investigate the effects of a single alteration of either quiescence or cell death control on the growth of the whole population and the non-trivial dependence of the doubling time and other observable quantities on particular underlying cell cycle scenarios of death and quiescence. The model indicates that tumour evolution in vivo is a sequence of steady-states, each characterised by particular death and quiescence rate functions. We suggest that a key passage of carcinogenesis is a loss of the communication between quiescence, death and cell cycle machineries, causing a defect in their precise, cell cycle dependent relationship.
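To illustrate the sensitivity claimed above, here is a drastically simplified, age-independent two-compartment caricature of the model; the paper's actual equations are age-structured PDEs, and all rate values below are invented for illustration.

```python
import numpy as np

# Proliferating (P) and quiescent (Q) compartments:
#   dP/dt = (b - dp - q) P + r Q
#   dQ/dt = q P - (dq + r) Q
# The population doubling time follows from the dominant eigenvalue.
b, q, r = 0.04, 0.01, 0.002      # birth, P->Q, Q->P rates (1/h), assumed
dp, dq = 0.005, 0.004            # death rates (1/h), assumed

A = np.array([[b - dp - q, r],
              [q, -(dq + r)]])
growth_rate = np.linalg.eigvals(A).real.max()
print(np.log(2) / growth_rate, "h")  # small changes in q or dp shift this strongly
```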
Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe
2018-06-20
Due to the strong influence of multipath effects, noise, and clock error on pseudoranges, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is mostly determined by the known point initialization (KPI) method, after which the ambiguities can be fixed with the LAMBDA method. In this paper, a new method that achieves high-precision indoor pseudolite positioning without KPI is proposed. The initial coordinates can be obtained quickly enough to meet the accuracy requirement of the indoor LAMBDA method. The method proceeds as follows: for the low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by the AFM meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, were conducted to verify the feasibility of the new method. According to the results, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope with two-centimeter or four-centimeter search steps is used to ensure centimeter-level precision together with high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinates of the maximum ambiguity function value (AFV) are taken as the initial value for the LAMBDA method, and the ambiguities can be fixed quickly. The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
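The ambiguity function search used in the second step can be sketched as follows. The AFV is insensitive to the unknown integer ambiguities because exp(2πi·integer) = 1, so it peaks near 1 at the true position. This is a generic textbook form of the AFM, not the authors' implementation; all names and shapes are assumptions.

```python
import numpy as np

def afv(candidate, plite_pos, phase_obs, wavelength):
    """Ambiguity function value at a candidate position.

    plite_pos: (n, 3) pseudolite coordinates; candidate: (3,) trial position;
    phase_obs: observed carrier phases (cycles) to each pseudolite.
    """
    ranges = np.linalg.norm(plite_pos - candidate, axis=1)
    misclosure = phase_obs - ranges / wavelength        # cycles
    return abs(np.exp(2j * np.pi * misclosure).sum()) / len(phase_obs)

# In practice one evaluates afv() on a grid, e.g. a 1 m cube at 2 cm or 4 cm
# steps around the DPL coordinates, mirroring the scope and steps quoted above.
```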
Extended version of the "Sniffin' Sticks" identification test: test-retest reliability and validity.
Sorokowska, A; Albrecht, E; Haehner, A; Hummel, T
2015-03-30
The extended, 32-item version of the Sniffin' Sticks identification test was developed in order to create a precise tool enabling repeated, longitudinal testing of individual olfactory subfunctions. Odors of the previous test version had to be changed for technical reasons, and the odor identification test needed re-investigation in terms of reliability, validity, and normative values. In our study we investigated the olfactory abilities of a group of 100 patients with olfactory dysfunction and 100 controls. We reconfirmed the high test-retest reliability of the extended version of the Sniffin' Sticks identification test and the high correlations between the new and the original part of this tool. In addition, we confirmed the validity of the test, as it discriminated clearly between controls and patients with olfactory loss. The additional set of 16 odor identification sticks can either be included in the current olfactory test, creating a more detailed diagnostic tool, or be used separately, enabling olfactory function to be followed over time. Additionally, the normative values presented in our paper may provide useful guidelines for the interpretation of extended identification test results. The revised version of the Sniffin' Sticks 32-item odor identification test is a reliable and valid tool for the assessment of olfactory function.
Deformed quantum double realization of the toric code and beyond
NASA Astrophysics Data System (ADS)
Padmanabhan, Pramod; Ibieta-Jimenez, Juan Pablo; Bernabe Ferreira, Miguel Jorge; Teotonio-Sobrinho, Paulo
2016-09-01
Quantum double models, such as the toric code, can be constructed from transfer matrices of lattice gauge theories with discrete gauge groups, parametrized by the center of the gauge group algebra and its dual. For general choices of these parameters the transfer matrix contains operators acting on links, which can also be thought of as perturbations to the quantum double model driving it out of its topological phase and destroying the exact solvability of the model. We modify these transfer matrices with perturbations and extract exactly solvable models which remain in a quantum phase, thus nullifying the effect of the perturbation. The algebra of the modified vertex and plaquette operators now obeys a deformed version of the quantum double algebra. The Abelian cases are shown to be in the quantum double phase, whereas the non-Abelian cases are shown to be in a modified phase of the corresponding quantum double phase. These are illustrated with the groups Z_n and S_3. The quantum phases are determined by studying the excitations of these systems, namely their fusion rules and statistics. We then go further and construct a transfer matrix which contains the other Z_2 phase, namely the double semion phase. More generally, for other discrete groups these transfer matrices contain the twisted quantum double models. These transfer matrices can be thought of as being obtained by introducing extra parameters into the transfer matrix of lattice gauge theories. These parameters are central elements belonging to the tensor products of the algebra and its dual and are associated to vertices and volumes of the three-dimensional lattice. As in the case of the lattice gauge theories, we construct the operators creating the excitations and study their braiding and fusion properties.
Using Kill-Chain Analysis to Develop Surface Ship CONOPs to Defend Against Anti-Ship Cruise Missiles
2010-06-01
used to analyze this problem. The first was a software product from the Palisade Corporation called @Risk for Excel (version 5.5) with Precision...matching range cells in Table 4. Table 5 is for the case with no soft-kill mechanisms used by the ASCM and the numeric values do not take into
Immersive Simulation of Complex Social Environments
2008-12-01
Complexity, 7, 18–30. Dawkins, R., 1989: The Selfish Gene (2nd ed.). New York: Oxford University Press. Dennett, D. C., 1995: Darwin's Dangerous...interpretation, bias, and misinformation, which create erroneous versions of what has transpired. Dawkins presents a model for describing knowledge...evolution within a social group through interpersonal exchange (memetics). (Dawkins, 1987) Where genetic duplication tends to be precise (and mutation
ERIC Educational Resources Information Center
Haley, Stephen M.; Coster, Wendy J.; Dumas, Helene M.; Fragala-Pinkham, Maria A.; Kramer, Jessica; Ni, Pengsheng; Tian, Feng; Kao, Ying-Chia; Moed, Rich; Ludlow, Larry H.
2011-01-01
Aim: The aims of the study were to: (1) build new item banks for a revised version of the Pediatric Evaluation of Disability Inventory (PEDI) with four content domains: daily activities, mobility, social/cognitive, and responsibility; and (2) use post-hoc simulations based on the combined normative and disability calibration samples to assess the…
ERIC Educational Resources Information Center
Blakey, Emma; Visser, Ingmar; Carroll, Daniel J.
2016-01-01
Improvements in cognitive flexibility during the preschool years have been linked to developments in both working memory and inhibitory control, though the precise contribution of each remains unclear. In the current study, one hundred and twenty 2-, 3-, and 4-year-olds completed two rule-switching tasks. In one version, children switched rules in…
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter is designed to solve this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm.
Evaluation of French and English MeSH Indexing Systems with a Parallel Corpus
Névéol, Aurélie; Mork, James G.; Aronson, Alan R.; Darmoni, Stefan J.
2005-01-01
Objective: This paper presents the evaluation of two MeSH® indexing systems for French and English on a parallel corpus. Material and methods: We describe two automatic MeSH indexing systems - MTI for English, and MAIF for French. The French version of the evaluation resources has been manually indexed with MeSH keyword/qualifier pairs. This professional indexing is used as our gold standard in the evaluation of both systems on keyword retrieval. Results: The English system (MTI) obtains significantly better precision and recall (78% precision and 21% recall at rank 1, vs. 37% precision and 6% recall for MAIF). Moreover, the performance of both systems can be improved by the breakage function used by the French system (MAIF), which selects an adaptive number of descriptors for each resource indexed. Conclusion: MTI achieves better performance. However, both systems have features that can benefit each other. PMID:16779103
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takao, Hidemasa, E-mail: takaoh-tky@umin.ac.jp; Shibata, Eisuke; Ohtomo, Kuni
A case of multiple hepatocellular carcinomas with a severe intrahepatic arterioportal shunt that was successfully embolized with n-butyl-2-cyanoacrylate under coaxial double-balloon occlusion prior to transcatheter arterial chemoembolization is presented. A proximal balloon positioned at the proper hepatic artery was used for flow control, and a coaxial microballoon, positioned in the closest of three arterial feeding branches to the arterioportal shunt, was used to control the delivery of n-butyl-2-cyanoacrylate. This coaxial double-balloon technique can prevent proximal embolization and distal migration of n-butyl-2-cyanoacrylate and enables precise control of its distribution. It could also be applicable to n-butyl-2-cyanoacrylate embolization of lesions other than intrahepatic arterioportal shunts.
Doppler Lidar Measurements of Tropospheric Wind Profiles Using the Aerosol Double Edge Technique
NASA Technical Reports Server (NTRS)
Gentry, Bruce M.; Li, Steven X.; Mathur, Savyasachee; Korb, C. Laurence; Chen, Huailin
2000-01-01
The development of a ground-based direct detection Doppler lidar based on the recently described aerosol double edge technique is reported. A pulsed, injection-seeded Nd:YAG laser operating at 1064 nm is used to make range-resolved measurements of atmospheric winds in the free troposphere. The wind measurements are determined by measuring the Doppler shift of the laser signal backscattered from atmospheric aerosols. The lidar instrument and the double edge method are described, and initial tropospheric wind profile measurements are presented. Wind profiles are reported for both day and night operation. The measurements extend to altitudes as high as 14 km and are compared to rawinsonde wind profile data from Dulles airport in Virginia. The vertical resolution of the lidar measurements is 330 m and the rms precision of the measurements is as low as 0.6 m/s.
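The underlying conversion from Doppler shift to radial wind speed is simple; a sketch using the 1064 nm wavelength from the abstract (the shift value itself is an invented example):

```python
wavelength = 1064e-9          # m, Nd:YAG transmitter wavelength
delta_f = 1.9e6               # Hz, assumed measured Doppler shift of the return
v_radial = wavelength * delta_f / 2   # factor 2 for the round trip
print(f"{v_radial:.2f} m/s")  # ~1.01 m/s; the quoted 0.6 m/s precision
                              # corresponds to resolving a shift of ~1.1 MHz
```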
Computation of Estonian CORS data using Bernese 5.2 and Gipsy 6.4 softwares
NASA Astrophysics Data System (ADS)
Kollo, Karin; Kall, Tarmo; Liibusk, Aive
2017-04-01
The GNSS permanent station network in Estonia (ESTREF) was established in 2007. In 2014-15 an extensive reconstruction of ESTREF was carried out, including the establishment of 18 new stations, a change of hardware at the CORS stations, and the establishment of a GNSS-RTK service for the whole of Estonia. A GNSS-RTK service needs precise coordinates in a well-defined reference frame, i.e., ETRS89. For the long-term stability of the stations and for time-series analysis, re-processing of the Estonian CORS data is ongoing. We re-process data from 2007 until 2015 with the Bernese GNSS 5.2 software (Dach et al., 2015). For the set of ESTREF stations established in 2007, we also perform computations with the GIPSY 6.4 software (Ries et al., 2015). In the computations, daily GPS-only solutions were used. For precise orbits, final products from CODE (the CODE analysis centre at the Astronomical Institute of the University of Bern) and JPL (Jet Propulsion Laboratory) were used for the Bernese and GIPSY solutions, respectively. The cut-off angle was set to 10 degrees in order to avoid near-field multipath influence. In GIPSY, the precise point positioning method with ambiguity fixing was used; Bernese calculations were based on double difference processing. Antenna phase centers were modelled based on the igs08.atx and epnc_08.atx files. The Vienna mapping function was used for mapping tropospheric delays. For the GIPSY solution, the higher-order ionospheric term was modelled based on the IRI-2012b model; for the Bernese solution the higher-order ionospheric term was neglected. The FES2004 ocean tide loading model was used for both computation strategies. As a result, two solutions using different scientific GNSS computation programs were obtained. The results from the Bernese and GIPSY solutions were compared using station repeatability values, RMS and coordinate differences. KEYWORDS: GNSS reference station network, Bernese GNSS 5.2, Gipsy 6.4, Estonia. References: Dach, R., S. Lutz, P. Walser, P. Fridez (Eds.), 2015: Bernese GNSS Software Version 5.2. User manual, Astronomical Institute, University of Bern, Bern Open Publishing. DOI: 10.7892/boris.72297; ISBN: 978-3-906813-05-9. Ries, P., Bertiger, W., Desai, S., & Miller, K. (2015). GIPSY 6.4 Release Notes. Jet Propulsion Laboratory, California Institute of Technology. Retrieved from https://gipsy-oasis.jpl.nasa.gov/docs/index.php
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
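The strategy of comparing a reduced-precision integration against a maximal-precision reference can be reproduced with a toy model. Below is a minimal stand-in using the one-scale Lorenz '96 model in float32 versus float64; the paper uses the two-scale variant on FPGA hardware with custom number formats, and the step sizes here are assumptions.

```python
import numpy as np

def lorenz96_step(x, dt=0.005, F=8.0):
    """One forward-Euler step of the one-scale Lorenz '96 model."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * dxdt

rng = np.random.default_rng(0)
x64 = rng.standard_normal(40) + 8.0          # double precision trajectory
x32 = x64.astype(np.float32)                 # reduced precision trajectory
for _ in range(5000):
    x64 = lorenz96_step(x64)
    x32 = lorenz96_step(x32).astype(np.float32)  # re-truncate each step
print(np.abs(x64 - x32).max())               # divergence driven by rounding errors
```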
Selva, Anna; Solà, Ivan; Zhang, Yuan; Pardo-Hernandez, Hector; Haynes, R Brian; Martínez García, Laura; Navarro, Tamara; Schünemann, Holger; Alonso-Coello, Pablo
2017-08-30
Identifying scientific literature addressing patients' views and preferences is complex due to the wide range of studies that can be informative and the poor indexing of this evidence. Given the lack of guidance, we developed a search strategy to retrieve this type of evidence. We assembled an initial list of terms from several sources, including the revision of the terms and indexing of topic-related studies, methods research literature, and other relevant projects and systematic reviews. We used the relative recall approach, evaluating the capacity of the designed search strategy to retrieve studies included in relevant systematic reviews for the topic. We implemented the final version of the search strategy in practice for conducting systematic reviews and guidelines, and calculated the search's precision and the number of references needed to read (NNR). We assembled an initial version of the search strategy, which had a relative recall of 87.4% (yield of 132 out of 151 studies). We then added some additional terms from the studies not initially identified, and re-tested this improved version against the studies included in a new set of systematic reviews, reaching a relative recall of 85.8% (151 out of 176 studies; 95% CI 79.9 to 90.2). This final version of the strategy includes two sets of terms related to two domains: "Patient Preferences and Decision Making" and "Health State Utilities Values". When we used the search strategy for the development of systematic reviews and clinical guidelines, we obtained low precision values (ranging from 2% to 5%) and NNRs from 20 to 50. This search strategy fills an important research gap in this field. It will help systematic reviewers, clinical guideline developers, and policy-makers retrieve published research on patients' views and preferences. In turn, this will facilitate the inclusion of this critical aspect when formulating health care decisions, including recommendations.
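The reported metrics are related by simple ratios; a small calculator follows. The total number of retrieved records is an invented figure, chosen to give a precision in the reported 2-5% range.

```python
def retrieval_metrics(retrieved_relevant, total_relevant, total_retrieved):
    """Relative recall, precision, and number of references needed to read."""
    recall = retrieved_relevant / total_relevant
    precision = retrieved_relevant / total_retrieved
    return recall, precision, 1 / precision   # NNR = 1 / precision

recall, precision, nnr = retrieval_metrics(151, 176, 5000)
print(f"recall={recall:.1%} precision={precision:.1%} NNR={nnr:.0f}")
# recall=85.8% as in the abstract; precision=3.0%, NNR=33
```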
Study of a double bubbler for material balance in liquids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hugues Lambert
The objective of this project was to determine the potential of a double bubbler to measure the density and fluid level of the molten salt contained in an electrorefiner. Such in-situ real-time measurements can provide key information for material balances in the pyroprocessing of spent nuclear fuel. This study showed the technique has a lot of promise. Four different experiments were designed and performed. The first three studied the influence of factors such as the depth difference between the two tubes, the gas flow rate, and the radius of the tubes, in order to determine the best operating conditions. The purpose of the last experiment was to determine the precision and accuracy of the apparatus under specific conditions. The operating conditions selected for the characterization of the system were a depth difference of 25 cm and a flow rate of 55 ml/min in each tube. The measured densities were between 1,000 g/l and 1,400 g/l and the levels between 34 cm and 40 cm. The depth difference between the tubes is critical: the larger, the better. The experiments showed that the flow rate should be the same in each tube. The agreement with theoretical predictions was very good. The density precision was very satisfying (spread < 0.1%) and the accuracy was about 1%. For the level determination, the precision was also very satisfying (spread < 0.1%), but the accuracy was about 3%. However, those two biases could be corrected with calibration curves. In addition to the aqueous systems studied in the present work, future work will focus on examining the behavior of the double bubbler instrumentation in molten salt systems. The two main challenges identified in this work are the effect of temperature and the variation of surface tension.
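The measurement principle rests on hydrostatics: the differential bubbling pressure between the two tubes gives the density, and one tube's pressure then gives the liquid level above its tip. A sketch with invented pressure readings (the 25 cm depth difference matches the abstract):

```python
g = 9.81                    # m/s^2
dh = 0.25                   # m, depth difference between the two dip tubes
dP = 2700.0                 # Pa, measured differential pressure (assumed)
rho = dP / (g * dh)         # ~1101 kg/m^3, i.e. ~1101 g/l, inside the tested range
P_deep = 3900.0             # Pa, gauge pressure at the deeper tube (assumed)
level = P_deep / (rho * g)  # ~0.36 m of liquid above the deeper tube's tip
print(rho, level)
```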
Coster, Wendy J; Haley, Stephen M; Ni, Pengsheng; Dumas, Helene M; Fragala-Pinkham, Maria A
2008-04-01
To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. Design: computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Setting: pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, and outpatient clinics; community-based day care, preschool, and children's homes. Participants: children with disabilities (n=469) and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Interventions: not applicable. Main outcome measures: summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. Results: scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated those of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. Conclusions: self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.
EFICAz2.5: application of a high-precision enzyme function predictor to 396 proteomes.
Kumar, Narendra; Skolnick, Jeffrey
2012-10-15
High-quality enzyme function annotation is essential for understanding the biochemistry, metabolism and disease processes of organisms. Previously, we developed a multi-component high-precision enzyme function predictor, EFICAz2 (enzyme function inference by a combined approach). Here, we present an updated and improved version, EFICAz2.5, that is trained on a significantly larger data set of enzyme sequences and PROSITE patterns. We also present the results of the application of EFICAz2.5 to the enzyme reannotation of 396 genomes cataloged in the ENSEMBL database. The EFICAz2.5 server and database are freely available with a user-friendly interface at http://cssb.biology.gatech.edu/EFICAz2.5.
Tsai, P P; Nagelschmidt, N; Kirchner, J; Stelzer, H D; Hackbarth, H
2012-01-01
Preference tests have often been performed for collecting information about animals' acceptance of environmental refinement objects. In numerous published studies animals were individually tested during preference experiments, as it is difficult to observe group-housed animals with an automatic system. Thus, videotaping is still the most favoured method for observing preferences of socially-housed animals. To reduce the observation workload and to be able to carry out preference testing of socially-housed animals, an automatic recording system (DoubleCage) was developed for determining the location of group-housed animals in a preference test set-up. This system is able to distinguish the transition of individual animals between two cages and to record up to 16 animals at the same time (four animals per cage). The present study evaluated the reliability of the DoubleCage system. The data recorded by the DoubleCage program and the data obtained by human observation were compared. The measurements of the DoubleCage system and manual observation of the videotapes are comparable and significantly correlated (P < 0.0001) with good agreement. Using the DoubleCage system enables precise and reliable recording of the preferences of group-housed animals and a considerable reduction of animal observation time.
New mainstream double-end carbon dioxide capnograph for human respiration
NASA Astrophysics Data System (ADS)
Yang, Jiachen; An, Kun; Wang, Bin; Wang, Lei
2010-11-01
Most current respiratory devices for monitoring CO2 concentration use a side-stream structure. In this work, we design a new double-end mainstream device for monitoring the CO2 concentration of gas breathed out of the human body. The device can accurately monitor the cardiopulmonary status during anesthesia and mechanical ventilation in real time. Meanwhile, to decrease the negative influence of device noise and the low sampling precision caused by temperature drift, wavelet packet denoising and temperature drift compensation are used. The new capnograph has been shown in clinical trials to be helpful in improving the accuracy of capnography.
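Wavelet-based denoising of a capnogram can be sketched as below using PyWavelets. Note this uses the plain discrete wavelet transform with a universal soft threshold, a simpler relative of the wavelet packet denoising named in the abstract; the toy signal and parameters are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 10, 1024)
capno = 38.0 * (np.sin(2 * np.pi * 0.25 * t) > 0)        # toy capnogram, mmHg
noisy = capno + np.random.default_rng(1).normal(0, 2, t.size)
clean = denoise(noisy)
```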
Rearrangement of valence neutrons in the neutrinoless double-β decay of 136Xe
NASA Astrophysics Data System (ADS)
Szwec, S. V.; Kay, B. P.; Cocolios, T. E.; Entwisle, J. P.; Freeman, S. J.; Gaffney, L. P.; Guimarães, V.; Hammache, F.; McKee, P. P.; Parr, E.; Portail, C.; Schiffer, J. P.; de Séréville, N.; Sharp, D. K.; Smith, J. F.; Stefan, I.
2016-11-01
A quantitative description of the change in ground-state neutron occupancies between 136Xe and 136Ba, the initial and final state in the neutrinoless double-β decay of 136Xe, has been extracted from precision measurements of the cross sections of single-neutron-adding and -removing reactions. Comparisons are made to recent theoretical calculations of the same properties using various nuclear-structure models. These are the same calculations used to determine the magnitude of the nuclear matrix elements for the process, which at present disagree with each other by factors of 2 or 3. The experimental neutron occupancies show some disagreement with the theoretical calculations.
Double-Pulse Two-Micron IPDA Lidar Simulation for Airborne Carbon Dioxide Measurements
NASA Technical Reports Server (NTRS)
Refaat, Tamer F.; Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta
2015-01-01
An advanced double-pulsed 2-micron integrated path differential absorption (IPDA) lidar has been developed at NASA Langley Research Center for measuring atmospheric carbon dioxide. The instrument utilizes a state-of-the-art 2-micron laser transmitter with tunable on-line wavelength and an advanced receiver. Instrument modeling and airborne simulations are presented in this paper. Focusing on random errors, the results demonstrate the instrument's capability of performing precise carbon dioxide differential optical depth measurements with less than 3% random error for single-shot operation from up to 11 km altitude. This study is useful for defining the CO2 measurement weighting, instrument settings, validation and sensitivity trade-offs.
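The retrieved quantity in an IPDA lidar is the differential optical depth, obtained from the ratio of the on-line and off-line returns normalized by the transmitted pulse energies; a sketch with invented numbers:

```python
import numpy as np

P_on, P_off = 0.62, 1.00   # received pulse energies (arbitrary units, assumed)
E_on, E_off = 1.00, 1.00   # monitored transmitted pulse energies (assumed)
# The two-way transmission ratio P_on/P_off = exp(-2 * dOD) implies:
dod = 0.5 * np.log((P_off * E_on) / (P_on * E_off))
print(f"{dod:.3f}")        # ~0.239; the shot-to-shot spread of this estimate
                           # relative to its mean is the random error budgeted above
```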
The Southern Double Stars of Carl Rümker I: History, Identification, Accuracy
NASA Astrophysics Data System (ADS)
Letchford, Roderick; White, Graeme; Ernest, Allan
2017-04-01
The second catalog of southern double stars was published by Carl Rümker in 1832. We describe this catalog, obtain modern nomenclature and data, and estimate the accuracy of his positions for the primary components. We have shown the equinox and epoch to be B1827.0. Of the 28 pairs, 27 could be identified: RMK 23 is RMK 22, and RMK 24 could not be identified. Five pairs observed by Rümker are credited to co-worker Dunlop (DUN) in the WDS. There are two typographical errors. We tentatively identify RMK 28 with COO 261. We have shown the positional data in the 1832 catalog to be accurate, and we present a modern, revised version of Rümker's catalog.
Wietecha, Linda; Williams, David; Shaywitz, Sally; Shaywitz, Bennett; Hooper, Stephen R; Wigal, Sharon B; Dunn, David; McBurnett, Keith
2013-11-01
The purpose of this study was to evaluate atomoxetine treatment effects in attention-deficit/hyperactivity disorder (ADHD-only), attention-deficit/hyperactivity disorder with comorbid dyslexia (ADHD+D), or dyslexia only on ADHD core symptoms and on sluggish cognitive tempo (SCT), working memory, life performance, and self-concept. Children and adolescents (10-16 years of age) with ADHD+D (n=124), dyslexia-only (n=58), or ADHD-only (n=27) received atomoxetine (1.0-1.4 mg/kg/day) or placebo (ADHD-only subjects received atomoxetine) in a 16 week, acute, randomized, double-blind trial with a 16 week, open-label extension phase (atomoxetine treatment only). Changes from baseline were assessed to weeks 16 and 32 in ADHD Rating Scale-IV-Parent-Version:Investigator-Administered and Scored (ADHDRS-IV-Parent:Inv); ADHD Rating Scale-IV-Teacher-Version (ADHDRS-IV-Teacher-Version); Life Participation Scale-Child- or Parent-Rated Version (LPS); Kiddie-Sluggish Cognitive Tempo (K-SCT) Interview; Multidimensional Self Concept Scale (MSCS); and Working Memory Test Battery for Children (WMTB-C). At week 16, atomoxetine treatment resulted in significant (p<0.05) improvement from baseline in subjects with ADHD+D versus placebo on ADHDRS-IV-Parent:Inv Total (primary outcome) and subscales, ADHDRS-IV-Teacher-Version Inattentive subscale, K-SCT Interview Parent and Teacher subscales, and WMTB-C Central Executive component scores; in subjects with Dyslexia-only, atomoxetine versus placebo significantly improved K-SCT Youth subscale scores from baseline. At Week 32, atomoxetine-treated ADHD+D subjects significantly improved from baseline on all measures except MSCS Family subscale and WMTB-C Central Executive and Visuo-spatial Sketchpad component scores. The atomoxetine-treated dyslexia-only subjects significantly improved from baseline to week 32 on ADHDRS-IV-Parent:Inv Inattentive subscale, K-SCT Parent and Teacher subscales, and WMTB-C Phonological Loop and Central Executive component scores. The atomoxetine-treated ADHD-only subjects significantly improved from baseline to Week 32 on ADHDRS-Parent:Inv Total and subscales, ADHDRS-IV-Teacher-Version Hyperactive/Impulsive subscale, LPS Self-Control and Total, all K-SCT subscales, and MSCS Academic and Competence subscale scores. Atomoxetine treatment improved ADHD symptoms in subjects with ADHD+D and ADHD-only, but not in subjects with dyslexia-only without ADHD. This is the first study to report significant effects of any medication on SCT. This study was registered at: http://clinicaltrials.gov/ct2/home, NCT00607919.
Combination of GPS and GLONASS IN PPP algorithms and its effect on site coordinates determination
NASA Astrophysics Data System (ADS)
Hefty, J.; Gerhatova, L.; Burgan, J.
2011-10-01
The Precise Point Positioning (PPP) approach, using un-differenced code and phase GPS observations together with precise orbits and satellite clocks, is an important alternative to analyses based on double differences. We examine the extension of the PPP method by introducing GLONASS satellites into the processing algorithms. The procedures are demonstrated with the software package ABSOLUTE, developed at the Slovak University of Technology. Partial results, such as ambiguities and receiver clocks obtained from separate solutions of the two GNSS, are mutually compared. Finally, the coordinate time series from the combination of GPS and GLONASS observations are compared with GPS-only solutions.
Rigorous high-precision enclosures of fixed points and their invariant manifolds
NASA Astrophysics Data System (ADS)
Wittig, Alexander N.
The well-established concept of Taylor Models is introduced, which offers highly accurate C^0 enclosures of functional dependencies, combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher-order coefficients in the polynomial expansion. High-precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high-precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high-precision interval data type are developed and described in detail. The application of these operations in the implementation of high-precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods, such as double precision Taylor Models, high-precision intervals and high-precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for the automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures with the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these are the largest verified enclosures of manifolds in the Lorenz system in existence.
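The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned above are known as error-free transformations; Knuth's TwoSum is the classic example. The sketch below illustrates the principle only and is not the COSY INFINITY implementation:

```python
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bp = s - a               # the part of b actually absorbed into s
    ap = s - bp              # the part of a actually absorbed into s
    e = (a - ap) + (b - bp)  # the exact rounding error of the sum
    return s, e

s, e = two_sum(1.0, 1e-17)
print(s, e)  # 1.0 1e-17 -- the rounding error is recovered exactly
```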
Convective Heat Transfer for Ship Propulsion.
1981-04-01
such as Ede, Hislop and Morris [1956], Krall and Sparrow [1966] and Zemanick and Dougall [1970]. Reviews of the heat transfer literature for...separated flow; he employed a one-dimensional model of the flow near a wall. Recently, Chieng and Launder [1980], in calculations of the turbulent heat...computer program developed originally by Gosman and Pun [1974]. In the present study, the version of Habib and Whitelaw [1980], which treats double coaxial
mMass 3: a cross-platform software environment for precise analysis of mass spectrometric data.
Strohalm, Martin; Kavan, Daniel; Novák, Petr; Volný, Michael; Havlícek, Vladimír
2010-06-01
While tools for the automated analysis of MS and LC-MS/MS data are continuously improving, it is still often the case that, at the end of an experiment, the mass spectrometrist will spend time carefully examining individual spectra. Current software support is mostly provided only by the instrument vendors, and the available software tools are often instrument-dependent. Here we present a new generation of mMass, a cross-platform environment for the precise analysis of individual mass spectra. The software covers a wide range of processing tasks such as import from various data formats, smoothing, baseline correction, peak picking, deisotoping, charge determination, and recalibration. Functions presented in the earlier versions, such as in silico digestion and fragmentation, were redesigned and improved. In addition to Mascot, an interface for ProFound has been implemented. A specific tool is available for isotopic pattern modeling to enable precise data validation. The largest available lipid database (from the LIPID MAPS Consortium) has been incorporated, and together with the new compound search tool, lipids can be rapidly identified. In addition, the user can define custom libraries of compounds and use them analogously. The new version of mMass is based on a stand-alone Python library, which provides the basic functionality for data processing and interpretation. This library can serve as a good starting point for other developers in their projects. Binary distributions of mMass, its source code, a detailed user's guide, and video tutorials are freely available from www.mmass.org.
Collinearly-improved BK evolution meets the HERA data
Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...
2015-10-03
In a previous publication, we established a collinearly-improved version of the Balitsky–Kovchegov (BK) equation, which resums to all orders the radiative corrections enhanced by large double transverse logarithms. Here, we study the relevance of this equation as a tool for phenomenology by confronting it with the HERA data. To that aim, we first improve the perturbative accuracy of our resummation by including two classes of single-logarithmic corrections: those generated by the first non-singular terms in the DGLAP splitting functions and those expressing the one-loop running of the QCD coupling. The equation thus obtained includes all the next-to-leading order corrections to the BK equation which are enhanced by (single or double) collinear logarithms. We then use numerical solutions to this equation to fit the HERA data for the electron–proton reduced cross-section at small Bjorken x. We obtain good quality fits for physically acceptable initial conditions. Our best fit, which shows good stability up to virtualities as large as Q^2 = 400 GeV^2 for the exchanged photon, uses as an initial condition the running-coupling version of the McLerran–Venugopalan model, with the QCD coupling running according to the smallest dipole prescription.
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.
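For concreteness, here is a minimal NLMS identifier for the ASI setup described above; the function name, tap count, and noise level are illustrative assumptions (plain LMS replaces the normalized step by a fixed step size mu):

```python
import numpy as np

def nlms_identify(x, d, taps=8, mu=0.5, eps=1e-8):
    """Identify an unknown FIR system from input x and noisy output d."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # regressor [x[n], ..., x[n-taps+1]]
        e = d[n] - w @ u                  # a priori estimation error
        w += mu * e * u / (eps + u @ u)   # normalized LMS update
    return w

rng = np.random.default_rng(0)
h = np.array([0.7, -0.3, 0.2, 0.1, 0.0, 0.05, 0.0, 0.0])  # unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(nlms_identify(x, d), 2))   # converges close to h
```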
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to systems as large as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
Prenatal and accurate perinatal diagnosis of type 2 H or ductular duplicate gallbladder.
Maggi, Umberto; Farris, Giorgio; Carnevali, Alessandra; Borzani, Irene; Clerici, Paola; Agosti, Massimo; Rossi, Giorgio; Leva, Ernesto
2018-02-07
Double gallbladder is a rare biliary anomaly. Perinatal diagnosis of the disorder has been reported in only 6 cases, and in 5 of them the diagnosis was based on ultrasound imaging only. However, the ultrasound technique alone does not provide a sufficiently precise description of the cystic ducts and biliary anatomy, information that is crucial for a correct classification and for possible future surgery. At 21 weeks of gestational age of an uneventful pregnancy in a 38-year-old primiparous mother, a routine ultrasound screening detected a biliary anomaly in the fetus suggestive of a double gallbladder. A neonatal abdominal ultrasonography performed on postnatal day 2 confirmed the diagnosis. On day 12 the newborn underwent magnetic resonance cholangiopancreatography (MRCP), which clearly characterized the anatomy of the anomaly: both gallbladders had their own cystic duct, and each duct inserted separately into the main biliary duct. We report a case of an early prenatal suspicion of duplicate gallbladder that was confirmed by a precise neonatal diagnosis of a type 2, H or ductular duplicate gallbladder, using for the first time 3D MRCP images in a newborn. An accurate anatomical diagnosis is mandatory in patients undergoing a possible future cholecystectomy, to avoid surgical complications or reoperations. Therefore, in case of a perinatal suspicion of a double gallbladder, neonates should undergo magnetic resonance cholangiopancreatography. A review of the literature about this variant is included.
Zhang, Ming-Kang; Wang, Xiao-Gang; Zhu, Jing-Yi; Liu, Miao-Deng; Li, Chu-Xin; Feng, Jun; Zhang, Xian-Zheng
2018-04-17
This study reports a double-targeting "nanofirework" for tumor-ignited imaging to guide effective tumor-depth photothermal therapy (PTT). Typically, ≈30 nm upconversion nanoparticles (UCNP) are enveloped with a hybrid corona composed of ≈4 nm CuS tethered hyaluronic acid (CuS-HA). The HA corona provides active tumor-targeted functionality together with excellent stability and improved biocompatibility. The dimension of UCNP@CuS-HA is specifically set within the optimal size window for passive tumor-targeting effect, demonstrating significant contributions to both the in vivo prolonged circulation duration and the enhanced size-dependent tumor accumulation compared with ultrasmall CuS nanoparticles. The tumors featuring hyaluronidase (HAase) overexpression could induce the escape of CuS away from UCNP@CuS-HA due to HAase-catalyzed HA degradation, in turn activating the recovery of initially CuS-quenched luminescence of UCNP and also driving the tumor-depth infiltration of ultrasmall CuS for effective PTT. This in vivo transition has proven to be highly dependent on tumor occurrence like a tumor-ignited explosible firework. Together with the double-targeting functionality, the pathology-selective tumor ignition permits precise tumor detection and imaging-guided spatiotemporal control over PTT operation, leading to complete tumor ablation under near infrared (NIR) irradiation. This study offers a new paradigm of utilizing pathological characteristics to design nanotheranostics for precise detection and personalized therapy of tumors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ruiz-Cantero, María Teresa; Carrasco-Portiño, Mercedes; Artazcoz, Lucía
2011-01-01
To examine the ability of the 2006 Spanish Health Survey (SHS-2006) to analyze the population's health from a gender perspective and identify gender-related inequalities in health, and to compare the 2006 version with that of 2003. A contents analysis of the adults and households questionnaires was performed from the gender perspective, taking gender as (a) the basis of social norms and values, (b) the organizer of social structure: gender division of labor, double workload, vertical/horizontal segregation, and access to resources and power, and (c) a component of individual identity. The 2006 SHS uses neutral language. The referent is the interviewee, substituting the head of the family/breadwinner of past surveys. A new section focuses on reproductive labor (caregiving and domestic tasks) and the time distribution for these tasks. However, some limitations in the questions about time distribution were identified, hampering accurate estimations. The time devoted to paid labor is not recorded. The 2006 version includes new information about family commitments as an obstacle to accessing healthcare and on the delay between seeking and receiving healthcare appointments. The SHS 2006 introduces sufficient variations to confirm its improvement from a gender perspective. Future surveys should reformulate the questions about the time devoted to paid and reproductive labor, which is essential to characterize gender division of labor and double workload. Updating future versions of the SHS will also involve gathering information on maternity/paternity and parental leave. The 2006 survey allows delays in receiving healthcare to be measured, but does not completely allow other delays, such as diagnostic and treatment delays, to be quantified. Copyright © 2010 SESPAS. Published by Elsevier Espana. All rights reserved.
High Maneuverability Airframe: Investigation of Fin and Canard Sizing for Optimum Maneuverability
2014-09-01
…overset grids (unified grid); 5) total variation diminishing discretization based on a new multidimensional interpolation framework; 6) Riemann solvers… Section 3.1.1 (Solver) describes the methodology used for the simulations: the double-precision solver of a commercially available code, CFD++ v12.1.1.
Muñoz, H; Guerra, S; Perez-Vaquero, P; Valero Martinez, C; Aizpuru, F; Lopez-Picado, A
2014-02-01
Breech presentation occurs in up to 3% of pregnancies at term and may be an indication for caesarean delivery. External cephalic version can be effective in repositioning the fetus in a cephalic presentation, but may be painful for the mother. Our aim was to assess the efficacy of remifentanil versus placebo for pain relief during external cephalic version. A randomized, double-blind, controlled trial that included women at 36-41 weeks of gestation with non-cephalic presentations was performed. Women were randomized to receive either a remifentanil infusion at 0.1 μg/kg/min and demand boluses of 0.1 μg/kg, or saline placebo. The primary outcome was the numerical rating pain score (0-10) after external cephalic version. Sixty women were recruited, 29 in the control group and 31 in the remifentanil group. There were significant differences in pain scores at the end of the procedure (control 6.5 ± 2.4 vs. remifentanil 4.7 ± 2.5, P = 0.005) but not 10 min later (P = 0.054). The overall success rate for external cephalic version was 49% with no significant differences between groups (remifentanil group 54.8% vs. control group 41.3%, P = 0.358). In the remifentanil group, there was one case of nausea and vomiting, one of drowsiness and three cases of fetal bradycardia. In the control group, there were three cases of nausea and vomiting, one of dizziness and nine cases of fetal bradycardia. Intravenous remifentanil with bolus doses on demand during external cephalic version achieved a reduction in pain and increased maternal satisfaction. There were no additional adverse effects, and no difference in the success rate of external cephalic version or the incidence of fetal bradycardia. Copyright © 2013 Elsevier Ltd. All rights reserved.
van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E
2015-11-11
The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal models. Overall results suggest that adaptation and anticipation mechanisms both play an important role during sensorimotor synchronization with tempo-changing sequences. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.
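As an illustration of the two mechanisms ADAM combines, the following sketch (a simplified toy model, not the authors' implementation; the correction gain alpha and the pacing sequence are arbitrary) pairs linear phase correction (adaptation) with linear tempo extrapolation (anticipation):

```python
import numpy as np

def simulate_sync(onsets, alpha=0.5, anticipate=True):
    """Simulate tap times for a pacing sequence with tempo changes.

    Adaptation: subtract a fraction alpha of the previous asynchrony.
    Anticipation: extrapolate the next inter-onset interval linearly
    from the last two pacing intervals instead of reusing the last one.
    """
    taps = [onsets[0]]                            # first tap assumed on the beat
    for n in range(1, len(onsets)):
        if anticipate and n >= 3:
            ioi1 = onsets[n - 1] - onsets[n - 2]
            ioi2 = onsets[n - 2] - onsets[n - 3]
            ioi_next = 2 * ioi1 - ioi2            # extrapolate the tempo change
        elif n >= 2:
            ioi_next = onsets[n - 1] - onsets[n - 2]  # reuse the last interval
        else:
            ioi_next = onsets[1] - onsets[0]
        asyn = taps[-1] - onsets[n - 1]           # error on the previous beat
        taps.append(taps[-1] + ioi_next - alpha * asyn)
    return np.array(taps)

# Pacing sequence that accelerates smoothly from 600 ms to 400 ms intervals.
iois = np.linspace(0.6, 0.4, 30)
onsets = np.concatenate([[0.0], np.cumsum(iois)])
print(np.round(simulate_sync(onsets)[:5], 3))
```

With `anticipate=False` the simulated taps systematically lag an accelerating sequence, which is the behavioral signature the anticipation module is meant to remove.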
Otten, Volker; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Nilsson, Kjell G; Olivecrona, Henrik
2017-01-01
As part of the 14-year follow-up of a prospectively randomized radiostereometry (RSA) study on uncemented cup fixation, two pairs of stereo radiographs and a CT scan of 46 hips were compared. Tantalum beads, inserted during the primary operation, were detected in the CT volume and the stereo radiographs and used to produce datasets of 3D coordinates. The limit of agreement between the combined CT and RSA datasets was calculated in the same way as the precision of the double RSA examination. The precision of RSA corresponding to the 99% confidence interval was 1.36°, 1.36°, and 0.60° for X-, Y-, and Z-rotation and 0.40, 0.17, and 0.37 mm for X-, Y-, and Z-translation. The limit of agreement between CT and RSA was 1.51°, 2.17°, and 1.05° for rotation and 0.59, 0.56, and 0.74 mm for translation. The differences between CT and RSA are close to the described normal 99% confidence interval for precision in RSA: 0.3° to 2° for rotation and 0.15 to 0.6 mm for translation. We conclude that measurements using CT and RSA are comparable and that CT can be used for migration studies for longitudinal evaluations of patients with RSA markers.
A hybrid double-observer sightability model for aerial surveys
Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine
2013-01-01
Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
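The core advantage of two independent observer pairs can be sketched as follows (a toy illustration, not model MH itself; the logistic coefficients are hypothetical):

```python
import numpy as np

def combined_detection(p_front, p_back):
    """Probability that at least one of two independent observer pairs
    detects a group: 1 - (1 - p_front) * (1 - p_back)."""
    return 1.0 - (1.0 - p_front) * (1.0 - p_back)

def logistic_p(beta0, beta_size, beta_cover, group_size, cover):
    """Toy logistic detection model with group size and vegetation cover
    as covariates (the two strong effects reported in the abstract)."""
    eta = beta0 + beta_size * np.log(group_size) - beta_cover * cover
    return 1.0 / (1.0 + np.exp(-eta))

p_f = logistic_p(-0.5, 0.8, 2.0, group_size=12, cover=0.3)
p_b = logistic_p(-0.8, 0.8, 2.0, group_size=12, cover=0.3)
print(round(combined_detection(p_f, p_b), 3))
# A raw count would then be corrected by dividing by this probability,
# which is how an 80-93% raw count can be inflated to a full estimate.
```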
Eckstein, Felix; Kunz, Manuela; Hudelmaier, Martin; Jackson, Rebecca; Yu, Joseph; Eaton, Charles B; Schneider, Erika
2007-02-01
Phased-array (PA) coils generally provide higher signal-to-noise ratios (SNRs) than quadrature knee coils. In this pilot study for the Osteoarthritis Initiative (OAI) we compared these two types of coils in terms of contrast-to-noise ratio (CNR), precision, and consistency of quantitative femorotibial cartilage measurements. Test-retest measurements were acquired using coronal fast low-angle shot with water excitation (FLASHwe) and coronal multiplanar reconstruction (MPR) of sagittal double-echo steady state with water excitation (DESSwe) at 3T. The precision errors for cartilage volume and thickness were
Measuring double-electron capture with liquid xenon experiments
NASA Astrophysics Data System (ADS)
Mei, D.-M.; Marshall, I.; Wei, W.-Z.; Zhang, C.
2014-01-01
We investigate the possibilities of observing the decay mode for 124Xe in which two electrons are captured, two neutrinos are emitted, and the final daughter nucleus is in its ground state, using dark matter experiments with liquid xenon. The first upper limit of the decay half-life is calculated to be 1.66 × 10^21 years at a 90% confidence level (C.L.) obtained with the published background data from the XENON100 experiment. Employing a known background model from the large underground xenon (LUX) experiment, we predict that the detection of double-electron capture of 124Xe to the ground state of 124Te with LUX will have approximately 115 events, assuming a half-life of 2.9 × 10^21 years. We conclude that measuring 124Xe 2ν double-electron capture to the ground state of 124Te can be performed more precisely with the proposed LUX-Zeplin (LZ) experiment.
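The expected event count follows directly from the exponential decay law, rate = ln2 · N / T½; a small sketch (the exposure mass and live time are hypothetical, the 124Xe isotopic abundance is the standard ~0.095%):

```python
import math

def expected_decays(mass_kg, live_time_yr, half_life_yr,
                    isotope_abundance=9.52e-4, molar_mass_g=131.29):
    """Expected number of double-electron-capture decays of 124Xe in a
    natural-xenon exposure: N_decays = ln2 * N_atoms * t / T_half."""
    atoms_xe = mass_kg * 1e3 / molar_mass_g * 6.02214076e23
    atoms_124 = atoms_xe * isotope_abundance
    return math.log(2) * atoms_124 * live_time_yr / half_life_yr

# Hypothetical 100 kg x 1 yr exposure at the half-life assumed above:
print(round(expected_decays(100, 1.0, 2.9e21)))   # O(100) events
```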
Earthquake hypocenter relocation using double difference method in East Java and surrounding areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
C, Aprilia Puspita; Meteorological, Climatological, and Geophysical Agency; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Determination of precise hypocenter locations is very important in order to provide information about subsurface fault planes and for seismic hazard analysis. In this study, we have relocated earthquake hypocenters in the eastern part of Java and surrounding areas from the local earthquake data catalog compiled by the Meteorological, Climatological, and Geophysical Agency of Indonesia (MCGA) for the time period 2009-2012 by using the double-difference method. The results show that after the relocation process there are significant changes in the position and orientation of earthquake hypocenters, which correlate with the geological setting in this region. We observed an indication of a double seismic zone at depths of 70-120 km within the subducting slab south of the eastern part of Java. Our results will provide useful information for advanced seismological studies and seismic hazard analysis in this region.
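A minimal sketch of the double-difference datum that such relocations minimize (the travel times below are invented; the real method inverts these residuals for relative hypocenter shifts):

```python
import numpy as np

def double_difference_residuals(t_obs, t_calc, pairs):
    """Double-difference residuals: for an event pair (i, j) observed at
    station k, the datum is (t_i - t_j)_obs - (t_i - t_j)_calc, which
    cancels most of the common path and velocity-model error shared by
    nearby events.

    t_obs, t_calc: arrays of shape (n_events, n_stations).
    pairs: list of (i, j) event-index pairs.
    """
    rows = [(t_obs[i] - t_obs[j]) - (t_calc[i] - t_calc[j])
            for i, j in pairs]
    return np.array(rows)   # shape (n_pairs, n_stations)

# Tiny example: 3 events, 2 stations, hypothetical travel times (s).
t_obs = np.array([[10.2, 14.1], [10.5, 14.6], [11.0, 15.1]])
t_calc = np.array([[10.0, 14.0], [10.4, 14.4], [10.9, 15.2]])
print(double_difference_residuals(t_obs, t_calc, [(0, 1), (1, 2)]))
```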
Probing phospholipase a(2) with fluorescent phospholipid substrates.
Wichmann, Oliver; Gelb, Michael H; Schultz, Carsten
2007-09-03
The Foerster resonance energy transfer-based sensor, PENN, measures intracellular phospholipase A(2) (PLA(2)) activity in living cells and small organisms. In an attempt to modify the probe for the detection of particular isoforms, we altered the sn-2 fatty acid in such a way that either one or three of the Z double bonds in arachidonic acid were present in the sensor molecule. Arachidonic-acid-mimicking fatty acids were prepared by copper-mediated coupling reactions. Probes with a single double bond in the 5-position exhibited favorable substrate properties for secretory PLA(2)s. In vitro experiments with the novel unsaturated doubly labeled phosphatidylethanolamine derivatives showed preferred cleavage of the sensor PENN2 (one double bond) by the physiologically important group V sPLA(2), while the O-methyl-derivative PMNN2 was accepted best by the isoform from hog pancreas. For experiments in living cells, we demonstrated that bioactivation via S-acetylthioethyl (SATE) groups is essential for probe performance. Surprisingly, membrane-permeant versions of the new sensors that contained double bonds, PENN2 and PENN3, were only cleaved to a minor extent in HeLa cells while the saturated form, PENN, was well accepted.
Clinical evaluation of the FreeStyle Precision Pro system.
Brazg, Ronald; Hughes, Kristen; Martin, Pamela; Coard, Julie; Toffaletti, John; McDonnell, Elizabeth; Taylor, Elizabeth; Farrell, Lausanne; Patel, Mona; Ward, Jeanne; Chen, Ting; Alva, Shridhara; Ng, Ronald
2013-06-05
A new version of international standard (ISO 15197) and CLSI Guideline (POCT12) with more stringent accuracy criteria are near publication. We evaluated the glucose test performance of the FreeStyle Precision Pro system, a new blood glucose monitoring system (BGMS) designed to enhance accuracy for point-of-care testing (POCT). Precision, interference and system accuracy with 503 blood samples from capillary, venous and arterial sources were evaluated in a multicenter study. Study results were analyzed and presented in accordance with the specifications and recommendations of the final draft ISO 15197 and the new POCT12. The FreeStyle Precision Pro system demonstrated acceptable precision (CV <5%), no interference across a hematocrit range of 15-65%, and, except for xylose, no interference from 24 of 25 potentially interfering substances. It also met all accuracy criteria specified in the final draft ISO 15197 and POCT12, with 97.3-98.9% of the individual results of various blood sample types agreeing within ±12 mg/dl of the laboratory analyzer values at glucose concentrations <100mg/dl and within ±12.5% of the laboratory analyzer values at glucose concentrations ≥100 mg/dl. The FreeStyle Precision Pro system met the tighter accuracy requirements, providing a means for enhancing accuracy for point-of-care blood glucose monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.
Revised and extended UTILITIES for the RATIP package
NASA Astrophysics Data System (ADS)
Nikkinen, J.; Fritzsche, S.; Heinäsmäki, S.
2006-09-01
In recent years, the RATIP package has proved useful for calculating the excitation and decay properties of free atoms. Based on the (relativistic) multiconfiguration Dirac-Fock method, this program is used to obtain accurate predictions of atomic properties and to analyze many recent experiments. Daily work with this package made an extension of its UTILITIES [S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163] desirable in order to facilitate the data handling and interpretation of complex spectra. For this purpose, we make available an enlarged version of the UTILITIES which mainly supports the comparison with experiment as well as large Auger computations. Altogether 13 additional tasks have been appended to the program together with a new menu structure to improve the interactive control of the program.
Program summary
Title of program: RATIP
Catalogue identifier: ADPD_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPD_v2_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Reference in CPC to previous version: S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163
Catalogue identifier of previous version: ADPD
Authors of previous version: S. Fritzsche, Department of Physics, University of Kassel, Heinrich-Plett-Strasse 40, D-34132 Kassel, Germany
Does the new version supersede the original program?: yes
Computer for which the new version is designed and others on which it has been tested: IBM RS 6000, PC Pentium II-IV
Installations: University of Kassel (Germany), University of Oulu (Finland)
Operating systems: IBM AIX, Linux, Unix
Program language used in the new version: ANSI standard Fortran 90/95
Memory required to execute with typical data: 300 kB
No. of bits in a word: All real variables are parameterized by a selected kind parameter and, thus, can be adapted to any required precision if supported by the compiler. Currently, the kind parameter is set to double precision (two 32-bit words) as used also for other components of the RATIP package [S. Fritzsche, C.F. Fischer, C.Z. Dong, Comput. Phys. Comm. 124 (2000) 341; G. Gaigalas, S. Fritzsche, Comput. Phys. Comm. 134 (2001) 86; S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163; S. Fritzsche, J. Elec. Spec. Rel. Phen. 114-116 (2001) 1155]
No. of lines in distributed program, including test data, etc.: 231 813
No. of bytes in distributed program, including test data, etc.: 3 977 387
Distribution format: tar.gzip file
Nature of the physical problem: In order to describe atomic excitation and decay properties also quantitatively, large-scale computations are often needed. In the framework of the RATIP package, the UTILITIES support a variety of (small) tasks. For example, these tasks facilitate the file and data handling in large-scale applications or in the interpretation of complex spectra.
Method of solution: The revised UTILITIES now support a total of 29 subtasks which are mainly concerned with the manipulation of output data as obtained from other components of the RATIP package. Each of these tasks is realized by one or several subprocedures which have access to the corresponding modules of the main components. While the main menu defines seven groups of subtasks for data manipulations and computations, a particular task is selected from one of these group menus. This allows the program to be enlarged later if support for further tasks becomes necessary.
For each selected task, an interactive dialog about the required input and output data, together with some additional information, is printed during the execution of the program.
Reasons for the new version: The requirement for enlarging the previous version of the UTILITIES [S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163] arose from the recent application of the RATIP package for large-scale radiative and Auger computations. A number of new subtasks now refer to the handling of Auger amplitudes and their proper combination in order to facilitate the interpretation of complex spectra. A few further tasks, such as the direct access to the one-electron matrix elements for some given set of orbital functions, have also been found useful in the analysis of data.
Summary of revisions: With the revised version, we add another 13 tasks for the extraction and handling of atomic data within the framework of RATIP; these refer to the manipulation of data files, the generation and interpretation of Auger spectra, the computation of various one- and two-electron matrix elements, as well as the evaluation of momentum densities and grid parameters. Owing to the rather large number of subtasks, the main menu has been divided into seven groups from which the individual tasks can be selected very similarly as before.
Typical running time: The program responds promptly for most of the tasks. The response time for some tasks, such as the generation of a relativistic momentum density, strongly depends on the size of the corresponding data files and the number of grid points.
Unusual features of the program: A total of 29 different tasks are supported by the program. Starting from the main menu, the user is guided interactively through the program by a dialog and a few additional explanations. For each task, a short summary of its function is displayed before the program prompts for all the required input data.
Provenance tracking for scientific software toolchains through on-demand release and archiving
NASA Astrophysics Data System (ADS)
Ham, David
2017-04-01
There is an emerging consensus that published computational science results must be backed by a provenance chain tying results to the exact versions of input data and the code which generated them. There is also now an impressive range of web services devoted to revision control of software, and the archiving in citeable form of both software and input data. However, much scientific software itself builds on libraries and toolkits, and these themselves have dependencies. Further, it is common for cutting edge research to depend on the latest version of software in online repositories, rather than the official release version. This creates a situation in which an author who wishes to follow best practice in recording the provenance chain of their results must archive and cite unreleased versions of a series of dependencies. Here, we present an alternative which toolkit authors can easily implement to provide a semi-automatic mechanism for creating and archiving custom software releases of the precise version of a package used in a particular simulation. This approach leverages the excellent services provided by GitHub and Zenodo to generate a connected set of citeable DOIs for the archived software. We present the integration of this workflow into the Firedrake automated finite element framework as a practical example of this approach in use on a complex geoscientific tool chain in practical use.
NASA Astrophysics Data System (ADS)
Carr, Rachel; Double Chooz Collaboration
2015-04-01
In 2011, Double Chooz reported the first evidence for θ13-driven reactor antineutrino oscillation, derived from observations of inverse beta decay (IBD) events in a single detector located ~1 km from two nuclear reactors. Since then, the collaboration has honed the precision of its sin²2θ13 measurement by reducing backgrounds, improving detection efficiency and systematics, and including additional statistics from IBD events with neutron captures on hydrogen. By 2014, the overwhelmingly dominant contribution to sin²2θ13 uncertainty was reactor flux uncertainty, which is irreducible in a single-detector experiment. Now, as Double Chooz collects the first data with a near detector, we can begin to suppress that uncertainty and approach the experiment's full potential. In this talk, we show quality checks on initial data from the near detector. We also present our two-detector sensitivity to both sin²2θ13 and sterile neutrino mixing, which are enhanced by analysis strategies developed in our single-detector phase. In particular, we discuss prospects for the first two-detector results from Double Chooz, expected in 2015.
Effect of cation ordering on oxygen vacancy diffusion pathways in double perovskites
Uberuaga, Blas Pedro; Pilania, Ghanshyam
2015-07-08
Perovskite structured oxides (ABO3) are attractive for a number of technological applications, including as superionics because of the high oxygen conductivities they exhibit. Double perovskites (AA'BB'O6) provide even more flexibility for tailoring properties. Using accelerated molecular dynamics, we examine the role of cation ordering on oxygen vacancy mobility in one model double perovskite SrLaTiAlO6. We find that the mobility of the vacancy is very sensitive to the cation ordering, with a migration energy that varies from 0.6 to 2.7 eV. In the extreme cases, the mobility is both higher and lower than either of the two end-member single perovskites. Further, the nature of oxygen vacancy diffusion, whether one-dimensional, two-dimensional, or three-dimensional, also varies with cation ordering. We correlate the dependence of oxygen mobility on cation structure to the distribution of Ti4+ cations, which provide unfavorable environments for the positively charged oxygen vacancy. The results demonstrate the potential of using tailored double perovskite structures to precisely control the behavior of oxygen vacancies in these materials.
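To see why a 0.6-2.7 eV spread in migration energy matters, consider an Arrhenius estimate of the hop rate (the attempt frequency below is a typical assumed value, not from the paper):

```python
import math

def hop_rate(e_mig_ev, temp_k, attempt_hz=1e13):
    """Arrhenius rate for a thermally activated vacancy hop:
    rate = nu * exp(-E_mig / (kB * T)); attempt_hz is an assumed
    phonon attempt frequency of order 10^13 Hz."""
    k_b = 8.617333e-5          # Boltzmann constant, eV/K
    return attempt_hz * math.exp(-e_mig_ev / (k_b * temp_k))

# The reported migration-energy spread spans ~10 orders of magnitude in rate:
for e in (0.6, 2.7):
    print(f"E_mig = {e} eV -> {hop_rate(e, 1000):.3e} hops/s at 1000 K")
```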
Diffraction-based overlay metrology for double patterning technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Korlahalli, Rahul; Li, Jie; Smith, Nigel; Kritsun, Oleg; Volkman, Cathy
2009-03-01
The extension of optical lithography to 32nm and beyond is made possible by Double Patterning Techniques (DPT) at critical levels of the process flow. The ease of DPT implementation is hindered by the increased significance of critical dimension uniformity and overlay errors. Diffraction-based overlay (DBO) has been shown to be an effective metrology solution for accurate determination of the overlay errors associated with double patterning [1, 2] processes. In this paper we will report its use in litho-freeze-litho-etch (LFLE) and spacer double patterning technology (SDPT), which are pitch splitting solutions that reduce the significance of overlay errors. Since the control of overlay between various mask/level combinations is critical for fabrication, precise and accurate assessment of errors by advanced metrology techniques such as spectroscopic diffraction-based overlay (DBO) and traditional image-based overlay (IBO) using advanced target designs will be reported. A comparison between DBO, IBO and CD-SEM measurements will be reported. A discussion of TMU requirements for 32nm technology and TMU performance data of LFLE and SDPT targets by different overlay approaches will be presented.
Precision genome editing using CRISPR-Cas9 and linear repair templates in C. elegans.
Paix, Alexandre; Folkmann, Andrew; Seydoux, Geraldine
2017-05-15
The ability to introduce targeted edits in the genome of model organisms is revolutionizing the field of genetics. State-of-the-art methods for precision genome editing use RNA-guided endonucleases to create double-strand breaks (DSBs) and DNA templates containing the edits to repair the DSBs. Following this strategy, we have developed a protocol to create precise edits in the C. elegans genome. The protocol takes advantage of two innovations to improve editing efficiency: direct injection of CRISPR-Cas9 ribonucleoprotein complexes and use of linear DNAs with short homology arms as repair templates. The protocol requires no cloning or selection, and can be used to generate base and gene-size edits in just 4 days. Point mutations, insertions, deletions and gene replacements can all be created using the same experimental pipeline. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
From cutting nature and its joints to measuring it: new kinds and new kinds of people in biology.
McOuat, G
2001-12-01
In the received version of the development of science, natural kinds are established in the preliminary stages (natural history) and made more precise by measurement (exact science). By examining the move from nineteenth- to twentieth-century biology, this paper unpacks the notion of species as 'natural kinds' and grounds for discourse, questioning received notions about both kinds and species. Life sciences in the nineteenth century established several 'monster-barring' techniques to block disputes about the precise definition of species. Counterintuitively, precision and definition brought dispute and disrupted exchange. Thus, any attempt to add precision was doomed to failure. By intervening and measuring, the new experimental biology dislocated the established links between natural kinds and kinds of people and institutions. New kinds were built in new places. They were made to measure from the very start. This paper ends by claiming that there was no long-standing 'species problem' in the history of biology. That problem is a later construction of the 'modern synthesis', well after the disruption of 'kinds' and kinds of people. Only then would definitions and precision matter. A new, non-linguistic, take on the incommensurability thesis is hinted at. Copyright 2001 Elsevier Science Ltd. All rights reserved.
Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage
Garaizar, Pablo; Vadillo, Miguel A.
2014-01-01
In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382
Physical Projections in BRST Treatments of Reparametrization Invariant Theories
NASA Astrophysics Data System (ADS)
Marnelius, Robert; Sandström, Niclas
Any regular quantum mechanical system may be cast into an Abelian gauge theory by simply reformulating it as a reparametrization invariant theory. We present a detailed study of the BRST quantization of such reparametrization invariant theories within a precise operator version of BRST which is related to the conventional BFV path integral formulation. Our treatments lead us to propose general rules for how physical wave functions and physical propagators are to be projected from the BRST singlets and propagators in the ghost extended BRST theory. These projections are performed by boundary conditions which are specified by the ingredients of BRST charge and precisely determined by the operator BRST. We demonstrate explicitly the validity of these rules for the considered class of models.
ERIC Educational Resources Information Center
Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko
2014-01-01
Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3)…
Masses of 130Te and 130Xe and Double-β-Decay Q Value of 130Te
NASA Astrophysics Data System (ADS)
Redshaw, Matthew; Mount, Brianna J.; Myers, Edmund G.; Avignone, Frank T., III
2009-05-01
The atomic masses of 130Te and 130Xe have been obtained by measuring cyclotron frequency ratios of pairs of triply charged ions simultaneously trapped in a Penning trap. The results, with 1 standard deviation uncertainty, are M(130Te) = 129.906222744(16) u and M(130Xe) = 129.903509351(15) u. From the mass difference the double-β-decay Q value of 130Te is determined to be Qββ(130Te) = 2527.518(13) keV. This is a factor of 150 more precise than the result of the AME2003 [G. Audi et al., Nucl. Phys. A729, 337 (2003), doi:10.1016/j.nuclphysa.2003.11.003].
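The quoted Q value can be checked directly from the two measured masses, since for double-β decay between neutral atoms Qββ = [M(130Te) − M(130Xe)]c²:

```python
# Reproducing the quoted Q value from the two measured atomic masses.
U_TO_KEV = 931494.10242          # 1 u in keV/c^2 (CODATA)

m_te130 = 129.906222744          # u, from the abstract
m_xe130 = 129.903509351          # u, from the abstract
q_bb = (m_te130 - m_xe130) * U_TO_KEV
print(f"Q_bb = {q_bb:.3f} keV")  # ~2527.5 keV, matching the quoted value
```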
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grange, Joseph M.
2013-01-01
This dissertation presents the first measurement of the muon antineutrino charged current quasi-elastic double-differential cross section. These data significantly extend the knowledge of neutrino and antineutrino interactions in the GeV range, a region that has recently come under scrutiny due to a number of conflicting experimental results. To maximize the precision of this measurement, three novel techniques were employed to measure the neutrino background component of the data set. Representing the first measurements of the neutrino contribution to an accelerator-based antineutrino beam in the absence of a magnetic field, the successful execution of these techniques carries implications for current and future neutrino experiments.
Rearrangement of valence neutrons in the neutrinoless double-β decay of 136Xe
Szwec, S. V.; Kay, B. P.; Cocolios, T. E.; ...
2016-11-15
Here, a quantitative description of the change in ground-state neutron occupancies between 136Xe and 136Ba, the initial and final state in the neutrinoless double-β decay of 136Xe, has been extracted from precision measurements of the cross sections of single-neutron-adding and -removing reactions. Comparisons are made to recent theoretical calculations of the same properties using various nuclear-structure models. These are the same calculations used to determine the magnitude of the nuclear matrix elements for the process, which at present disagree with each other by factors of 2 or 3. The experimental neutron occupancies show some disagreement with the theoretical calculations.
Bhattacharyya, Dhananjay; Halder, Sukanya; Basu, Sankar; Mukherjee, Debasish; Kumar, Prasun; Bansal, Manju
2017-02-01
Comprehensive analyses of structural features of non-canonical base pairs within a nucleic acid double helix are limited by the availability of a small number of three-dimensional structures. Therefore, a procedure for model building of double helices containing any given nucleotide sequence and base pairing information, either canonical or non-canonical, is seriously needed. Here we describe a program, RNAHelix, which is an updated version of our widely used software, NUCGEN. The program can regenerate duplexes using the dinucleotide step and base pair orientation parameters for a given double helical DNA or RNA sequence with defined Watson-Crick or non-Watson-Crick base pairs. The original structure and the corresponding regenerated structure of double helices were found to be very close, as indicated by the small RMSD values between positions of the corresponding atoms. Structures of several usual and unusual double helices have been regenerated and compared with their original structures in terms of base pair RMSD, torsion angles and electrostatic potentials, and very high agreement has been noted. RNAHelix can also be used to generate a structure with a sequence completely different from an experimentally determined one or to introduce single to multiple mutations, but with the same set of parameters, and hence can also be an important tool in homology modeling and the study of mutation-induced structural changes.
fd3: Spectral disentangling of double-lined spectroscopic binary stars
NASA Astrophysics Data System (ADS)
Ilijić, Saša
2017-05-01
The spectral disentangling technique can be applied on a time series of observed spectra of a spectroscopic double-lined binary star (SB2) to determine the parameters of orbit and reconstruct the spectra of component stars, without the use of template spectra. fd3 disentangles the spectra of SB2 stars, capable also of resolving the possible third companion. It performs the separation of spectra in the Fourier space which is faster, but in several respects less versatile than the wavelength-space separation. (Wavelength-space separation is implemented in the twin code CRES.) fd3 is written in C and is designed as a command-line utility for a Unix-like operating system. fd3 is a new version of FDBinary (ascl:1705.011), which is now deprecated.
Double-black-hole solutions of the Einstein-Maxwell-dilaton theory in five dimensions
NASA Astrophysics Data System (ADS)
Stelea, Cristian
2018-01-01
We describe a solution-generating technique that maps a static charged solution of the Einstein-Maxwell theory in four (or five) dimensions to a five-dimensional solution of the Einstein-Maxwell-dilaton theory. As examples of this technique, we first show how to construct the dilatonic version of the Reissner-Nordström solution in five dimensions and then consider the more general case of the double black hole solutions and describe some of their properties. We find that in the general case the value of the conical singularities in between the black holes is affected by the dilaton's coupling constant to the gauge field, and only in the particular case when all charges are proportional to the masses does this dependence cancel out.
Packaging double-helical DNA into viral capsids.
LaMarque, Jaclyn C; Le, Thuc-Vy L; Harvey, Stephen C
2004-02-15
DNA packaging in bacteriophage P4 has been examined using a molecular mechanics model with a reduced representation containing one pseudoatom per turn of the double helix. The model is a discretized version of an elastic continuum model. The DNA is inserted piecewise into the model capsid, with the structure being reoptimized after each piece is inserted. Various optimization protocols were investigated, and it was found that molecular dynamics at a very low temperature (0.3 K) produces the optimal packaged structure. This structure is a concentric spool, rather than the coaxial spool that has been commonly accepted for so many years. This geometry, which was originally suggested by Hall and Schellman in 1982 (Biopolymers Vol. 21, pp. 2011-2031), produces a lower overall elastic energy than coaxial spooling. Copyright 2003 Wiley Periodicals, Inc.
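A minimal sketch of the bending-energy term in such a reduced elastic model (the force constant is arbitrary; real packaging models add stretching, excluded volume and capsid confinement):

```python
import numpy as np

def bending_energy(coords, k_bend=1.0):
    """Elastic bending energy of a discretized worm-like chain:
    E = (k/2) * sum(theta_i^2), where theta_i is the angle between
    successive bond vectors (one bead per helical turn, as in the
    reduced representation described above; k_bend is arbitrary here)."""
    bonds = np.diff(coords, axis=0)
    unit = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
    cos_t = np.clip(np.einsum('ij,ij->i', unit[:-1], unit[1:]), -1.0, 1.0)
    return 0.5 * k_bend * np.sum(np.arccos(cos_t) ** 2)

# A straight chain costs nothing; a curved arc pays for every bend angle.
straight = np.column_stack([np.arange(10.0), np.zeros(10), np.zeros(10)])
angles = np.linspace(0, np.pi, 10)
arc = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(10)])
print(bending_energy(straight), round(bending_energy(arc), 3))
```

Minimizing such an energy under confinement is what distinguishes the concentric-spool from the coaxial-spool geometry discussed above.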
Cadmium-Aluminum Layered Double Hydroxide Microspheres for Photocatalytic CO2 Reduction.
Saliba, Daniel; Ezzeddine, Alaa; Sougrat, Rachid; Khashab, Niveen M; Hmadeh, Mohamad; Al-Ghoul, Mazen
2016-04-21
We report the synthesis of cadmium-aluminum layered double hydroxide (CdAl LDH) using the reaction-diffusion framework. As the hydroxide anions diffuse into an agar gel matrix containing the mixture of aluminum and cadmium salts at a given ratio, they react to give the LDH. The LDH self-assembles inside the pores of the gel matrix into a unique spherical-porous shaped microstructure. The internal and external morphologies of the particles are studied by electron microscopy and tomography revealing interconnected channels and a high surface area. This material is shown to exhibit a promising performance in the photoreduction of carbon dioxide using solar light. Moreover, the palladium-decorated version shows a significant improvement in its reduction potential at room temperature. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ATM-Weather Integration Plan, Version 1.0
2009-09-17
…necessarily involving the flight of aircraft (e.g. aerial gunnery, artillery, rockets, missiles, lasers, demolitions, etc.). The precise time of… tool teams to ensure that the concept is consistent with team doctrine and a collaborative and coherent NAS. In the text of this plan, weather… SAS: wind shear detection (e.g. LLWAS), ASR-WSP, TDWR, LIDAR, ASR-8/9/11, NEXRAD, F-420, DASI, ASOS, AWOS, AWSS, SAWS, NextGen Surface Observing…
Reliability of the Brazilian version of the Physical Activity Checklist Interview in children.
Adami, Fernando; Cruciani, Fernanda; Douek, Michelle; Sewell, Carolina Dumit; Mariath, Aline Brandão; Hinnig, Patrícia de Fragas; Freaza, Silvia Rafaela Mascarenhas; Bergamaschi, Denise Pimentel
2011-04-01
To assess the reliability of the Lista de Atividades Físicas (Brazilian version of the Physical Activity Checklist Interview) in children. The study is part of a cross-cultural adaptation of the Physical Activity Checklist Interview, conducted with 83 school children aged between seven and ten years, enrolled between the 2nd and 5th grades of primary education in the city of São Paulo, Southeastern Brazil, in 2008. The questionnaire was answered by children through individual interviews. It comprises a list of 21 moderate to vigorous physical activities performed on the previous day, is divided into periods (before, during and after school), and has a section for interview assessment. This questionnaire enables the quantification of time spent in physical and sedentary activities and the total and weighted metabolic costs. Reliability was assessed by comparing two interviews conducted with a mean interval of three hours. For the interview assessment, data from the first interview and those from an external evaluator were compared. Bland-Altman's proposal, the intraclass correlation coefficient and Lin's concordance correlation coefficient were used to assess reliability. The intraclass correlation coefficient lower limits for the outcomes analyzed varied from 0.84 to 0.96. Precision and agreement varied between 0.83 and 0.97 and between 0.99 and 1, respectively. The line estimated from the pairs of values obtained in both interviews indicates high data precision. The interview item showing the poorest result was the ability to estimate time (fair in 27.7% of interviews). Interview assessment items showed intraclass correlation coefficients between 0.60 and 0.70, except for level of cooperation (0.46). The Brazilian version of the Physical Activity Checklist Interview shows high reliability to assess physical and sedentary activity on the previous day in children.
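Two of the agreement statistics used above are easy to state in code; a sketch with invented interview data (not the study's dataset):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def bland_altman_limits(x, y):
    """Bland-Altman 95% limits of agreement between two interviews."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# Hypothetical minutes of activity reported in interview 1 vs interview 2.
i1 = [30, 45, 60, 20, 50, 40, 35]
i2 = [28, 47, 58, 22, 49, 43, 33]
print(round(lin_ccc(i1, i2), 3), bland_altman_limits(i1, i2))
```

Unlike the plain Pearson correlation, Lin's coefficient also penalizes a systematic offset between the two interviews, which is why it is paired with Bland-Altman limits here.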
Coster, Wendy J.; Haley, Stephen M.; Ni, Pengsheng; Dumas, Helene M.; Fragala-Pinkham, Maria A.
2009-01-01
Objective To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the Self-Care and Social Function scales of the Pediatric Evaluation of Disability Inventory (PEDI) compared to the full-length version of these scales. Design Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Settings Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children’s homes. Participants Four hundred sixty-nine children with disabilities and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Interventions Not applicable. Main Outcome Measures Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length Self-Care and Social Function scales; time (in seconds) to complete assessments and respondent ratings of burden. Results Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (all r’s between .94 and .99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared to over 16 minutes to complete the full-length scales. Conclusions Self-care and Social Function score estimates from CAT administration are highly comparable to those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time. PMID:18373991
Dual-CGH interferometry test for x-ray mirror mandrels
NASA Astrophysics Data System (ADS)
Gao, Guangjun; Lehan, John P.; Griesmann, Ulf
2009-06-01
We describe a glancing-incidence interferometric double-pass test, based on a pair of computer-generated holograms (CGHs), for mandrels used to fabricate x-ray mirrors for space-based x-ray telescopes. The design of the test and its realization are described. The application illustrates the advantage of dual-CGH tests for the complete metrology of precise optical surfaces.
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs) is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests in a range of 4 to 35 s was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
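The elementary sampling step of such a photon-migration MC, and the single- vs double-precision contrast mentioned above, can be sketched as follows (a toy illustration, not the authors' code; mu_s is an arbitrary scattering coefficient):

```python
import numpy as np

def free_paths(mu_s, n, rng):
    """Sample photon free-path lengths between scattering events from
    Beer-Lambert attenuation: s = -ln(xi)/mu_s (mu_s in 1/mm), the core
    sampling step of a photon-migration Monte Carlo code."""
    return -np.log(rng.random(n)) / mu_s

rng = np.random.default_rng(1)
steps = free_paths(10.0, 1_000_000, rng)
print(steps.mean())   # approaches the mean free path 1/mu_s = 0.1 mm
# Casting to float32 shows the small accuracy cost of the faster precision:
print(steps.astype(np.float32).mean() - steps.mean())
```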
Analytics-Driven Lossless Data Compression for Rapid In-situ Indexing, Storing, and Querying
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, John; Arkatkar, Isha; Lakshminarasimhan, Sriram
2013-01-01
The analysis of scientific simulations is highly data-intensive and is becoming an increasingly important challenge. Peta-scale data sets require the use of light-weight query-driven analysis methods, as opposed to heavy-weight schemes that optimize for speed at the expense of size. This paper is an attempt in the direction of query processing over losslessly compressed scientific data. We propose a co-designed double-precision compression and indexing methodology for range queries by performing unique-value-based binning on the most significant bytes of double precision data (sign, exponent, and most significant mantissa bits), and inverting the resulting metadata to produce an inverted index over a reduced data representation. Without the inverted index, our method matches or improves compression ratios over both general-purpose and floating-point compression utilities. The inverted index is light-weight, and the overall storage requirement for both reduced column and index is less than 135%, whereas existing DBMS technologies can require 200-400%. As a proof-of-concept, we evaluate univariate range queries that additionally return column values, a critical component of data analytics, against state-of-the-art bitmap indexing technology, showing multi-fold query performance improvements.
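A minimal sketch of the unique-value binning and inverted-index idea (the bit counts and the query are illustrative, not the paper's exact layout):

```python
import struct
from collections import defaultdict

def msb_bin(value, mantissa_bits=3):
    """Bin key from the most significant bits of an IEEE-754 double:
    sign (1 bit), exponent (11 bits) and the top mantissa bits. Key
    order is monotonic for non-negative doubles; a production scheme
    would handle the sign separately."""
    bits = struct.unpack('>Q', struct.pack('>d', value))[0]
    return bits >> (52 - mantissa_bits)   # keep sign+exponent+top mantissa

def build_inverted_index(column, mantissa_bits=3):
    """Map each bin key to the row ids whose values fall in that bin."""
    index = defaultdict(list)
    for row_id, v in enumerate(column):
        index[msb_bin(v, mantissa_bits)].append(row_id)
    return index

data = [0.13, 0.14, 2.5, 2.6, -7.1, 1013.25]
idx = build_inverted_index(data)
# Range query sketch: scan only bins that can overlap [2.0, 3.0),
# then verify the candidate rows against the actual values.
lo, hi = msb_bin(2.0), msb_bin(3.0)
hits = [r for k, rows in idx.items() if lo <= k <= hi
        for r in rows if 2.0 <= data[r] < 3.0]
print(hits)   # -> [2, 3]
```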
Durell, Todd M; Adler, Lenard A; Williams, Dave W; Deldar, Ahmed; McGough, James J; Glaser, Paul E; Rubin, Richard L; Pigott, Teresa A; Sarkis, Elias H; Fox, Bethany K
2013-02-01
Attention-deficit/hyperactivity disorder (ADHD) is associated with significant impairment in multiple functional domains. This trial evaluated efficacy in ADHD symptoms and functional outcomes in young adults treated with atomoxetine. Young adults (18-30 years old) with ADHD were randomized to 12 weeks of double-blind treatment with atomoxetine (n = 220) or placebo (n = 225). The primary efficacy measure of ADHD symptom change was Conners' Adult ADHD Rating Scale (CAARS): Investigator-Rated: Screening Version Total ADHD Symptoms score with adult prompts. Secondary outcomes scales included the Adult ADHD Quality of Life-29, Clinical Global Impression-ADHD-Severity, Patient Global Impression-Improvement, CAARS Self-Report, Behavior Rating Inventory of Executive Function-Adult Version Self-Report, and assessments of depression, anxiety, sleepiness, driving behaviors, social adaptation, and substance use. Atomoxetine was superior to placebo on CAARS: Investigator-Rated: Screening Version (atomoxetine [least-squares mean ± SE, -13.6 ± 0.8] vs placebo [-9.3 ± 0.8], 95% confidence interval [-6.35 to -2.37], P < 0.001), Clinical Global Impression-ADHD-Severity (atomoxetine [-1.1 ± 0.1] vs placebo [-0.7 ± 0.1], 95% confidence interval [-0.63 to -0.24], P < 0.001), and CAARS Self-Report (atomoxetine [-11.9 ± 0.8] vs placebo [-7.8 ± 0.7], 95% confidence interval [-5.94 to -2.15], P < 0.001) but not on Patient Global Impression-Improvement. In addition, atomoxetine was superior to placebo on Adult ADHD Quality of Life-29 and Behavior Rating Inventory of Executive Function-Adult Version Self-Report. Additional assessments failed to detect significant differences (P ≥ 0.05) between atomoxetine and placebo. The adverse event profile was similar to that observed in other atomoxetine studies. Nausea, decreased appetite, insomnia, dry mouth, irritability, dizziness, and dyspepsia were reported significantly more often with atomoxetine than with placebo. Atomoxetine reduced ADHD symptoms and improved quality of life and executive functioning deficits in young adults compared with placebo. Atomoxetine was also generally well tolerated.
NASA Astrophysics Data System (ADS)
Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.
2013-06-01
We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) finite temperature formalism for the HFB method, (v) linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) framework for generalized energy density with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas.
Program summary
Program title: HFBTHO v2.00d
Catalog identifier: ADUI_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 167228
No. of bytes in distributed program, including test data, etc.: 2672156
Distribution format: tar.gz
Programming language: FORTRAN-95
Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6
Operating system: UNIX, LINUX, WindowsXP
RAM: 200 Mwords
Word size: 8 bits
Classification: 17.22
Does the new version supersede the previous version?: Yes
Catalog identifier of previous version: ADUI_v1_0
Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43
Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines.
Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in: M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63.
Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities.
Summary of revisions: The modified Broyden method has been implemented; optional breaking of reflection symmetry has been implemented; the calculation of all axial multipole moments up to λ=8 has been implemented; the finite temperature formalism for the HFB method has been implemented; the linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations has been implemented; the blocking of quasi-particles in the Equal Filling Approximation (EFA) has been implemented; the framework for generalized energy density functionals with arbitrary density-dependence has been implemented; shared memory parallelism via OpenMP pragmas has been implemented. Restrictions: Axial- and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependences, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, size of the basis, requested accuracy, requested configuration, compiler and libraries, and hardware architecture. An order of magnitude would be a few seconds for ground-state configurations in small bases N≈8-12, to a few minutes in very deformed configurations of a heavy nucleus with a large basis N>20.
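The modified Broyden method listed among the revisions is a quasi-Newton scheme for accelerating self-consistent iterations. As a rough illustration of the idea, here is a minimal Python sketch of Broyden's second method applied to a toy fixed-point problem; this illustrates only the technique, not the HFBTHO Fortran implementation, and the `broyden_fixed_point` helper and its parameters are assumptions of this sketch.

```python
import numpy as np

def broyden_fixed_point(g, x0, alpha=0.5, tol=1e-10, max_iter=100):
    """Solve x = g(x) with Broyden's second method on F(x) = g(x) - x.

    With the initial inverse-Jacobian estimate B = -alpha*I, the first step
    reproduces plain linear mixing x <- x + alpha*(g(x) - x); later rank-1
    updates refine B and accelerate convergence.
    """
    x = np.asarray(x0, dtype=float)
    f = g(x) - x
    B = -alpha * np.eye(x.size)              # start from linear mixing
    for it in range(1, max_iter + 1):
        dx = -B @ f                          # quasi-Newton step
        x_new = x + dx
        f_new = g(x_new) - x_new
        if np.linalg.norm(f_new) < tol:
            return x_new, it
        df = f_new - f
        B += np.outer(dx - B @ df, df) / (df @ df)   # Broyden rank-1 update
        x, f = x_new, f_new
    return x, max_iter

# toy self-consistency problem: x = cos(x), component-wise
x, n_it = broyden_fixed_point(np.cos, np.zeros(3))
print(x, n_it)   # ~0.739085 in far fewer iterations than plain mixing
```

In an HFB solver the role of x is played by the densities or mean-field potentials, and the Broyden update serves the same purpose: far fewer self-consistency iterations than plain linear mixing.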
[Application of water jet ERBEJET 2 in salivary glands surgery].
Gasiński, Mateusz; Modrzejewski, Maciej; Cenda, Paweł; Nazim-Zygadło, Elzbieta; Kozok, Andrzej; Dobosz, Paweł
2009-09-01
The anatomical location of the salivary glands demands high precision from the surgeon when operating at this site. The water jet is one of the modern tools that allow a minimally invasive operating procedure. It helps to separate pathological structures from healthy tissue with a stream of high-pressure saline pumped to the operating area via specially designed applicators. The stream of fluid is generated by a double-piston pump at an adjustable pressure of 1 to 80 bar. This allows precise removal of tumors, sparing of nerves and vessels in the glandular tissue, and minimal use of electrocoagulation. The water jet is thus a modern tool that can help to improve patient safety and the comfort of the surgeon's work.
Palmer, T. N.
2014-01-01
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038
Palmer, T N
2014-06-28
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures
NASA Astrophysics Data System (ADS)
Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.
2017-12-01
Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
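Because equally valid chaotic runs diverge trajectory-wise, the comparison above is statistical. A minimal sketch of the Hellinger distance between two normalized histograms of a model variable, assuming the two runs' statistics are binned over shared edges (the `hellinger` helper, samples and bins below are illustrative, not the paper's data):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions.

    H(p, q) = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2, from 0 (identical)
    to 1 (disjoint support).
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# histograms of a model variable from reference and reduced-precision runs
bins = np.linspace(-3, 3, 41)
ref, _ = np.histogram(np.random.default_rng(0).normal(size=10_000), bins=bins)
red, _ = np.histogram(np.random.default_rng(1).normal(size=10_000), bins=bins)
print(hellinger(ref, red))   # small value -> statistically similar behaviour
```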
The color bar phase meter: A simple and economical method for calibrating crystal oscillators
NASA Technical Reports Server (NTRS)
Davis, D. D.
1973-01-01
Comparison of crystal oscillators to the rubidium stabilized color burst is made easy and inexpensive by use of the color bar phase meter. Required equipment consists of an unmodified color TV receiver, a color bar synthesizer and a stop watch (a wrist watch or clock with sweep second hand may be used with reduced precision). Measurement precision of 1 × 10^-10 can be realized in measurement times of less than two minutes. If the color bar synthesizer were commercially available, user cost should be less than $200.00, exclusive of the TV receiver. Parts cost for the color bar synthesizer, which translates the crystal oscillator frequency to 3.579 MHz and modulates the received RF signal before it is fed to the receiver antenna terminals, is about $25.00. A more sophisticated automated version, with precision of 1 × 10^-11, would cost about twice as much.
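The quoted precision follows from simple arithmetic: a resolvable fraction of a subcarrier cycle of phase slip, accumulated over the measurement time, bounds the measurable fractional frequency offset. An illustrative calculation (the resolvable slip of 1/20 cycle is an assumption, not a figure from the report):

```python
# If the bar pattern drifts by n_cycles of the 3.579545 MHz subcarrier over a
# measurement time tau, the accumulated time error is n_cycles / f_sub and the
# fractional frequency offset is that divided by tau.
f_sub = 3.579545e6        # NTSC color subcarrier (Hz)
n_cycles = 0.05           # assumed resolvable drift, in subcarrier cycles
tau = 120.0               # measurement time (s), about two minutes
frac_offset = (n_cycles / f_sub) / tau
print(f"resolvable fractional frequency offset ~ {frac_offset:.1e}")  # ~1.2e-10
```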
Lacasse, Anaïs; Roy, Jean-Sébastien; Parent, Alexandre J; Noushi, Nioushah; Odenigbo, Chúk; Pagé, Gabrielle; Beaudet, Nicolas; Choinière, Manon; Stone, Laura S; Ware, Mark A
2017-01-01
To better standardize clinical and epidemiological studies about the prevalence, risk factors, prognosis, impact and treatment of chronic low back pain, a minimum data set was developed by the National Institutes of Health (NIH) Task Force on Research Standards for Chronic Low Back Pain. The aim of the present study was to develop a culturally adapted questionnaire that could be used for chronic low back pain research among French-speaking populations in Canada. The adaptation of the French Canadian version of the minimum data set was achieved according to guidelines for the cross-cultural adaptation of self-reported measures (double forward-backward translation, expert committee, pretest among 35 patients with pain in the low back region). Minor cultural adaptations were also incorporated into the English version by the expert committee (e.g., items about race/ethnicity, education level). This cross-cultural adaptation provides an equivalent French-Canadian version of the minimal data set questionnaire and a culturally adapted English-Canadian version. Modifications made to the original NIH minimum data set were minimized to facilitate comparison between the Canadian and American versions. The present study is a first step toward the use of a culturally adapted instrument for phenotyping French- and English-speaking low back pain patients in Canada. Clinicians and researchers will recognize the importance of this standardized tool and are encouraged to incorporate it into future research studies on chronic low back pain.
Prediction, Detection, and Validation of Isotope Clusters in Mass Spectrometry Data
Treutler, Hendrik; Neumann, Steffen
2016-01-01
Mass spectrometry is a key analytical platform for metabolomics. The precise quantification and identification of small molecules is a prerequisite for elucidating metabolism, and the detection, validation, and evaluation of isotope clusters in LC-MS data is important for this task. Here, we present an approach for the improved detection of isotope clusters using chemical prior knowledge and the validation of detected isotope clusters depending on the substance mass using database statistics. We find remarkable improvements regarding the number of detected isotope clusters and are able to predict the correct molecular formula in the top three ranks in 92% of the cases. We make our methodology freely available as part of the Bioconductor packages xcms version 1.50.0 and CAMERA version 1.30.0. PMID:27775610
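For orientation, the basic geometry a detector like this exploits is that successive isotopologue peaks are spaced by roughly the 13C-12C mass difference divided by the charge state. The sketch below implements only that spacing heuristic; the peak list, tolerance and `find_cluster` helper are illustrative, and the paper's method additionally uses chemical prior knowledge and database statistics for validation.

```python
import numpy as np

NEUTRON_DELTA = 1.00336  # Da, approximate 13C-12C mass difference

def find_cluster(mzs, start_idx, z=1, tol=0.01):
    """Greedily extend an isotope cluster from the peak at start_idx.

    mzs must be sorted; peaks matching the expected NEUTRON_DELTA/z spacing
    within tol are appended to the cluster.
    """
    cluster = [start_idx]
    expected = mzs[start_idx] + NEUTRON_DELTA / z
    for j in range(start_idx + 1, len(mzs)):
        if abs(mzs[j] - expected) <= tol:
            cluster.append(j)
            expected = mzs[j] + NEUTRON_DELTA / z
        elif mzs[j] > expected + tol:
            break
    return cluster

mzs = np.array([300.10, 300.60, 301.10, 302.50])   # sorted peak m/z values
print(find_cluster(mzs, 0, z=2))   # -> [0, 1, 2]: ~0.502 Da spacing fits z = 2
```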
Preliminary evaluation of a nest usage sensor to detect double nest occupations of laying hens.
Zaninelli, Mauro; Costa, Annamaria; Tangorra, Francesco Maria; Rossi, Luciana; Agazzi, Alessandro; Savoini, Giovanni
2015-01-26
Conventional cage systems will be replaced by housing systems that allow hens to move freely. These systems may improve hens' welfare, but they lead to some disadvantages: disease, bone fractures, cannibalism, piling and lower egg production. New selection criteria for existing commercial strains should be identified considering individual data about laying performance and the behavior of hens. Many recording systems have been developed to collect these data. However, the management of double nest occupations remains critical for the correct egg-to-hen assignment. To limit such events, most systems adopt specific trap devices and additional mechanical components. Others, instead, only prevent these occurrences by narrowing the nest, without any detection and management. The aim of this study was to develop and test a nest usage "sensor", based on imaging analysis, that is able to automatically detect a double nest occupation. Results showed that the developed sensor correctly identified the double nest occupation occurrences. Therefore, the imaging analysis proved to be a useful solution that could simplify the nest construction for this type of recording system, allowing the collection of more precise and accurate data, since double nest occupations would be managed and the normal laying behavior of hens would not be discouraged by the presence of the trap devices.
Preliminary Evaluation of a Nest Usage Sensor to Detect Double Nest Occupations of Laying Hens
Zaninelli, Mauro; Costa, Annamaria; Tangorra, Francesco Maria; Rossi, Luciana; Agazzi, Alessandro; Savoini, Giovanni
2015-01-01
Conventional cage systems will be replaced by housing systems that allow hens to move freely. These systems may improve hens' welfare, but they lead to some disadvantages: disease, bone fractures, cannibalism, piling and lower egg production. New selection criteria for existing commercial strains should be identified considering individual data about laying performance and the behavior of hens. Many recording systems have been developed to collect these data. However, the management of double nest occupations remains critical for the correct egg-to-hen assignment. To limit such events, most systems adopt specific trap devices and additional mechanical components. Others, instead, only prevent these occurrences by narrowing the nest, without any detection and management. The aim of this study was to develop and test a nest usage “sensor”, based on imaging analysis, that is able to automatically detect a double nest occupation. Results showed that the developed sensor correctly identified the double nest occupation occurrences. Therefore, the imaging analysis proved to be a useful solution that could simplify the nest construction for this type of recording system, allowing the collection of more precise and accurate data, since double nest occupations would be managed and the normal laying behavior of hens would not be discouraged by the presence of the trap devices. PMID:25629704
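The two abstracts above do not specify the imaging pipeline at code level, but in essence a double occupation is a count of hen-sized foreground regions in the nest image. A deliberately simplified sketch of that idea; the threshold, minimum blob area and `count_hens` helper are assumptions of this sketch, not the authors' parameters:

```python
import numpy as np
from scipy import ndimage

def count_hens(frame, threshold=0.5, min_area=500):
    """Count hen-sized blobs in a grayscale frame (values in [0, 1])."""
    mask = frame > threshold                  # foreground segmentation
    labels, n = ndimage.label(mask)           # connected components
    if n == 0:
        return 0
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))

frame = np.zeros((100, 100))
frame[10:40, 10:40] = 1.0                     # two synthetic "hens"
frame[60:95, 50:90] = 1.0
print(count_hens(frame) >= 2)                 # True -> flag a double occupation
```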
Optimization of Trade-offs in Error-free Image Transmission
NASA Astrophysics Data System (ADS)
Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.
1989-05-01
The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves an error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.
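The two-stage scheme described above is easy to prototype: send coarsely quantized DCT coefficients first for a fast approximate image, then the integer residual so the receiver can reconstruct the original exactly. A minimal global-DCT sketch in Python; the ISO N800 scheme is block-based and entropy-coded, and the image size, bit depth and quantization step here are illustrative:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
image = rng.integers(0, 4096, size=(64, 64)).astype(np.int32)  # 12-bit pixels

step = 64                                              # coarse quantization step
q = np.round(dctn(image, norm="ortho") / step).astype(np.int32)  # stage 1 payload

approx = np.round(idctn(q * step, norm="ortho")).astype(np.int32)
error = image - approx                                 # stage 2: lossless residual

assert np.array_equal(approx + error, image)           # error-free delivery
print("residual range:", error.min(), error.max())
```

The trade-off the paper quantifies is the one visible here: a finer step shrinks the residual and its coding cost but inflates the stage-1 payload, with the optimum found near 0.5-1.0 b/p of coefficient precision.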
The EORTC CAT Core-The computer adaptive version of the EORTC QLQ-C30 questionnaire.
Petersen, Morten Aa; Aaronson, Neil K; Arraras, Juan I; Chie, Wei-Chu; Conroy, Thierry; Costantini, Anna; Dirven, Linda; Fayers, Peter; Gamper, Eva-Maria; Giesinger, Johannes M; Habets, Esther J J; Hammerlid, Eva; Helbostad, Jorunn; Hjermstad, Marianne J; Holzner, Bernhard; Johnson, Colin; Kemmler, Georg; King, Madeleine T; Kaasa, Stein; Loge, Jon H; Reijneveld, Jaap C; Singer, Susanne; Taphoorn, Martin J B; Thamsborg, Lise H; Tomaszewski, Krzysztof A; Velikova, Galina; Verdonck-de Leeuw, Irma M; Young, Teresa; Groenvold, Mogens
2018-06-21
To optimise measurement precision, relevance to patients and flexibility, patient-reported outcome measures (PROMs) should ideally be adapted to the individual patient/study while retaining direct comparability of scores across patients/studies. This is achievable using item banks and computerised adaptive tests (CATs). The European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire Core 30 (QLQ-C30) is one of the most widely used PROMs in cancer research and clinical practice. Here we provide an overview of the research program to develop CAT versions of the QLQ-C30's 14 functional and symptom domains. The EORTC Quality of Life Group's strategy for developing CAT item banks consists of: literature search to identify potential candidate items; formulation of new items compatible with the QLQ-C30 item style; expert evaluations and patient interviews; field-testing and psychometric analyses, including factor analysis, item response theory calibration and simulation of measurement properties. In addition, software for setting up, running and scoring CAT has been developed. Across eight rounds of data collections, 9782 patients were recruited from 12 countries for the field-testing. The four phases of development resulted in a total of 260 unique items across the 14 domains. Each item bank consists of 7-34 items. Psychometric evaluations indicated higher measurement precision and increased statistical power of the CAT measures compared to the QLQ-C30 scales. Using CAT, sample size requirements may be reduced by approximately 20-35% on average without loss of power. The EORTC CAT Core represents a more precise, powerful and flexible measurement system than the QLQ-C30. It is currently being validated in a large independent, international sample of cancer patients.
SAGE III solar ozone measurements: Initial results
NASA Technical Reports Server (NTRS)
Wang, Hsiang-Jui; Cunnold, Derek M.; Trepte, Chip; Thomason, Larry W.; Zawodny, Joseph M.
2006-01-01
Results from two retrieval algorithms, o3-aer and o3-mlr, used for SAGE III solar occultation ozone measurements in the stratosphere and upper troposphere are compared. The main differences between these two retrieved (version 3.0) ozone products are found at altitudes above 40 km and below 15 km. Compared to correlative measurements, the SAGE II type ozone retrievals (o3-aer) provide better precisions above 40 km and do not induce artificial hemispheric differences in upper stratospheric ozone. The multiple linear regression technique (o3-mlr), however, can yield slightly more accurate ozone (by a few percent) in the lower stratosphere and upper troposphere. By using SAGE III (version 3.0) ozone from both algorithms and in their preferred regions, the agreement between SAGE III and correlative measurements is shown to be approx. 5% down to 17 km. Below 17 km SAGE III ozone values are systematically higher, by 10% at 13 km, and a small hemispheric difference (a few percent) appears. Compared to SAGE III and HALOE, SAGE II ozone has the best accuracy in the lowest few kilometers of the stratosphere. Estimated precision in SAGE III ozone is about 5% or better between 20 and 40 km and approx. 10% at 50 km. The precision below 20 km is difficult to evaluate because of limited coincidences between SAGE III and sondes. SAGE III ozone values are systematically slightly larger (2-3%) than those from SAGE II but the profile shapes are remarkably similar for altitudes above 15 km. There is no evidence of any relative drift or time dependent differences between these two instruments for altitudes above 15-20 km.
Validation of UARS Microwave Limb Sounder Temperature and Pressure Measurements
NASA Technical Reports Server (NTRS)
Fishbein, E. F.; Cofield, R. E.; Froidevaux, L.; Jarnot, R. F.; Lungu, T.; Read, W. G.; Shippony, Z.; Waters, J. W.; McDermid, I. S.; McGee, T. J.;
1996-01-01
The accuracy and precision of the Upper Atmosphere Research Satellite (UARS) Microwave Limb Sounder (MLS) atmospheric temperature and tangent-point pressure measurements are described. Temperatures and tangent-point pressure (atmospheric pressure at the tangent height of the field of view boresight) are retrieved from a 15-channel 63-GHz radiometer measuring O2 microwave emissions from the stratosphere and mesosphere. The Version 3 data (first public release) contains scientifically useful temperatures from 22 to 0.46 hPa. Accuracy estimates are based on instrument performance, spectroscopic uncertainty and retrieval numerics, and range from 2.1 K at 22 hPa to 4.8 K at 0.46 hPa for temperature and from 200 m (equivalent log pressure) at 10 hPa to 300 m at 0.1 hPa. Temperature accuracy is limited mainly by uncertainty in instrument characterization, and tangent-point pressure accuracy is limited mainly by the accuracy of spectroscopic parameters. Precisions are around 1 K and 100 m. Comparisons are presented among temperatures from MLS, the National Meteorological Center (NMC) stratospheric analysis and lidar stations at Table Mountain, California, Observatory of Haute Provence (OHP), France, and Goddard Space Flight Center, Maryland. MLS temperatures tend to be 1-2 K lower than NMC and lidar, but MLS is often 5-10 K lower than NMC in the winter at high latitudes, especially within the northern hemisphere vortex. Winter MLS and OHP (44 deg N) lidar temperatures generally agree and tend to be lower than NMC. Problems with Version 3 MLS temperatures and tangent-point pressures are identified, but the high precision of MLS radiances will allow improvements with better algorithms planned for the future.
2011-01-01
Introduction Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Methods Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. Results The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. Conclusions The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available. PMID:21548991
Palmer, Cameron S; Franklyn, Melanie; Read-Allsopp, Christine; McLellan, Susan; Niggemeyer, Louise E
2011-05-08
Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available.
Favero, F.; McGranahan, N.; Salm, M.; Birkbak, N. J.; Sanborn, J. Z.; Benz, S. C.; Becq, J.; Peden, J. F.; Kingsbury, Z.; Grocok, R. J.; Humphray, S.; Bentley, D.; Spencer-Dene, B.; Gutteridge, A.; Brada, M.; Roger, S.; Dietrich, P.-Y.; Forshew, T.; Gerlinger, M.; Rowan, A.; Stamp, G.; Eklund, A. C.; Szallasi, Z.; Swanton, C.
2015-01-01
Background Glioblastoma (GBM) is the most common malignant brain cancer occurring in adults, and is associated with dismal outcome and few therapeutic options. GBM has been shown to predominantly disrupt three core pathways through somatic aberrations, rendering it ideal for precision medicine approaches. Methods We describe a 35-year-old female patient with recurrent GBM following surgical removal of the primary tumour, adjuvant treatment with temozolomide and a 3-year disease-free period. Rapid whole-genome sequencing (WGS) of three separate tumour regions at recurrence was carried out and interpreted relative to WGS of two regions of the primary tumour. Results We found extensive mutational and copy-number heterogeneity within the primary tumour. We identified a TP53 mutation and two focal amplifications involving PDGFRA, KIT and CDK4, on chromosomes 4 and 12. A clonal IDH1 R132H mutation in the primary, a known GBM driver event, was detectable at only very low frequency in the recurrent tumour. After sub-clonal diversification, evidence was found for a whole-genome doubling event and a translocation between the amplified regions of PDGFRA, KIT and CDK4, encoded within a double-minute chromosome also incorporating miR26a-2. The WGS analysis uncovered progressive evolution of the double-minute chromosome converging on the KIT/PDGFRA/PI3K/mTOR axis, superseding the IDH1 mutation in dominance in a mutually exclusive manner at recurrence; consequently, the patient was treated with imatinib. Despite rapid sequencing and cancer genome-guided therapy against amplified oncogenes, the disease progressed, and the patient died shortly after. Conclusion This case sheds light on the dynamic evolution of a GBM tumour, defining the origins of the lethal sub-clone, the macro-evolutionary genomic events dominating the disease at recurrence and the loss of a clonal driver. Even in the era of rapid WGS analysis, cases such as this illustrate the significant hurdles for precision medicine success. PMID:25732040
Precisely and Accurately Inferring Single-Molecule Rate Constants
Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.
2017-01-01
The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restricts the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280
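As a concrete instance of the idealization-plus-estimation workflow discussed above, the sketch below thresholds a synthetic two-state signal trajectory into a state trajectory and takes the maximum-likelihood rate k = 1/<dwell> from the uncensored dwell times. It deliberately omits the finite-length and time-resolution corrections whose importance the review demonstrates; the `dwell_times` helper and all simulation parameters are illustrative.

```python
import numpy as np

def dwell_times(signal, threshold, dt):
    """Idealize a signal trajectory and return uncensored high-state dwells."""
    states = signal > threshold                             # state trajectory
    change = np.flatnonzero(states[1:] != states[:-1]) + 1
    segments = np.split(states, change)
    # first and last segments are censored by the ends of the record
    return [len(s) * dt for s in segments[1:-1] if s[0]]

rng = np.random.default_rng(2)
dt, k_true = 0.01, 5.0
cur, t, states = 0, 0.0, []
while t < 200.0:                          # two states with exponential dwells
    tau = rng.exponential(1.0 / k_true)
    states += [cur] * max(1, int(round(tau / dt)))
    cur, t = 1 - cur, t + tau
signal = np.asarray(states, float) + rng.normal(0.0, 0.1, size=len(states))

dwells = dwell_times(signal, 0.5, dt)
print(f"k_true = {k_true}, k_hat = {1.0 / np.mean(dwells):.2f} ({len(dwells)} dwells)")
```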
Precise CCD positions of Himalia using Gaia DR1 in 2015-2016
NASA Astrophysics Data System (ADS)
Peng, H. W.; Peng, Q. Y.; Wang, N.
2017-05-01
In order to obtain high-precision CCD positions of Himalia, the sixth Jovian satellite, a total of 598 CCD observations have been obtained during the years 2015-2016. The observations were made by using the 2.4 and 1 m telescopes administered by Yunnan Observatories over 27 nights. Several factors that would influence the positional precision of Himalia were analysed, including the reference star catalogue used, the geometric distortion and the phase effect. Because of its unprecedented positional precision, the recently released catalogue Gaia Data Release 1 was chosen to match reference stars in the CCD frames of both Himalia and open clusters, which were observed for deriving the geometric distortion. The latest version of the SOFA library was used to calculate the positions of reference stars. The theoretical positions of Himalia were retrieved from the Jet Propulsion Laboratory Horizons System that includes the satellite ephemeris JUP300, while the positions of Jupiter were based on the planetary ephemeris DE431. Our results showed that the means of observed minus computed (O - C) residuals are 0.071 and -0.001 arcsec in right ascension and declination, respectively. Their standard deviations are estimated at about 0.03 arcsec in each direction.
Magnan, Morris A; Maklebust, Joann
2008-01-01
To evaluate the effect of Web-based Braden Scale training on the reliability and precision of pressure ulcer risk assessments made by registered nurses (RNs) working in acute care settings. Pretest-posttest, 2-group, quasi-experimental design. Five hundred Braden Scale risk assessments were made on 102 acute care patients deemed to be at various levels of risk for pressure ulceration. Assessments were made by RNs working in acute care hospitals at 3 different medical centers where the Braden Scale was in regular daily use (2 medical centers) or new to the setting (1 medical center). The Braden Scale for Predicting Pressure Sore Risk was used to guide pressure ulcer risk assessments. A Web-based version of the Detroit Medical Center Braden Scale Computerized Training Module was used to teach nurses correct use of the Braden Scale and selection of risk-based pressure ulcer prevention interventions. In the aggregate, RNs generated reliable Braden Scale pressure ulcer risk assessments 65% of the time after training. The effect of Web-based Braden Scale training on reliability and precision of assessments varied according to familiarity with the scale. With training, new users of the scale made reliable assessments 84% of the time and significantly improved precision of their assessments. The reliability and precision of Braden Scale risk assessments made by its regular users was unaffected by training. Technology-assisted Braden Scale training improved both reliability and precision of risk assessments made by new users of the scale, but had virtually no effect on the reliability or precision of risk assessments made by regular users of the instrument. Further research is needed to determine best approaches for improving reliability and precision of Braden Scale assessments made by its regular users.
tweezercalib 2.0: Faster version of MatLab package for precise calibration of optical tweezers
NASA Astrophysics Data System (ADS)
Hansen, Poul Martin; Tolić-Nørrelykke, Iva Marija; Flyvbjerg, Henrik; Berg-Sørensen, Kirstine
2006-03-01
We present a vectorized version of the MatLab (MathWorks Inc.) package tweezercalib for calibration of optical tweezers with precision. The calibration is based on the power spectrum of the Brownian motion of a dielectric bead trapped in the tweezers. Precision is achieved by accounting for a number of factors that affect this power spectrum, as described in version 1 of the package [I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Matlab program for precision calibration of optical tweezers, Comput. Phys. Comm. 159 (2004) 225-240]. The graphical user interface allows the user to include or leave out each of these factors. Several "health tests" are applied to the experimental data during calibration, and test results are displayed graphically. Thus, the user can easily see whether the data comply with the theory used for their interpretation. Final calibration results are given with statistical errors and covariance matrix. New version program summary. Title of program: tweezercalib Catalogue identifier: ADTV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference in CPC to previous version: I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Comput. Phys. Comm. 159 (2004) 225 Catalogue identifier of previous version: ADTV Does the new version supersede the original program: Yes Computer for which the program is designed and others on which it has been tested: General computer running MatLab (Mathworks Inc.) Operating systems under which the program has been tested: Windows2000, Windows-XP, Linux Programming language used: MatLab (Mathworks Inc.), standard license Memory required to execute with typical data: Of order four times the size of the data file High speed storage required: none No. of lines in distributed program, including test data, etc.: 135 989 No. of bytes in distributed program, including test data, etc.: 1 527 611 Distribution format: tar.gz Nature of physical problem: Calibrate optical tweezers with precision by fitting theory to experimental power spectrum of position of bead doing Brownian motion in incompressible fluid, possibly near microscope cover slip, while trapped in optical tweezers. Thereby determine spring constant of optical trap and conversion factor for arbitrary-units-to-nanometers for detection system. Method of solution: Elimination of cross-talk between quadrant photo-diode's output channels for positions (optional). Check that distribution of recorded positions agrees with Boltzmann distribution of bead in harmonic trap. Data compression and noise reduction by blocking method applied to power spectrum. Full accounting for hydrodynamic effects: Frequency-dependent drag force and interaction with nearby cover slip (optional). Full accounting for electronic filters (optional), for "virtual filtering" caused by detection system (optional). Full accounting for aliasing caused by finite sampling rate (optional). Standard non-linear least-squares fitting. Statistical support for fit is given, with several plots facilitating inspection of consistency and quality of data and fit. Summary of revisions: A faster fitting routine, adapted from [J. Nocedal, Y.x. Yuan, Combining trust region and line search techniques, Technical Report OTC 98/04, Optimization Technology Center, 1998; W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes. The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986], is applied.
It uses fewer function evaluations, and the remaining function evaluations have been vectorized. Calls to routines in Toolboxes not included with a standard MatLab license have been replaced by calls to routines that are included in the present package. Fitting parameters are rescaled to ensure that they are all of roughly the same size (of order 1) while being fitted. Generally, the program package has been updated to comply with MatLab, version 7.0, and optimized for speed. Restrictions on the complexity of the problem: Data should be positions of bead doing Brownian motion while held by optical tweezers. For high precision in final results, data should be time series measured over a long time, with sufficiently high experimental sampling rate: The sampling rate should be well above the characteristic frequency of the trap, the so-called corner frequency. Thus, the sampling frequency should typically be larger than 10 kHz. The Fast Fourier Transform used works optimally when the time series contain 2^n data points, and long measurement times are obtained with n > 12-15. Finally, the optics should be set to ensure a harmonic trapping potential in the range of positions visited by the bead. The fitting procedure checks for harmonic potential. Typical running time: Seconds Unusual features of the program: None References: The theoretical underpinnings for the procedure are found in [K. Berg-Sørensen, H. Flyvbjerg, Power spectrum analysis for optical tweezers, Rev. Sci. Instrum. 75 (2004) 594-612].
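At the heart of such a calibration is a fit of the Lorentzian P(f) = D / (2*pi^2*(fc^2 + f^2)) to the position power spectrum; the corner frequency fc then gives the trap stiffness via kappa = 2*pi*gamma*fc. A bare-bones Python sketch on synthetic data; the package itself adds the blocking, hydrodynamic, filtering and aliasing corrections and proper statistical weights omitted here, and the bead size and noise model below are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    """Power spectrum of an overdamped bead in a harmonic trap."""
    return D / (2 * np.pi**2 * (fc**2 + f**2))

f = np.linspace(50, 5000, 400)                     # frequency grid (Hz)
rng = np.random.default_rng(3)
P = lorentzian(f, 1e-3, 500.0) * rng.exponential(1.0, size=f.size)  # periodogram noise

(D_fit, fc_fit), _ = curve_fit(lorentzian, f, P, p0=(1e-3, 300.0))
gamma = 6 * np.pi * 1.0e-3 * 0.5e-6   # Stokes drag of a 1 um bead in water (kg/s)
kappa = 2 * np.pi * gamma * fc_fit    # trap stiffness (N/m)
print(f"fc = {fc_fit:.0f} Hz, kappa = {kappa:.2e} N/m")
```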
Liou, Yiing Mei; Jwo, Clark J C; Yao, Kaiping Grace; Chiang, Li-Chi; Huang, Lian-Hua
2008-12-01
In order to analyze the health risks of insufficient activity by international comparisons, the first author obtained the permission to translate and develop a Taiwan version of the International Physical Activity Questionnaire (IPAQ). The objective was to determine culturally sensitive Chinese translations for the terms "moderate", "vigorous" and "physical activity" as well as to identify representative types of physical activity for Taiwanese. This study used discussions by 12 expert focus groups, 6 expert audits, a scale survey, field study, Cognitive Aspect Survey Methodology (CASM), dual independent translation and back-translation to establish a consensus on physical activity-related concepts, terminologies and types that define the intensity of common activities of Taiwanese by integrating both local and foreign studies. The Chinese terms "fei li", "zhong deng fei li" and "shen ti huo dong", respectively, were identified as suitable and adequate translations for the English terms "vigorous", "moderate" and "physical activity". The common Taiwanese activities were accurately categorized and listed in questionnaires, forming culturally sensitive scales. Taiwan versions of IPAQ's self-administered long version (SL), self-administered short version (SS), and telephone interview short version (TS) were developed. Their content validity indices were .992, .994, and .980, as well as .994, .992, and .994 for language equivalence and meaning similarity between the English and Chinese versions of the IPAQ-LS, IPAQ-SS, and IPAQ-TS, respectively. Consistency values for the English and Chinese versions in terms of intraclass correlation coefficients were .945, .704, and .894, respectively. The IPAQ-Taiwan is not only a sensitive and precise tool, but also shows the effectiveness of the methodology (CASM) used in tool development. Subjects who did not regularly exercise and had an education less than a junior high school level underestimated the moderate-intensity physical activity.
High-precision mass measurements for the rp-process at JYFLTRAP
NASA Astrophysics Data System (ADS)
Canete, Laetitia; Eronen, Tommi; Jokinen, Ari; Kankainen, Anu; Moore, Ian D.; Nesterenko, Dimitry; Rinta-Antila, Sami
2018-01-01
The double Penning trap JYFLTRAP at the University of Jyväskylä has been successfully used to achieve high-precision mass measurements of nuclei involved in the rapid proton-capture (rp) process. A precise mass measurement of 31Cl is essential to estimate the waiting point condition of 30S in the rp-process occurring in type I x-ray bursts (XRBs). The mass excess of 31Cl measured at JYFLTRAP, -7034.7(3.4) keV, is 15 times more precise than the value given in the Atomic Mass Evaluation 2012. The proton separation energy Sp determined from the new mass-excess value confirmed that 30S is a waiting point, with a lower-temperature limit of 0.44 GK. The mass of 52Co affects both the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions. The measured mass-excess value, -34331.6(6.6) keV, is 30 times more precise than the value given in AME2012. The Q values for the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions are now known with a high precision, 1418(11) keV and 2588(26) keV respectively. The results show that 52Co is more proton bound and 53Ni less proton bound than what was expected from the extrapolated values.
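The separation energies quoted above are simple linear combinations of mass excesses, Sp(A, Z) = Delta(A-1, Z-1) + Delta(1H) - Delta(A, Z). A sketch of the 31Cl case using the JYFLTRAP mass excess from the abstract; the 30S and 1H mass excesses are illustrative reference values (check the current Atomic Mass Evaluation), not results of this work:

```python
DELTA_1H = 7288.971      # keV, mass excess of 1H (reference value)
DELTA_30S = -14063.0     # keV, assumed AME-like value for 30S (illustrative)
DELTA_31CL = -7034.7     # keV, measured at JYFLTRAP (from the abstract)

Sp_31Cl = DELTA_30S + DELTA_1H - DELTA_31CL
print(f"Sp(31Cl) ~ {Sp_31Cl:.0f} keV")   # ~260 keV: weakly proton bound,
                                         # consistent with 30S as a waiting point
```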
Díaz, Raquel; Pallarès, Victor; Cano-Garrido, Olivia; Serna, Naroa; Sánchez-García, Laura; Falgàs, Aïda; Pesarrodona, Mireia; Unzueta, Ugutz; Sánchez-Chardi, Alejandro; Sánchez, Julieta M; Casanova, Isolda; Vázquez, Esther; Mangues, Ramón; Villaverde, Antonio
2018-05-29
Under the unmet need of efficient tumor-targeting drugs for oncology, a recombinant version of the plant toxin ricin (the modular protein T22-mRTA-H6) is engineered to self-assemble as protein-only, CXCR4-targeted nanoparticles. The soluble version of the construct self-organizes as regular 11 nm planar entities that are highly cytotoxic in cultured CXCR4+ cancer cells upon short exposure times, with a determined IC50 in the nanomolar order of magnitude. The chemical inhibition of CXCR4 binding sites in exposed cells results in a dramatic reduction of the cytotoxic potency, proving the receptor-dependent mechanism of cytotoxicity. The insoluble version of T22-mRTA-H6 is, contrarily, only moderately active, indicating that free, nanostructured protein is the optimal drug form. In animal models of acute myeloid leukemia, T22-mRTA-H6 nanoparticles show an impressive and highly selective therapeutic effect, dramatically reducing the affectation of clinically relevant organs by leukemia cells. Functionalized T22-mRTA-H6 nanoparticles are thus promising prototypes of chemically homogeneous, highly potent antitumor nanostructured toxins for precise oncotherapies based on self-mediated intracellular drug delivery.
Luites, J W H; Wymenga, A B; Blankevoort, L; Kooloos, J M G; Verdonschot, N
2011-01-01
Femoral graft placement is an important factor in the success of anterior cruciate ligament (ACL) reconstruction. In addition to improving the accuracy of femoral tunnel placement, Computer Assisted Surgery (CAS) can be used to determine the anatomic location. This is achieved by using a 3D femoral template which indicates the position of the anatomical ACL center based on endoscopically measurable landmarks. This study describes the development and application of this method. The template is generated through statistical shape analysis of the ACL insertion, with respect to the anteromedial (AM) and posterolateral (PL) bundles. The ligament insertion data, together with the osteocartilage edge on the lateral notch, were mapped onto a cylinder fitted to the intercondylar notch surface (n = 33). Anatomic variation, in terms of standard variation of the positions of the ligament centers in the template, was within 2.2 mm. The resulting template was programmed in a computer-assisted navigation system for ACL replacement and its accuracy and precision were determined on 31 femora. It was found that with the navigation system the AM and PL tunnels could be positioned with an accuracy of 2.5 mm relative to the anatomic insertion centers; the precision was 2.4 mm. This system consists of a template that can easily be implemented in 3D computer navigation software. Requiring no preoperative images and planning, the system provides adequate accuracy and precision to position the entrance of the femoral tunnels for anatomical single- or double-bundle ACL reconstruction.
Shestov, Alexander A.; Valette, Julien; Deelchand, Dinesh K.; Uğurbil, Kâmil; Henry, Pierre-Gilles
2016-01-01
Metabolic modeling of dynamic 13C labeling curves during infusion of 13C-labeled substrates allows quantitative measurements of metabolic rates in vivo. However metabolic modeling studies performed in the brain to date have only modeled time courses of total isotopic enrichment at individual carbon positions (positional enrichments), not taking advantage of the additional dynamic 13C isotopomer information available from fine-structure multiplets in 13C spectra. Here we introduce a new 13C metabolic modeling approach using the concept of bonded cumulative isotopomers, or bonded cumomers. The direct relationship between bonded cumomers and 13C multiplets enables fitting of the dynamic multiplet data. The potential of this new approach is demonstrated using Monte-Carlo simulations with a brain two-compartment neuronal-glial model. The precision of positional and cumomer approaches are compared for two different metabolic models (with and without glutamine dilution) and for different infusion protocols ([1,6-13C2]glucose, [1,2-13C2]acetate, and double infusion [1,6-13C2]glucose + [1,2-13C2]acetate). In all cases, the bonded cumomer approach gives better precision than the positional approach. In addition, of the three different infusion protocols considered here, the double infusion protocol combined with dynamic bonded cumomer modeling appears the most robust for precise determination of all fluxes in the model. The concepts and simulations introduced in the present study set the foundation for taking full advantage of the available dynamic 13C multiplet data in metabolic modeling. PMID:22528840
Double-Diffusive Convection in Rotational Shear
2015-03-01
The condition for salt finger development is T_Z > 0 and S_Z > 0. The model uses the Boussinesq equations of motion with the linear equations of state, expressed in ... reference density from the Boussinesq approximation. T_Z = (T_top - T_bottom)/H (2.2) The resultant non-dimensionalized equations for the model are ... the ratio τ = k_S/k_T to determine how the system evolved during the simulation. B. VERSIONS OF THE BASIC MODEL. This research was based on four separate
Transmission loss of orthogonally rib-stiffened double-panel structures with cavity absorption.
Xin, F X; Lu, T J
2011-04-01
The transmission loss of sound through infinite orthogonally rib-stiffened double-panel structures having cavity-filling fibrous sound absorptive materials is theoretically investigated. The propagation of sound across the fibrous material is characterized using an equivalent fluid model, and the motions of the rib-stiffeners are described by including all possible vibrations, i.e., flexural displacements, bending, and torsional rotations. The effects of fluid-structure coupling are accounted for by enforcing velocity continuity conditions at fluid-panel interfaces. By taking full advantage of the periodic nature of the double-panel, the space-harmonic approach and virtual work principle are applied to solve the sets of resultant governing equations, which are eventually truncated as a finite system of simultaneous algebraic equations and numerically solved insofar as the solution converges. To validate the proposed model, a comparison between the present model predictions and existing numerical and experimental results for a simplified version of the double-panel structure is carried out, with overall agreement achieved. The model is subsequently employed to explore the influence of the fluid-structure coupling between fluid in the cavity and the two panels on sound transmission across the orthogonally rib-stiffened double-panel structure. Obtained results demonstrate that this fluid-structure coupling significantly affects sound transmission loss (STL) at low frequencies and cannot be ignored when the rib-stiffeners are sparsely distributed. As a highlight of this research, an integrated optimal algorithm toward lightweight, high-stiffness and superior sound insulation capability is proposed, based on which a preliminary optimal design of the double-panel structure is performed.
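For orientation, the baseline against which detailed double-panel models are judged is the classical normal-incidence mass law for a single limp panel, which ignores the ribs, cavity absorption and panel coupling treated above. A minimal sketch of that baseline only (not the paper's periodic space-harmonic model); the panel mass and frequencies are illustrative:

```python
import numpy as np

def mass_law_tl(f, m, rho0=1.21, c=343.0):
    """Normal-incidence transmission loss (dB) of a single limp panel.

    f: frequency (Hz); m: surface mass density (kg/m^2);
    rho0, c: air density and sound speed.
    """
    z = np.pi * f * m / (rho0 * c)
    return 10.0 * np.log10(1.0 + z**2)

f = np.array([125.0, 500.0, 2000.0])
print(mass_law_tl(f, m=10.0))   # ~6 dB/octave slope characteristic of mass law
```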
Mokra, Katarzyna; Kuźmińska-Surowaniec, Agnieszka; Woźniak, Katarzyna; Michałowicz, Jaromir
2017-02-01
In the present study, we have investigated the DNA-damaging potential of BPA and its analogs, i.e. bisphenol S (BPS), bisphenol F (BPF) and bisphenol AF (BPAF), in human peripheral blood mononuclear cells (PBMCs) using the alkaline and neutral versions of the comet assay, which allowed us to evaluate DNA single-strand breaks (SSBs) and double-strand breaks (DSBs). The use of the alkaline version of the comet assay also made it possible to analyze the kinetics of DNA repair in PBMCs after exposure of the cells to BPA or its analogs. We have observed an increase in DNA damage in PBMCs treated with BPA or its analogs in concentrations ranging from 0.01 to 10 μg/ml after 1 and 4 h incubation. It was noted that the bisphenols studied caused DNA damage mainly via SSBs, while DNA fragmentation via DSBs was low. The strongest changes in DNA damage were provoked by BPA and particularly BPAF, which were capable of inducing SSBs even at 0.01 μg/ml, while BPS caused the smallest changes (only at 10 μg/ml). We have also observed that PBMCs significantly repaired bisphenol-induced DNA damage but were unable (excluding cells treated with BPS) to repair DNA breaks completely.
Marmignon, Antoine; Ku, Michael; Silve, Aude; Meyer, Eric; Forney, James D.; Malinsky, Sophie; Bétermier, Mireille
2011-01-01
During the sexual cycle of the ciliate Paramecium, assembly of the somatic genome includes the precise excision of tens of thousands of short, non-coding germline sequences (Internal Eliminated Sequences or IESs), each one flanked by two TA dinucleotides. It has been reported previously that these genome rearrangements are initiated by the introduction of developmentally programmed DNA double-strand breaks (DSBs), which depend on the domesticated transposase PiggyMac. These DSBs all exhibit a characteristic geometry, with 4-base 5′ overhangs centered on the conserved TA, and may readily align and undergo ligation with minimal processing. However, the molecular steps and actors involved in the final and precise assembly of somatic genes have remained unknown. We demonstrate here that Ligase IV and Xrcc4p, core components of the non-homologous end-joining pathway (NHEJ), are required both for the repair of IES excision sites and for the circularization of excised IESs. The transcription of LIG4 and XRCC4 is induced early during the sexual cycle and a Lig4p-GFP fusion protein accumulates in the developing somatic nucleus by the time IES excision takes place. RNAi–mediated silencing of either gene results in the persistence of free broken DNA ends, apparently protected against extensive resection. At the nucleotide level, controlled removal of the 5′-terminal nucleotide occurs normally in LIG4-silenced cells, while nucleotide addition to the 3′ ends of the breaks is blocked, together with the final joining step, indicative of a coupling between NHEJ polymerase and ligase activities. Taken together, our data indicate that IES excision is a “cut-and-close” mechanism, which involves the introduction of initiating double-strand cleavages at both ends of each IES, followed by DSB repair via highly precise end joining. This work broadens our current view on how the cellular NHEJ pathway has cooperated with domesticated transposases for the emergence of new mechanisms involved in genome dynamics. PMID:21533177
Kapusta, Aurélie; Matsuda, Atsushi; Marmignon, Antoine; Ku, Michael; Silve, Aude; Meyer, Eric; Forney, James D; Malinsky, Sophie; Bétermier, Mireille
2011-04-01
During the sexual cycle of the ciliate Paramecium, assembly of the somatic genome includes the precise excision of tens of thousands of short, non-coding germline sequences (Internal Eliminated Sequences or IESs), each one flanked by two TA dinucleotides. It has been reported previously that these genome rearrangements are initiated by the introduction of developmentally programmed DNA double-strand breaks (DSBs), which depend on the domesticated transposase PiggyMac. These DSBs all exhibit a characteristic geometry, with 4-base 5' overhangs centered on the conserved TA, and may readily align and undergo ligation with minimal processing. However, the molecular steps and actors involved in the final and precise assembly of somatic genes have remained unknown. We demonstrate here that Ligase IV and Xrcc4p, core components of the non-homologous end-joining pathway (NHEJ), are required both for the repair of IES excision sites and for the circularization of excised IESs. The transcription of LIG4 and XRCC4 is induced early during the sexual cycle and a Lig4p-GFP fusion protein accumulates in the developing somatic nucleus by the time IES excision takes place. RNAi-mediated silencing of either gene results in the persistence of free broken DNA ends, apparently protected against extensive resection. At the nucleotide level, controlled removal of the 5'-terminal nucleotide occurs normally in LIG4-silenced cells, while nucleotide addition to the 3' ends of the breaks is blocked, together with the final joining step, indicative of a coupling between NHEJ polymerase and ligase activities. Taken together, our data indicate that IES excision is a "cut-and-close" mechanism, which involves the introduction of initiating double-strand cleavages at both ends of each IES, followed by DSB repair via highly precise end joining. This work broadens our current view on how the cellular NHEJ pathway has cooperated with domesticated transposases for the emergence of new mechanisms involved in genome dynamics.
Methodologies for the Statistical Analysis of Memory Response to Radiation
NASA Astrophysics Data System (ADS)
Bosser, Alexandre L.; Gupta, Viyas; Tsiligiannis, Georgios; Frost, Christopher D.; Zadeh, Ali; Jaatinen, Jukka; Javanainen, Arto; Puchner, Helmut; Saigné, Frédéric; Virtanen, Ari; Wrobel, Frédéric; Dilillo, Luigi
2016-08-01
Methodologies are proposed for in-depth statistical analysis of Single Event Upset data. The motivation for using these methodologies is to obtain precise information on the intrinsic defects and weaknesses of the tested devices, and to gain insight on their failure mechanisms, at no additional cost. The case study is a 65 nm SRAM irradiated with neutrons, protons and heavy ions. This publication is an extended version of a previous study [1].
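One statistic such an in-depth analysis rests on is the per-bit upset cross-section with an exact Poisson (Garwood) confidence interval. A sketch with illustrative counts, fluence and memory size (not the paper's data); the `cross_section_ci` helper is an assumption of this sketch:

```python
from scipy.stats import chi2

def cross_section_ci(n_upsets, fluence, n_bits, conf=0.95):
    """Per-bit SEU cross-section (cm^2/bit) with an exact Poisson CI.

    fluence is in particles/cm^2; the Garwood interval uses chi-square
    quantiles of the Poisson count.
    """
    a = 1.0 - conf
    lo = 0.5 * chi2.ppf(a / 2, 2 * n_upsets) if n_upsets > 0 else 0.0
    hi = 0.5 * chi2.ppf(1 - a / 2, 2 * (n_upsets + 1))
    scale = fluence * n_bits
    return n_upsets / scale, lo / scale, hi / scale

sigma, lo, hi = cross_section_ci(n_upsets=137, fluence=1e10, n_bits=8 * 2**20)
print(f"sigma = {sigma:.2e} cm^2/bit (95% CI {lo:.2e} - {hi:.2e})")
```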
ERIC Educational Resources Information Center
Akram, Hadeel Abdulah
2017-01-01
The factor structure of Holland's hexagonal model as shown in the Self-Directed Search (SDS) has received extensive attention across the world. The goal in creating the SDS was to equip guidance counselors and services with information about adults' personality types, interests, preferences, and career options. More precisely, the SDS items assess…
Optical links in the angle-data assembly of the 70-meter antennas
NASA Technical Reports Server (NTRS)
Nelson, M. D.; Schroeder, J. R.; Tubbs, E. F.
1988-01-01
In the precision-pointing mode the 70-meter antennas utilize an optical link provided by an autocollimator. In an effort to improve reliability and performance, commercial instruments were evaluated as replacement candidates, and upgraded versions of the existing instruments were designed and tested. The latter were selected for the Neptune encounter, but commercial instruments with digital output show promise of significant performance improvement for the post-encounter period.
BRST-BFV Quantization and the Schwinger Action Principle
NASA Astrophysics Data System (ADS)
Garcia, J. Antonio; Vergara, J. David; Urrutia, Luis F.
We introduce an operator version of the BRST-BFV effective action for arbitrary systems with first class constraints. Using the Schwinger action principle we calculate the propagators corresponding to: (i) the parametrized nonrelativistic free particle, (ii) the relativistic free particle and (iii) the spinning relativistic free particle. Our calculation correctly imposes the BRST invariance at the end points. The precise use of the additional boundary terms required in the description of fermionic variables is incorporated.
BPMs with Precise Alignment for TTF2
NASA Astrophysics Data System (ADS)
Noelle, D.; Priebe, G.; Wendt, M.; Werner, M.
2004-11-01
Design and technology of the new, standardized BPM system for the warm sections of the TESLA Test Facility phase II (TTF2) are presented. Stripline and button BPM pickups are read out with an upgraded version of the AM/PM BPM electronics of TTF1. The stripline BPMs are fixed inside the quadrupole magnets. A stretched-wire measurement was used to calibrate the electrical axis of the BPM with respect to the magnetic axis of the quadrupole.
Low-Crosstalk Composite Optical Crosspoint Switches
NASA Technical Reports Server (NTRS)
Pan, Jing-Jong; Liang, Frank
1993-01-01
Composite optical switch includes two elementary optical switches in tandem, plus optical absorbers. Like elementary optical switches, composite optical switches assembled into switch matrix. Performance enhanced by increasing number of elementary switches. Advantage of concept: crosstalk reduced to acceptably low level at moderate cost of doubling number of elementary switches rather than at greater cost of tightening manufacturing tolerances and exerting more-precise control over operating conditions.
NASA Astrophysics Data System (ADS)
Diehl, T.; Kissling, E. H.; Singer, J.; Lee, T.; Clinton, J. F.; Waldhauser, F.; Wiemer, S.
2017-12-01
Information on the structure of upper-crustal fault systems and their connection with seismicity is key to the understanding of neotectonic processes. Precisely determined focal depths in combination with structural models can provide important insight into deformation styles of the upper crust (e.g. thin- versus thick-skinned tectonics). Detailed images of seismogenic fault zones in the upper crust, on the other hand, will contribute to the assessment of the hazard related to natural and induced earthquakes, especially in regions targeted for radioactive waste repositories or geothermal energy production. The complex velocity structure of the uppermost crust and unfavorable network geometries, however, often hamper precise locations (i.e. focal depths) of shallow seismicity and therefore limit tectonic interpretations. In this study we present a new high-precision catalog of absolute locations of seismicity in Switzerland. High-quality travel-time data from local and regional earthquakes in the period 2000-2017 are used to solve the coupled hypocenter-velocity structure problem in 1D. For this purpose, the well-known VELEST inversion software was revised and extended to improve the quality assessment of travel-time data and to facilitate the identification of erroneous picks in the bulletin data. Results from the 1D inversion are used as initial parameters for a 3D local earthquake tomography. Well-studied earthquakes and high-quality quarry blasts are used to assess the quality of 1D and 3D relocations. In combination with information available from various controlled-source experiments, borehole data, and geological profiles, focal depths and associated host formations are assessed through comparison with the resolved 3D velocity structure. The new absolute locations and velocity models are used as initial values for relative double-difference relocation of earthquakes in Switzerland. Differential times are calculated from bulletin picks and waveform cross-correlation. The resulting double-difference catalog is used as a regional background catalog for a real-time double-difference approach. We will present our implementation strategy and test its performance for local applications using examples from well-recorded natural and induced earthquake sequences in Switzerland.
NASA Astrophysics Data System (ADS)
Farley, K. A.; Hurowitz, J. A.; Asimow, P. D.; Jacobson, N. S.; Cartwright, J. A.
2013-06-01
A new method for K-Ar dating using a double isotope dilution technique is proposed and demonstrated. The method is designed to eliminate known difficulties facing in situ dating on planetary surfaces, especially instrument complexity and power availability. It may also have applicability in some terrestrial dating applications. Key to the method is the use of a solid tracer spike enriched in both 39Ar and 41K. When mixed with lithium borate flux in a Knudsen effusion cell, this tracer spike and a sample to be dated can be successfully fused and degassed of Ar at <1000 °C. The evolved 40Ar*/39Ar ratio can be measured to high precision using noble gas mass spectrometry. After argon measurement the sample melt is heated to a slightly higher temperature (~1030 °C) to volatilize potassium, and the evolved 39K/41K ratio measured by Knudsen effusion mass spectrometry. Combined with the known composition of the tracer spike, these two ratios define the K-Ar age using a single sample aliquot and without the need for extreme temperature or a mass determination. In principle the method can be implemented using a single mass spectrometer. Experiments indicate that quantitative extraction of argon from a basalt sample occurs at a sufficiently low temperature that potassium loss in this step is unimportant. Similarly, potassium isotope ratios measured in the Knudsen apparatus indicate good sample-spike equilibration and acceptably small isotopic fractionation. When applied to a flood basalt from the Viluy Traps, Siberia, a K-Ar age of 351 ± 19 Ma was obtained, a result within 1% of the independently known age. For practical reasons this measurement was made on two separate mass spectrometers, but a scheme for combining the measurements in a single analytical instrument is described. Because both parent and daughter are determined by isotope dilution, the precision on K-Ar ages obtained by the double isotope dilution method should routinely approach that of a pair of isotope ratio determinations, likely better than ±5%.
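As a hedged illustration of the age equation behind this method, the sketch below computes a K-Ar age from a radiogenic 40Ar*/40K ratio using the standard Steiger and Jaeger decay constants; the example ratio is chosen only so that the output lands near the quoted 351 Ma result, and is not a value from the paper.

```python
import math

# Steiger & Jaeger (1977) 40K decay constants (per year); standard values,
# not taken from this abstract.
LAMBDA_EC = 0.581e-10     # electron-capture branch, 40K -> 40Ar*
LAMBDA_TOT = 5.543e-10    # total 40K decay constant

def k_ar_age_years(ar40star_per_k40):
    """K-Ar age from the radiogenic 40Ar*/40K mole ratio."""
    return math.log(1.0 + (LAMBDA_TOT / LAMBDA_EC) * ar40star_per_k40) / LAMBDA_TOT

# Illustrative ratio chosen so the age lands near the 351 Ma Viluy result:
print(k_ar_age_years(2.25e-2) / 1e6, "Ma")  # ~351 Ma
```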
NASA Astrophysics Data System (ADS)
Walton, Josiah
Despite neutrino oscillation experiments firmly establishing that neutrinos have non-zero mass, the absolute mass scale is unknown. Moreover, it is unknown whether the neutrino is distinguishable from its antiparticle. The most promising approach for measuring the neutrino mass scale and answering the question of neutrino-antineutrino distinguishability is to search for neutrinoless double-beta decay, a very rare theorized process not allowed under the current theoretical framework of particle physics. Positive observation of neutrinoless double-beta decay would usher in a revolution in particle physics, since it would determine the neutrino mass scale, establish that neutrinos and antineutrinos are indistinguishable, and show that the particle-physics conservation law of total lepton number is violated in nature. The latter two consequences are particularly salient, as they lead to potential explanations of neutrino mass generation and the observed large asymmetry of matter over antimatter in the universe. The Enriched Xenon Observatory (EXO-200) is an international collaboration searching for the neutrinoless double-beta decay of the isotope 136Xe. EXO-200 operates a unique world-class low-radioactivity detector containing 110 kg of liquefied xenon isotopically enriched to 80.6% in 136Xe. Recently, EXO-200 published the most precise two-neutrino double-beta decay half-life ever measured and one of the strongest limits on the half-life of the neutrinoless double-beta decay mode of 136Xe. This work presents an improved experimental search for the majoron-mediated neutrinoless double-beta decay modes of 136Xe and a novel search for the as-yet unobserved two-neutrino double-beta decay of 134Xe.
Test bench for measurements of NOvA scintillator properties at JINR
NASA Astrophysics Data System (ADS)
Velikanova, D. S.; Antoshkin, A. I.; Anfimov, N. V.; Samoylov, O. B.
2018-04-01
The NOvA experiment was built to study oscillation parameters, the mass hierarchy, the CP-violation phase in the lepton sector and the θ23 octant, via νe appearance and νμ disappearance modes in both neutrino and antineutrino beams. These scientific goals require good knowledge of the NOvA scintillator's basic properties. A new test bench was constructed and upgraded at JINR. The main goal of this bench is to measure scintillator properties (for solid and liquid scintillators), namely α/β discrimination and Birks coefficients for protons and other hadrons (quenching factors). This knowledge will be crucial for recovering the energy of the hadronic part of neutrino interactions with scintillator nuclei. α/β discrimination was performed on the first version of the bench for LAB-based and NOvA scintillators. It was performed again on the upgraded version of the bench with higher statistics and precision. Preliminary results for proton quenching factors were obtained. A technical description of both versions of the bench and current results of the measurements and analysis are presented in this work.
NASA Astrophysics Data System (ADS)
Kanemura, Shinya; Kaneta, Kunio; Machida, Naoki; Odori, Shinya; Shindou, Tetsuo
2016-07-01
In the composite Higgs models, originally proposed by Georgi and Kaplan, the Higgs boson is a pseudo Nambu-Goldstone boson (pNGB) of the spontaneous breaking of a global symmetry. In the minimal version of such models, a global SO(5) symmetry is spontaneously broken to SO(4), and the pNGBs form an isospin doublet field, which corresponds to the Higgs doublet in the Standard Model (SM). Predicted coupling constants of the Higgs boson can in general deviate from the SM predictions, depending on the compositeness parameter. The deviation pattern is also determined by the details of the matter sector. We comprehensively study how the model can be tested by measuring single and double production processes of the Higgs boson at the LHC and future electron-positron colliders. The possibility of distinguishing the matter sector among the minimal composite Higgs models is also discussed. In addition, we point out differences in the cross section of double Higgs boson production from the predictions of other new physics models.
Structural rearrangements at the translocation pore of the human glutamate transporter, EAAT1.
Leighton, Barbara H; Seal, Rebecca P; Watts, Spencer D; Skyba, Mary O; Amara, Susan G
2006-10-06
Structure-function studies of mammalian and bacterial excitatory amino acid transporters (EAATs), as well as the crystal structure of a related archaeal glutamate transporter, support a model in which TM7, TM8, and the re-entrant loops HP1 and HP2 participate in forming a substrate translocation pathway within each subunit of a trimer. However, the transport mechanism, including precise binding sites for substrates and co-transported ions and changes in the tertiary structure underlying transport, is still not known. In this study, we used chemical cross-linking of introduced cysteine pairs in a cysteine-less version of EAAT1 to examine the dynamics of key domains associated with the translocation pore. Here we show that cysteine substitution at Ala-395, Ala-367, and Ala-440 results in functional single and double cysteine transporters and that in the absence of glutamate or dl-threo-beta-benzyloxyaspartate (dl-TBOA), A395C in the highly conserved TM7 can be cross-linked to A367C in HP1 and to A440C in HP2. The formation of these disulfide bonds is reversible and occurs intra-molecularly. Interestingly, cross-linking A395C to A367C appears to abolish transport, whereas cross-linking A395C to A440C lowers the affinities for glutamate and dl-TBOA but does not change the maximal transport rate. Additionally, glutamate and dl-TBOA binding prevent cross-linking in both double cysteine transporters, whereas sodium binding facilitates cross-linking in the A395C/A367C transporter. These data provide evidence that within each subunit of EAAT1, Ala-395 in TM7 resides close to a residue at the tip of each re-entrant loop (HP1 and HP2) and that these residues are repositioned relative to one another at different steps in the transport cycle. Such behavior likely reflects rearrangements in the tertiary structure of the translocation pore during transport and thus provides constraints for modeling the structural dynamics associated with transport.
NASA Astrophysics Data System (ADS)
Perez, R. Navarro; Schunck, N.; Lasseri, R.-D.; Zhang, C.; Sarich, J.
2017-11-01
We describe the new version 3.00 of the code HFBTHO that solves the nuclear Hartree-Fock (HF) or Hartree-Fock-Bogolyubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the full Gogny force in both particle-hole and particle-particle channels, (ii) the calculation of the nuclear collective inertia at the perturbative cranking approximation, (iii) the calculation of fission fragment charge, mass and deformations based on the determination of the neck, (iv) the regularization of zero-range pairing forces, (v) the calculation of localization functions, (vi) an MPI interface for large-scale mass table calculations. Program Files doi:http://dx.doi.org/10.17632/c5g2f92by3.1 Licensing provisions: GPL v3 Programming language: FORTRAN-95 Journal reference of previous version: M.V. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, and S. Wild, Comput. Phys. Commun. 184 (2013). Does the new version supersede the previous one: Yes Summary of revisions: 1. the Gogny force in both particle-hole and particle-particle channels was implemented; 2. the nuclear collective inertia at the perturbative cranking approximation was implemented; 3. fission fragment charge, mass and deformations were implemented based on the determination of the position of the neck between nascent fragments; 4. the regularization method of zero-range pairing forces was implemented; 5. the localization functions of the HFB solution were implemented; 6. an MPI interface for large-scale mass table calculations was implemented. Nature of problem: HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of the energy density functional (EDF) approach to atomic nuclei, where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton intrinsic densities. In the present version of HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated at the Hartree-Fock-Bogolyubov (HFB) approximation. Constraints on the nuclear shape allow probing the potential energy surface of the nucleus as needed, e.g., for the description of shape isomers or fission. The implementation of a local scale transformation of the single-particle basis in which the HFB solutions are expanded provides a tool to properly compute the structure of weakly bound nuclei. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasiparticle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogolyubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions or the finite-range Gogny force until a self-consistent solution is found. A previous version of the program was presented in M.V. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, and S. Wild, Comput. Phys. Commun. 184 (2013) 1592-1604, with much of the formalism presented in the original paper M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63.
Additional comments: The user must have access to (i) the LAPACK subroutines DSYEVR, DSYEVD, DSYTRF and DSYTRI, and their dependencies, which compute eigenvalues and eigenfunctions of real symmetric matrices, (ii) the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and (iii) the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
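As a rough illustration of the solution method described above, the following Python sketch (the actual HFBTHO code is FORTRAN-95 and solves the full HFB problem) shows a generic diagonalize-rebuild-mix self-consistency loop; the h_of_rho callable, the one-body density form and the mixing factor are simplifying assumptions, not HFBTHO's.

```python
import numpy as np

def scf_loop(h_of_rho, rho0, n_occ, tol=1e-8, max_iter=200, mix=0.5):
    """Generic mean-field self-consistency loop: build h[rho], diagonalize,
    rebuild the density from the lowest n_occ orbitals, mix, repeat."""
    rho = rho0
    for _ in range(max_iter):
        h = h_of_rho(rho)               # mean-field Hamiltonian from the current density
        eps, c = np.linalg.eigh(h)      # HFBTHO diagonalizes the full HFB matrix instead
        occ = c[:, :n_occ]
        rho_new = occ @ occ.T
        if np.linalg.norm(rho_new - rho) < tol:
            return eps, rho_new
        rho = mix * rho_new + (1.0 - mix) * rho  # linear mixing stabilizes convergence
    raise RuntimeError("self-consistent loop did not converge")
```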
Vestergaard, Rikke Falsig; Søballe, Kjeld; Hasenkam, John Michael; Stilling, Maiken
2018-05-18
A small, but unstable, saw-gap may hinder bone-bridging and induce development of painful sternal dehiscence. We propose the use of Radiostereometric Analysis (RSA) for evaluation of sternal instability and present a method validation. Four bone analogs (phantoms) were sternotomized and tantalum beads were inserted in each half. The models were reunited with wire cerclage and placed in a radiolucent separation device. Stereoradiographs (n = 48) of the phantoms in 3 positions were recorded at 4 imposed separation points. The accuracy and precision were compared statistically and presented as translations along the 3 orthogonal axes. Seven sternotomized patients were evaluated for clinical RSA precision by double-examination stereoradiographs (n = 28). In the phantom study, we found no systematic error (p > 0.3) between the three phantom positions, and the precision for evaluation of sternal separation was 0.02 mm. Phantom accuracy was a mean of 0.13 mm (SD 0.25). In the clinical study, we found a detection limit of 0.42 mm for sternal separation and of 2 mm for anterior-posterior dislocation of the sternal halves for the individual patient. RSA is a precise and low-dose imaging modality feasible for clinical evaluation of sternal stability in research. ClinicalTrials.gov Identifier: NCT02738437 , retrospectively registered.
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2015-01-01
This paper presents an overview of the sixth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This algorithm is referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm adds the state-based capability in support of evolving industry standards relating to airborne self-spacing.
Precision Departure Release Capability (PDRC) Overview and Results: NASA to FAA Research Transition
NASA Technical Reports Server (NTRS)
Engelland, Shawn; Davis, Tom.
2013-01-01
NASA researchers developed the Precision Departure Release Capability (PDRC) concept to improve the tactical departure scheduling process. The PDRC system comprises: 1) a surface automation system that computes ready time predictions and departure runway assignments, 2) an en route scheduling automation tool that uses this information to estimate ascent trajectories to the merge point and computes release times, and 3) an interface that provides two-way communication between the two systems. To minimize technology transfer issues and facilitate its adoption by TMCs and Frontline Managers (FLM), NASA developed the PDRC prototype using the Surface Decision Support System (SDSS) as the Tower surface automation tool, a research version of the FAA TMA (RTMA) as the en route automation tool, and a digital interface between the two DSTs to facilitate coordination.
Slotta, J E; Schilling, M K; Ghadimi, M; Kollmar, O
2015-08-01
Since September 1st, 2009, the most recent version of the German "Betreuungsrechtsänderungsgesetz" has been in force. It precisely sets out how physicians and nursing staff have to deal with a written declaration of a patient's will. This new law focuses in a special way on advance directives, describes the precise rules for the authors of an advance directive and shows both its sphere of action and its limitations. This article aims to give an overview of the legal scope of advance directives, and to illustrate potential limitations and conflicts. Furthermore, it shows the commitments and rights of the medical team against the background of an existing advance directive.
DORIS/Jason-2: Better than 10 cm on-board orbits available for Near-Real-Time Altimetry
NASA Astrophysics Data System (ADS)
Jayles, C.; Chauveau, J. P.; Rozo, F.
2010-12-01
DIODE (DORIS Immediate Orbit on-board Determination) is a real-time on-board orbit determination software package, embedded in the DORIS receiver. The purpose of this paper is to focus on DIODE performances. After a description of the recent DORIS evolutions, we detail how compliance with specifications is verified during extensive ground tests before the launch, then during the in-flight commissioning phase just after the launch, and how well they are met in the routine phase and today. Future improvements are also discussed for Jason-2 as well as for the next missions. The complete DORIS ground validation using the DORIS simulator and new DORIS test equipment has shown prior to the Jason-2 flight that every functional requirement was fulfilled, and also that better than 10 cm real-time DIODE orbits would be achieved on-board Jason-2. The first year of Jason-2 confirmed this, and after correction of a slowly evolving polar motion error at the end of the commissioning phase, the DIODE on-board orbits are indeed better than the 10 cm specification: in the beginning of the routine phase, the discrepancy was already 7.7 cm Root-Mean-Square (RMS) in the radial component as compared to the final Precise Orbit Ephemerides (POE) orbit. Since the first day of Jason-2 cycle 1, the real-time DIODE orbits have been delivered in the altimetry fast delivery products. Their accuracy and their 100% availability make them a key input to fairly precise Near-Real-Time Altimetry processing. Time-tagging is at the microsecond level. In parallel, a few corrections (quaternion problem) and improvements have been gathered in an enhanced version of DIODE, which is already implemented and validated. With this new version, a 5 cm radial accuracy is achieved during ground validation over more than Jason-2's first year (cycles 1-43, from July 12th, 2008 to September 11th, 2009). The Seattle Ocean Surface Topography Science Team Meeting (OSTST) has recommended an upload of this v4.02 version on-board Jason-2 in order to benefit from more accurate real-time orbits. For the future, perhaps the most important point of this work is that a 9 mm consistency is observed on ground between simulated and adjusted orbits, proving that the DORIS measurement is very precisely and properly modelled in the DIODE navigation software. This implies that improvement of DIODE accuracy is still possible and should be driven by enhancement of the physical models: forces and perturbations of the satellite movement, and radio-frequency phenomena perturbing the measurements. A 2-cm accuracy is possible with future versions, if analysis and model improvements continue to progress.
Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz
NASA Astrophysics Data System (ADS)
Nikitenko, Ya.
2016-11-01
Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities to study the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are given. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions for the precision of the simple mean estimator of the neutrino direction, for normal and exponential distributions, for a finite sample and for the limiting case of many events, have been obtained.
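A toy Monte Carlo sketch of the simple mean estimator mentioned above; the isotropic smearing width is illustrative and is not Double Chooz's detector response.

```python
import numpy as np

def mean_direction(unit_vecs):
    """Simple mean estimator: normalize the average of per-event unit vectors."""
    m = unit_vecs.mean(axis=0)
    return m / np.linalg.norm(m)

rng = np.random.default_rng(0)
true_dir = np.array([0.0, 0.0, 1.0])
vecs = true_dir + 0.8 * rng.standard_normal((1000, 3))   # broad per-event smearing
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
est = mean_direction(vecs)
angle = np.degrees(np.arccos(np.clip(est @ true_dir, -1.0, 1.0)))
print("reconstructed direction is", angle, "deg from the true one")
```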
Masses of 130Te and 130Xe and Double-β-Decay Q Value of 130Te
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redshaw, Matthew; Mount, Brianna J.; Myers, Edmund G.
The atomic masses of 130Te and 130Xe have been obtained by measuring cyclotron frequency ratios of pairs of triply charged ions simultaneously trapped in a Penning trap. The results, with 1 standard deviation uncertainty, are M(130Te) = 129.906 222 744(16) u and M(130Xe) = 129.903 509 351(15) u. From the mass difference the double-β-decay Q value of 130Te is determined to be Qββ(130Te) = 2527.518(13) keV. This is a factor of 150 more precise than the result of the AME2003 [G. Audi et al., Nucl. Phys. A729, 337 (2003)].
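As a quick consistency check of the quoted numbers (a sketch, assuming the standard conversion 1 u ≈ 931494.10 keV/c²):

```latex
Q_{\beta\beta} = \left[ M(^{130}\mathrm{Te}) - M(^{130}\mathrm{Xe}) \right] c^{2}
  = (129.906\,222\,744 - 129.903\,509\,351)\,\mathrm{u} \times 931\,494.10\ \mathrm{keV/u}
  \approx 2527.5\ \mathrm{keV}
```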
Micrometeorite penetration effects in gold foil
NASA Technical Reports Server (NTRS)
Hallgren, D. S.; Radigan, W.; Hemenway, C. L.
1976-01-01
Penetration structures revealed by a Skylab experiment dealing with exposure of single and double layers of 500-800 Å thick gold foil to micrometeorites are examined. Examination of all double-layered gold foils revealed that particles producing holes of any type greater than 5 microns in diameter in the first foil break up into many fragments, which in turn produce many more holes in the second foil. Evidence of an original particle is not found on any stainless steel plate below the foils, except in one instance. A precise relationship between the size of the event and the mass of the particle producing it could not be determined due to the extreme morphological variety in penetration effects. Fluxes from gold foil and crater experiments are briefly discussed.
Wide baseline stereo matching based on double topological relationship consistency
NASA Astrophysics Data System (ADS)
Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang
2009-07-01
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. Here, a novel scheme called double topological relationship consistency (DCTR) is presented. The combination of double topological configurations includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods thanks to its strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been located in very different orientations. Also, epipolar geometry can be recovered using RANSAC, possibly the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of this method are demonstrated in wide-baseline experiments on the image pairs.
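The DCTR scheme itself is not spelled out in this abstract; as a hedged baseline for the RANSAC step it mentions, the following sketch estimates the fundamental matrix from placeholder matches with OpenCV (the point arrays stand in for real wide-baseline feature correspondences).

```python
import cv2
import numpy as np

# Placeholder matches standing in for real wide-baseline feature correspondences:
rng = np.random.default_rng(0)
pts1 = (rng.random((100, 2)) * 640.0).astype(np.float32)
pts2 = pts1 + rng.normal(0.0, 1.0, pts1.shape).astype(np.float32)

# RANSAC fundamental-matrix estimation; 'mask' flags geometrically consistent matches.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
if mask is not None:
    print("epipolar-consistent matches:", int(mask.sum()), "of", len(pts1))
```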
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, S. G.; Marsh, T. R.; Gaensicke, B. T.
Using Liverpool Telescope+RISE photometry we identify the 2.78 hr period binary star CSS 41177 as a detached eclipsing double white dwarf binary with a 21,100 K primary star and a 10,500 K secondary star. This makes CSS 41177 only the second known eclipsing double white dwarf binary after NLTT 11748. The 2 minute long primary eclipse is 40% deep and the secondary eclipse 10% deep. From Gemini+GMOS spectroscopy, we measure the radial velocities of both components of the binary from the Hα absorption line cores. These measurements, combined with the light curve information, yield white dwarf masses of M1 = 0.283 ± 0.064 M⊙ and M2 = 0.274 ± 0.034 M⊙, making them both helium core white dwarfs. As an eclipsing, double-lined spectroscopic binary, CSS 41177 is ideally suited to measuring precise, model-independent masses and radii. The two white dwarfs will merge in roughly 1.1 Gyr to form a single sdB star.
NASA Astrophysics Data System (ADS)
Kiris, Tugba; Akbulut, Saadet; Kiris, Aysenur; Gucin, Zuhal; Karatepe, Oguzhan; Bölükbasi Ates, Gamze; Tabakoǧlu, Haşim Özgür
2015-03-01
In order to develop minimally invasive, fast and precise diagnostic and therapeutic methods in medicine using optical techniques, the first step is to examine how light propagates, scatters and is transmitted through the medium. So as to find appropriate wavelengths, it is necessary to correctly determine the optical properties of tissues. The aim of this study is to measure the optical properties of both cancerous and normal ex-vivo pancreatic tissues. Results will be compared to determine how cancerous and normal tissues respond to different wavelengths. A double-integrating-sphere system and the inverse adding-doubling (IAD) computational technique were used in the study. Absorption and reduced scattering coefficients of normal and cancerous pancreatic tissues were measured within the range of 500-650 nm. Statistically significant differences between cancerous and normal tissues were obtained at 550 nm and 630 nm for the absorption coefficients. On the other hand, no statistically significant differences were found for the scattering coefficients at any wavelength.
A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures
NASA Astrophysics Data System (ADS)
Youssef, Rasha M.; Maher, Hadir M.
2008-10-01
A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
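A minimal sketch of the double divisor step described above, assuming spectra sampled on a common wavelength grid; the function name and the single-harmonic choice are illustrative, not the paper's.

```python
import numpy as np

def double_divisor_coeffs(mix_abs, std_a, std_b, harmonic=1):
    """Divide the mixture spectrum by the summed standard spectrum of two of the
    three components, then return trigonometric Fourier coefficients of the
    ratio spectrum; their magnitude tracks the third component's concentration."""
    ratio = mix_abs / (std_a + std_b)            # double divisor ratio spectrum
    n = len(ratio)
    x = 2.0 * np.pi * harmonic * np.arange(n) / n
    a = (2.0 / n) * np.sum(ratio * np.cos(x))    # cosine Fourier coefficient
    b = (2.0 / n) * np.sum(ratio * np.sin(x))    # sine Fourier coefficient
    return a, b
```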
Ma, Yan; Li, Peibo; Chen, Dawei; Fang, Tiezheng; Li, Haitian; Su, Weiwei
2006-01-13
A highly sensitive and specific electrospray ionization (ESI) liquid chromatography-tandem mass spectrometry (LC/MS/MS) method for the quantitation of naringenin (NAR) was developed and validated, together with an explanation for the double-peak phenomenon. NAR was extracted from rat plasma and tissues, along with the internal standard (IS), hesperidin, with ethyl acetate. The analytes were analyzed in the multiple-reaction-monitoring (MRM) mode using the precursor/product ion pairs of m/z 273.4/151.3 for NAR and m/z 611.5/303.3 for the IS. The assay was linear over the concentration range of 5-2500 ng/mL. The lower limit of quantification was 5 ng/mL, sufficient for plasma pharmacokinetics of NAR in rats. Within- and between-run accuracy and precision showed good reproducibility. When NAR was administered orally, only a small fraction, predominantly as its glucuronides, entered the plasma circulation. A double-peak phenomenon in the plasma concentration-time curve led to the relatively slow elimination of NAR from plasma. The results showed a linear relationship between the AUC of total NAR and dosage, and the double peaks are mainly due to enterohepatic circulation.
Fluoroscopic Placement of Double-Pigtail Ureteral Stents
Chen, Gregory L.
2001-01-01
Purpose: Double-pigtail ureteral stents are typically placed cystoscopically after ureteroscopy. We describe a technique for fluoroscopic placement of ureteral stents and demonstrate its use in a non-randomized prospective study. Materials and methods: Double-pigtail stents were placed either fluoroscopically or cystoscopically in 121 consecutive patients. In the fluoroscopic method, the stent was placed over a guide wire using a stent pusher without the use of cystoscopy. In the cystoscopic method, stents were placed through the working channel of the cystoscope under vision. The procedure, stent length, width, type, method, ureteral dilation, and use of a retrieval string were noted. Results: A wide range of stent sizes was used. The success rate with fluoroscopic placement of double-pigtail ureteral stents was 100% (89 of 89 cases). No stents migrated or required replacement. Stents were placed after ureteroscopic laser lithotripsy (53/89) and ureteroscopic tumor treatment (22/89). Cystoscopic visualization was used in 32 additional procedures requiring precise control (15 ureteral strictures and nine retrograde endopyelotomies). Conclusions: The fluoroscopic placement of ureteral stents is a safe and simple technique with a very high success rate. We have used cystoscopic placement only after incisional procedures such as retrograde endopyelotomy, stricture incision or ureterotomy. PMID:18493562
Wenchuan Event Detection And Localization Using Waveform Correlation Coupled With Double Difference
NASA Astrophysics Data System (ADS)
Slinkard, M.; Heck, S.; Schaff, D. P.; Young, C. J.; Richards, P. G.
2014-12-01
The well-studied Wenchuan aftershock sequence triggered by the May 12, 2008, Ms 8.0 mainshock offers an ideal test case for evaluating the effectiveness of using waveform correlation coupled with double difference relocation to detect and locate events in a large aftershock sequence. We use Sandia's SeisCorr detector to process 3 months of data recorded by permanent IRIS and temporary ASCENT stations, using templates from events listed in a global catalog to find similar events in the raw data stream. We then take the detections and relocate them using the double difference method. We explore both the performance that can be expected using just a small number of stations and the benefits of reprocessing a well-studied sequence such as this one with waveform correlation to find even more events. We benchmark our results against previously published results describing relocations of regional catalog data. Before starting this project, we had examples where, with just a few stations at far-regional distances, waveform correlation combined with double difference did an impressive job of detecting and locating events with precision at the level of a few hundred, and even tens of, meters.
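SeisCorr's internals are not given in this abstract; the sketch below illustrates the underlying normalized cross-correlation detection idea, with an illustrative threshold and none of the real data-handling details.

```python
import numpy as np

def correlation_detections(data, template, threshold=0.8):
    """Slide a normalized cross-correlation of 'template' over 'data' and return
    the sample offsets where the correlation coefficient exceeds 'threshold'."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.zeros(len(data) - m + 1)
    for i in range(len(cc)):
        w = data[i:i + m]
        s = w.std()
        if s > 0:
            cc[i] = np.dot(t, (w - w.mean()) / s) / m  # Pearson correlation coefficient
    return np.flatnonzero(cc >= threshold), cc
```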
Chalifoux, Laurie A; Bauchat, Jeanette R; Higgins, Nicole; Toledo, Paloma; Peralta, Feyce M; Farrer, Jason; Gerber, Susan E; McCarthy, Robert J; Sullivan, John T
2017-10-01
Breech presentation is a leading cause of cesarean delivery. The use of neuraxial anesthesia increases the success rate of external cephalic version procedures for breech presentation and reduces cesarean delivery rates for fetal malpresentation. Meta-analysis suggests that higher-dose neuraxial techniques increase external cephalic version success to a greater extent than lower-dose techniques, but no randomized study has evaluated the dose-response effect. We hypothesized that increasing the intrathecal bupivacaine dose would be associated with increased external cephalic version success. We conducted a randomized, double-blind trial to assess the effect of four intrathecal bupivacaine doses (2.5, 5.0, 7.5, 10.0 mg) combined with fentanyl 15 μg on the success rate of external cephalic version for breech presentation. Secondary outcomes included mode of delivery, indication for cesarean delivery, and length of stay. A total of 240 subjects were enrolled, and 239 received the intervention. External cephalic version was successful in 123 (51.5%) of 239 patients. Compared with bupivacaine 2.5 mg, the odds (99% CI) for a successful version were 1.0 (0.4 to 2.6), 1.0 (0.4 to 2.7), and 0.9 (0.4 to 2.4) for bupivacaine 5.0, 7.5, and 10.0 mg, respectively (P = 0.99). There were no differences in the cesarean delivery rate (P = 0.76) or indication for cesarean delivery (P = 0.82). Time to discharge was increased 60 min (16 to 116 min) with bupivacaine 7.5 mg or higher as compared with 2.5 mg (P = 0.004). A dose of intrathecal bupivacaine greater than 2.5 mg does not lead to an additional increase in external cephalic procedural success or a reduction in cesarean delivery.
Track chambers based on precision drift tubes housed inside 30 mm mylar pipe
NASA Astrophysics Data System (ADS)
Borisov, A.; Bozhko, N.; Fakhrutdinov, R.; Kozhin, A.; Leontiev, B.; Levin, A.
2014-06-01
We describe drift chambers consisting of 3 layers of 30 mm (OD) drift tubes made of double-sided aluminized mylar film with a thickness of 0.125 mm. A single drift tube is a self-supporting structure withstanding the 350 g tension of the 50 micron sense wire located in the tube center with 10 micron precision with respect to the end-plug outer surface. Such tubes allow the creation of drift chambers with a small amount of material; the construction of such chambers does not require rigid frames. Twenty-six chambers with working areas from 0.8 × 1.0 to 2.5 × 2.0 m2, comprising 4440 tubes, have been manufactured for experiments at the 70 GeV proton accelerator at IHEP (Protvino).
Integrating DNA strand-displacement circuitry with DNA tile self-assembly
Zhang, David Yu; Hariadi, Rizal F.; Choi, Harry M.T.; Winfree, Erik
2013-01-01
DNA nanotechnology has emerged as a reliable and programmable way of controlling matter at the nanoscale through the specificity of Watson–Crick base pairing, allowing both complex self-assembled structures with nanometer precision and complex reaction networks implementing digital and analog behaviors. Here we show how two well-developed frameworks, DNA tile self-assembly and DNA strand-displacement circuits, can be systematically integrated to provide programmable kinetic control of self-assembly. We demonstrate the triggered and catalytic isothermal self-assembly of DNA nanotubes over 10 μm long from precursor DNA double-crossover tiles activated by an upstream DNA catalyst network. Integrating more sophisticated control circuits and tile systems could enable precise spatial and temporal organization of dynamic molecular structures. PMID:23756381
NASA Astrophysics Data System (ADS)
Dils, B.; Buchwitz, M.; Reuter, M.; Schneising, O.; Boesch, H.; Parker, R.; Guerlet, S.; Aben, I.; Blumenstock, T.; Burrows, J. P.; Butz, A.; Deutscher, N. M.; Frankenberg, C.; Hase, F.; Hasekamp, O. P.; Heymann, J.; De Mazière, M.; Notholt, J.; Sussmann, R.; Warneke, T.; Griffith, D.; Sherlock, V.; Wunch, D.
2014-06-01
Column-averaged dry-air mole fractions of carbon dioxide and methane have been retrieved from spectra acquired by the TANSO-FTS (Thermal And Near-infrared Sensor for carbon Observations-Fourier Transform Spectrometer) and SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Cartography) instruments on board GOSAT (Greenhouse gases Observing SATellite) and ENVISAT (ENVIronmental SATellite), respectively, using a range of European retrieval algorithms. These retrievals have been compared with data from ground-based high-resolution Fourier transform spectrometers (FTSs) from the Total Carbon Column Observing Network (TCCON). The participating algorithms are the weighting function modified differential optical absorption spectroscopy (DOAS) algorithm (WFMD, University of Bremen), the Bremen optimal estimation DOAS algorithm (BESD, University of Bremen), the iterative maximum a posteriori DOAS algorithm (IMAP, Jet Propulsion Laboratory (JPL) and Netherlands Institute for Space Research (SRON)), the proxy and full-physics versions of SRON's RemoTeC algorithm (SRPR and SRFP, respectively) and the proxy and full-physics versions of the University of Leicester's adaptation of the OCO (Orbiting Carbon Observatory) algorithm (OCPR and OCFP, respectively). The goal of this algorithm inter-comparison was to identify strengths and weaknesses of the various so-called round-robin data sets generated with the various algorithms so as to determine which of the competing algorithms would proceed to the next round of the European Space Agency's (ESA) Greenhouse Gas Climate Change Initiative (GHG-CCI) project, which is the generation of the so-called Climate Research Data Package (CRDP), the first version of the Essential Climate Variable (ECV) "greenhouse gases" (GHGs). For XCO2, all algorithms reach the precision requirements for inverse modelling (< 8 ppm), with only WFMD having a lower precision (4.7 ppm) than the other algorithm products (2.4-2.5 ppm). When looking at the seasonal relative accuracy (SRA, variability of the bias in space and time), none of the algorithms has reached the demanding < 0.5 ppm threshold. For XCH4, the precision of both SCIAMACHY products (50.2 ppb for IMAP and 76.4 ppb for WFMD) fails to meet the < 34 ppb threshold for inverse modelling, but note that this work focuses on the period after the 2005 SCIAMACHY detector degradation. The GOSAT XCH4 precision ranges between 18.1 and 14.0 ppb. Looking at the SRA, all GOSAT algorithm products reach the < 10 ppb threshold (values ranging between 5.4 and 6.2 ppb). For SCIAMACHY, IMAP and WFMD have a SRA of 17.2 and 10.5 ppb, respectively.
Smith, Otto R F; Alves, Daniele E; Knapstad, Marit; Haug, Ellen; Aarø, Leif E
2017-05-12
Mental well-being is an important, yet understudied, area of research, partly due to lack of appropriate population-based measures. The Warwick-Edinburgh Mental Well-being Scale (WEMWBS) was developed to meet the need for such a measure. This article assesses the psychometric properties of the Norwegian version of the WEMWBS, and its short version (SWEMWBS), among a sample of primary health care patients who participated in the evaluation of Prompt Mental Health Care (PMHC), a novel Norwegian mental health care program aimed at increasing access to treatment for anxiety and depression. Forward and back-translations were conducted, and 1168 patients filled out an electronic survey including the WEMWBS and other mental health scales. The original dataset was randomly divided into a training sample (≈70%) and a validation sample (≈30%). Parallel analysis and confirmatory factor analysis were carried out to assess construct validity and precision. The final models were cross-validated in the validation sample by specifying a model with fixed parameters based on the estimates from the training set. Criterion validity and measurement invariance of the (S)WEMWBS were examined as well. Support was found for the single-factor hypothesis in both scales, but similar to previous studies, only after a number of residuals were allowed to correlate (WEMWBS: CFI = 0.99; RMSEA = 0.06, SWEMWBS: CFI = .99; RMSEA = 0.06). Further analyses showed that the correlated residuals did not alter the meaning of the underlying construct and did not substantially affect the associations with other variables. Precision was high for both versions of the WEMWBS (>.80), and scalar measurement invariance was obtained for gender and age group. The final measurement models displayed adequate fit statistics in the validation sample as well. Correlations with other mental health scales were largely in line with expectations. No statistically significant differences were found in mean latent (S)WEMWBS scores for age and gender. Both WEMWBS scales appear to be valid and precise instruments for measuring mental well-being in primary health care patients. The results encourage the use of mental well-being as an outcome in future epidemiological, clinical, and evaluation studies, and may as such be valuable for both research and public health practice.
A transportable cold atom inertial sensor for space applications
NASA Astrophysics Data System (ADS)
Ménoret, V.; Geiger, R.; Stern, G.; Cheinet, P.; Battelier, B.; Zahzam, N.; Pereira Dos Santos, F.; Bresson, A.; Landragin, A.; Bouyer, P.
2017-11-01
Atom interferometry has hugely benefitted from advances made in cold atom physics over the past twenty years, and ultra-precise quantum sensors are now available for a wide range of applications [1]. In particular, cold atom interferometers have shown excellent performance in the field of acceleration and rotation measurements [2,3], and are foreseen as promising candidates for navigation, geophysics, geo-prospecting and tests of fundamental physics such as the Universality of Free Fall (UFF). In order to carry out a test of the UFF with atoms as test masses, one needs to compare precisely the accelerations of two atoms with different masses as they fall in the Earth's gravitational field. The sensitivity of atom interferometers scales like the square of the time during which the atoms are in free fall, and on the ground this interrogation time is limited by the size of the experimental setup to a fraction of a second. Sending an atom interferometer into space would allow for several seconds of excellent free-fall conditions, and tests of the UFF could be carried out with precisions as low as 10^-15 [4]. However, cold atom experiments rely on complex laser systems, which are needed to cool down and manipulate the atoms, and these systems are usually very sensitive to temperature fluctuations and vibrations. In addition, when operating an inertial sensor, vibrations are a major issue, as they deteriorate the performance of the instrument. This is why cold atom interferometers are usually used in ground-based facilities, which provide stable enough environments. In order to carry out airborne or space-borne measurements, one has to design an instrument which is both compact and stable, and such that vibrations induced by the platform will not deteriorate the sensitivity of the sensor. We report on the operation of an atom interferometer on board a plane carrying out parabolic flights (Airbus A300 Zero-G, operated by Novespace). We have constructed a compact and stable laser setup, which is well suited for onboard applications. Our goal is to implement a dual-species Rb-K atom interferometer in order to carry out a test of the UFF in the plane. To this end, we are designing a dual-wavelength laser source, which will enable us to cool down and coherently manipulate the quantum states of both atoms. We have successfully tested a preliminary version of the source and obtained a double-species magneto-optical trap (MOT).
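A small numeric illustration of the quoted quadratic scaling with interrogation time, assuming the standard Mach-Zehnder accelerometer phase Δφ = k_eff g T² of a two-photon Raman interferometer; the values are illustrative, not the experiment's.

```python
import numpy as np

# Mach-Zehnder atom-interferometer phase for an accelerometer: dphi = k_eff * g * T^2.
# Standard two-photon Raman geometry assumed; numbers are illustrative only.
k_eff = 2.0 * (2.0 * np.pi / 780e-9)   # two-photon wavevector near the Rb D2 line, rad/m
g = 9.81                               # m/s^2
for T in (0.05, 1.0, 5.0):             # ground-based vs. space-borne interrogation times, s
    print(f"T = {T} s -> phase = {k_eff * g * T**2:.3e} rad")
```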
Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard
2011-07-01
There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm, the "Harborview AIS Mapping Program (HAMP)", to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from the AIS 2005 to the AIS 98 version. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP-derived and manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
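HAMP itself is not reproduced here; the following sketch shows the kappa agreement computation on illustrative, made-up AIS codes, assuming scikit-learn is available.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative (not actual) per-injury codes from the human coder and the converter:
manual_codes = ["450203.2", "854151.2", "140694.4", "450203.2", "816011.3"]
mapped_codes = ["450203.2", "854151.2", "140694.3", "450203.2", "816011.3"]
print("kappa:", cohen_kappa_score(manual_codes, mapped_codes))
```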
NASA Astrophysics Data System (ADS)
Mann, J. L.; Kelly, W. R.
2006-05-01
A new analytical technique for the determination of δ34S will be described. The technique is based on the production of singly charged arsenic sulfide molecular ions (AsS+) by thermal ionization using silica gel as an emitter, and combines multiple-collector thermal ionization mass spectrometry (MC-TIMS) with a 33S/36S double spike to correct instrumental fractionation. Because the double spike is added to the sample before chemical processing, both the isotopic composition and the sulfur concentration are measured simultaneously. The accuracy and precision of the double spike technique are comparable to or better than modern gas source mass spectrometry, but it requires about a factor of 10 less sample. Δ33S effects can be determined directly in an unspiked sample without any assumptions about the value of k (mass dependent fractionation factor), which is currently required by gas source mass spectrometry. Three international sulfur standards (IAEA-S-1, IAEA-S-2, and IAEA-S-3) were measured to evaluate the precision and accuracy of the new technique and to evaluate the consensus values for these standards. Two different double spike preparations were used. The δ34S values, reported relative to Vienna Canyon Diablo Troilite (VCDT) as δ34S (‰) = [((34S/32S)sample/(34S/32S)VCDT) - 1] × 1000, with (34S/32S)VCDT = 0.0441626, were -0.32‰ ± 0.04‰ (1σ, n=4) and -0.31‰ ± 0.13‰ (1σ, n=8) for IAEA-S-1, 22.65‰ ± 0.04‰ (1σ, n=7) and 22.60‰ ± 0.06‰ (1σ, n=5) for IAEA-S-2, and -32.47‰ ± 0.07‰ (1σ, n=8) for IAEA-S-3. The amount of natural sample used for these analyses ranged from 0.40 μmoles to 2.35 μmoles. Each standard showed less than 0.5‰ variability (IAEA-S-1 < 0.4‰, IAEA-S-2 < 0.2‰, and IAEA-S-3 < 0.2‰). Our values for S-1 and S-2 are in excellent agreement with the consensus values and the values reported by other laboratories using both SF6 and SO2. Our value for S-3 differs statistically from the Institute for Reference Materials and Measurements (IRMM) value and is slightly lower than the currently accepted consensus value (-32.3). Because the technique is based on thermal ionization of AsS+, and As is mononuclidic, corrections for interferences or for scale contraction/expansion are not required. The availability of MC-TIMS instruments in laboratories around the world makes this technique immediately available to a much larger scientific community who require highly accurate and precise measurements of sulfur.
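A minimal sketch of the δ34S definition quoted above; the example ratio is back-computed from the reported IAEA-S-2 value, purely for illustration.

```python
R_VCDT = 0.0441626  # (34S/32S) of Vienna Canyon Diablo Troilite, as quoted above

def delta34S_permil(r_sample):
    """delta-34S in per mil relative to VCDT."""
    return (r_sample / R_VCDT - 1.0) * 1000.0

# Back-computed example: a ratio of R_VCDT * 1.02265 reproduces the
# IAEA-S-2 value of +22.65 per mil reported above.
print(delta34S_permil(R_VCDT * 1.02265))
```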
Orrell, John; Hoppe, Eric
2018-01-26
Working as part of a collaborative team, PNNL is bringing its signature capability in ultra-low-level detection to help search for a rare form of radioactive decay, never before detected, called "neutrinoless double beta decay" in germanium. If observed, it would demonstrate that neutrinos are Majorana-type particles. This discovery would show that neutrinos are unique among fundamental particles, having a property whereby the matter and anti-matter versions of this particle are indistinguishable. Physicist John L. Orrell explains how the team relies on the Shallow Underground Laboratory to conduct the research.
Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding
NASA Astrophysics Data System (ADS)
Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin
We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.
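The generalised category attack is not detailed in this abstract; as a hedged illustration of the first-order-statistics idea it builds on, here is a classic pair-of-values chi-square sketch on integer-valued coefficients (a simplified relative of such attacks, not the authors' method).

```python
import numpy as np

def pov_chi_square(coeffs):
    """Pair-of-values chi-square on an integer coefficient histogram: LSB embedding
    tends to equalize the counts within each value pair (2k, 2k+1)."""
    hist, _ = np.histogram(coeffs, bins=np.arange(-64, 66))  # values -64..64
    chi2 = 0.0
    for k in range(0, len(hist) - 1, 2):
        expected = (hist[k] + hist[k + 1]) / 2.0
        if expected > 0.0:
            chi2 += (hist[k] - expected) ** 2 / expected
    return chi2  # small values suggest equalized pairs, i.e. possible embedding
```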
Science, conscience, consciousness.
Hennig, Boris
2010-01-01
Descartes' metaphysics lays the foundation for the special sciences, and the notion of consciousness ("conscientia") belongs to metaphysics rather than to psychology. I argue that as a metaphysical notion, "consciousness" refers to an epistemic version of moral conscience. As a consequence, the activity on which science is based turns out to be conscientious thought. The consciousness that makes science possible is a double awareness: the awareness of what one is thinking, of what one should be doing, and of the possibility of a gap between the two.
Lacasse, Anaïs; Roy, Jean-Sébastien; Parent, Alexandre J.; Noushi, Nioushah; Odenigbo, Chúk; Pagé, Gabrielle; Beaudet, Nicolas; Choinière, Manon; Stone, Laura S.; Ware, Mark A.
2017-01-01
Background: To better standardize clinical and epidemiological studies about the prevalence, risk factors, prognosis, impact and treatment of chronic low back pain, a minimum data set was developed by the National Institutes of Health (NIH) Task Force on Research Standards for Chronic Low Back Pain. The aim of the present study was to develop a culturally adapted questionnaire that could be used for chronic low back pain research among French-speaking populations in Canada. Methods: The adaptation of the French Canadian version of the minimum data set was achieved according to guidelines for the cross-cultural adaptation of self-reported measures (double forward-backward translation, expert committee, pretest among 35 patients with pain in the low back region). Minor cultural adaptations were also incorporated into the English version by the expert committee (e.g., items about race/ethnicity, education level). Results: This cross-cultural adaptation provides an equivalent French-Canadian version of the minimal data set questionnaire and a culturally adapted English-Canadian version. Modifications made to the original NIH minimum data set were minimized to facilitate comparison between the Canadian and American versions. Interpretation: The present study is a first step toward the use of a culturally adapted instrument for phenotyping French- and English-speaking low back pain patients in Canada. Clinicians and researchers will recognize the importance of this standardized tool and are encouraged to incorporate it into future research studies on chronic low back pain. PMID:28401140
NASA Astrophysics Data System (ADS)
Boden, A. F.; Lane, B. F.; Creech-Eakman, M. J.; Queloz, D.; Koresko, C. D.
2000-05-01
The Palomar Testbed Interferometer (PTI) is a long-baseline near-infrared interferometer located at Palomar Observatory. For the past several years we have had an ongoing program of resolving and reconstructing the visual and physical orbits of spectroscopic binary stars with PTI, with the goal of obtaining precise dynamical mass estimates and other physical parameters. We will present a number of new visual and physical orbit determinations derived from integrated reductions of PTI visibility and archival and new spectroscopic radial velocity data. The systems for which we will discuss our orbit models are: iota Pegasi (HD 210027), 64 Psc (HD 4676), 12 Boo (HD 123999), 75 Cnc (HD 78418), 47 And (HD 8374), HD 205539, BY Draconis (HDE 234677), and 3 Boo (HD 120064). All of these systems are double-lined binary systems (SB2), and integrated astrometric/radial velocity orbit modeling provides precise fundamental parameters (mass, luminosity) and system distance determinations comparable with Hipparcos precisions.
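A hedged sketch of the standard Keplerian radial-velocity model underlying such SB2 orbit fits; the parameterization is generic textbook material, not PTI's actual pipeline.

```python
import numpy as np

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = np.array(M, dtype=float)
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, P, T0, e, omega, K, gamma):
    """Keplerian RV curve of one component of a spectroscopic binary:
    period P, periastron time T0, eccentricity e, argument of periastron omega,
    semi-amplitude K, systemic velocity gamma."""
    M = 2.0 * np.pi * (((t - T0) / P) % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))
```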
Reference manual for a Requirements Specification Language (RSL), version 2.0
NASA Technical Reports Server (NTRS)
Fisher, Gene L.; Cohen, Gerald C.
1993-01-01
This report is a Reference Manual for a general-purpose Requirements Specification Language, RSL. The purpose of RSL is to specify precisely the external structure of a mechanized system and to define requirements that the system must meet. A system can be comprised of a mixture of hardware, software, and human processing elements. RSL is a hybrid of features found in several popular requirements specification languages and includes constructs for formal mathematical specification.
M. G. Dosskey; S. Neelakantan; T. G. Mueller; T. Kellerman; M. J. Helmers; E. Rienzi
2015-01-01
Spatially nonuniform runoff reduces the water quality performance of constant-width filter strips. A geographic information system (GIS)-based tool was developed and tested that employs terrain analysis to account for spatially nonuniform runoff and produce more effective filter strip designs. The computer program, AgBufferBuilder, runs with ArcGIS versions 10.0 and 10...
Cubo, E; Sáez Velasco, S; Delgado Benito, V; Ausín Villaverde, V; García Soto, X R; Trejo Gabriel Y Galán, J M; Martín Santidrián, A; Macarrón, J V; Cordero Guevara, J; Benito-León, J; Louis, E D
2011-07-01
As there are no biological markers for Autism Spectrum Disorders (ASD), screening must focus on behaviour and the presence of a markedly abnormal development or a deficiency in verbal and non-verbal social interaction and communication. To evaluate the psychometric attributes of a Spanish version of the autism domain of the Autism-Tics, AD/HD and other Comorbidities Inventory (A-TAC) scale for ASD screening. A total of 140 subjects (43% male, 57% female) aged 6-16, with ASD (n=15), Mental Retardation (n=40), Psychiatric Illness (n=22), Tics (n=12) and controls (n=51), were included for ASD screening. The predictive validity, acceptability, scale assumptions, internal consistency, and precision were analysed. The internal consistency was high (α=0.93), and the standard error was adequate (1.13 [95% CI, -1.08 to 3.34]). The mean scores of the Autism module were higher in patients diagnosed with ASD and mental disability compared to the rest of the patients (P<.001). The area under the curve was 0.96 for the ASD group. The autism domain of the A-TAC scale seems to be a reliable, valid and precise tool for ASD screening in the Spanish school population. Copyright © 2010 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
NASA Astrophysics Data System (ADS)
Lorenz, Pierre; Zhao, Xiongtao; Ehrhardt, Martin; Zagoranskiy, Igor; Zimmer, Klaus; Han, Bing
2018-02-01
Large area, high speed nanopatterning of surfaces by laser ablation is challenging due to the high accuracy required of the optical and mechanical systems to fulfil the precision of the nanopatterning process. Utilization of self-organization approaches can provide an alternative that decouples spot precision from the machined field. The laser-induced front side etching (LIFE) and laser-induced back side dry etching (LIBDE) of fused silica were studied using single and double flash nanosecond laser pulses with a wavelength of 532 nm, where the time delay Δτ of the double flash laser pulses was adjusted from 50 ns to 10 μs. The fused silica can be etched in both processes assisted by a 10 nm chromium layer. For single flash laser pulses, the etching depth Δz is linear in the laser fluence Φ and independent of the number of laser pulses in the range from 2 to 12 J/cm²: Δz = δ·Φ, with δLIFE ≈ 16 nm/(J/cm²) and δLIBDE ≈ 5.2 nm/(J/cm²) ≈ 1/3·δLIFE. For double flash laser pulses, Δz depends on the time delay Δτ and increases slightly with decreasing Δτ. Furthermore, the surface nanostructuring of fused silica using the IPSM-LIFE method (LIFE using an in-situ pre-structured metal layer) with a single double flash laser pulse was tested. The first pulse of the double flash melts the metal layer. The surface tension of the liquid metal layer drives a droplet formation and dewetting process. If the liquid-phase lifetime ΔtLF is smaller than the droplet formation time, the metal can be "frozen" in an intermediate state such as metal bar structures. The second laser pulse results in an evaporation of the metal and in a partial evaporation and melting of the fused silica surface, where the resultant structures in the fused silica surface depend on the lateral geometry of the pre-structured metal layer. Successful IPSM-LIFE structuring could be achieved assisted by a 20 nm molybdenum layer at Δτ >= 174 ns. This paves the way for high speed nanostructuring of dielectric surfaces by self-organizing processes. The different surface structures were analyzed by scanning electron microscopy (SEM) and white light interferometry (WLI).
Experimental clean combustor program, phase 1
NASA Technical Reports Server (NTRS)
Bahr, D. W.; Gleason, C. C.
1975-01-01
Full annular versions of advanced combustor designs, sized to fit within the CF6-50 engine, were defined, manufactured, and tested at high pressure conditions. Configurations were screened, and significant reductions in CO, HC, and NOx emissions levels were achieved with two of these advanced combustor design concepts. Emissions and performance data at a typical AST cruise condition were also obtained, along with combustor noise data, as part of an addendum to the basic program. The two promising combustor design approaches evolved in these efforts were the Double Annular Combustor and the Radial/Axial Combustor. With versions of these two basic combustor designs, CO and HC emissions levels at or near the target levels were obtained. Although the low target NOx emissions level was not obtained with these two advanced combustor designs, significant reductions were achieved relative to the NOx levels of current-technology combustors. Smoke emission levels below the target value were obtained.
The dynamical mass of a classical Cepheid variable star in an eclipsing binary system.
Pietrzyński, G; Thompson, I B; Gieren, W; Graczyk, D; Bono, G; Udalski, A; Soszyński, I; Minniti, D; Pilecki, B
2010-11-25
Stellar pulsation theory provides a means of determining the masses of pulsating classical Cepheid supergiants-it is the pulsation that causes their luminosity to vary. Such pulsational masses are found to be smaller than the masses derived from stellar evolution theory: this is the Cepheid mass discrepancy problem, for which a solution is missing. An independent, accurate dynamical mass determination for a classical Cepheid variable star (as opposed to type-II Cepheids, low-mass stars with a very different evolutionary history) in a binary system is needed in order to determine which is correct. The accuracy of previous efforts to establish a dynamical Cepheid mass from Galactic single-lined non-eclipsing binaries was typically about 15-30% (refs 6, 7), which is not good enough to resolve the mass discrepancy problem. In spite of many observational efforts, no firm detection of a classical Cepheid in an eclipsing double-lined binary has hitherto been reported. Here we report the discovery of a classical Cepheid in a well detached, double-lined eclipsing binary in the Large Magellanic Cloud. We determine the mass to a precision of 1% and show that it agrees with its pulsation mass, providing strong evidence that pulsation theory correctly and precisely predicts the masses of classical Cepheids.
Diode-pumped DUV cw all-solid-state laser to replace argon ion lasers
NASA Astrophysics Data System (ADS)
Zanger, Ekhard; Liu, B.; Gries, Wolfgang
2000-04-01
The slim series DELTATRAIN(TM), worldwide the first integrated cw diode-pumped all-solid-state DUV laser at 266 nm with a compact, slim design, has been developed. The slim design minimizes the DUV DPSSL footprint and thus greatly facilitates the replacement of commonly used gas ion lasers, including those with intra-cavity frequency doubling, in numerous industrial and scientific applications. Such a replacement will result in an operating cost reduction of several thousand US dollars per year for one unit. Owing to its unique geometry-invariant frequency doubling cavity, based on the LAS patent-pending DeltaConcept architecture, this DUV laser provides excellent beam-pointing stability of <2 µrad/°C and power stability of <2%. The newest design of the cavity block adopts a cemented resonator with each component positioned precisely inside a compact monolithic metal block. The automatic and precise crystal shifter ensures a long operating lifetime of >5000 hours for the whole 266 nm laser. The microprocessor-controlled power supply provides automatic control of the whole 266 nm laser, making this DUV laser a hands-off system which can meet the tough requirements posed by numerous industrial and scientific applications. It will replace the commonplace ion laser as the future DUV laser of choice.
Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology
Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.
2012-01-01
We present geometry-based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON, a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built from undistorted B-DNA double-helical domains. Simple point-and-click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments confirming that 3D triangles form well only when their geometrical strain is less than a 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction misalignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733
Precision enhancement of pavement roughness localization with connected vehicles
NASA Astrophysics Data System (ADS)
Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.
2016-02-01
Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
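To make the distance-tagging step concrete, the following minimal Python sketch (our illustration, not the authors' code; the sampling rate and bump index are hypothetical) integrates the vehicle speed to assign each inertial sample a path distance relative to the detected reference marker:

    import numpy as np

    def path_distance_tags(speeds_mps, dt_s, ref_index):
        """Distance (m) of each sample along the path, relative to the
        sample at ref_index where the reference bump was detected."""
        # Trapezoidal integration of speed gives cumulative travel distance.
        steps = 0.5 * (speeds_mps[1:] + speeds_mps[:-1]) * dt_s
        dist = np.concatenate(([0.0], np.cumsum(steps)))
        return dist - dist[ref_index]

    # Example: 10 s at ~20 m/s sampled at 100 Hz; bump detected at sample 200.
    speeds = 20.0 + 0.5 * np.random.randn(1000)
    tags = path_distance_tags(speeds, 0.01, 200)

Because the distance is referenced to a fixed physical marker, GNSS error no longer accumulates into the position tags; only speed-sensor error does.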
NASA Astrophysics Data System (ADS)
Song, Z.; Wang, Y.; Kuang, J.
2018-05-01
Field Programmable Gate Arrays (FPGAs) made with 28 nm and more advanced process technology have great potential for the implementation of high precision time-to-digital converters (TDCs), because the delay cells in the tapped delay line (TDL) used for time interpolation are getting smaller and smaller. However, the bubble problems in the TDL status are becoming more complicated, which makes it difficult to achieve TDCs on these chips with a high time precision. In this paper, we propose a novel decomposition encoding scheme, which not only solves the bubble problem easily but also has a high encoding efficiency. The potential of these chips for TDC implementation can be fully exploited with this scheme. In a Xilinx Kintex-7 FPGA chip, we implemented a TDC system with 256 TDC channels, which doubles the number of TDC channels that our previous technique could achieve. The performance of all these TDC channels was evaluated. The average RMS time precision among them is 10.23 ps in the time-interval measurement range of 0-10 ns, and their measurement throughput reaches 277 million measurements per second.
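The abstract does not spell out the decomposition encoding itself, so the Python sketch below only illustrates the generic bubble-tolerant alternative to priority encoding, namely counting the ones in the sampled delay-line status word (an assumption for illustration, not the authors' scheme):

    def encode_tdl(status_bits):
        """Fine time code from a tapped-delay-line snapshot.

        A priority encoder returns the index of the first 1->0 transition and
        is confused by 'bubbles' such as 1,1,1,0,1,1,...; counting the ones
        instead tolerates isolated misordered taps."""
        return sum(status_bits)

    print(encode_tdl([1, 1, 1, 0, 1, 1, 0, 0]))  # bubble at index 3 -> code 5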
Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E
2013-02-25
Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as that of maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20,000 frames) in 75 min.
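As a rough illustration of the position measurement described above (a sketch under assumed conventions, not the released MATLAB code; the linear angle-to-z mapping stands in for a measured calibration curve):

    import numpy as np

    def dh_localize(lobe1, lobe2, z_per_degree=10.0):
        """lobe1, lobe2: least-squares-fitted (x, y) lobe centers in nm."""
        (x1, y1), (x2, y2) = lobe1, lobe2
        x, y = 0.5 * (x1 + x2), 0.5 * (y1 + y2)           # lateral position: lobe midpoint
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # lobe-pair revolution angle
        z = z_per_degree * angle                          # calibrated angle -> axial position
        return x, y, z

    print(dh_localize((100.0, 200.0), (400.0, 350.0)))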
Coupled model simulations of climate changes in the 20th century and beyond
NASA Astrophysics Data System (ADS)
Yu, Yongqiang; Zhi, Hai; Wang, Bin; Wan, Hui; Li, Chao; Liu, Hailong; Li, Wei; Zheng, Weipeng; Zhou, Tianjun
2008-07-01
Several scenario experiments of the IPCC 4th Assessment Report (AR4) are performed with version g1.0 of the Flexible coupled Ocean-Atmosphere-Land System Model (FGOALS) developed at the Institute of Atmospheric Physics, Chinese Academy of Sciences (IAP/CAS), including the “Climate of the 20th century experiment”, the “CO2 1% increase per year to doubling experiment” and two separate IPCC greenhouse gas emission scenario experiments, A1B and B1. To distinguish between the different impacts of natural variations and human activities on climate change, three-member ensemble runs are performed for each scenario experiment. The coupled model simulations show: (1) from 1900 to 2000, the global mean temperature increases by about 0.5°C, and the major increase occurs during the latter half of the 20th century, which is consistent with the observations and highlights the coupled model’s ability to reproduce the climate changes since the industrial revolution; (2) the global mean surface air temperature increases by about 1.6°C in the CO2 doubling experiment, and by 1.5°C and 2.4°C in the A1B and B1 scenarios, respectively. The global warming is indicated not only by the changes of surface temperature and precipitation but also by the temperature increase in the deep ocean. The thermal expansion of the sea water would induce a rise of the global mean sea level. Both the control run and the 20th century climate change run are carried out again with version g1.1 of FGOALS, in which the cold biases in the high latitudes were removed. They are then compared with those from version g1.0 of FGOALS in order to distinguish the effect of the model biases on the simulation of global warming.
Rahimi Foroushani, Abbas; Estebsari, Fatemeh; Mostafaei, Davoud; Eftekhar Ardebili, Hasan; Shojaeizadeh, Dvoud; Dastoorpour, Maryam; Jamshidi, Ensiyeh; Taghdisi, Mohammad Hossein
2014-01-01
Background: Many of the problems pertaining to old age originate from an unhealthy lifestyle and low social support. Overcoming these problems requires precise and proper policy-making and planning. Objectives: The aim of the current research was to investigate the effect of health-promoting interventions on healthy lifestyle and social support in elders. Patients and Methods: This study was conducted as a clinical trial lasting 12 months on 464 elders aged above 60 years who were under the aegis of health homes in Tehran, Iran. Participants were selected through two-stage cluster sampling and then divided into intervention and control groups (232 individuals in each). Tools for gathering data were a demographic checklist and two standard questionnaires, the Health-Promoting Lifestyle Profile version 2 and the Personal Resource Questionnaire part 2. Data were analyzed using descriptive and analytical tests including the paired t test, analysis of covariance (ANCOVA) and the Pearson correlation coefficient. Results: The average age of elders in this study was 65.9 ± 3.6 years (ranging between 60 and 73 years). Results showed that the differences between the mean post-test scores of healthy lifestyle and its six dimensions, as well as perceived social support and its five dimensions, in the control and intervention groups were statistically significant (P value < 0.0001). Conclusions: Aging is an inevitable stage of life. However, effective health-promoting interventions can postpone its problems, reduce its consequences, and turn it into a pleasant and enjoyable part of life. PMID:25389486
Ingold, T; Mätzler, C; Wehrli, C; Heimo, A; Kämpfer, N; Philipona, R
2001-04-20
Ultraviolet light was measured at four channels (305, 311, 318, and 332 nm) with a precision filter radiometer (UV-PFR) at Arosa, Switzerland (46.78°, 9.68°, 1850 m above sea level), within the instrument trial phase of a cooperative venture of the Swiss Meteorological Institute (MeteoSwiss) and the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center. We retrieved ozone-column density data from these direct relative irradiance measurements by adapting the Dobson standard method for all possible single-difference wavelength pairs and one double-difference pair (305/311 and 305/318) under conditions of cloud-free sky and of thin clouds (cloud optical depth <2.5 at 500 nm). All UV-PFR retrievals exhibited excellent agreement with those of collocated Dobson and Brewer spectrophotometers for data obtained during two months in 1999. Combining the results of the error analysis and the findings of the validation, we propose to retrieve ozone-column density by using the 305/311 single difference pair and the double-difference pair. Furthermore, combining both retrievals by building the ratio of ozone-column density yields information that is relevant to data quality control. Estimates of the 305/311 pair agree with measurements by the Dobson and Brewer instruments within 1% for both the mean and the standard deviation of the differences. For the double pair these values are in a range up to 1.6%. However, this pair is less sensitive to model errors. The retrieval performance is also consistent with satellite-based data from the Earth Probe Total Ozone Mapping Spectrometer (EP-TOMS) and the Global Ozone Monitoring Experiment instrument (GOME).
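For reference, the Dobson retrieval for a single wavelength pair (λ, λ') takes the schematic form below (standard textbook version; the exact formulation adapted by the authors may differ):

    \[
      X = \frac{N - \Delta\beta\, m\, (p/p_0) - \Delta\delta \sec Z}{\Delta\alpha\, \mu},
      \qquad
      N = \log_{10}\frac{I_0(\lambda)}{I_0(\lambda')} - \log_{10}\frac{I(\lambda)}{I(\lambda')},
    \]

where Δα, Δβ and Δδ are the differential ozone absorption, Rayleigh scattering and aerosol coefficients of the pair, μ and m are the ozone-layer and air-mass slant-path factors, p/p0 is the station pressure ratio, and Z the solar zenith angle. A double-difference retrieval subtracts two such single-pair equations, which cancels error terms common to both pairs.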
NASA Astrophysics Data System (ADS)
Ingold, Thomas; Mätzler, Christian; Wehrli, Christoph; Heimo, Alain; Kämpfer, Niklaus; Philipona, Rolf
2001-04-01
Ultraviolet light was measured at four channels (305, 311, 318, and 332 nm) with a precision filter radiometer (UV-PFR) at Arosa, Switzerland (46.78°, 9.68°, 1850 m above sea level), within the instrument trial phase of a cooperative venture of the Swiss Meteorological Institute (MeteoSwiss) and the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center. We retrieved ozone-column density data from these direct relative irradiance measurements by adapting the Dobson standard method for all possible single-difference wavelength pairs and one double-difference pair (305/311 and 305/318) under conditions of cloud-free sky and of thin clouds (cloud optical depth <2.5 at 500 nm). All UV-PFR retrievals exhibited excellent agreement with those of collocated Dobson and Brewer spectrophotometers for data obtained during two months in 1999. Combining the results of the error analysis and the findings of the validation, we propose to retrieve ozone-column density by using the 305/311 single difference pair and the double-difference pair. Furthermore, combining both retrievals by building the ratio of ozone-column density yields information that is relevant to data quality control. Estimates of the 305/311 pair agree with measurements by the Dobson and Brewer instruments within 1% for both the mean and the standard deviation of the differences. For the double pair these values are in a range up to 1.6%. However, this pair is less sensitive to model errors. The retrieval performance is also consistent with satellite-based data from the Earth Probe Total Ozone Mapping Spectrometer (EP-TOMS) and the Global Ozone Monitoring Experiment instrument (GOME).
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2013-07-01
We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10⁻¹⁰ relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Liu, Chong
2016-10-01
Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGAs. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that a TDC precision beyond the cell-delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.
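For readers unfamiliar with how such delay-line TDCs are calibrated before any realignment, the standard code-density (statistical) test converts a histogram of random hits into per-bin widths; the sketch below shows that baseline step in Python (our illustration, not the authors' implementation):

    import numpy as np

    def code_density_calibration(raw_codes, n_bins, clock_period_ps):
        hist = np.bincount(raw_codes, minlength=n_bins).astype(float)
        widths = hist / hist.sum() * clock_period_ps    # each bin's width in ps
        edges = np.concatenate(([0.0], np.cumsum(widths)))
        return edges[:-1] + 0.5 * widths                # code -> time lookup table

    codes = np.random.randint(0, 128, 100_000)          # uniform stand-in hits
    lut = code_density_calibration(codes, 128, 2000.0)  # 500 MHz clock -> 2000 ps period

Bin realignment, as we read the abstract, then compensates for taps whose measured delays are out of order along the line.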
Ronquillo, Jay G; Weng, Chunhua; Lester, William T
2017-11-17
Precision medicine involves three major innovations currently taking place in healthcare: electronic health records, genomics, and big data. A major challenge for healthcare providers, however, is understanding the readiness for practical application of initiatives like precision medicine. Our objective was to better understand the current state and challenges of precision medicine interoperability, using a national genetic testing registry as a starting point and placing it in the context of established interoperability formats. We performed an exploratory analysis of the National Institutes of Health Genetic Testing Registry. Relevant standards included the Health Level Seven International Version 3 Implementation Guide for Family History, the Human Genome Organization Gene Nomenclature Committee (HGNC) database, and the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We analyzed the distribution of genetic testing laboratories, genetic test characteristics, and standardized genome/clinical code mappings, stratified by laboratory setting. There were a total of 25,472 genetic tests from 240 laboratories testing for approximately 3,632 distinct genes. Most tests focused on diagnosis, mutation confirmation, and/or risk assessment of germline mutations that could be passed to offspring. Genes were successfully mapped to all HGNC identifiers, but less than half of the tests mapped to SNOMED CT codes, highlighting significant gaps when linking genetic tests to the standardized clinical codes that explain the medical motivations behind test ordering. While precision medicine could potentially transform healthcare, successful practical and clinical application will first require the comprehensive and responsible adoption of interoperable standards, terminologies, and formats across all aspects of the precision medicine pipeline.
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
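A minimal Jacobi-preconditioned conjugate gradient in Python, to make the algorithm concrete (a generic dense-matrix sketch; production animal-breeding codes iterate on data and never form the coefficient matrix explicitly):

    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=1000):
        M_inv = 1.0 / np.diag(A)          # diagonal (Jacobi) preconditioner
        x = np.zeros_like(b)
        r = b - A @ x                     # residual: right- minus left-hand side
        z = M_inv * r
        p = z.copy()
        for _ in range(max_iter):
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) / np.linalg.norm(b) < tol:
                break
            z_new = M_inv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(pcg(A, b))

The accumulations in x and the residual recurrence are exactly the quantities that suffer in single precision, which is consistent with the convergence failures reported above for the large models.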
Extreme D'Hondt and round-off effects in voting computations
NASA Astrophysics Data System (ADS)
Konstantinov, M. M.; Pelova, G. B.
2015-11-01
The D'Hondt (or Jefferson) method and the Hare-Niemeyer (or Hamilton) method are widely used worldwide for seat allocation in proportional systems. Everything seems to be well known in this area; however, this is not the case. For example, the D'Hondt method can violate the quota rule from above, but this effect has not been analyzed as a function of the number of parties and/or the threshold used. Also, allocation methods are often implemented automatically as computer codes in machine arithmetic, in the belief that following the IEEE standards for double precision binary arithmetic guarantees correct results. Unfortunately this may fail not only for double precision arithmetic (usually producing 15-16 true decimal digits) but for any relative precision of the underlying binary machine arithmetic. This paper deals with the following new issues: (i) find conditions (the threshold in particular) under which the D'Hondt seat allocation maximally violates the quota rule; (ii) analyze the possible influence of rounding errors in the automatic implementation of the Hare-Niemeyer method in machine arithmetic. Concerning the first issue, it is known that the maximal deviation of the D'Hondt allocation from the upper quota for the Bulgarian proportional system (240 MPs and a 4% threshold) is 5; this fact was established in 1991. A classical treatment of voting issues is the monograph [1], while electoral problems specific to Bulgaria have been treated in [2, 4]. The effect of the threshold on extreme seat allocations is also analyzed in [3]. Finally, we would like to stress that voting theory may sometimes be mathematically trivial but always has great political impact. This is a strong motivation for further investigations in this area.
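Both allocation rules are easy to state in code; the sketch below (our illustration) uses Python's exact rational arithmetic, which sidesteps precisely the floating-point rounding issues the paper raises:

    from fractions import Fraction

    def dhondt(votes, seats):
        alloc = [0] * len(votes)
        for _ in range(seats):
            # Next seat goes to the party with the largest quotient v/(s+1).
            i = max(range(len(votes)), key=lambda k: Fraction(votes[k], alloc[k] + 1))
            alloc[i] += 1
        return alloc

    def hare_niemeyer(votes, seats):
        total = sum(votes)
        quotas = [Fraction(v * seats, total) for v in votes]
        alloc = [int(q) for q in quotas]                 # integer parts first
        order = sorted(range(len(votes)), key=lambda k: quotas[k] - alloc[k], reverse=True)
        for k in order[: seats - sum(alloc)]:            # largest remainders get the rest
            alloc[k] += 1
        return alloc

    votes = [340_000, 280_000, 160_000, 60_000]          # hypothetical party totals
    print(dhondt(votes, 240), hare_niemeyer(votes, 240))

An implementation that computes the quotas in floating point instead of Fraction can rank two near-equal remainders incorrectly, which is the kind of rounding effect analyzed in the paper.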
Properties of an eclipsing double white dwarf binary NLTT 11748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, David L.; Walker, Arielle N.; Marsh, Thomas R.
2014-01-10
We present high-quality ULTRACAM photometry of the eclipsing detached double white dwarf binary NLTT 11748. This system consists of a carbon/oxygen white dwarf and an extremely low mass (<0.2 M☉) helium-core white dwarf in a 5.6 hr orbit. To date, such extremely low-mass white dwarfs, which can have thin, stably burning outer layers, have been modeled via poorly constrained atmosphere and cooling calculations, where uncertainties in the detailed structure can strongly influence the eventual fates of these systems when mass transfer begins. With precise (individual precision ≈1%), high-cadence (≈2 s), multicolor photometry of multiple primary and secondary eclipses spanning >1.5 yr, we constrain the masses and radii of both objects in the NLTT 11748 system to a statistical uncertainty of a few percent. However, we find that overall uncertainty in the thickness of the envelope of the secondary carbon/oxygen white dwarf leads to a larger (≈13%) systematic uncertainty in the primary He WD's mass. Over the full range of possible envelope thicknesses, we find that our primary mass (0.136-0.162 M☉) and surface gravity (log(g) = 6.32-6.38; radii are 0.0423-0.0433 R☉) constraints do not agree with previous spectroscopic determinations. We use precise eclipse timing to detect the Rømer delay at 7σ significance, providing an additional weak constraint on the masses and limiting the eccentricity to e cos ω = (−4 ± 5) × 10⁻⁵. Finally, we use multicolor data to constrain the secondary's effective temperature (7600 ± 120 K) and cooling age (1.6-1.7 Gyr).
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Limited by the low precision of small satellite sensors, high-performance estimation theory remains a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved plenty of results. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which puts forward higher performance requirements for the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low precision sensors. It is assumed that the system state estimation errors can be exhibited in the measurement residual; therefore, the new method derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequential orthogonality principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the novel Kalman filter in order to reduce the influence of the existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).
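For orientation, the sketch below shows the unscented transform step that the RDG-UKF shares with the classical UKF (standard sigma-point construction with common default scaling; the paper's second gain Kk2 and sequential orthogonality machinery are not reproduced here):

    import numpy as np

    def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
        pts = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
        wm = np.full(2 * n + 1, 0.5 / (n + lam))         # mean weights
        wc = wm.copy()                                   # covariance weights
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1.0 - alpha**2 + beta)
        return pts, wm, wc

    pts, wm, wc = sigma_points(np.zeros(3), np.eye(3))
    print(pts.shape)  # (7, 3)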
Development of a CRISPR/Cas9 genome editing toolbox for Corynebacterium glutamicum.
Liu, Jiao; Wang, Yu; Lu, Yujiao; Zheng, Ping; Sun, Jibin; Ma, Yanhe
2017-11-16
Corynebacterium glutamicum is an important industrial workhorse and advanced genetic engineering tools are urgently demanded. Recently, the clustered regularly interspaced short palindromic repeats (CRISPR) and their CRISPR-associated proteins (Cas) have revolutionized the field of genome engineering. The CRISPR/Cas9 system that utilizes NGG as protospacer adjacent motif (PAM) and has good targeting specificity can be developed into a powerful tool for efficient and precise genome editing of C. glutamicum. Herein, we developed a versatile CRISPR/Cas9 genome editing toolbox for C. glutamicum. Cas9 and gRNA expression cassettes were reconstituted to combat Cas9 toxicity and facilitate effective termination of gRNA transcription. Co-transformation of Cas9 and gRNA expression plasmids was exploited to overcome high-frequency mutation of cas9, allowing not only highly efficient gene deletion and insertion with plasmid-borne editing templates (efficiencies up to 60.0 and 62.5%, respectively) but also simple and time-saving operation. Furthermore, CRISPR/Cas9-mediated ssDNA recombineering was developed to precisely introduce small modifications and single-nucleotide changes into the genome of C. glutamicum with efficiencies over 80.0%. Notably, double-locus editing was also achieved in C. glutamicum. This toolbox works well in several C. glutamicum strains including the widely-used strains ATCC 13032 and ATCC 13869. In this study, we developed a CRISPR/Cas9 toolbox that could facilitate markerless gene deletion, gene insertion, precise base editing, and double-locus editing in C. glutamicum. The CRISPR/Cas9 toolbox holds promise for accelerating the engineering of C. glutamicum and advancing its application in the production of biochemicals and biofuels.
Simultaneous, accurate measurement of the 3D position and orientation of single molecules
Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.
2012-01-01
Recently, single molecule-based superresolution fluorescence microscopy has surpassed the diffraction limit to improve resolution to the order of 20 nm or better. These methods typically use image fitting that assumes an isotropic emission pattern from the single emitters as well as control of the emitter concentration. However, anisotropic single-molecule emission patterns arise from the transition dipole when it is rotationally immobile, depending highly on the molecule’s 3D orientation and z position. Failure to account for this fact can lead to significant lateral (x, y) mislocalizations (up to ∼50–200 nm). This systematic error can cause distortions in the reconstructed images, which can translate into degraded resolution. Using parameters uniquely inherent in the double-lobed nature of the Double-Helix Point Spread Function, we account for such mislocalizations and simultaneously measure 3D molecular orientation and 3D position. Mislocalizations during an axial scan of a single molecule manifest themselves as an apparent lateral shift in its position, which causes the standard deviation (SD) of its lateral position to appear larger than the SD expected from photon shot noise. By correcting each localization based on an estimated orientation, we are able to improve SDs in lateral localization from ∼2× worse than photon-limited precision (48 vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation over different depths, we are able to improve from a lateral SD of 116 (∼4× worse than the photon-limited precision; 28 nm) to 34 nm (within 6 nm of the photon limit). PMID:23129640
Availability of software services for a hospital information system.
Sakamoto, N
1998-03-01
Hospital information systems (HISs) are becoming more important and are covering more parts of daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide the necessary services for hospital operations 24 hours a day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of necessary information for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase the availability of software services in these systems, it is not enough to just use system structures that are highly reliable in existing host-centred systems. Four main components support the availability of software services: network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if a part of the system stops functioning. The network system should be double-protected in strata using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information which does not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patient information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case the new version has problems. If a hospital information system is designed and developed with these points in mind, its availability of software services should increase greatly.
Precision estimate for Odin-OSIRIS limb scatter retrievals
NASA Astrophysics Data System (ADS)
Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.
2012-02-01
The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
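The core of a MART inversion is a multiplicative row-by-row update; a minimal Python sketch for a generic linear forward model y = A x is given below (our simplification; the OSIRIS forward model and relaxation factor are more elaborate):

    import numpy as np

    def mart(A, y, n_iter=50):
        x = np.ones(A.shape[1])                 # strictly positive initial state
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                pred = A[i] @ x                 # predicted measurement i
                if pred > 0.0:
                    x *= (y[i] / pred) ** A[i]  # multiplicative update along row i
        return x

    A = np.array([[1.0, 0.5], [0.3, 1.0]])      # toy 2x2 forward model
    y = A @ np.array([2.0, 3.0])
    print(mart(A, y))                           # converges toward [2, 3]

Because the update is multiplicative rather than additive, the covariance of the retrieved state has no closed form, which is why the precision estimate in this paper is built numerically.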
Higher-Order Systematic Effects in the Muon Beam-Spin Dynamics for Muon g-2
NASA Astrophysics Data System (ADS)
Crnkovic, Jason; Brown, Hugh; Krouppa, Brandon; Metodiev, Eric; Morse, William; Semertzidis, Yannis; Tishchenko, Vladimir
2016-03-01
The BNL Muon g-2 Experiment (E821) produced a precision measurement of the muon anomalous magnetic moment, whereas the Fermilab Muon g-2 Experiment (E989) is an upgraded version of E821 with the goal of producing a measurement approximately 4 times more precise. Improving the precision requires a more detailed understanding of the experimental systematic effects, and so three higher-order systematic effects in the muon beam-spin dynamics have recently been found and estimated for E821. The beamline systematic effect originates from muon production in beamline spectrometers, as well as from muons traversing beamline bending magnets. The kicker systematic effect comes from a combination of the variation in time spent inside the muon storage ring across a muon bunch and the temporal structure of the storage ring kicker waveform. Finally, the detector systematic effect arises from a combination of the energy-dependent muon equilibrium orbit in the storage ring, muon decay electron drift time, and decay electron detector acceptance effects.
A neural measure of precision in visual working memory.
Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward
2013-05-01
Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment, or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion, but not the amplitude, of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
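A forward encoding model of this kind can be sketched in a few lines (an illustration with simulated data; the channel count, basis exponent, and noise level are arbitrary choices, not those of the study):

    import numpy as np

    def basis(orientations_deg, n_chan=9, power=7):
        centers = np.arange(n_chan) * 180.0 / n_chan
        d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
        return np.abs(np.cos(d)) ** power          # (n_trials, n_channels)

    rng = np.random.default_rng(0)
    oris = rng.uniform(0, 180, 200)                # trial orientations
    C = basis(oris)                                # idealized channel responses
    W = rng.normal(size=(C.shape[1], 50))          # hidden channel->voxel weights
    B = C @ W + 0.1 * rng.normal(size=(200, 50))   # simulated voxel data
    W_hat = np.linalg.lstsq(C, B, rcond=None)[0]   # training: estimate weights
    C_hat = B @ np.linalg.pinv(W_hat)              # test: invert to channel space

The dispersion of the recovered profiles in C_hat around the presented orientation is the quantity that, in the study, tracked individual differences in recall precision.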
Automated semantic indexing of figure captions to improve radiology image retrieval.
Kahn, Charles E; Rubin, Daniel L
2009-01-01
We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
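For reference, the two metrics have their usual definitions; a toy computation (illustrative counts, not the study's data):

    def precision_recall(tp, fp, fn):
        """precision = TP/(TP+FP); recall = TP/(TP+FN)."""
        return tp / (tp + fp), tp / (tp + fn)

    p, r = precision_recall(tp=224, fp=26, fn=17)   # hypothetical sample counts
    print(f"precision={p:.3f} recall={r:.3f}")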
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Evaluated on the tiny You Only Look Once (YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal Visual Object Classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
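A minimal symmetric uniform quantizer in this spirit is sketched below (our illustration: the clipping range is set from a fitted Gaussian scale rather than the raw min/max, loosely mirroring the distribution-driven interval selection described above; the 3-sigma clip is an assumed choice, not the paper's exact criterion):

    import numpy as np

    def quantize(w, n_bits=4, n_sigma=3.0):
        sigma = w.std()                          # Gaussian scale estimate
        max_q = 2 ** (n_bits - 1) - 1            # e.g. 7 for 4-bit signed codes
        step = n_sigma * sigma / max_q           # quantization step size
        return np.clip(np.round(w / step), -max_q - 1, max_q) * step

    w = np.random.randn(1000).astype(np.float32)
    print(np.unique(quantize(w, n_bits=4)).size)  # at most 16 distinct levels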
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2015-01-01
This paper presents an overview of the seventh revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This paper supersedes the previous documentation and presents a modification to the algorithm referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm adds the state-based capability in support of evolving industry standards relating to airborne self-spacing.
NASA Technical Reports Server (NTRS)
Abbott, Terence S.; Swieringa, Kurt S.
2017-01-01
This paper presents an overview of the eighth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This paper supersedes the previous documentation and presents a modification to the algorithm referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval with another aircraft. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm supports the evolving industry standards relating to airborne self-spacing.
Martinez-Martin, Pablo; Rodriguez-Blazquez, Carmen; Alvarez-Sanchez, Mario; Arakaki, Tomoko; Bergareche-Yarza, Alberto; Chade, Anabel; Garretto, Nelida; Gershanik, Oscar; Kurtis, Monica M; Martinez-Castrillo, Juan Carlos; Mendoza-Rodriguez, Amelia; Moore, Henry P; Rodriguez-Violante, Mayela; Singer, Carlos; Tilley, Barbara C; Huang, Jing; Stebbins, Glenn T; Goetz, Christopher G
2013-01-01
The Movement Disorder Society-UPDRS (MDS-UPDRS) was published in 2008, showing satisfactory clinimetric results and has been proposed as the official benchmark scale for Parkinson's disease. The present study, based on the official MDS-UPDRS Spanish version, performed the first independent testing of the scale and adds information on its clinimetric properties. The cross-culturally adapted MDS-UPDRS Spanish version showed a comparative fit index ≥ 0.90 for each part (I-IV) relative to the English-language version and was accepted as the Official MDS-UPDRS Spanish version. Data from this scale, applied with other assessments to Spanish-speaking Parkinson's disease patients in five countries, were analyzed for an independent and complementary clinimetric evaluation. In total, 435 patients were included. Missing data were negligible and moderate floor effect (30 %) was found for Part IV. Cronbach's α index ranged between 0.79 and 0.93 and only five items did not reach the 0.30 threshold value of item-total correlation. Test-retest reliability was adequate with only two sub-scores of the item 3.17, Rest tremor amplitude, reaching κ values lower than 0.60. The intraclass correlation coefficient was higher than 0.85 for the total score of each part. Correlation of the MDS-UPDRS parts with other measures for related constructs was high (≥ 0.60) and the standard error of measurement lower than one-third baseline standard deviation for all subscales. Results confirm those of the original study and add information on scale reliability, construct validity, and precision. The MDS-UPDRS Spanish version shows satisfactory clinimetric characteristics.
Excitation basis for (3+1)d topological phases
NASA Astrophysics Data System (ADS)
Delcamp, Clement
2017-12-01
We consider an exactly solvable model in 3+1 dimensions, based on a finite group, which is a natural generalization of Kitaev's quantum double model. The corresponding lattice Hamiltonian yields excitations located at torus boundaries. By cutting open the three-torus, we obtain a manifold bounded by two tori which supports states satisfying a higher-dimensional version of Ocneanu's tube algebra. This defines an algebraic structure extending the Drinfel'd double. Its irreducible representations, labeled by two fluxes and one charge, characterize the torus excitations. The tensor product of such representations is introduced in order to construct a basis for (3+1)d gauge models which relies upon the fusion of the defect excitations. This basis is defined on manifolds of the form Σ × S^1, with Σ a two-dimensional Riemann surface. As such, our construction is closely related to dimensional reduction from (3+1)d to (2+1)d topological orders.
Stirling cryocooler test results and design model verification
NASA Astrophysics Data System (ADS)
Shimko, Martin A.; Stacy, W. D.; McCormick, John A.
A long-life Stirling cycle cryocooler being developed for spaceborne applications is described. The results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the generator design code used in its development are presented. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-W cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for sweeping and sealing the machine working volumes. The double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. The PC-compatible design code developed for this design approach calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls. The code accurately predicted cooler performance and assisted in diagnosing breadboard machine flaws during shakedown and development testing.
Machine Protection System for the Stepper Motor Actuated SyLMAND Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subramanian, V. R.; Dolton, W.; Wells, G.
2010-06-23
SyLMAND, the Synchrotron Laboratory for Micro and Nano Devices at the Canadian Light Source, consists of a dedicated X-ray lithography beamline on a bend magnet port, and process support laboratories in a clean room environment. The beamline includes a double mirror system with flat, chromium-coated silicon mirrors operated at varying grazing angles of incidence (4 mrad to 45 mrad) for spectral adjustment by high energy cut-off. Each mirror can be independently moved by two stepper motors to precisely control the pitch and vertical position. We present in this paper the machine protection system implemented in the double mirror system to allow for safe operation of the two mirrors and to avoid the consequences of potential stepper motor malfunction.
Double sided grating fabrication for high energy X-ray phase contrast imaging
Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick; ...
2018-04-19
State of the art grating fabrication currently limits the maximum source energy that can be used in lab based X-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm, demonstrating a methodology that can be applied to any grating fabrication approach, extending the attainable aspect ratios and allowing higher energy lab based XPCI systems.
Double elementary Goldstone Higgs boson production in future linear colliders
NASA Astrophysics Data System (ADS)
Guo, Yu-Chen; Yue, Chong-Xing; Liu, Zhi-Cheng
2018-03-01
The Elementary Goldstone Higgs (EGH) model is a perturbative extension of the Standard Model (SM), which identifies the EGH boson as the observed Higgs boson. In this paper, we study pair production of the EGH boson in future linear electron-positron colliders. The cross-sections in the TeV region can be changed by about -27%, 163% and -34% for the e+e- → Zhh, e+e- → νν̄hh and e+e- → tt̄hh processes with respect to the SM predictions, respectively. According to the expected measurement precisions, such correction effects might be observed in future linear colliders. In addition, we compare the cross-sections of double SM-like Higgs boson production with the predictions in other new physics models.
Science & Technology Review September 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radousky, H B
2006-07-18
This month's issue includes the following articles: (1) Simulations Help Plan for Large Earthquakes--Commentary by Jane C. S. Long; (2) Re-creating the 1906 San Francisco Earthquake--Supercomputer simulations of Bay Area earthquakes are providing insight into the great 1906 quake and future temblors along several faults; (3) Decoding the Origin of a Bioagent--The microstructure of a bacterial organism can be linked to the methods used to formulate the pathogen; (4) A New Look at How Aging Bones Fracture--Livermore scientists find that the increased risk of fracture from osteoporosis may be due to a change in the physical structure of trabecular bone; and (5) Fusion Targets on the Double--Advances in precision manufacturing allow the production of double-shell fusion targets with submicrometer tolerances.
Study on depth profile of heavy ion irradiation effects in poly(tetrafluoroethylene-co-ethylene)
NASA Astrophysics Data System (ADS)
Gowa, Tomoko; Shiotsu, Tomoyuki; Urakawa, Tatsuya; Oka, Toshitaka; Murakami, Takeshi; Oshima, Akihiro; Hama, Yoshimasa; Washio, Masakazu
2011-02-01
High linear energy transfer (LET) heavy ion beams were used to irradiate poly(tetrafluoroethylene-co-ethylene) (ETFE) under vacuum and in air. The irradiation effects in ETFE as a function of depth were precisely evaluated by analyzing each film of the irradiated samples, which were made of stacked ETFE films. The results indicated that conjugated double bonds were generated by heavy ion beam irradiation, and their amounts showed Bragg-curve-like distributions. It was also suggested that higher-LET beams would induce radical formation at high density, and that longer conjugated C=C double bonds could be generated by second-order reactions. Moreover, for samples irradiated in air, C=O was produced in correlation with the amount of oxygen molecules diffusing from the sample surface.
A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map
NASA Astrophysics Data System (ADS)
Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng
2017-06-01
The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map realized on a finite-precision device (e.g. a computer) will suffer from dynamical degradation, which manifests as short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both the state variables and the system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image achieves high security, competitive with recently proposed image encryption algorithms.
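As a rough illustration of the general scheme (a minimal sketch, not the authors' exact construction; the standard Baker map in this simplified form has no free parameter, so the sketch perturbs both state coordinates with the logistic sequence):

N = 2 ** 32                        # finite-precision lattice: 32-bit states

def logistic(z):
    # Digital logistic map on the same lattice; z = 0 is a fixed point,
    # so seed with a nonzero value.
    x = z / N
    return int(4.0 * x * (1.0 - x) * N) % N

def baker(x, y):
    # Digital Baker map: stretch x by 2, halve y, stack the two halves.
    if x < N // 2:
        return 2 * x, y // 2
    return 2 * x - N, y // 2 + N // 2

def perturbed_orbit(x, y, z, steps, period=100):
    # Perturb one coordinate every step and the other every `period`
    # steps, both driven by the logistic sequence.
    out = []
    for i in range(steps):
        x, y = baker(x, y)
        z = logistic(z)
        x ^= z & 0xFF
        if i % period == 0:
            y ^= (z >> 8) & 0xFF
        out.append((x, y))
    return out

orbit = perturbed_orbit(12345, 67890, 13579, 1000)

The point of such a design is that the external perturbing sequence keeps the finite-precision orbit from collapsing onto a short cycle.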
A new version of Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.
2010-04-01
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated values files and three-dimensional images.
New version program summary
Program title: Fractal Analysis v02
Catalogue identifier: AEEG_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9999
No. of bytes in distributed program, including test data, etc.: 4 366 783
Distribution format: tar.gz
Programming language: MS Visual Basic 6.0
Computer: PC
Operating system: MS Windows 98 or later
RAM: 30 M
Classification: 14
Catalogue identifier of previous version: AEEG_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
Does the new version supersede the previous version?: Yes
Nature of problem: Estimating the fractal dimension of 2D and 3D images.
Solution method: Optimized implementation of the box-counting algorithm.
Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
Additional comments: User-friendly graphical interface; easy deployment mechanism.
Running time: To a first approximation, the algorithm is linear.
References:
[1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001.
[2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
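The box-counting method named under "Solution method" is compact enough to sketch. The following Python fragment (an illustration, not the distributed Visual Basic code) normalizes a 2D or 3D point set into the unit cube, counts occupied boxes at dyadic scales, and fits the log-log slope:

import numpy as np

def box_counting_dimension(points, n_scales=8):
    # points: (N, d) array of coordinates, d = 2 or 3. Counts boxes of
    # decreasing edge length that contain at least one point, then fits
    # log(count) against log(1/size).
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)
    sizes = 1.0 / 2 ** np.arange(1, n_scales + 1)   # box edge lengths
    counts = []
    for s in sizes:
        # Integer box indices per point; unique rows = occupied boxes.
        idx = np.floor(pts / s).astype(int)
        counts.append(len(np.unique(idx, axis=0)))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Example: the Sierpinski triangle has dimension log(3)/log(2) ~ 1.585.
rng = np.random.default_rng(0)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
p, pts = rng.random(2), []
for _ in range(100000):            # chaos-game construction
    p = (p + vertices[rng.integers(3)]) / 2.0
    pts.append(p.copy())
print(box_counting_dimension(np.array(pts)))   # roughly 1.58 expected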
Sample Based Unit Liter Dose Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
JENSEN, L.
The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting µCi/g or µCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000).
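For orientation, the unit-liter-dose arithmetic behind a DCF reduces to multiplying an activity concentration by a dose conversion factor; a worked example with illustrative (not report) values, using 1 µCi = 3.7 × 10⁴ Bq:

\[
  D\,[\mathrm{Sv/L}]
  = C\,[\mu\mathrm{Ci/L}]
  \times 3.7\times10^{4}\,[\mathrm{Bq}/\mu\mathrm{Ci}]
  \times f_{\mathrm{DCF}}\,[\mathrm{Sv/Bq}].
\]

With a hypothetical concentration $C = 10\;\mu\mathrm{Ci/L}$ and an illustrative $f_{\mathrm{DCF}} = 5\times10^{-8}\;\mathrm{Sv/Bq}$, this gives $D = 10 \times 3.7\times10^{4} \times 5\times10^{-8} \approx 1.9\times10^{-2}\;\mathrm{Sv/L}$.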
Detection And Mapping (DAM) package. Volume 4A: Software System Manual, part 1
NASA Technical Reports Server (NTRS)
Schlosser, E. H.
1980-01-01
The package is an integrated set of manual procedures, computer programs, and graphic devices designed for efficient production of precisely registered and formatted maps from digital LANDSAT multispectral scanner (MSS) data. The software can be readily implemented on any Univac 1100 series computer with standard peripheral equipment. This version of the software includes predefined spectral limits for use in classifying and mapping surface water for LANDSAT-1, LANDSAT-2, and LANDSAT-3. Tape formats supported include X, AM, and PM.
Modeling intelligent agent beliefs in a card game scenario
NASA Astrophysics Data System (ADS)
Gołuński, Marcel; Tomanek, Roman; Wąsiewicz, Piotr
In this paper we explore the problem of intelligent agent beliefs. We model agent beliefs using a multimodal logic of belief, namely the KD45(m) system, implemented as a directed graph representing Kripke semantics. We present a card game engine application which allows multiple agents to connect to a given game session and play the card game. As an example, a simplified version of the popular Saboteur card game is used. The implementation was done in the Java language using the following libraries and applications: Apache Mina, LWJGL.
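The graph formulation is easy to make concrete: a Kripke structure is a directed graph of possible worlds, and agent i believes a fact at world w exactly when the fact holds in every world accessible to i from w. A minimal Python sketch (illustrative only; the engine itself is written in Java, and the world and fact names here are hypothetical):

# Worlds carry the atomic facts true in them; each agent has a directed
# accessibility relation over worlds (serial, transitive and euclidean
# in KD45).
worlds = {
    "w1": {"opponent_is_saboteur"},
    "w2": set(),
}
access = {
    "agent1": {("w1", "w1"), ("w2", "w1")},  # agent1 considers only w1 possible
}

def believes(agent, world, fact):
    # B_agent(fact) at `world`: fact holds in every accessible world.
    reachable = [v for (u, v) in access[agent] if u == world]
    return bool(reachable) and all(fact in worlds[v] for v in reachable)

print(believes("agent1", "w2", "opponent_is_saboteur"))  # True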
Development and Comparison of Technical Solutions for Electricity Monitoring Equipment
NASA Astrophysics Data System (ADS)
Potapovs, A.; Obushevs, A.
2017-12-01
The paper focuses on the elaboration of a demand-side management platform for optimal energy management strategies; within it, the developed electricity monitoring and control equipment is described and compared. The article describes two versions, based on Atmega328 and STM32 microcontrollers, with lower and higher levels of precision and other distinct performance parameters. At the end of the article, the results of testing the two types of equipment are given and compared.
Penrose-like inequality with angular momentum for minimal surfaces
NASA Astrophysics Data System (ADS)
Anglada, Pablo
2018-02-01
In axially symmetric spacetimes the Penrose inequality can be strengthened to include angular momentum. We prove a version of this inequality for minimal surfaces, more precisely, a lower bound for the ADM mass in terms of the area of a minimal surface, the angular momentum and a particular measure of the surface size. We consider axially symmetric and asymptotically flat initial data, and use the monotonicity of the Geroch quasi-local energy on 2-surfaces along the inverse mean curvature flow.
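For context, the classical Penrose inequality bounds the ADM mass by the area of a minimal surface; the strengthened bound described in the abstract adds an angular-momentum contribution, shown here only schematically (the precise constant and the surface-size measure are those of the paper and are not reproduced here):

\[
  m_{\mathrm{ADM}} \;\ge\; \sqrt{\frac{A}{16\pi}}
  \qquad\text{(classical, for a minimal surface of area } A\text{)}
\]
\[
  m_{\mathrm{ADM}} \;\gtrsim\; \sqrt{\frac{A}{16\pi}} \;+\; \frac{J^{2}}{\mathcal{R}^{3}}
  \qquad\text{(schematic: } J \text{ the angular momentum, } \mathcal{R} \text{ a measure of surface size)}
\]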
Simulation of Finite-Precision Effects in Digital Filters.
1991-12-12
Roundoff noise increases the output power [13:422][15:359]. To prevent overflow, the filter gains are kept below 1.0 at the branch nodes where the signal enters. Overflow conditions are detected and prevented, with a message to the user. A useful check is available to the user when the coefficients are quantized, by comparing the quantized version of the filter against the original design.
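The coefficient-quantization check described above is straightforward to illustrate: design a filter in double precision, round the coefficients to a fixed-point grid, and compare the two frequency responses. A sketch in Python with SciPy (the original tool was a Fortran program; this is only an analogy):

import numpy as np
from scipy import signal

# Double-precision reference design: 4th-order lowpass Butterworth.
b, a = signal.butter(4, 0.3)

def quantize(c, frac_bits=15):
    # Round coefficients to a Q15-style fixed-point grid.
    step = 2.0 ** -frac_bits
    return np.round(c / step) * step

bq, aq = quantize(b), quantize(a)

# Compare frequency responses of the ideal and quantized filters.
w, h = signal.freqz(b, a)
_, hq = signal.freqz(bq, aq)
print("max response deviation:", np.max(np.abs(h - hq)))
# A large deviation (or unstable poles of aq) warns the user that the
# chosen word length is too short for this filter structure.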
DAM package version 7807: Software fixes and enhancements
NASA Technical Reports Server (NTRS)
Schlosser, E.
1979-01-01
The Detection and Mapping package is an integrated set of manual procedures, computer programs, and graphic devices designed for efficient production of precisely registered, formatted, and interpreted maps from digital LANDSAT multispectral scanner data. This report documents changes to the DAM package in support of its use by the Corps of Engineers for inventorying impounded surface water. Although these changes are presented in terms of their application to detecting and mapping surface water, they are equally relevant to other land surface materials.
NASA Astrophysics Data System (ADS)
Svehla, D.; Rothacher, M.
2016-12-01
Is it possible to process Lunar Laser Ranging (LLR) measurements in the geocentric frame, in a similar way to how SLR measurements are modelled for GPS satellites, and to estimate all global reference frame parameters as in the case of GPS? The answer is yes. We managed to process Lunar laser measurements to the Apollo and Luna retro-reflectors on the Moon in a similar way to how we process SLR measurements to GPS satellites. We make use of the latest Lunar libration models and the DE430 ephemerides given in the Solar system barycentric frame, and model uplink and downlink Lunar laser ranges in the geocentric frame as one-way measurements, similar to SLR measurements to GPS satellites. In the first part of this contribution we present the estimation of the Lunar orbit as well as the Earth orientation parameters (including UT1 or UT0) with this new formulation. In the second part, we form common-view double-difference LLR measurements between two Lunar retro-reflectors and two LLR telescopes to show the actual noise of the LLR measurements. Since, by forming double-differences of LLR measurements, all range biases are removed and orbit errors are significantly reduced (the Lunar orbit is much farther away than the GPS orbits), one can consider double-difference LLR as an "orbit-free" and "bias-free" differential approach. In the end, we make a comparison with the SLR double-difference approach with Galileo satellites, where we have already demonstrated submillimeter precision, and discuss a possible combination of LLR and SLR to GNSS satellites using the double-difference approach.
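The bias-cancelling property of the double difference is easy to verify numerically. In the sketch below (all values illustrative), each one-way range carries a per-telescope bias and a per-reflector bias, and both cancel exactly in the double difference:

# One-way range model: geometric range plus telescope and reflector biases.
def one_way(geom, tel_bias, refl_bias):
    return geom + tel_bias + refl_bias

tel = {"A": 0.07, "B": -0.03}          # station biases (m), illustrative
refl = {"R1": 0.02, "R2": 0.05}        # reflector biases (m), illustrative
geom = {("A", "R1"): 3.8440e8, ("A", "R2"): 3.8460e8,
        ("B", "R1"): 3.8450e8, ("B", "R2"): 3.8470e8}  # hypothetical ranges (m)

def double_difference():
    sA = one_way(geom["A", "R1"], tel["A"], refl["R1"]) \
       - one_way(geom["A", "R2"], tel["A"], refl["R2"])
    sB = one_way(geom["B", "R1"], tel["B"], refl["R1"]) \
       - one_way(geom["B", "R2"], tel["B"], refl["R2"])
    return sA - sB

pure_geometry = (geom["A", "R1"] - geom["A", "R2"]) \
              - (geom["B", "R1"] - geom["B", "R2"])
print(abs(double_difference() - pure_geometry) < 1e-6)   # True: biases cancel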
Design of control system for optical fiber drawing machine driven by double motor
NASA Astrophysics Data System (ADS)
Yu, Yue Chen; Bo, Yu Ming; Wang, Jun
2018-01-01
The microchannel plate (MCP) is a kind of large-area array electron multiplier with high two-dimensional spatial resolution, used as a high-performance night-vision intensifier. High-precision control of the fiber is a key technology in the microchannel plate manufacturing process; in this paper it is achieved by controlling an optical fiber drawing machine driven by two motors. First, a servo motor drive and control circuit based on an STM32 chip was designed to realize dual-motor synchronization. Second, a neural-network PID control algorithm was designed to control the fiber diameter with high precision. Finally, hexagonal fiber was manufactured with this system, and the multifilament diameter accuracy achieved is ±1.5 µm.
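The abstract does not spell out the controller; one common scheme matching the description is the single-neuron adaptive PID, in which a neuron's three weights act as integral-, proportional- and derivative-like gains adapted online from the diameter error. A hedged sketch with a toy first-order plant (all gains and the plant model are illustrative assumptions):

import numpy as np

def single_neuron_pid(setpoint, plant, steps=300, K=0.5, lr=0.05):
    # Weights w play the roles of (Ki, Kp, Kd) and adapt online with a
    # supervised Hebbian-style rule driven by the tracking error.
    w = np.array([0.3, 0.3, 0.3])
    u = y = e1 = e2 = 0.0
    for _ in range(steps):
        e = setpoint - y
        # Integral-, proportional- and derivative-like input terms.
        x = np.array([e, e - e1, e - 2.0 * e1 + e2])
        u += K * np.dot(w / np.abs(w).sum(), x)   # incremental control law
        w += lr * e * u * x                        # online weight adaptation
        y = plant(y, u)
        e2, e1 = e1, e
    return y

# Toy first-order "drawing" plant: diameter responds lazily to the drive.
plant = lambda y, u: 0.9 * y + 0.1 * u
print(single_neuron_pid(1.0, plant))   # settles near the normalized setpoint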
Dynamical investigations of the multiple stars
NASA Astrophysics Data System (ADS)
Kiyaeva, Olga V.; Zhuchkov, Roman Ya.
2017-11-01
Two multiple stars were investigated: the quadruple star - Bootis (ADS 9173) and the triple star T Tauri. The visual double star - Bootis was studied on the basis of Pulkovo 26-inch refractor observations of 1982-2013. An invisible satellite of component A was discovered thanks to the long-term uniform series of observations; its orbital period is 20 ± 2 years. The known invisible satellite of component B, with a period of about 5 years, was confirmed by high-precision CCD observations. The astrometric orbits of both components were calculated. The orbits of the inner and outer pairs of the pre-main-sequence binary T Tauri were calculated on the basis of high-precision observations with the VLT and the Keck II telescope. This weakly hierarchical triple system is stable with a probability of more than 70%.
Pulsars in binary systems: probing binary stellar evolution and general relativity.
Stairs, Ingrid H
2004-04-23
Radio pulsars in binary orbits often have short millisecond spin periods as a result of mass transfer from their companion stars. They therefore act as very precise, stable, moving clocks that allow us to investigate a large set of otherwise inaccessible astrophysical problems. The orbital parameters derived from high-precision binary pulsar timing provide constraints on binary evolution, characteristics of the binary pulsar population, and the masses of neutron stars with different mass-transfer histories. These binary systems also test gravitational theories, setting strong limits on deviations from general relativity. Surveys for new pulsars yield new binary systems that increase our understanding of all these fields and may open up whole new areas of physics, as most spectacularly evidenced by the recent discovery of an extremely relativistic double-pulsar system.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.
1991-01-01
In order to generate good-quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of the grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
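Stretching functions of the kind listed control where grid points cluster along a coordinate line. As a generic illustration (a hyperbolic-tangent stretching, not GRID3D-v2's exact formulation), the sketch below clusters points near one boundary, with the tension-like parameter sigma setting the severity:

import numpy as np

def stretched_points(n, sigma=2.5):
    # One-sided stretching: a uniform parameter u in [0, 1] is remapped
    # so points cluster near u = 0; larger sigma clusters more tightly.
    u = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(sigma * (u - 1.0)) / np.tanh(sigma)

x = stretched_points(11)
print(np.round(x, 3))            # coordinates from 0 to 1
print(np.round(np.diff(x), 3))   # spacing grows away from x = 0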
Emotional selection in memes: the case of urban legends.
Bell, C; Sternberg, E
2001-12-01
This article explores how much memes like urban legends succeed on the basis of informational selection (i.e., truth or a moral lesson) and emotional selection (i.e., the ability to evoke emotions like anger, fear, or disgust). The article focuses on disgust because its elicitors have been precisely described. In Study 1, with controls for informational factors like truth, people were more willing to pass along stories that elicited stronger disgust. Study 2 randomly sampled legends and created versions that varied in disgust; people preferred to pass along versions that produced the highest level of disgust. Study 3 coded legends for specific story motifs that produce disgust (e.g., ingestion of a contaminated substance) and found that legends that contained more disgust motifs were distributed more widely on urban legend Web sites. The conclusion discusses implications of emotional selection for the social marketplace of ideas.
Template optimization and transfer in perceptual learning.
Kurki, Ilmari; Hyvärinen, Aapo; Saarinen, Jussi
2016-08-01
We studied how learning changes the processing of a low-level Gabor stimulus, using a classification-image method (psychophysical reverse correlation) and a task where observers discriminated between slight differences in the phase (relative alignment) of a target Gabor in visual noise. The method estimates the internal "template" that describes how the visual system weights the input information for decisions. One popular idea has been that learning makes the template more like an ideal Bayesian weighting; however, the evidence has been indirect. We used a new regression technique to directly estimate the template weight change and to test whether the direction of reweighting differs significantly from an optimal learning strategy. The subjects trained on the task for six daily sessions, and we tested the transfer of training to a target in an orthogonal orientation. Strong learning and partial transfer were observed. We tested whether task precision (difficulty) had an effect on template change and transfer: observers trained on either a high-precision task (a small, 60° phase difference) or a low-precision task (a 180° difference). Task precision did not have an effect on the amount of template change or transfer, suggesting that task precision per se does not determine whether learning generalizes. Classification images show that training made observers use more task-relevant features and unlearn some irrelevant features. The transfer templates resembled partially optimized versions of the templates from the training sessions. The direction of template change resembles ideal learning significantly but not completely. The amount of template change was highly correlated with the amount of learning.
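The regression logic of a classification image is compact: regress the trial-by-trial responses on the noise fields to recover the internal template's weights. A schematic Python version with a simulated observer (not the authors' data or their exact regression technique):

import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pix = 5000, 64

# Simulated observer: responds by correlating each noise field with a
# hidden internal template, plus internal noise.
template = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 6.0) ** 2)
noise = rng.normal(size=(n_trials, n_pix))       # external noise fields
resp = noise @ template + rng.normal(scale=2.0, size=n_trials)

# Classification image: least-squares regression of responses on noise.
# With white noise this reduces to the response-weighted noise average.
w_hat, *_ = np.linalg.lstsq(noise, resp, rcond=None)

corr = np.corrcoef(w_hat, template)[0, 1]
print(f"template recovery correlation: {corr:.2f}")   # close to 1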