NASA Astrophysics Data System (ADS)
Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming
2018-03-01
To investigate the effect of abrasive flow polishing on variable-diameter pipe parts, high-precision dispensing needles were taken as the research object and the polishing process was simulated numerically. The distributions of dynamic pressure and turbulent viscosity in the abrasive flow field inside the needle were analyzed under different volume-fraction conditions. The comparative analysis demonstrates the effectiveness of abrasive-grain polishing of high-precision dispensing needles: controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during polishing, so the polishing quality of the abrasive grains can be controlled.
A comparative study of integrators for constructing ephemerides with high precision.
NASA Astrophysics Data System (ADS)
Huang, Tian-Yi
1990-09-01
Four indexes are used to evaluate integrators: the local truncation error, numerical stability, computational complexity, and quality of adaptation. A review and comparative study of several numerical integration methods popular for constructing high-precision ephemerides (Adams, Cowell, Runge-Kutta-Fehlberg, Gragg-Bulirsch-Stoer extrapolation, Everhart, Taylor series, and Krogh) is presented.
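As a toy illustration of how such indexes can be probed numerically (our sketch, not from the review; Python assumed as the working language), the following integrates the planar two-body problem with a classical fourth-order Runge-Kutta step and monitors relative energy drift, a common proxy for accumulated local truncation error. The higher-order methods named above would be compared on the same footing.

```python
import numpy as np

def kepler_rhs(y):
    # Planar two-body problem with GM = 1: y = [x, y, vx, vy].
    r3 = np.hypot(y[0], y[1]) ** 3
    return np.array([y[2], y[3], -y[0] / r3, -y[1] / r3])

def rk4_step(y, h):
    k1 = kepler_rhs(y)
    k2 = kepler_rhs(y + 0.5 * h * k1)
    k3 = kepler_rhs(y + 0.5 * h * k2)
    k4 = kepler_rhs(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(y):
    return 0.5 * (y[2] ** 2 + y[3] ** 2) - 1.0 / np.hypot(y[0], y[1])

y, h = np.array([1.0, 0.0, 0.0, 1.0]), 1e-2   # circular orbit, period 2*pi
e0 = energy(y)
for _ in range(int(100 * 2 * np.pi / h)):      # ~100 orbital periods
    y = rk4_step(y, h)
print("relative energy drift:", abs(energy(y) - e0) / abs(e0))
```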
A quasi-spectral method for the Cauchy problem of the 2-D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in a computer as hexadecimal floating-point numbers with a finite number of digits, so numerical analysis often suffers from rounding errors. Rounding errors particularly degrade the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the damage done by rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we demonstrate the effectiveness of multi-precision arithmetic on two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. A few numerical examples show that multi-precision arithmetic resolves these numerical solutions well when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
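The principle can be reproduced with any multi-precision package; the sketch below uses Python's mpmath as a freely available stand-in for the system credited to Dr Fujiwara (the paper's actual tool) and shows catastrophic cancellation disappearing once the working precision exceeds the cancellation depth.

```python
from mpmath import mp, mpf

def forward_difference(f, x, h):
    # (f(x+h) - f(x)) / h suffers catastrophic cancellation for tiny h.
    return (f(x + h) - f(x)) / h

for dps in (16, 50):                 # ~double precision vs. 50 digits
    mp.dps = dps
    h = mpf(10) ** -20               # step far below double-precision ulp
    approx = forward_difference(mp.sin, mpf(1), h)
    err = abs(approx - mp.cos(1))    # exact derivative of sin at 1 is cos(1)
    print(f"{dps:2d} digits -> error {mp.nstr(err, 5)}")
```

At 16 digits the difference f(x+h) - f(x) evaluates to exactly zero, so the quotient is useless; at 50 digits the error drops to the expected O(h) level.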
NASA Astrophysics Data System (ADS)
Scholten, Sarah K.; Perrella, Christopher; Anstie, James D.; White, Richard T.; Al-Ashwal, Waddah; Hébert, Nicolas Bourbeau; Genest, Jérôme; Luiten, Andre N.
2018-05-01
Real-time and accurate measurements of gas properties are highly desirable for numerous real-world applications. Here, we use an optical-frequency comb to demonstrate absolute number-density and temperature measurements of a sample gas with state-of-the-art precision and accuracy. The technique is demonstrated by measuring the number density of ¹²C¹⁶O₂ with an accuracy of better than 1% and a precision of 0.04% in a measurement and analysis cycle of less than 1 s. This technique is transferable to numerous molecular species, thus offering an avenue for near-universal gas concentration measurements.
Simulation of Thermal Behavior in High-Precision Measurement Instruments
NASA Astrophysics Data System (ADS)
Weis, Hanna Sophie; Augustin, Silke
2008-06-01
In this paper, a way to modularize complex finite-element models is described. The modularization is applied to temperature fields that arise in high-precision measurement instruments, where temperature negatively impacts the achievable measurement uncertainty. To correct for this uncertainty, the temperature must be known at every point, which cannot be achieved by measuring temperatures at specific locations alone; a numerical treatment is therefore necessary. As the system of interest is very complex, modularization is unavoidable to obtain good numerical results.
Núñez-Peña, M Isabel; Suárez-Pellicioni, Macarena
2014-12-01
Numerical comparison tasks are widely used to study the mental representation of numerical magnitude. In this study, event-related brain potentials (ERPs) were recorded while 26 high math-anxious (HMA) and 27 low math-anxious (LMA) individuals were presented with pairs of single-digit Arabic numbers and asked to decide which one had the larger numerical magnitude. The size of the numbers and the distance between them were manipulated in order to study the size and distance effects. The results showed that both the distance and size effects were larger for the HMA group. As for the ERPs, the distance-related component showed larger amplitude for both the size and distance effects in the HMA group than among their LMA counterparts. Since this component has been taken as a marker of the processing of numerical magnitude, this result suggests that HMA individuals have a less precise representation of numerical magnitude.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2013-07-01
We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10⁻¹⁰ relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
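For concreteness, here is a minimal sketch (our own, with placeholder data) of the Cholesky-based estimator for a candidate band size k; the proposed test statistic itself is from the paper and not reproduced here. The band-size question is then which k to feed this estimator, which is what the hypothesis test decides.

```python
import numpy as np

def banded_precision(X, k):
    # Modified Cholesky estimator: Omega = T' D^{-1} T, where row j of the
    # unit lower-triangular T holds the negated coefficients of the
    # regression of X_j on its k predecessors. Assumes centered columns.
    n, p = X.shape
    T, d = np.eye(p), np.empty(p)
    d[0] = np.mean(X[:, 0] ** 2)
    for j in range(1, p):
        lo = max(0, j - k)
        Z = X[:, lo:j]
        phi, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        T[j, lo:j] = -phi
        d[j] = np.mean((X[:, j] - Z @ phi) ** 2)   # residual variance
    return T.T @ (T / d[:, None])                  # T' D^{-1} T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))    # placeholder data, truly banded or not
Omega_hat = banded_precision(X, k=2)  # estimate under candidate band size 2
```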
A new shock-capturing numerical scheme for ideal hydrodynamics
NASA Astrophysics Data System (ADS)
Fecková, Z.; Tomášik, B.
2015-05-01
We present a new algorithm for solving ideal relativistic hydrodynamics based on the Godunov method with an exact solution of the Riemann problem for an arbitrary equation of state. Standard numerical tests are performed, such as sound wave propagation and the shock tube problem. Low numerical viscosity and high precision are attained with proper discretization.
A 1D radiative transfer benchmark with polarization via doubling and adding
NASA Astrophysics Data System (ADS)
Ganapol, B. D.
2017-11-01
Highly precise numerical solutions of the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution of the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes-vector solution of the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding, enhanced through convergence acceleration. Benchmark solutions found in the literature are updated to seven places for reflectance and transmittance, as well as for the angular flux. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.
Nahmani, Marc; Lanahan, Conor; DeRosier, David; Turrigiano, Gina G.
2017-01-01
Superresolution microscopy has fundamentally altered our ability to resolve subcellular proteins, but improving on these techniques to study dense structures composed of single-molecule-sized elements has been a challenge. One possible approach to enhance superresolution precision is to use cryogenic fluorescent imaging, reported to reduce fluorescent protein bleaching rates, thereby increasing the precision of superresolution imaging. Here, we describe an approach to cryogenic photoactivated localization microscopy (cPALM) that permits the use of a room-temperature high-numerical-aperture objective lens to image frozen samples in their native state. We find that cPALM increases photon yields and show that this approach can be used to enhance the effective resolution of two photoactivatable/switchable fluorophore-labeled structures in the same frozen sample. This higher resolution, two-color extension of the cPALM technique will expand the accessibility of this approach to a range of laboratories interested in more precise reconstructions of complex subcellular targets.
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
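The rpe library itself is Fortran; as a loose Python analogue of the idea (ours, not the rpe API), one can round the significand of every intermediate result and watch a computation degrade. With a 10-bit significand the running sum below stagnates once the summand falls below half an ulp of the accumulator.

```python
import numpy as np

def reduce_precision(x, sbits):
    # Round the significand of a double to `sbits` explicit bits.
    # A simplified analogue of what an emulator does after each operation
    # (the real rpe library also emulates exponent range and more).
    m, e = np.frexp(np.asarray(x, dtype=np.float64))
    return np.ldexp(np.round(m * 2.0 ** sbits) / 2.0 ** sbits, e)

def sum_reduced(values, sbits):
    # Truncate every intermediate result, mimicking reduced hardware.
    acc = 0.0
    for v in values:
        acc = float(reduce_precision(acc + v, sbits))
    return acc

x = np.full(100000, 0.1)
print(sum_reduced(x, 10), x.sum())   # half-precision-like vs. double
```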
NASA Astrophysics Data System (ADS)
Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.
A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.
Fully Nonlinear Modeling and Analysis of Precision Membranes
NASA Technical Reports Server (NTRS)
Pai, P. Frank; Young, Leyland G.
2003-01-01
High precision membranes are used in many current space applications. This paper presents a fully nonlinear membrane theory with forward and inverse analyses of high precision membrane structures. The fully nonlinear membrane theory is derived from Jaumann strains and stresses, exact coordinate transformations, the concept of local relative displacements, and orthogonal virtual rotations. In this theory, energy and Newtonian formulations are fully correlated, and every structural term can be interpreted in terms of vectors. Fully nonlinear ordinary differential equations (ODEs) governing the large static deformations of known axisymmetric membranes under known axisymmetric loading (i.e., forward problems) are presented as first-order ODEs, and a method for obtaining numerically exact solutions using the multiple shooting procedure is shown. A method for obtaining the undeformed geometry of any axisymmetric membrane with a known inflated geometry and a known internal pressure (i.e., inverse problems) is also derived. Numerical results from forward analysis are verified using results in the literature, and results from inverse analysis are verified using known exact solutions and solutions from the forward analysis. Results show that the membrane theory and the proposed numerical methods for solving nonlinear forward and inverse membrane problems are accurate.
NASA Astrophysics Data System (ADS)
Ma, Lin
2017-11-01
This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
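As a zeroth-order reference point for what such an identification refines, the taut-string relation f_n = (n/2L)·sqrt(T/m) gives a tension estimate from the fundamental frequency alone, ignoring sag, bending stiffness, inclination and end restraints. The numbers below are hypothetical.

```python
def taut_string_tension(f1, length, mass_per_len):
    # Taut-string estimate T = 4 * m * L^2 * f1^2 from the fundamental
    # frequency f1 (Hz), cable length L (m) and mass per unit length m
    # (kg/m); the paper's finite difference model refines this with sag,
    # flexural stiffness, inclination and rotational end stiffness.
    return 4.0 * mass_per_len * length ** 2 * f1 ** 2

# Hypothetical stay cable: 100 m long, 60 kg/m, fundamental at 1.9 Hz.
print(taut_string_tension(f1=1.9, length=100.0, mass_per_len=60.0), "N")
```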
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫₀^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
Microfluidic proportional flow controller
Prentice-Mott, Harrison; Toner, Mehmet; Irimia, Daniel
2011-01-01
Precise flow control in microfluidic chips is important for many biochemical assays and experiments at microscale. While several technologies for controlling fluid flow have been implemented either on- or off-chip, these can provide either high-speed or high-precision control, but seldom could accomplish both at the same time. Here we describe a new on-chip, pneumatically activated flow controller that allows for fast and precise control of the flow rate through a microfluidic channel. Experimental results show that the new proportional flow controllers exhibited a response time of approximately 250 ms, while our numerical simulations suggest that faster actuation down to approximately 50 ms could be achieved with alternative actuation schemes.
A floating-point/multiple-precision processor for airborne applications
NASA Technical Reports Server (NTRS)
Yee, R.
1982-01-01
A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision and other arithmetic functions at execution times at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor, a numerical coprocessor with eight 80-bit registers running at a 5 MHz clock rate, 18K of random access memory (RAM) and 16K of electrically programmable read-only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high-order languages such as FORTRAN and PL/M-86.
NASA Astrophysics Data System (ADS)
Chang, Yu Min; Lu, Nien Hua; Wu, Tsung Chiang
2005-06-01
This study applies 3D laser scanning technology to develop a high-precision measuring system for the digital survey of historical buildings. It outperforms other methods in obtaining abundant high-precision measuring points and computing data instantly. In this study, the Pei-tien Temple, a Chinese Taoism temple in southern Taiwan famous for its highly intricate architecture and more than 300-year history, was adopted as the target to prove the high accuracy and efficiency of this system. Using the French-made MENSI GS-100 laser scanner, numerous measuring points were precisely plotted to present the plane map, vertical map and 3D map of the property. Accuracies of 0.1-1 mm in the digital data have consistently been achieved for the historical heritage measurement.
Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing
2014-09-01
In this paper, we consider discrete time-varying internal model-based control design for high-precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low-order robust time-varying stabilizer is further synthesized. In a discrete-time setting, the high-precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real-time experimental results are provided, achieving tracking errors around 3.5‰ for frequency-varying signals.
NASA Astrophysics Data System (ADS)
Ye, Dong; Sun, Zhaowei; Wu, Shunan
2012-08-01
The quaternion-based, high-precision, large-angle rapid reorientation of rigid spacecraft is the main problem investigated in this study. The operation is accomplished via a hybrid thruster and reaction wheel strategy, in which thrusters provide a primary maneuver torque in open loop, while reaction wheels provide fine control torque to achieve high precision in closed-loop control. The inaccuracy of the thrusters is handled by variable structure control (VSC). In addition, a signum function is mixed into the switching surface of the VSC to produce a maneuver to the reference attitude trajectory along the shortest distance. Detailed proofs and numerical simulation examples are presented to illustrate all the technical aspects of this work.
Precision estimate for Odin-OSIRIS limb scatter retrievals
NASA Astrophysics Data System (ADS)
Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.
2012-02-01
The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which is available for download, is performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state must be estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for deriving a numerical estimate of the covariance matrix of the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere, where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision of the retrieved profiles.
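Below is a generic MART sketch with a brute-force Monte Carlo analogue of the numerical covariance estimate (the paper derives a far more efficient estimator; the forward model A, noise level and relaxation factor here are placeholders, not the OSIRIS operator).

```python
import numpy as np

def mart(A, y, n_iter=30, relax=0.5):
    # Multiplicative row-action updates keep the state strictly positive:
    # x_j <- x_j * (y_i / (A_i . x)) ** (relax * A_ij / max_j A_ij).
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            x *= (y[i] / pred) ** (relax * A[i] / A[i].max())
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (40, 20))      # placeholder forward model
x_true = rng.uniform(0.5, 2.0, 20)
y = A @ x_true

# Brute-force numerical covariance: push an ensemble of noisy measurements
# through the (nonlinear) retrieval and take the sample covariance.
ens = np.array([mart(A, y + 0.01 * rng.standard_normal(y.size))
                for _ in range(200)])
precision = np.sqrt(np.diag(np.cov(ens.T)))   # 1-sigma retrieval precision
```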
Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO
Zhang, Chaozhu; Han, Jinan; Li, Ke
2014-01-01
The numerically controlled oscillator has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation applied to a numerically controlled oscillator (NCO). By estimating the direction of part of the phase rotations, the algorithm eliminates part of the phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the numerically controlled oscillator is simulated and implemented with the Quartus II and Modelsim software packages. Finally, simulation results indicate that the improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation.
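For reference, a plain rotation-mode CORDIC (the traditional algorithm the paper improves on) computes sine and cosine with shift-and-add micro-rotations. A minimal floating-point sketch, ours for illustration:

```python
import math

def cordic_sin_cos(theta, n_iter=24):
    # Rotate the pre-scaled vector (K, 0) toward angle theta using
    # micro-rotations by atan(2^-i); hardware replaces the multiplies
    # by 2^-i with bit shifts. Valid for |theta| <= ~1.74 rad.
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = math.prod(1.0 / math.sqrt(1.0 + 4.0 ** -i) for i in range(n_iter))
    x, y, z = K, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0 else -1.0          # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a                            # residual angle
    return y, x                               # (sin(theta), cos(theta))

print(cordic_sin_cos(0.7), (math.sin(0.7), math.cos(0.7)))
```

The hybrid algorithm of the paper predicts the direction d for a subset of these iterations from a phase-rotation estimate instead of computing it sequentially, which is where the delay saving comes from.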
Micro-optical fabrication by ultraprecision diamond machining and precision molding
NASA Astrophysics Data System (ADS)
Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.
2017-06-01
Ultraprecision diamond machining and high-volume molding of affordable, high-performance optical elements are becoming a viable process in the optical industry for low-cost, high-quality micro-optical component manufacturing. In this process, high-precision micro-optical molds are first fabricated using ultraprecision single-point diamond machining, followed by high-volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post-machining polishing. For consumers, compression molding or injection molding provides efficient, high-quality optics at extremely low cost. In this paper, ultraprecision machine design and machining processes such as slow tool and fast tool servo are described first; then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-09-01
Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the recent most precise estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
MS-bar-on-shell quark mass relation up to four loops in QCD and a general SU(N) gauge group
NASA Astrophysics Data System (ADS)
Marquard, Peter; Smirnov, Alexander V.; Smirnov, Vladimir A.; Steinhauser, Matthias; Wellmann, David
2016-10-01
We compute the relation between heavy quark masses defined in the modified minimal subtraction and the on-shell schemes. Detailed results are presented for all coefficients of the SU(N_c) color factors. The reduction of the four-loop on-shell integrals is performed for a general QCD gauge parameter. Altogether there are about 380 master integrals. Some of them are computed analytically, others with high numerical precision using Mellin-Barnes representations, and the rest numerically with the help of FIESTA. We discuss in detail the precise numerical evaluation of the four-loop master integrals. Updated relations between various short-distance masses and the MS-bar quark mass to next-to-next-to-next-to-leading order accuracy are provided for the charm, bottom and top quarks. We discuss the dependence on the renormalization and factorization scale.
Brault, C; Gil, C; Boboc, A; Spuig, P
2011-04-01
On the Tore Supra tokamak, a far-infrared polarimeter diagnostic has been routinely used for diagnosing the current density by measuring the Faraday rotation angle. High measurement precision is needed to correctly reconstruct the current profile. To reach this precision, the electronics used to compute the phase and the amplitude of the detected signals must have good resilience to noise in the measurement. In this article, the analogue cards' response to the noise coming from the detectors and its impact on the Faraday angle measurements are analyzed, and we present numerical methods to calculate the phase and the amplitude. These validations have been done using real signals acquired in Tore Supra and JET experiments. The methods have been developed to be used in real time in the future numerical cards that will replace the present Tore Supra analogue ones.
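One standard noise-robust way to compute the phase and amplitude of a detected tone at a known beat frequency is a digital lock-in (quadrature projection). The sketch below illustrates the generic idea only; it is not the Tore Supra card algorithm, and all signal parameters are made up.

```python
import numpy as np

def phase_amplitude(signal, f, fs):
    # Project the record onto quadrature references at frequency f;
    # averaging over many periods suppresses uncorrelated noise.
    t = np.arange(len(signal)) / fs
    I = 2.0 * np.mean(signal * np.cos(2 * np.pi * f * t))   # ~ A*cos(phi)
    Q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f * t))   # ~ -A*sin(phi)
    return np.arctan2(-Q, I), np.hypot(I, Q)                # (phase, amp)

fs, f = 1e6, 10e3
t = np.arange(10000) / fs            # exactly 100 beat periods
rng = np.random.default_rng(2)
sig = 1.5 * np.cos(2 * np.pi * f * t + 0.3) + 0.2 * rng.standard_normal(t.size)
print(phase_amplitude(sig, f, fs))   # ~ (0.3 rad, 1.5)
```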
The effect of mathematics anxiety on the processing of numerical magnitude.
Maloney, Erin A; Ansari, Daniel; Fugelsang, Jonathan A
2011-01-01
In an effort to understand the origins of mathematics anxiety, we investigated the processing of symbolic magnitude by high mathematics-anxious (HMA) and low mathematics-anxious (LMA) individuals by examining their performance on two variants of the symbolic numerical comparison task. In two experiments, a numerical distance by mathematics anxiety (MA) interaction was obtained, demonstrating that the effect of numerical distance on response times was larger for HMA than for LMA individuals. These data support the claim that HMA individuals have less precise representations of numerical magnitude than their LMA peers, suggesting that MA is associated with low-level numerical deficits that compromise the development of higher level mathematical skills.
Palmer, T. N.
2014-01-01
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
High numerical aperture multilayer Laue lenses
Morgan, Andrew J.; Prasciolu, Mauro; Andrejczuk, Andrzej; ...
2015-06-01
The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution.
High quality optically polished aluminum mirror and process for producing
NASA Technical Reports Server (NTRS)
Lyons, III, James J. (Inventor); Zaniewski, John J. (Inventor)
2005-01-01
A new technical advancement in the field of precision aluminum optics permits high quality optical polishing of an aluminum monolith, which, in the field of optics, offers numerous benefits because of its machinability, light weight, and low cost. This invention combines diamond turning and conventional polishing along with India ink, a newly adopted material for the polishing, to accomplish a significant improvement in the surface precision of aluminum monoliths for optical purposes. This invention guarantees the precise optical polishing of typical bare aluminum monoliths to a surface roughness of less than about 30 angstroms rms, and preferably about 5 angstroms rms, while maintaining a surface figure accuracy, in terms of surface figure error, of not more than one-fifteenth of a wave peak-to-valley.
NASA Technical Reports Server (NTRS)
Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco
2010-01-01
An important limitation on the precision of the results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delays caused by the neutral atmosphere, see e.g. [1]. In recent years, numerical weather models (NWMs) have been applied to improve the mapping functions used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited-area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial resolution (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).
Non-rigid Earth rotation series
NASA Astrophysics Data System (ADS)
Pashkevich, V. V.
2008-04-01
In recent years, many attempts to derive a high-precision theory of the non-rigid Earth rotation have been carried out. For these purposes different transfer functions are used. Usually these transfer functions are applied to the series representing the nutation in longitude and in obliquity of the rigid Earth rotation with respect to the ecliptic of date. The aim of this investigation is the construction of new high-precision non-rigid Earth rotation series (SN9000), dynamically adequate to the DE404/LE404 ephemeris over 2000 years, which are expressed as functions of the Euler angles ψ, θ and φ with respect to the fixed ecliptic plane and equinox J2000.0. The early stages of the previous investigation: 1. A high-precision numerical solution of the rigid Earth rotation was constructed (V.V. Pashkevich, G.I. Eroshkin and A. Brzezinski, 2004; V.V. Pashkevich and G.I. Eroshkin, Proceedings of Journees 2004). The initial conditions were calculated from SMART97 (P. Bretagnon, G. Francou, P. Rocher, J.L. Simon, 1998). The discrepancies between the numerical solution and the semi-analytical solution SMART97 were obtained in the Euler angles over 2000 years with one-day spacing. 2. Investigation of the discrepancies was carried out by least squares and spectral analysis algorithms (V.V. Pashkevich and G.I. Eroshkin, Proceedings of Journees 2005), and the high-precision rigid Earth rotation series S9000 were determined (V.V. Pashkevich and G.I. Eroshkin, 2005). The next stage of this investigation: 3. The new high-precision non-rigid Earth rotation series (SN9000), which are expressed as functions of the Euler angles, are constructed using the method of (P. Bretagnon, P.M. Mathews, J.-L. Simon, 1999) and the transfer function MHB2002 (Mathews, P.M., Herring, T.A., and Buffett, B.A., 2002).
High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ (d - 3)^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.
NASA Technical Reports Server (NTRS)
Lake, Mark S.; Peterson, Lee D.; Hachkowski, M. Roman; Hinkle, Jason D.; Hardaway, Lisa R.
1998-01-01
The present paper summarizes results from an ongoing research program conducted jointly by the University of Colorado and NASA Langley Research Center since 1994. This program has resulted in general guidelines for the design of high-precision deployment mechanisms, and tests of prototype deployable structures incorporating these mechanisms have shown microdynamically stable behavior (i.e., dimensional stability to parts per million). These advancements have resulted from the identification of numerous heretofore unknown microdynamic and micromechanical response phenomena, and the development of new test techniques and instrumentation systems to interrogate these phenomena. In addition, recent tests have begun to interrogate nanomechanical response of materials and joints and have been used to develop an understanding of nonlinear nanodynamic behavior in microdynamically stable structures. The ultimate goal of these efforts is to enable nano-precision active control of micro-precision deployable structures (i.e., active control to a resolution of parts per billion).
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the simulated image matches the acquired one. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point location. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method, with a digital micromirror device used to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.
The least channel capacity for chaos synchronization.
Wang, Mogei; Wang, Xingyuan; Liu, Zhenzhen; Zhang, Huaguang
2011-03-01
Recently researchers have found that a channel with capacity exceeding the Kolmogorov-Sinai entropy of the drive system (h_KS) is theoretically necessary and sufficient to sustain unidirectional synchronization to arbitrarily high precision. In this study, we use symbolic dynamics and the automaton reset sequence to distinguish the information that is required in identifying the current drive word and obtaining the synchronization. Then, we show that the least channel capacity that is sufficient to transmit the distinguished information and attain synchronization of arbitrarily high precision is h_KS. Numerical simulations provide support for our conclusions.
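For a one-dimensional chaotic map, Pesin's identity equates h_KS with the positive Lyapunov exponent, so the capacity bound can be estimated numerically. A sketch (ours) for the fully chaotic logistic map, where h_KS = ln 2 nats per iteration:

```python
import math

# Lyapunov exponent of x -> 4x(1-x) by averaging log|f'(x)| = log|4 - 8x|
# along an orbit; for this map the result converges to ln 2 ~ 0.693.
x, acc, n = 0.3, 0.0, 100000
for _ in range(n):
    acc += math.log(abs(4.0 - 8.0 * x))
    x = 4.0 * x * (1.0 - x)
print("h_KS estimate (nats/iteration):", acc / n)
```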
NASA Astrophysics Data System (ADS)
Gerberding, Oliver; Sheard, Benjamin; Bykov, Iouri; Kullmann, Joachim; Esteban Delgado, Juan Jose; Danzmann, Karsten; Heinzel, Gerhard
2013-12-01
Intersatellite laser interferometry is a central component of future space-borne gravity instruments like the Laser Interferometer Space Antenna (LISA), evolved LISA, NGO and future geodesy missions. The inherently small laser wavelength allows us to measure distance variations with extremely high precision by interfering a reference beam with a measurement beam. The readout of such interferometers is often based on tracking phasemeters, which are able to measure the phase of an incoming beatnote with high precision over a wide range of frequencies. The implementation of such phasemeters is based on all-digital phase-locked loops (ADPLLs) hosted in FPGAs. Here, we present a precise model of an ADPLL that allows us to design such a readout algorithm, and we support our analysis by numerical performance measurements and experiments with analogue signals.
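A textbook ADPLL operating on a complex (I/Q) beatnote captures the structure being modeled: phase detector, PI loop filter, and numerically controlled oscillator. The gains and signal parameters below are illustrative, not a flight phasemeter design.

```python
import numpy as np

def adpll(iq, f0, fs, kp=0.05, ki=0.002):
    # Phase detector (angle of mixed-down sample), integral path tracking
    # the frequency offset, and NCO phase accumulation.
    theta, integ = 0.0, 0.0
    freq0 = 2 * np.pi * f0 / fs            # free-running rate (rad/sample)
    out = np.empty(len(iq))
    for n, z in enumerate(iq):
        err = np.angle(z * np.exp(-1j * theta))   # phase error in (-pi, pi]
        integ += ki * err                          # integral path
        theta += freq0 + integ + kp * err          # NCO update
        out[n] = theta
    return out

fs, f_true = 1e6, 12345.0
n = np.arange(50000)
rng = np.random.default_rng(1)
beat = np.exp(1j * (2 * np.pi * f_true / fs * n + 0.5)) \
     + 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
phase = adpll(beat, f0=12000.0, fs=fs)             # NCO starts 345 Hz off
f_est = np.mean(np.diff(phase[-20000:])) * fs / (2 * np.pi)
print(f_est)                                        # ~12345 Hz after lock
```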
Air Bearings Machined On Ultra Precision, Hydrostatic CNC-Lathe
NASA Astrophysics Data System (ADS)
Knol, Pierre H.; Szepesi, Denis; Deurwaarder, Jan M.
1987-01-01
Micromachining of precision elements requires an adequate machine concept to meet the high demands on surface finish and on dimensional and shape accuracy. The Hembrug ultra-precision lathes have been designed exclusively around hydrostatic principles for the main spindle and guideways. This concept is explained together with some major advantages of hydrostatics over aerostatics in universal micromachining applications. Hembrug originally developed the conventional Mikroturn ultra-precision facing lathes for diamond turning of computer memory discs. This first generation of machines was followed by advanced computer numerically controlled types for machining complex precision workpieces. One of these parts, an aerostatic bearing component, has been successfully machined on the Super-Mikroturn CNC. A case study of air bearing machining confirms that a good micromachining result does not depend on machine performance alone, but also on the technology applied.
Automated survey of pavement distress based on 2D and 3D laser images.
DOT National Transportation Integrated Search
2011-11-01
Despite numerous efforts in recent decades, currently most information on pavement surface distresses cannot be obtained automatically, at high-speed, and at acceptable precision and bias levels. This research provided seed funding to produce a funct...
Precision of the anchor influences the amount of adjustment.
Janiszewski, Chris; Uy, Dan
2008-02-01
The anchoring-and-adjustment heuristic has been used to account for a wide variety of numerical judgments. Five studies show that adjustment away from a numerical anchor is smaller if the anchor is precise than if it is rounded. Evidence suggests that precise anchors, compared with rounded anchors, are represented on a subjective scale with a finer resolution. If adjustment consists of a series of iterative mental movements along a subjective scale, then an adjustment from a precise anchor should result in a smaller overall correction than an adjustment from a rounded anchor.
Underwater sympathetic detonation of pellet explosive
NASA Astrophysics Data System (ADS)
Kubota, Shiro; Saburi, Tei; Nagayama, Kunihito
2017-06-01
The underwater sympathetic detonation of pellet explosives was recorded by high-speed photography. The diameter and thickness of each pellet were 20 and 10 mm, respectively. The experimental system, consisting of a precise electric detonator, two grams of composition C4 booster and three pellets, was set in a water tank. An HPV-X high-speed video camera made by Shimadzu was used at 10 Mfps. The underwater explosions of the precise electric detonator, the C4 booster and a single pellet were also recorded by high-speed photography to estimate the propagation processes of the underwater shock waves. A numerical simulation of the underwater sympathetic detonation of the pellet explosives was also carried out and compared with the experiment.
Numerical Relativity for Space-Based Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Baker, John G.
2011-01-01
In the next decade, gravitational wave instruments in space may provide high-precision measurements of gravitational-wave signals from strong sources, such as black holes. Currently variations on the original Laser Interferometer Space Antenna mission concepts are under study in the hope of reducing costs. Even the observations of a reduced instrument may place strong demands on numerical relativity capabilities. Possible advances in the coming years may fuel a new generation of codes ready to confront these challenges.
Computer numeric control generation of toric surfaces
NASA Astrophysics Data System (ADS)
Bradley, Norman D.; Ball, Gary A.; Keller, John R.
1994-05-01
Until recently, the manufacture of toric ophthalmic lenses relied largely upon expensive, manual techniques for generation and polishing. Recent gains in computer numeric control (CNC) technology and tooling enable lens designers to employ single- point diamond, fly-cutting methods in the production of torics. Fly-cutting methods continue to improve, significantly expanding lens design possibilities while lowering production costs. Advantages of CNC fly cutting include precise control of surface geometry, rapid production with high throughput, and high-quality lens surface finishes requiring minimal polishing. As accessibility and affordability increase within the ophthalmic market, torics promise to dramatically expand lens design choices available to consumers.
An Online Gravity Modeling Method Applied for High Precision Free-INS
Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao
2016-01-01
For a real-time inertial navigation system (INS) solution, the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the above computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.
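A sketch of the regional refit step under stated assumptions (hypothetical grid extent and synthetic DOV values): fit the six coefficients of a two-dimensional second-order polynomial by least squares to DOVs precomputed from the SHM.

```python
import numpy as np

def fit_dov_poly2(lat, lon, dov):
    # Least-squares fit of
    #   d(lat, lon) = c0 + c1*lat + c2*lon + c3*lat^2 + c4*lat*lon + c5*lon^2
    # to DOV samples on a dense regional grid.
    A = np.column_stack([np.ones_like(lat), lat, lon,
                         lat ** 2, lat * lon, lon ** 2])
    coef, *_ = np.linalg.lstsq(A, dov, rcond=None)
    return coef

# Hypothetical regional grid (degrees) with synthetic, smooth DOV values.
lat, lon = np.meshgrid(np.linspace(30, 35, 21), np.linspace(110, 115, 21))
dov = 1e-5 * (lat - 32.5) - 2e-5 * (lon - 112.5) + 1e-6 * lat * lon
c = fit_dov_poly2(lat.ravel(), lon.ravel(), dov.ravel())
# Onboard, evaluating this polynomial replaces the full SHM synthesis.
```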
Fourier Series and Elliptic Functions
ERIC Educational Resources Information Center
Fay, Temple H.
2003-01-01
Non-linear second-order differential equations whose solutions are the elliptic functions sn(t, k), cn(t, k) and dn(t, k) are investigated. Using Mathematica, high precision numerical solutions are generated. From these data, Fourier coefficients are determined yielding approximate formulas for these non-elementary functions that are…
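The same coefficients can be generated outside Mathematica; the short SciPy sketch below (ours) samples sn(t, k) over one period 4K and reads the sine coefficients off an FFT (sn is odd, so the cosine terms vanish).

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8                        # elliptic modulus; SciPy takes m = k**2
m = k ** 2
K = ellipk(m)                  # complete elliptic integral (quarter period)
N = 4096
t = np.linspace(0.0, 4.0 * K, N, endpoint=False)   # one full period of sn
sn = ellipj(t, m)[0]           # ellipj returns (sn, cn, dn, ph)

# Fourier series over period 4K: sn(t) = sum_n b_n sin(n*pi*t/(2K)).
c = np.fft.rfft(sn) / N
b = -2.0 * c.imag
print(b[1:6])                  # leading sine coefficients
```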
15 CFR 711.5 - Numerical precision of submitted data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...
15 CFR 711.5 - Numerical precision of submitted data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...
15 CFR 711.5 - Numerical precision of submitted data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...
15 CFR 711.5 - Numerical precision of submitted data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...
15 CFR 711.5 - Numerical precision of submitted data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...
Challenges in mold manufacturing for high precision molded diffractive optical elements
NASA Astrophysics Data System (ADS)
Pongs, Guido; Bresseler, Bernd; Schweizer, Klaus; Bergs, Thomas
2016-09-01
Isothermal precision glass molding of imaging optics is the key technology for mass production of precise optical elements. Especially for numerous consumer applications (e.g. digital cameras, smart phones, …), high-precision glass molding is applied for the manufacturing of aspherical lenses. The use of diffractive optical elements (DOEs) can help to further reduce the number of lenses in optical systems, which leads to a reduced weight of hand-held optical devices. Today, however, the application of molded glass DOEs is limited by the technological challenges of structuring the mold surfaces. Depending on the application, sub-micrometer structures are required on the mold surface, and these structures have to be replicated very precisely onto the glass lens surface. The micro-structuring of hard and brittle mold materials such as tungsten carbide in particular is very difficult and not established, so a multitude of innovative approaches using diffractive optical elements cannot yet be realized. Aixtooling has investigated different mold materials and suitable machining technologies for the micro- and sub-micrometer structuring of mold surfaces. The focus of the work lies on ultra-precision grinding to generate the diffractive pattern on the mold surfaces. This paper presents the latest achievements in diffractive structuring of tungsten carbide mold surfaces by ultra-precision grinding.
NASA Astrophysics Data System (ADS)
Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin
2016-12-01
This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. Its main innovation is combining the accuracy of numerical methods with the efficiency of engineering methods, which makes aeroheating simulation both more precise and faster. Based on Prandtl boundary layer theory, the entire flow field is divided into inviscid and viscous flow at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are calculated numerically by assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived, and the heat flux is then obtained. The results for a double cone show that at 0° and 10° angles of attack, the aeroheating calculation based on inviscid boundary-layer-edge parameters reproduces the experimental data better than the engineering method. The proposed method also reproduces the viscous numerical results for the flight vehicle well. Hence, this method provides a promising way to overcome the high cost of numerical calculation while improving precision.
NASA Astrophysics Data System (ADS)
Batailly, Alain; Agrapart, Quentin; Millecamps, Antoine; Brunel, Jean-François
2016-08-01
This contribution addresses a confrontation between the experimental simulation of a rotor/stator interaction case initiated by structural contacts and numerical predictions made with an in-house numerical strategy. Contrary to previous studies carried out within the low-pressure compressor of an aircraft engine, this interaction is found to be non-divergent: high amplitudes of vibration are experimentally observed and numerically predicted over a short period of time. An in-depth analysis of experimental data first allows for a precise characterization of the interaction as a rubbing event involving the first torsional mode of a single blade. Numerical results are in good agreement with experimental observations: the critical angular speed, the wear patterns on the casing as well as the blade dynamics are accurately predicted. Throughout the article, the in-house numerical strategy is also confronted with another numerical strategy that may be found in the literature for the simulation of rubbing events: key differences are underlined with respect to the prediction of non-linear interaction phenomena.
The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields
NASA Astrophysics Data System (ADS)
Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui
1996-02-01
Based on the techniques of forward and inverse Fourier transformation, the authors discuss the design of convolutional differentiators and apply them to the simulation of acoustic and elastic wavefields in isotropic media. To effectively compress the Gibbs effects caused by truncation, a Hanning window is introduced. Model computations show that the convolutional differentiator method is fast, has low computer-memory requirements and high precision, making it a promising method for numerical simulation.
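As an illustrative aside (not the authors' code), the windowed convolutional differentiator idea can be sketched in a few lines: the ideal band-limited derivative stencil is truncated and tapered with a Hanning window to suppress the Gibbs oscillations, then applied by short convolution. All parameters are hypothetical.

```python
import numpy as np

# Minimal sketch of a Hanning-windowed convolutional differentiator.
N, M = 256, 12                              # grid points, stencil half-width
dx = 2 * np.pi / N
x = np.arange(N) * dx

# Ideal band-limited derivative coefficients c_j = (-1)^(j+1)/(j*dx),
# truncated to |j| <= M; the Hanning taper compresses the Gibbs effects
# that a bare truncation would produce.
j = np.arange(1, M + 1)
c = (-1.0) ** (j + 1) / (j * dx)
w = 0.5 * (1.0 + np.cos(np.pi * j / M))     # Hanning window
stencil = np.concatenate([-(c * w)[::-1], [0.0], c * w])

f = np.sin(x)
# Short periodic correlation replaces the full-length spectral derivative.
dfdx = np.correlate(np.tile(f, 3), stencil, mode='same')[N:2 * N]
print('max |error| vs cos(x):', np.abs(dfdx - np.cos(x)).max())
```

The short stencil is what gives the method its speed and low memory footprint relative to a full Fourier-transform derivative.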
Precision lens assembly with alignment turning system
NASA Astrophysics Data System (ADS)
Ho, Cheng-Fang; Huang, Chien-Yao; Lin, Yi-Hao; Kuo, Hui-Jean; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Chen, Fong-Zhi
2017-10-01
The poker chip assembly with high precision lens barrels is widely applied to ultra-high performance optical systems. ITRC applies the poker chip assembly technology to high numerical aperture objective lenses and lithography projection lenses because of its highly efficient assembly process. In order to achieve high precision lens cells for poker chip assembly, an alignment turning system (ATS) was developed. The ATS includes measurement, alignment and turning modules. The measurement module is equipped with a non-contact displacement sensor (NCDS) and an autocollimator (ACM). The NCDS and ACM are used to measure centration errors of the top and the bottom surface of a lens, respectively; the required adjustments of displacement and tilt with respect to the rotational axis of the turning machine for the alignment module can then be determined. After the measurement, alignment and turning processes on the ATS, the centration error of a lens cell 200 mm in diameter can be controlled within 10 arcsec. Furthermore, a poker chip assembly lens cell with three sub-cells is demonstrated; each sub-cell is measured and finished with the alignment and turning processes. The lens assembly was tested five times by each of three technicians; the average transmission centration error of the assembled lens is 12.45 arcsec. The results show that the ATS can achieve high assembly efficiency for precision optical systems.
Solving lattice QCD systems of equations using mixed precision solvers on GPUs
NASA Astrophysics Data System (ADS)
Clark, M. A.; Babich, R.; Barros, K.; Brower, R. C.; Rebbi, C.
2010-09-01
Modern graphics hardware is designed for highly parallel numerical tasks and promises significant cost and performance benefits for many scientific applications. One such application is lattice quantum chromodynamics (lattice QCD), where the main computational challenge is to efficiently solve the discretized Dirac equation in the presence of an SU(3) gauge field. Using NVIDIA's CUDA platform we have implemented a Wilson-Dirac sparse matrix-vector product that performs at up to 40, 135 and 212 Gflops for double, single and half precision respectively on NVIDIA's GeForce GTX 280 GPU. We have developed a new mixed precision approach for Krylov solvers using reliable updates which allows for full double precision accuracy while using only single or half precision arithmetic for the bulk of the computation. The resulting BiCGstab and CG solvers run in excess of 100 Gflops and, in terms of iterations until convergence, perform better than the usual defect-correction approach for mixed precision.
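For illustration, the general mixed-precision pattern can be sketched on the CPU with a defect-correction loop (the variant the authors compare their reliable-update scheme against; this is not the GPU lattice QCD code, and the SPD test matrix merely stands in for the discretized Dirac normal equations):

```python
import numpy as np

def cg(A, b, tol, maxit):
    # Plain conjugate gradients, run entirely in the dtype of A and b.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 200
Mm = rng.standard_normal((n, n))
A = Mm @ Mm.T + n * np.eye(n)        # well-conditioned SPD stand-in
b = rng.standard_normal(n)

# Outer loop in double precision; the bulk of the work (inner CG) in single.
x = np.zeros(n)
for _ in range(10):
    r = b - A @ x                                        # high-precision residual
    e = cg(A.astype(np.float32), r.astype(np.float32),   # cheap correction solve
           tol=1e-4, maxit=500).astype(np.float64)
    x += e
    if np.linalg.norm(b - A @ x) < 1e-12 * np.linalg.norm(b):
        break
print('relative residual:', np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The outer correction restores full double-precision accuracy even though almost all arithmetic runs in the cheaper precision, which is the economics the abstract exploits.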
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate, in turn, with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
Prospects of photonic nanojets for precise exposure on microobjects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geints, Yu. E., E-mail: ygeints@iao.ru; Zuev Institute of Atmospheric Optics, SB Russian Academy of Sciences, Acad. Zuev Square 1, Tomsk, 634021; Panina, E. K., E-mail: pek@iao.ru
We report on a new optical tool for precise manipulation of various microobjects. This tool is referred to as a “photonic nanojet” (PJ) and corresponds to a specific spatially localized, high-intensity area formed near micron-sized transparent spherical dielectric particles illuminated by visible laser radiation. A descriptive analysis of the morphological shapes of photonic nanojets is presented. The PJ shape characterization is based on numerical calculations of the near-field distribution according to the Mie theory and accounts for jet dimensions and shape complexity.
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
High precision pulsar timing and spin frequency second derivatives
NASA Astrophysics Data System (ADS)
Liu, X. J.; Bassa, C. G.; Stappers, B. W.
2018-05-01
We investigate the impact of intrinsic, kinematic and gravitational effects on high precision pulsar timing. We present an analytical derivation and a numerical computation of the impact of these effects on the first and second derivative of the pulsar spin frequency. In addition, in the presence of white noise, we derive an expression for the expected measurement uncertainty of a second derivative of the spin frequency for a given timing precision, observing cadence and timing baseline, and find that it strongly depends on the latter (∝ t^(-7/2)). We show that for pulsars with significant proper motion, the spin frequency second derivative is dominated by a term dependent on the radial velocity of the pulsar. Considering the data sets from three Pulsar Timing Arrays, we find that for PSR J0437-4715 a detectable spin frequency second derivative will be present if the absolute value of the radial velocity exceeds 33 km s^-1. Similarly, at the current timing precision and cadence, continued timing observations of PSR J1909-3744 for about another eleven years will allow the measurement of its frequency second derivative and determine the radial velocity with an accuracy better than 14 km s^-1. With ever increasing timing precision and observing baselines, the impact of the largely unknown radial velocities of pulsars on high precision pulsar timing cannot be neglected.
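The quoted ∝ t^(-7/2) dependence can be checked with a short least-squares exercise (illustrative numbers, not the authors' timing pipeline): fit a cubic residual model to white-noise TOAs at fixed cadence and compare the formal uncertainty of the cubic term for two baselines.

```python
import numpy as np

def fddot_sigma(T_years, cadence_days=14.0, sigma_s=1e-7):
    # Formal uncertainty of the t^3/6 coefficient when fitting residuals
    # r(t) = a0 + a1*t + a2*t^2/2 + a3*t^3/6 to white noise of std sigma_s.
    t = np.arange(0.0, T_years * 365.25 * 86400.0, cadence_days * 86400.0)
    tau = t / t.max()                     # rescale time for numerical conditioning
    X = np.column_stack([np.ones_like(tau), tau, tau**2 / 2, tau**3 / 6])
    C = sigma_s**2 * np.linalg.inv(X.T @ X)
    return np.sqrt(C[3, 3]) / t.max()**3  # back to physical time units

s10, s20 = fddot_sigma(10.0), fddot_sigma(20.0)
print('sigma(10 yr)/sigma(20 yr) =', s10 / s20, ' vs 2^(7/2) =', 2**3.5)
```

Doubling the baseline at fixed cadence shrinks the uncertainty by about 2^(7/2) ≈ 11.3: a factor 2^3 from the longer lever arm on the cubic term and 2^(1/2) from the extra data points.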
Development and simulation of microfluidic Wheatstone bridge for high-precision sensor
NASA Astrophysics Data System (ADS)
Shipulya, N. D.; Konakov, S. A.; Krzhizhanovskaya, V. V.
2016-08-01
In this work we present the results of analytical modeling and 3D computer simulation of a microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of balancing the bridge by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves or other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of the microchannels was selected based on analytical estimates. A detailed 3D numerical model was based on the Navier-Stokes equations for laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems.
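The balancing principle can be illustrated with a lumped hydraulic-network calculation (made-up dimensions, not the authors' 3D model): the bridge channel carries zero flow exactly when R1·R4 = R2·R3, and trimming one arm's resistance, as etching would, unbalances it.

```python
import numpy as np

def poiseuille_R(mu, L, r):
    # Hagen-Poiseuille resistance of a circular microchannel.
    return 8.0 * mu * L / (np.pi * r**4)

mu, r = 1e-3, 50e-6                          # Pa*s (water), channel radius in m
lengths = (10e-3, 5e-3, 20e-3, 10e-3, 15e-3) # arms A-B, A-C, B-D, C-D, bridge B-C
R1, R2, R3, R4, R5 = (poiseuille_R(mu, L, r) for L in lengths)

def bridge_flow(R1, R2, R3, R4, R5, P=1e3):
    # Pressure P at inlet node A, 0 at outlet node D; solve mass conservation
    # (Kirchhoff) at the two internal nodes B and C for their pressures.
    g1, g2, g3, g4, g5 = (1.0 / R for R in (R1, R2, R3, R4, R5))
    A = np.array([[g1 + g3 + g5, -g5],
                  [-g5, g2 + g4 + g5]])
    b = np.array([g1 * P, g2 * P])
    PB, PC = np.linalg.solve(A, b)
    return (PB - PC) * g5                    # flow through the bridge channel

print('balanced (R1*R4 == R2*R3)?', np.isclose(R1 * R4, R2 * R3))
print('bridge flow, balanced:      ', bridge_flow(R1, R2, R3, R4, R5))
print('bridge flow, R1 etched +10%:', bridge_flow(1.1 * R1, R2, R3, R4, R5))
```

With the chosen lengths the bridge is balanced, and the flow through the bridge channel stays at roundoff level until one arm is modified.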
Research on axisymmetric aspheric surface numerical design and manufacturing technology
NASA Astrophysics Data System (ADS)
Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng
2006-02-01
The key technology for aspheric machining is to provide an exact machining path and to machine aspheric lenses with high accuracy and efficiency, even as traditional manual manufacturing has developed into today's numerical control (NC) machining. This paper presents a mathematical model relating a virtual cone to aspheric surface equations, and discusses techniques for uniform wear of the grinding wheel and error compensation in aspheric machining. Finally, based on the above, a software system for high precision aspheric surface manufacturing is designed and realized. This software system can work out the grinding wheel path according to input parameters and generate NC machining programs for aspheric surfaces.
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of the high precision analysis. By these means, the trade-off between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiment (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing-structure underwater glider increases by 9.1%.
Proposal for the determination of nuclear masses by high-precision spectroscopy of Rydberg states
NASA Astrophysics Data System (ADS)
Wundt, B. J.; Jentschura, U. D.
2010-06-01
The theoretical treatment of Rydberg states in one-electron ions is facilitated by the virtual absence of the nuclear-size correction, and fundamental constants like the Rydberg constant may be within the reach of planned high-precision spectroscopic experiments. The dominant nuclear effect that shifts transition energies among Rydberg states is therefore due to the nuclear mass. As a consequence, spectroscopic measurements of Rydberg transitions can be used to precisely deduce nuclear masses. A possible application of this approach to hydrogen and deuterium, and to hydrogen-like lithium and carbon, is explored in detail. In order to complete the analysis, numerical and analytic calculations of the quantum electrodynamic self-energy remainder function are described for states with principal quantum number n = 5, ..., 8 and angular momentum ℓ = n - 1 and ℓ = n - 2 (j = ℓ ± 1/2).
Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses
Das, Jayajit
2016-01-01
Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. PMID:26958894
Le Floch, Jean-Michel; Fan, Y; Humbert, Georges; Shan, Qingxiao; Férachou, Denis; Bara-Maillet, Romain; Aubourg, Michel; Hartnett, John G; Madrangeas, Valerie; Cros, Dominique; Blondy, Jean-Marc; Krupka, Jerzy; Tobar, Michael E
2014-03-01
Dielectric resonators are key elements in many applications in micro to millimeter wave circuits, including ultra-narrow band filters and frequency-determining components for precision frequency synthesis. Distributed-layered and bulk low-loss crystalline and polycrystalline dielectric structures have become very important for building these devices. Proper design requires careful electromagnetic characterization of low-loss material properties. This includes exact simulation with precision numerical software and precise measurements of resonant modes. For example, we have developed the Whispering Gallery mode technique for microwave applications, which has now become the standard for characterizing low-loss structures. This paper reviews some of the most common characterization techniques used in the micro to millimeter wave regime at room and cryogenic temperatures for designing high-Q dielectric loaded cavities.
Wen, Sy-Bor; Sundaram, Vijay M; McBride, Daniel; Yang, Yu
2016-04-15
A new type of micro-lensed optical fiber, formed by stacking appropriate high-refractive-index microspheres at designed locations with respect to the cleaved end of an optical fiber, is numerically and experimentally demonstrated. This new type of micro-lensed optical fiber can be precisely constructed at low cost and high speed. Deep micrometer-scale and submicrometer-scale far-field light spots can be achieved when the optical fibers are multimode and single mode, respectively. By placing an appropriate teardrop dielectric nanoscale scatterer at the far-field spot of this new type of micro-lensed optical fiber, a deep-nanometer near-field spot can also be generated with high intensity and minimal joule heating, which is valuable for high-speed, high-resolution, and high-power nanoscale detection compared with traditional near-field optical fibers containing a significant portion of metallic material.
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme retains the advantage of the TE method, which guarantees great accuracy at small wavenumbers, while keeping the property of the MA method that numerical errors stay within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and use a Remez algorithm to minimize its maximum. A numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
Note: Precise radial distribution of charged particles in a magnetic guiding field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backe, H., E-mail: backe@kph.uni-mainz.de
2015-07-15
Current high precision beta decay experiments on polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, have resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate; these have recently been investigated by subdividing the radial coordinate into small bins or by employing analytical approximations. In this note, a series expansion of the PSF is presented which can be evaluated numerically with arbitrary precision.
Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo
2012-12-01
In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
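The core of the exactness argument can be sketched for the uncoupled case (a simplification: the paper's algorithm additionally re-orders and re-samples passage times around delayed recurrent kicks): for a perfect integrate-and-fire neuron with drift mu and noise sigma, the first-passage time to threshold is exactly inverse-Gaussian distributed, so spike times can be drawn without any time discretization.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, mu, sigma = 1.0, 0.8, 0.5        # hypothetical threshold, drift, noise
n_neurons, t_end = 100, 50.0

# First-passage time of drifted Brownian motion to threshold theta:
# inverse-Gaussian (Wald) with mean theta/mu and shape (theta/sigma)^2.
mean, shape = theta / mu, (theta / sigma) ** 2

spikes = []
for i in range(n_neurons):
    t = 0.0
    while True:
        t += rng.wald(mean, shape)      # exact draw, no discretization error
        if t > t_end:
            break
        spikes.append((t, i))

spikes.sort()                            # global temporal ordering of events
print('empirical rate:', len(spikes) / (n_neurons * t_end), ' theory:', mu / theta)
```

Sorting the exactly sampled events gives the well-posed global ordering that fixed-step discretization schemes struggle to guarantee.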
NASA Astrophysics Data System (ADS)
Yin, Zhifu; Sun, Lei; Zou, Helin; Cheng, E.
2015-05-01
A method for obtaining a low-cost, high-replication-precision two-dimensional (2D) nanofluidic device in a polymethyl methacrylate (PMMA) sheet is proposed. To improve the replication precision of the 2D PMMA nanochannels during the hot embossing process, the deformation of the PMMA sheet was analyzed by numerical simulation. The constants of the generalized Maxwell model used in the numerical simulation were calculated from experimental compressive creep curves based on a previously established fitting formula. With optimized process parameters, 176 nm-wide and 180 nm-deep nanochannels were successfully replicated into the PMMA sheet with a replication precision of 98.2%. To thermally bond the 2D PMMA nanochannels with high bonding strength and low dimensional loss, the parameters of the oxygen plasma treatment and thermal bonding process were optimized. To measure the dimensional loss of the 2D nanochannels after thermal bonding, an evaluation method based on nanoindentation experiments was proposed. According to this method, the total dimensional loss of the 2D nanochannels was 6 nm in width and 21 nm in depth. The tensile bonding strength of the 2D PMMA nanofluidic device was 0.57 MPa. Fluorescence images demonstrate that there was no blocking or leakage over the entire microchannels and nanochannels.
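The creep-curve fitting step can be sketched with a generic two-term Prony series (the paper's own fitting formula and measured PMMA curves are not reproduced; all constants below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, J0, J1, tau1, J2, tau2):
    # Two-term Prony-series creep compliance of a generalized Maxwell solid.
    return J0 + J1 * (1 - np.exp(-t / tau1)) + J2 * (1 - np.exp(-t / tau2))

t = np.linspace(0.0, 600.0, 120)                   # s, synthetic creep test
truth = creep(t, 0.8, 0.3, 30.0, 0.2, 300.0)       # made-up compliance constants
data = truth + np.random.default_rng(2).normal(0.0, 0.005, t.size)

p0 = [1.0, 0.1, 10.0, 0.1, 100.0]                  # rough initial guesses
popt, _ = curve_fit(creep, t, data, p0=p0, maxfev=20000)
print('fitted (J0, J1, tau1, J2, tau2):', np.round(popt, 3))
```

The fitted relaxation constants are what would then parameterize the viscoelastic material model in the hot-embossing simulation.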
1985-10-01
83K0385 Final Report, Vol. 4: Thermal Effects on the Accuracy of Numerically Controlled Machine Tools. Prepared by Raghunath Venugopal and M. M. Barash, October 1985.
A systematic and efficient method to compute multi-loop master integrals
NASA Astrophysics Data System (ADS)
Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but is also much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
Beckmann, Jacques S; Lew, Daniel
2016-12-19
This era of groundbreaking scientific developments in high-resolution, high-throughput technologies is allowing the cost-effective collection and analysis of huge, disparate datasets on individual health. Proper data mining and translation of the vast datasets into clinically actionable knowledge will require the application of clinical bioinformatics. These developments have triggered multiple national initiatives in precision medicine, a data-driven approach centering on the individual. However, clinical implementation of precision medicine poses numerous challenges. Foremost, precision medicine needs to be contrasted with the powerful and widely used practice of evidence-based medicine, which is informed by meta-analyses or group-centered studies from which mean recommendations are derived. This “one size fits all” approach can provide inadequate solutions for outliers. Such outliers, which are far from an oddity as all of us fall into this category for some traits, can be better managed using precision medicine. Here, we argue that it is necessary and possible to bridge between precision medicine and evidence-based medicine. This will require worldwide and responsible data sharing, as well as regularly updated training programs. We also discuss the challenges and opportunities for achieving clinical utility in precision medicine. We project that, through the collection, analysis and sharing of standardized medically relevant data globally, evidence-based precision medicine will shift progressively from therapy to prevention, thus leading eventually to improved clinician-to-patient communication, citizen-centered healthcare and sustained well-being.
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381
Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin
2013-12-01
Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor, in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two non-overlapping segments and converts, for each segment, the polynomial of bounded functions into a simplified formula comprising a polynomial, an inequality, and a constant for the selected segment. The verification tool calculates upper bounds of the polynomial for the segments, beginning with the selected segment, and reports the segments that violate a bounding condition.
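The split-and-bound strategy can be illustrated with naive interval arithmetic on a toy polynomial (the polynomial and bound are invented, and directed rounding, which a real verification tool must also control, is ignored):

```python
def interval_horner(coeffs, lo, hi):
    # Enclose the range of p(x) = c0 + c1*x + ... over [lo, hi] by running
    # Horner's rule with (coarse) interval arithmetic.
    acc_lo = acc_hi = 0.0
    for c in reversed(coeffs):
        cands = (acc_lo * lo, acc_lo * hi, acc_hi * lo, acc_hi * hi)
        acc_lo, acc_hi = c + min(cands), c + max(cands)
    return acc_lo, acc_hi

coeffs = [0.0, 0.5, -1.0, 0.4]   # p(x) = 0.5x - x^2 + 0.4x^3, |p| <= 0.1 on [0, 1]
bound = 0.12
segments, verified = [(0.0, 1.0)], 0
while segments:
    lo, hi = segments.pop()
    plo, phi = interval_horner(coeffs, lo, hi)
    if max(abs(plo), abs(phi)) <= bound:       # segment verified
        verified += 1
    elif hi - lo < 1e-6:                       # enclosure will not tighten further
        raise ValueError(f'bound violated near [{lo}, {hi}]')
    else:                                      # split and try each half
        mid = 0.5 * (lo + hi)
        segments += [(lo, mid), (mid, hi)]
print(verified, 'segments verified against bound', bound)
```

Splitting keeps each per-segment enclosure tight enough to decide the bound, which is exactly the role the domain segments play in the tool described above.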
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
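A sketch of the column-wise idea (coordinate descent with soft thresholding; the regularization is fixed ad hoc here rather than chosen by the paper's cross-validation procedure):

```python
import numpy as np

def soft(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def precision_column(S, i, lam, n_sweeps=200):
    # Coordinate descent for min_b 0.5*b'Sb - b[i] + lam*||b||_1,
    # i.e. one column of a sparse precision-matrix estimate.
    p = S.shape[0]
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = (1.0 if j == i else 0.0) - S[j] @ b + S[j, j] * b[j]
            b[j] = soft(r, lam) / S[j, j]
    return b

rng = np.random.default_rng(3)
p, n = 30, 200
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))   # true precision
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)
S = np.cov(X, rowvar=False)

Omega_hat = np.column_stack([precision_column(S, i, lam=0.15) for i in range(p)])
print('fraction of nonzero entries:', np.mean(np.abs(Omega_hat) > 1e-8))
```

Each column is estimated independently from one row of the empirical covariance at a time, which is what makes the column-wise formulation cheap for large-scale problems.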
Precipitation Estimates for Hydroelectricity
NASA Technical Reports Server (NTRS)
Tapiador, Francisco J.; Hou, Arthur Y.; de Castro, Manuel; Checa, Ramiro; Cuartero, Fernando; Barros, Ana P.
2011-01-01
Hydroelectric plants require precise and timely estimates of rain, snow and other hydrometeors for operations. However, it is far from a trivial task to measure and predict precipitation. This paper presents the linkages between precipitation science and hydroelectricity, and in doing so it provides insight into current research directions that are relevant for this renewable energy. Methods described include radars, disdrometers, satellites and numerical models. Two recent advances that have the potential to be highly beneficial for hydropower operations are featured: the Global Precipitation Measurement (GPM) mission, which represents an important leap forward in precipitation observations from space, and high performance computing (HPC) and grid technology, which allow building ensembles of numerical weather and climate models.
NASA Astrophysics Data System (ADS)
Haghani Hassan Abadi, Reza; Fakhari, Abbas; Rahimian, Mohammad Hassan
2018-03-01
In this paper, we propose a multiphase lattice Boltzmann model for numerical simulation of ternary flows at high density and viscosity ratios free from spurious velocities. The proposed scheme, which is based on the phase-field modeling, employs the Cahn-Hilliard theory to track the interfaces among three different fluid components. Several benchmarks, such as the spreading of a liquid lens, binary droplets, and head-on collision of two droplets in binary- and ternary-fluid systems, are conducted to assess the reliability and accuracy of the model. The proposed model can successfully simulate both partial and total spreadings while reducing the parasitic currents to the machine precision.
Dual-band plasmonic resonator based on Jerusalem cross-shaped nanoapertures
NASA Astrophysics Data System (ADS)
Cetin, Arif E.; Kaya, Sabri; Mertiri, Alket; Aslan, Ekin; Erramilli, Shyamsunder; Altug, Hatice; Turkmen, Mustafa
2015-06-01
In this paper, we introduce, both experimentally and numerically, a dual-resonant metamaterial based on subwavelength Jerusalem cross-shaped apertures. We numerically investigate the physical origin of the dual-resonant behavior, originating from the constituent aperture elements, through finite difference time domain calculations. Our numerical calculations show that at the dual resonances, the aperture system supports large and easily accessible local electromagnetic fields. In order to experimentally realize the aperture system, we utilize a high-precision, lift-off-free fabrication method based on electron-beam lithography. We also introduce a fine-tuning mechanism for controlling the dual-resonant spectral response through geometrical device parameters. Finally, we show the aperture system's highly advantageous far- and near-field characteristics through numerical calculations of refractive index sensitivity. Quantitative analyses of the availability of the local fields supported by the aperture system are employed to explain the grounds behind the sensitivity of each spectral feature within the dual-resonant behavior. Possessing dual resonances with large and accessible electromagnetic fields, Jerusalem cross-shaped apertures can be highly advantageous for a wide range of applications demanding multiple spectral features with strong near-field characteristics.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
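For reference, the scalar-measurement U-D update at the heart of such filters can be written in a few lines (a textbook Bierman-style form, sketched here and checked against the conventional covariance update; not the original JPL code):

```python
import numpy as np

def ud_measurement_update(U, d, h, R, x, z):
    # U-D scalar measurement update for P = U diag(d) U^T (Bierman form).
    n = len(d)
    U, d = U.copy(), d.copy()
    f = U.T @ h                       # f = U^T h
    g = d * f                         # g = D f
    alpha = R                         # innovation variance, built column by column
    k = np.zeros(n)                   # unscaled Kalman gain
    for j in range(n):
        alpha_new = alpha + f[j] * g[j]
        d[j] *= alpha / alpha_new     # updated diagonal factor
        u_old = U[:, j].copy()
        U[:, j] = u_old - (f[j] / alpha) * k
        k = k + g[j] * u_old
        alpha = alpha_new
    x = x + (k / alpha) * (z - h @ x)
    return U, d, x

# Consistency check against the conventional (less stable) covariance update.
rng = np.random.default_rng(4)
n = 4
U0 = np.triu(rng.standard_normal((n, n)), 1) + np.eye(n)   # unit upper triangular
d0 = rng.uniform(0.5, 2.0, n)
P = U0 @ np.diag(d0) @ U0.T
h, R, z = rng.standard_normal(n), 0.3, 1.0
U1, d1, _ = ud_measurement_update(U0, d0, h, R, np.zeros(n), z)
P_ref = P - np.outer(P @ h, P @ h) / (h @ P @ h + R)
print('max deviation:', np.abs(U1 @ np.diag(d1) @ U1.T - P_ref).max())
```

Because only the factors U and d are propagated, the updated covariance stays symmetric and non-negative by construction, which is a plausible source of the stability observed in the study.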
Investigation of Space Interferometer Control Using Imaging Sensor Output Feedback
NASA Technical Reports Server (NTRS)
Leitner, Jesse A.; Cheng, Victor H. L.
2003-01-01
Numerous space interferometry missions are planned for the next decade to verify different enabling technologies towards very-long-baseline interferometry to achieve high-resolution imaging and high-precision measurements. These objectives will require coordinated formations of spacecraft separately carrying optical elements comprising the interferometer. High-precision sensing and control of the spacecraft and the interferometer-component payloads are necessary to deliver sub-wavelength accuracy to achieve the scientific objectives. For these missions, the primary scientific product of interferometer measurements may be the only source of data available at the precision required to maintain the spacecraft and interferometer-component formation. A concept is studied for detecting the interferometer's optical configuration errors based on information extracted from the interferometer sensor output. It enables precision control of the optical components, and, in cases of space interferometers requiring formation flight of spacecraft that comprise the elements of a distributed instrument, it enables the control of the formation-flying vehicles because independent navigation or ranging sensors cannot deliver the high-precision metrology over the entire required geometry. Since the concept can act on the quality of the interferometer output directly, it can detect errors outside the capability of traditional metrology instruments, and provide the means needed to augment the traditional instrumentation to enable enhanced performance. Specific analyses performed in this study include the application of signal-processing and image-processing techniques to solve the problems of interferometer aperture baseline control, interferometer pointing, and orientation of multiple interferometer aperture pairs.
Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y
2018-05-01
A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions, based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell, with the axis of the virtual spindle passing through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven to be accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses can be an economical and precise optical fabrication technology for high-volume production of infrared optics.
NASA Astrophysics Data System (ADS)
Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.
2017-03-01
A comprehensive approach to the investigation of nonlinear wave processes in the human cardiovascular system is described, based on a combination of high-precision methods for measuring the pulse wave, mathematical methods for processing the empirical data, and methods of direct numerical modeling of hemodynamic processes in an arterial tree.
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers, each requiring 64 bits of memory, to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass, for the first time, three tiers of variables, which represent large-, medium- and small-scale features. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
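The flavor of such experiments can be reproduced with the standard single-tier Lorenz '96 system, integrating the same initial state in double and in numpy-emulated half precision (the paper's three-tier extension and scale-selective precision assignment are not reproduced here):

```python
import numpy as np

def l96_rhs(x, F=8.0):
    # Single-tier Lorenz '96 tendencies: dx_i/dt = (x_{i+1}-x_{i-2})x_{i-1} - x_i + F.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, dtype):
    x, dt = x.astype(dtype), dtype(dt)
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + dt / 2 * k1)
    k3 = l96_rhs(x + dt / 2 * k2)
    k4 = l96_rhs(x + dt * k3)
    return (x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)).astype(dtype)

rng = np.random.default_rng(5)
x64 = 8.0 + rng.standard_normal(40)       # 40 variables around the forcing value
x16 = x64.copy()
dt, nsteps = 0.05, 200
for _ in range(nsteps):
    x64 = rk4_step(x64, dt, np.float64)
    x16 = rk4_step(x16, dt, np.float16)   # numpy stores every intermediate in half
err = np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))
print('RMS divergence after', nsteps, 'steps:', err)
```

In a chaotic system the half-precision rounding acts like a small perturbation that grows with the error-doubling time, so the interesting question is not whether trajectories diverge but whether forecast statistics at a given lead time survive the precision cut.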
2015-06-03
example, all atomic clocks for the European satellite-based global positioning system GALILEO were manufactured in Neuchatel. With the integration...realization of numerous other exciting devices in various areas like advancement of sensors and nanotechnological devices. Summary of Project...losses of the resonator. Achieving passive femtosecond pulse formation at these record-high power levels will require eliminating any destabilizing
Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin
2011-01-01
The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities.
Mazzocco, Michèle M. M.; Feigenson, Lisa; Halberda, Justin
2011-01-01
The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities. PMID:21935362
Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses.
Das, Jayajit
2016-03-08
Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Development of high velocity gas gun with a new trigger system-numerical analysis
NASA Astrophysics Data System (ADS)
Husin, Z.; Homma, H.
2018-02-01
In the development of high-performance armor vests, well controlled experiments using bullet speeds of more than 900 m/sec need to be carried out. After reviewing trigger systems used for high velocity gas guns, this research develops a new trigger system which can realize precise and reproducible impact tests at impact velocities of more than 900 m/sec. The new trigger system developed here is called a projectile trap. A projectile trap is placed between a reservoir and a barrel, and has the two functions of a sealing disk and of triggering. Polyamidimide is selected as the trap material, and the dimensions of the projectile trap are determined by numerical analysis for several levels of launching pressure to change the projectile velocity. The numerical analysis results show that the projectile trap designed here operates reliably and that the stresses caused during the launching operation are less than the material strength. This means a projectile trap can be reused for the next shot.
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the condition equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can thus play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, and unified initial orbit determination and orbit improvement dynamically; precision and efficiency were improved further. In this thesis, further research has been done based on the UVM. Firstly, with the improvement of observational methods and techniques, the types and precision of observational data have improved substantially, which demands a corresponding improvement in orbit determination; analytical perturbation theory cannot meet this requirement. Therefore, numerical integration for calculating the perturbations has been introduced into the UVM. The accuracy of the dynamical model now matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly, improving the accuracy of orbit determination further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified. These problems are solved in the new method: the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and different precision. Orbit determination results for simulated and real data show that this work is effective: (1) after numerical integration is introduced into the UVM, the accuracy of orbit determination improves markedly and suits the high-accuracy data of available observation apparatus; compared with classical differential improvement with numerical integration, the calculation speed also improves markedly; (2) after data fusion is introduced into the UVM, the weighting distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also shows good numerical stability and rational weight distribution.
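The weighting idea is essentially that of weighted least-squares fusion of heterogeneous data, sketched below with invented observation blocks (not the actual UVM condition equations):

```python
import numpy as np

rng = np.random.default_rng(6)
x_true = np.array([1.0, -0.5, 2.0])

# Two observation types with very different noise levels, e.g. a plentiful
# low-precision series and a sparse high-precision one (illustrative only).
A1, s1 = rng.standard_normal((50, 3)), 0.10
A2, s2 = rng.standard_normal((20, 3)), 0.01
y1 = A1 @ x_true + s1 * rng.standard_normal(50)
y2 = A2 @ x_true + s2 * rng.standard_normal(20)

# Weighted normal equations: each block enters with weight 1/sigma^2, so the
# high-precision data dominate the solution in proportion to their quality.
N = A1.T @ A1 / s1**2 + A2.T @ A2 / s2**2
b = A1.T @ y1 / s1**2 + A2.T @ y2 / s2**2
x_hat = np.linalg.solve(N, b)
print('estimate:', x_hat, ' error norm:', np.linalg.norm(x_hat - x_true))
```

A mis-specified weight ratio lets the noisy data dilute the precise data, which is the kind of weighting defect the thesis sets out to repair.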
Microhartree precision in density functional theory calculations
NASA Astrophysics Data System (ADS)
Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia
2018-04-01
To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μ Ha , respectively. Spectacular agreement with the MRA is reached also for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α iron we demonstrate the capability of LAPW + lo to reach μ Ha /atom precision also for periodic systems, which allows also for the distinction between the numerical precision and the accuracy of a given functional.
Atmospheric turbulence and high-precision ground-based solar polarimetry
NASA Astrophysics Data System (ADS)
Nagaraju, K.; Feller, A.; Ihle, S.; Soltau, H.
2011-10-01
High-precision full-Stokes polarimetry at near diffraction limited spatial resolution is important for understanding numerous physical processes on the Sun. In view of the next generation of ground based solar telescopes, we have explored, through numerical simulation, how polarimetric accuracy is affected by atmospheric seeing, especially in the case of large aperture telescopes with an increasing ratio between mirror diameter and Fried parameter. In this work we focus on higher-order wavefront aberrations. The numerical generation of time-dependent turbulence phase screens is based on the well-known power spectral method and on the assumption that the temporal evolution is mainly caused by wind driven propagation of frozen-in turbulence across the telescope. To analyze the seeing induced cross-talk between the Stokes parameters we consider a polarization modulation scheme based on a continuously rotating waveplate with rotation frequencies between 1 Hz and several 100 Hz. Further, we have started the development of a new fast solar imaging polarimeter, based on pnCCD detector technology from PNSensor. The first detector will have a size of 264 x 264 pixels and will work at frame rates of up to 1 kHz, combined with a very low readout noise of 2-3 e- ENC. The camera readout electronics will allow for buffering and accumulation of images corresponding to the different phases of the fast polarization modulation. A high write-out rate (about 30 to 50 frames/s) will allow for post-facto image reconstruction. We present the concept and the expected performance of the new polarimeter, based on the above-mentioned simulations of atmospheric seeing.
Improving Weather Forecasts Through Reduced Precision Data Assimilation
NASA Astrophysics Data System (ADS)
Hatfield, Samuel; Düben, Peter; Palmer, Tim
2017-04-01
We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
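A minimal way to experiment with this trade-off is an ensemble analysis step with a selectable dtype (a stochastic EnKF used here as a stand-in for the ensemble square root filter of the study; the small matrix inversion stays in double, since LAPACK has no half-precision kernels and only the bulk arithmetic is meant to be demoted):

```python
import numpy as np

def enkf_analysis(E, y, H, r_std, rng, dtype):
    # Stochastic EnKF update of an n x m ensemble E, computed in `dtype`.
    E, H, y = E.astype(dtype), H.astype(dtype), y.astype(dtype)
    m = E.shape[1]
    Xp = E - E.mean(axis=1, keepdims=True)
    Y = H @ E
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Pyy = Yp @ Yp.T / dtype(m - 1) + dtype(r_std**2) * np.eye(len(y), dtype=dtype)
    Pxy = Xp @ Yp.T / dtype(m - 1)
    K = Pxy @ np.linalg.inv(Pyy.astype(np.float64)).astype(dtype)
    D = y[:, None] + dtype(r_std) * rng.standard_normal((len(y), m)).astype(dtype)
    return E + K @ (D - Y)          # perturbed-observation update

rng = np.random.default_rng(7)
E = rng.standard_normal((40, 20))   # 40 state variables, 20 members
H = np.eye(10, 40)                  # observe the first 10 variables
y = rng.standard_normal(10)
for dtype in (np.float64, np.float16):
    Ea = enkf_analysis(E, y, H, 0.5, rng, dtype)
    print(dtype.__name__, 'posterior ensemble spread:', float(Ea.std()))
```

Comparing the posterior statistics across dtypes, for ensembles of different sizes, is the accuracy-versus-cost bookkeeping the abstract describes.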
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rössler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of moderate noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior. PMID:26000011
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhroob, M.; Boyd, G.; Hasib, A.
Precision ultrasonic measurements in binary gas systems provide continuous real-time monitoring of mixture composition and flow. Using custom micro-controller-based electronics, we have developed an ultrasonic instrument, with numerous potential applications, capable of making continuous high-precision sound velocity measurements. The instrument measures sound transit times along two opposite directions aligned parallel to - or obliquely crossing - the gas flow. The difference between the two measured times yields the gas flow rate while their average gives the sound velocity, which can be compared with a sound velocity vs. molar composition look-up table for the binary mixture at a given temperature and pressure. The look-up table may be generated from prior measurements in known mixtures of the two components, from theoretical calculations, or from a combination of the two. We describe the instrument and its performance within numerous applications in the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instrument can be of interest in other areas where continuous in-situ binary gas analysis and flowmetry are required. (authors)
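The two-transit-time arithmetic can be illustrated directly. In the sketch below, the path length, crossing angle, and transit times are invented numbers, not values from the instrument: with t_down = L/(c + v·cosθ) and t_up = L/(c − v·cosθ), the sum of the reciprocal times gives the sound velocity and their difference gives the flow speed:

    import math

    # Illustrative numbers, not values from the instrument: a 0.1 m acoustic
    # path crossing the duct at 45 degrees, with hypothetical transit times.
    L = 0.100                 # acoustic path length [m]
    theta = math.radians(45.0)
    t_down = 340.00e-6        # transit time with the flow [s]
    t_up = 340.70e-6          # transit time against the flow [s]

    # t_down = L/(c + v*cos(theta)) and t_up = L/(c - v*cos(theta)), hence:
    c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)                     # sound velocity
    v = 0.5 * L * (1.0 / t_down - 1.0 / t_up) / math.cos(theta)   # flow speed
    print(f"sound velocity = {c:.2f} m/s, flow speed = {v:.3f} m/s")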
DFB laser array driver circuit controlled by adjustable signal
NASA Astrophysics Data System (ADS)
Du, Weikang; Du, Yinchao; Guo, Yu; Li, Wei; Wang, Hao
2018-01-01
In order to achieve intelligent control of a DFB laser array, this paper presents the design of an intelligent, high-precision numerical control circuit. The system takes an MCU and an FPGA as the main control chips and is characterized by a compact, high-efficiency design with impact-free switching protection. The output of the DFB laser array can be determined by an external adjustable signal. The system transforms the analog control model into a digital control model, which improves the performance of the driver. The system can monitor the temperature and current of the DFB laser array in real time. The output precision of the current can reach ±0.1 mA, which ensures the stable and reliable operation of the DFB laser array. Such a driver can benefit the flexible usage of the DFB laser array.
ERIC Educational Resources Information Center
González-Gómez, David; Rodríguez, Diego Airado; Cañada-Cañada, Florentina; Jeong, Jin Su
2015-01-01
Currently, there are a number of educational applications that allow students to reinforce theoretical or numerical concepts in an interactive way. More precisely, in the field of analytical chemistry, MATLAB has been widely used to write easy-to-implement code, facilitating complex performances and/or tedious calculations. The main…
Towards future high performance computing: What will change? How can we be efficient?
NASA Astrophysics Data System (ADS)
Düben, Peter
2017-04-01
How can we make the most of the "exascale" supercomputers that will be available soon and enable us to perform an astonishing 1,000,000,000,000,000,000 operations on real numbers within a single second? How do we need to design applications to use these machines efficiently? What are the limits? We will discuss opportunities and limits of the use of future high-performance computers from the perspective of Earth System Modelling. We will provide an overview of future challenges and outline how numerical applications will need to change to run efficiently on the supercomputers of the future. We will also discuss how different disciplines can support each other and talk about data handling and the numerical precision of data.
NASA Astrophysics Data System (ADS)
Itobe, Hiroki; Nakagawa, Yosuke; Mizumoto, Yuta; Kangawa, Hiroi; Kakinuma, Yasuhiro; Tanabe, Takasumi
2016-05-01
We fabricated a calcium fluoride (CaF2) whispering gallery mode (WGM) microcavity with a computer controlled ultra-precision cutting process. We observed a thermo-opto-mechanical (TOM) oscillation in the CaF2 WGM microcavity, which may influence the stability of the optical output when the cavity is employed for Kerr comb generation. We studied experimentally and numerically the mechanism of the TOM oscillation and showed that it is strongly dependent on cavity diameter. In addition, our numerical study suggests that a microcavity structure fabricated with a hybrid material (i.e. CaF2 and silicon), which is compatible with an ultra-high Q and high thermal conductivity, will allow us to reduce the TOM oscillation and stabilize the optical output.
Numerical simulation of deformation and figure quality of precise mirror
NASA Astrophysics Data System (ADS)
Vit, Tomáš; Melich, Radek; Sandri, Paolo
2015-01-01
The presented paper shows results and a comparison of FEM numerical simulations and optical tests of the assembly of a precise Zerodur mirror with a mounting structure for space applications. It also shows how the curing of adhesive film can impact the optical surface, especially as regards deformations. Finally, the paper shows the results of the figure quality analysis, which are based on data from FEM simulation of optical surface deformations.
Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics
2015-01-01
We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
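The van't Hoff cross-check described above amounts to a linear fit: by the Gibbs-Helmholtz relation, ΔG/T = ΔH·(1/T) − ΔS, so the slope of ΔG/T against 1/T is the binding enthalpy. A minimal sketch with invented free energy values (illustrative numbers, not data from the paper):

    import numpy as np

    # Hypothetical binding free energies (kcal/mol) at several temperatures (K);
    # illustrative values, not data from the paper.
    T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])
    dG = np.array([-9.1, -8.8, -8.5, -8.2, -7.9])

    # Gibbs-Helmholtz: d(dG/T)/d(1/T) = dH, so fitting dG/T against 1/T gives
    # the van't Hoff enthalpy as the slope and the entropy from the intercept,
    # since dG/T = dH*(1/T) - dS.
    slope, intercept = np.polyfit(1.0 / T, dG / T, 1)
    dH, dS = slope, -intercept
    print(f"van't Hoff dH = {dH:.1f} kcal/mol, dS = {1000 * dS:.1f} cal/(mol K)")

Agreement between this fitted ΔH and the directly computed end-point enthalpy is the self-consistency test the abstract describes.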
Continuity and Change in Children's Longitudinal Neural Responses to Numbers
ERIC Educational Resources Information Center
Emerson, Robert W.; Cantlon, Jessica F.
2015-01-01
Human children possess the ability to approximate numerical quantity nonverbally from a young age. Over the course of early childhood, children develop increasingly precise representations of numerical values, including a symbolic number system that allows them to conceive of numerical information as Arabic numerals or number words. Functional…
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the motion model used for the artificial Earth satellite, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and the perturbation from the light pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of an artificial satellite, we use both numerical and analytical methods. To select the initial parameters of the twelve-hour orbit, we assume that the ground track of the satellite is stable. Results obtained by the analytical method and by numerical integration of the evolution system are compared. For intervals of several years, we obtain estimates of the oscillation periods and amplitudes of the orbital elements. To verify the results and estimate the precision of the method, we use numerical integration of the rigorous (non-averaged) equations of motion of the artificial satellite, which take the forces acting on the satellite into account substantially more completely and precisely. The described method can be applied not only to investigating the orbit evolution of artificial Earth satellites but also to the orbit evolution of satellites of other planets of the Solar system, should the corresponding research problem arise in the future and this special class of resonance satellite orbits be used for that purpose.
NASA Astrophysics Data System (ADS)
Thomaz, Marita Duarte Canhao da Silva Pereira Fernandes
The results presented cover broad aspects of a quantitative investigation into the electrolytic etching and polishing of metals and alloys through photographically produced dielectric stencils (photoresists). A study of the potential field generated between a cathode and relatively smaller anode sites, such as those defined by a dielectric stencil, was carried out. Numerical, analytical and graphical methods yielded answers to the factors determining lateral dissolution (undercut) at the anode/stencil interface. A quasi-steady-state numerical model simulating the transient behavior of the partially masked electrodes undergoing dissolution was obtained. AISI 304 stainless steel was electrolytically photoetched in 10% w/w HCl electrolyte. The optimised process parameters were utilised for quantifying the effects of galvanostatic etching of the anode as defined by a relatively narrow adherent resist stencil. Stainless steel was also utilised in investigating electrolytic photopolishing. A polishing electrolyte (orthophosphoric acid-glycerol) was modified by the addition of a surfactant, which yielded surface texture values of 70 nm (Ra) and high levels of specular reflectance. These results were used in the production of features upon the metal surface through photographically produced precision stencils. The process was applied to the production of edge filters requiring high-quality surface textures in precision recesses. Some of the new amorphous materials exhibited high resistance to dissolution in commercially used spray etching formulations. One of these materials is a cobalt-based alloy produced by chill-block spinning. This material was also investigated and electro-etched in 10% w/w HCl solution. Although passivity was not overcome, by selecting suitable operating parameters the successful electro-photoetching of precision magnetic recording head laminations was achieved. Similarly, a polycrystalline nickel-based alloy, also exhibiting passivity in commercially used etchants, was successfully etched in the above electrolyte.
ERIC Educational Resources Information Center
Siegler, Robert S.; Braithwaite, David W.
2016-01-01
In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…
Fundamental differences between optimization code test problems in engineering applications
NASA Technical Reports Server (NTRS)
Eason, E. D.
1984-01-01
The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
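The practical consequence of Class Two problems is easy to demonstrate: when the objective can only be evaluated to limited precision, finite-difference derivative estimates degrade as the step shrinks. A toy sketch, with the limited evaluation precision modeled as additive random noise (an assumption for illustration, not the paper's setup):

    import numpy as np

    rng = np.random.default_rng(1)

    def f_class_two(x, noise=1e-4):
        """A 'Class Two' objective: a smooth quadratic that can only be
        evaluated to limited precision, emulated by additive noise
        (an assumption for illustration)."""
        return (x - 1.0)**2 + noise * rng.standard_normal()

    # Central-difference slope at x = 1.2 (exact derivative 0.4): truncation
    # error shrinks with h, but the noise contribution grows like noise/h.
    for h in (1e-1, 1e-3, 1e-5):
        slope = (f_class_two(1.2 + h) - f_class_two(1.2 - h)) / (2 * h)
        print(f"h = {h:.0e}: estimated slope = {slope:+.4f}")

This is one mechanism by which gradient-based codes can be confused on Class Two problems while direct search codes, which never difference the objective, are not.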
Constructing exact symmetric informationally complete measurements from numerical solutions
NASA Astrophysics Data System (ADS)
Appleby, Marcus; Chien, Tuan-Yow; Flammia, Steven; Waldron, Shayne
2018-04-01
Recently, several intriguing conjectures have been proposed connecting symmetric informationally complete quantum measurements (SIC POVMs, or SICs) and algebraic number theory. These conjectures relate the SICs to their minimal defining algebraic number field. Testing or sharpening these conjectures requires that the SICs are expressed exactly, rather than as numerical approximations. While many exact solutions of SICs have been constructed previously using Gröbner bases, this method has probably been taken as far as is possible with current computer technology (except in special cases where there are additional symmetries). Here, we describe a method for converting high-precision numerical solutions into exact ones using an integer relation algorithm in conjunction with the Galois symmetries of an SIC. Using this method, we have calculated 69 new exact solutions, including nine new dimensions, where previously only numerical solutions were known—which more than triples the number of known exact solutions. In some cases, the solutions require number fields with degrees as high as 12 288. We use these solutions to confirm that they obey the number-theoretic conjectures, and address two questions suggested by the previous work.
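The core numeric-to-exact step can be sketched with the PSLQ implementation in the mpmath library (a generic stand-in for the authors' machinery, without the Galois-symmetry bookkeeping): given a value known to many digits, PSLQ searches for an integer relation among its powers, i.e., its minimal polynomial. The golden ratio stands in here for a SIC-derived algebraic number:

    from mpmath import mp, mpf, findpoly, pslq, sqrt

    mp.dps = 50  # work with 50 significant digits

    # A "numerical solution" known to high precision; the golden ratio stands
    # in here for a SIC-derived algebraic number.
    x = (1 + sqrt(5)) / 2

    # PSLQ searches for integers (c0, c1, c2) with c0 + c1*x + c2*x**2 = 0 ...
    print(pslq([mpf(1), x, x**2]))  # e.g. [-1, -1, 1], i.e. x**2 - x - 1 = 0
    # ... and findpoly wraps this search for a minimal polynomial directly.
    print(findpoly(x, 2))           # e.g. [1, -1, -1] (coefficients, highest first)

For the SIC fields of the paper the candidate polynomials have far higher degree, which is why both very high working precision and the exploitation of symmetries are essential.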
Garay-Avendaño, Roger L; Zamboni-Rached, Michel
2014-07-10
In this paper, we propose a method that is capable of describing in exact and analytic form the propagation of nonparaxial scalar and electromagnetic beams. The main features of the method presented here are its mathematical simplicity and the fast convergence in the cases of highly nonparaxial electromagnetic beams, enabling us to obtain high-precision results without the necessity of lengthy numerical simulations or other more complex analytical calculations. The method can be used in electromagnetism (optics, microwaves) as well as in acoustics.
GHM method for obtaining rational solutions of nonlinear differential equations.
Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo
2015-01-01
In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.
Should precise numerical dating overrule glacial geomorphology?
NASA Astrophysics Data System (ADS)
Winkler, Stefan
2016-04-01
Numerical age dating techniques, namely different types of terrestrial cosmogenic nuclide dating (TCND), have achieved impressive progress in both laboratory precision and regional calibration models during the past few decades. It is now possible to apply precise TCND even to young landforms like Late Holocene moraines, a task that seemed hardly achievable just 15 years ago. An increasing number of studies provide very precise TCND ages for boulders from Late Holocene moraines, enabling the reconstruction of glacier chronologies and the interpretation of these glacial landforms in a palaeoclimatological context. These studies may also solve previous controversies about different ages assigned to moraines by different dating techniques, for example relative-age dating techniques or techniques combining relative-age dating with a few fixed points derived from numerical age dating. There are a few cases, for example Mueller Glacier and a nearby long debris-covered valley glacier in Aoraki/Mt. Cook National Park (Southern Alps, New Zealand), where the apparent "supremacy" of TCND ages seems to overrule glacial geomorphological principles. Enabled by a comparatively high number of individual boulders precisely dated by TCND, moraine ridges on those glacier forelands have been clustered primarily on the basis of these boulder ages rather than on their corresponding morphological position. In the extreme case, segments of a particular moraine complex, morphologically and sedimentologically proven to have formed during one event, have been split and classified as two separate "moraines" on different parts of the glacier foreland. One ledge of another moraine complex contains two TCND-sampled boulders apparently representing two separate "moraine" clusters with an age difference in the order of 1,500 years. Although criticism has recently been raised regarding the uncontested application of the arithmetic mean for calculating TCND ages of individual moraines, this problem is still not properly addressed in every case, and significant age differences between individual boulders on moraine ridges create uncertainties in their palaeoclimatic interpretation. Referring to the exemplary case of the glacier forelands mentioned above, it is argued that prior to any chronological interpretation the geomorphological correlation of individual moraine ridges and complexes needs to be established and potential uncertainties clearly addressed. After the TCND ages have been obtained from sampled boulders and assigned to the moraines, any discrepancy needs to be carefully investigated to ensure that misleading ages don't affect subsequent chronological reconstructions and palaeoclimatic interpretations. Even if dating precision has recently increased considerably, moraines should not be clustered into synchronous moraine groups based on TCND ages if their morphological position or sedimentology contradicts such a classification. Furthermore, the high precision of TCND ages often does not account for the concept of 'LIA'-type events and the different response times of nearby glaciers to the same mass balance/climate signal, therefore potentially overestimating the true number of glacier advances during a specific period. An alternative interpretation of existing TCND ages reveals fewer advances during the Late Holocene. Summarising, modern TCND ages are possibly "too precise" in some respects and wrongly judged as superior to geomorphological evidence.
A more critical evaluation would be beneficial to any subsequent attempts of intra-hemispheric and global correlation of glacier chronologies.
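The statistical point about averaging boulder ages can be made concrete with a standard error-weighted mean and a reduced chi-square test; the ages below are invented for illustration, not real TCND data:

    import numpy as np

    # Hypothetical exposure ages (years) and 1-sigma errors for boulders on one
    # morphologically continuous moraine ridge (invented values).
    ages = np.array([3050.0, 3120.0, 4560.0, 3080.0])
    errs = np.array([  90.0,  110.0,  120.0,  100.0])

    w = 1.0 / errs**2
    mean = np.sum(w * ages) / np.sum(w)                       # error-weighted mean
    red_chi2 = np.sum(w * (ages - mean)**2) / (len(ages) - 1)

    print(f"weighted mean = {mean:.0f} yr, reduced chi-square = {red_chi2:.1f}")
    # A reduced chi-square far above ~1 (driven here by the 4560-yr boulder)
    # signals that the boulders do not form a single age population, so any
    # mean age would be geomorphologically misleading.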
NASA Astrophysics Data System (ADS)
Kȩdzierski, Marcin; Wajnryb, Eligiusz
2011-10-01
Self-diffusion of colloidal particles confined to a cylindrical microchannel is considered theoretically and numerically. A virial expansion of the self-diffusion coefficient is performed. Two-body and three-body hydrodynamic interactions are evaluated with high precision using the multipole method. The multipole expansion algorithm is also used to perform numerical simulations of the self-diffusion coefficient, valid for all possible particle packing fractions. Comparison with earlier results shows that the widely used method of reflections is insufficient for calculations of hydrodynamic interactions even for small packing fractions and small particle radii, contrary to the prevalent opinion.
Direct numerical simulation of microcavitation processes in different bio environments
NASA Astrophysics Data System (ADS)
Ly, Kevin; Wen, Sy-Bor; Schmidt, Morgan S.; Thomas, Robert J.
2017-02-01
Laser-induced microcavitation refers to the rapid formation and expansion of a vapor bubble inside bio-tissue when it is exposed to intense, pulsed laser energy. With the associated microscale dissection occurring within the tissue, laser-induced microcavitation is a common approach for high-precision bio-surgeries. For example, laser-induced microcavitation is used in laser in-situ keratomileusis (LASIK) to precisely reshape the midstromal corneal tissue with an excimer laser beam. Multiple efforts over the last several years have observed unique characteristics of microcavitation in bio-tissues. For example, it was found that the threshold energy for microcavitation can be significantly reduced when the size of the biostructure is increased. Also, it was found that the dynamics of microcavitation are significantly affected by the elastic moduli of the bio-tissue. However, these efforts have not focused on the early events during microcavitation development. In this study, a direct numerical simulation of the microcavitation process based on the equation of state of the bio-tissue was established. With the direct numerical simulation, we were able to reproduce the dynamics of microcavitation in water-rich bio-tissues. Additionally, an experimental setup was built to verify the simulated early microcavitation formation in 10% polyacrylamide (PAA) gel and in deionized water.
Design of measurement system of 3D surface profile based on chromatic confocal technology
NASA Astrophysics Data System (ADS)
Wang, An-su; Xie, Bin; Liu, Zi-wei
2018-01-01
Chromatic confocal 3D profilometers have recently been widely used in scientific investigation and industrial fields for their high precision, large measuring range and numerical surface characterization. They can provide an exact and omnidirectional solution for manufacturing and research through 3D non-contact surface analysis. This article analyzes the principle of surface measurement with chromatic confocal technology and provides the design indicators and requirements of the confocal system. As the key component, the dispersive objective used to achieve longitudinal focus variation with wavelength was designed. The objective disperses the foci of wavelengths between 400 and 700 nm over a 15 mm longitudinal range. With the selected spectrometer, the resolution of the chromatic confocal 3D profilometer is no more than 5 μm, which meets the needs of high-precision non-contact surface profile measurement.
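The measurement principle reduces to a calibration look-up: the dispersive objective maps each wavelength to a focal depth, so the peak wavelength reported by the spectrometer converts to surface height by interpolation. A toy sketch assuming a linear 400-700 nm to 0-15 mm mapping (real calibrations are measured, not assumed linear):

    import numpy as np

    # Hypothetical calibration: wavelength (nm) of the confocal peak vs focal
    # depth (mm); assumed linear here purely for illustration.
    wavelengths = np.linspace(400.0, 700.0, 31)
    depths = np.linspace(0.0, 15.0, 31)

    def height_from_peak(peak_nm):
        """Convert the spectrometer's peak wavelength to surface height."""
        return np.interp(peak_nm, wavelengths, depths)

    print(f"peak at 523 nm -> height = {height_from_peak(523.0):.3f} mm")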
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
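A generic flavor of the weight-quantization step (a simplified sketch, not TrueNorth's actual scheme) is to rescale real-valued weights into the signed integer range the substrate supports and carry the scale factor along for interpreting the output:

    import numpy as np

    def quantize_weights(W, bits=8):
        """Rescale real-valued weights into a signed integer range and return
        the integers plus the scale factor (a generic sketch, not TrueNorth's
        actual quantization scheme)."""
        qmax = 2**(bits - 1) - 1
        scale = qmax / np.max(np.abs(W))
        Wq = np.clip(np.round(W * scale), -qmax, qmax).astype(np.int32)
        return Wq, scale

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4))
    Aq, s = quantize_weights(A, bits=8)
    print("max quantization error:", np.max(np.abs(Aq / s - A)))

The framework described in the paper goes further, bounding how such per-weight errors propagate through the solver so that correctness can be guaranteed rather than merely observed.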
Precision direct photon spectra at high energy and comparison to the 8 TeV ATLAS data
Schwartz, Matthew D.
2016-09-01
The direct photon spectrum is computed to the highest currently available precision and compared to ATLAS data from 8 TeV collisions at the LHC. The prediction includes threshold resummation at next-to-next-to-next-to-leading logarithmic order through the program PeTeR, matched to next-to-leading fixed order with fragmentation effects using JetPhox and includes the resummation of leading-logarithmic electroweak Sudakov effects. Remarkably, improved agreement with data can be seen when each component of the calculation is added successively. This comparison demonstrates the importance of both threshold logs and electroweak Sudakov effects. Numerical values for the predictions are included.
Synthesis of a combined system for precise stabilization of the Spektr-UF observatory: II
NASA Astrophysics Data System (ADS)
Bychkov, I. V.; Voronov, V. A.; Druzhinin, E. I.; Kozlov, R. I.; Ul'yanov, S. A.; Belyaev, B. B.; Telepnev, P. P.; Ul'yashin, A. I.
2014-03-01
The paper presents the second part of the results of exploratory studies for the development of a combined system for high-precision stabilization of the optical telescope of the designed Spektr-UF international observatory [1]. A new modification of the rigorous method for the synthesis of nonlinear discrete-continuous stabilization systems with uncertainties is described, which is based on the minimization of the guaranteed accuracy estimate calculated using vector Lyapunov functions. Using this method, the synthesis of the feedback parameters in the mode of precise inertial stabilization of the optical telescope axis is performed, taking into account the structural nonrigidity, the quantization of signals in time and level, the errors of the orientation sensors, and the errors and limits of the control torques of the actuator flywheels. The results of numerical experiments that demonstrate the quality of the synthesized system are presented.
Parameter-tolerant design of high contrast gratings
NASA Astrophysics Data System (ADS)
Chevallier, Christyves; Fressengeas, Nicolas; Jacquet, Joel; Almuneau, Guilhem; Laaroussi, Youness; Gauthier-Lafaye, Olivier; Cerutti, Laurent; Genty, Frédéric
2015-02-01
This work is devoted to the design of high-contrast grating mirrors taking into account the technological constraints and fabrication tolerances. First, a global optimization algorithm was combined with a numerical analysis of grating structures (RCWA) to automatically design HCG mirrors. Then, the tolerances of the grating dimensions were studied precisely to develop a robust optimization algorithm with which high-contrast gratings exhibiting not only high efficiency but also large tolerance values could be designed. Finally, several structures integrating previously designed HCGs were simulated to validate and illustrate the interest of such gratings.
Magnitude Knowledge: The Common Core of Numerical Development
ERIC Educational Resources Information Center
Siegler, Robert S.
2016-01-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…
Experimental Mathematics and Mathematical Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, Jonathan M.; Broadhurst, David
2009-06-26
One of the most effective techniques of experimental mathematics is to compute mathematical entities such as integrals, series or limits to high precision, then attempt to recognize the resulting numerical values. Recently these techniques have been applied with great success to problems in mathematical physics. Notable among these applications are the identification of some key multi-dimensional integrals that arise in Ising theory, quantum field theory and in magnetic spin theory.
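The compute-then-recognize workflow is easy to reproduce with the mpmath library (used here as a generic stand-in for the authors' tools): evaluate a quantity to many digits, then ask a constant-recognition routine to propose a closed form:

    from mpmath import mp, quad, identify

    mp.dps = 30  # 30-digit working precision

    # Evaluate an integral to high precision, then try to recognize the value
    # in closed form -- the compute-then-recognize workflow described above.
    val = quad(lambda t: 1 / (1 + t**2), [0, 1])
    print(val)                    # 0.785398163397448309615660845820
    print(identify(val, ['pi']))  # e.g. '(pi/4)'

The Ising-theory integrals mentioned above follow the same pattern, only with far more digits and a larger basis of candidate constants.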
Computational Science: Ensuring America’s Competitiveness
2005-06-01
Supercharging U.S. Innovation & Competitiveness, Washington, D.C., July 2004. Davies, C. T. H., et al., "High-Precision Lattice QCD Confronts Experiment...together to form a class of particles called hadrons (which include protons and neutrons). For 30 years, researchers in lattice QCD have been trying to use the basic QCD equations to calculate the properties of hadrons, especially their masses, using numerical lattice gauge theory calculations in order to
Scheduling Mission-Critical Flows in Congested and Contested Airborne Network Environments
2018-03-01
precision agriculture [64–71]. However, designing, implementing, and testing UAV networks poses numerous interdisciplinary challenges because the...applications including search and rescue, disaster relief, precision agriculture, environmental monitoring, and surveillance. Many of these applications...monitoring enabling precision agriculture," in Automation Science and Engineering (CASE), 2015 IEEE International Conference on. IEEE, 2015, pp. 462–469. [65
Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki
2017-08-01
In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due only to the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also holds in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario is revisited too because it is relevant for the applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
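The underlying issue and the integer-tally remedy can both be shown in a few lines; the sketch below uses plain Python sums over a reversed list as a stand-in for the changing summation order across processors, and a 32-fractional-bit fixed-point scale chosen arbitrarily for illustration:

    import random

    rng = random.Random(3)
    vals = [rng.uniform(-1.0, 1.0) for _ in range(100_000)]

    # Double precision addition is not associative: summing in a different
    # (e.g. per-processor) order changes the low-order bits of the result.
    fwd = sum(vals)
    rev = sum(reversed(vals))
    print(f"forward - reverse = {fwd - rev:.2e}")   # typically nonzero

    # Integer ("fixed point") tallies restore associativity at the cost of
    # rounding each value to a fixed number of fractional bits.
    SCALE = 2**32
    tally_fwd = sum(round(v * SCALE) for v in vals)
    tally_rev = sum(round(v * SCALE) for v in reversed(vals))
    print("integer tallies identical:", tally_fwd == tally_rev)  # always True

Python's arbitrary-precision integers make the point cleanly here; on fixed-width hardware the same idea requires choosing the scale so that the tally cannot overflow.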
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.
2017-05-01
We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a procedure consisting of a few steps. We first compute numerically the quark number density for imaginary chemical potential iμ_q^I. Then we restore the grand canonical partition function for imaginary chemical potential using a fitting procedure for the quark number density. Finally, we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as in the confining phase. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
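The final Fourier step can be sketched as follows; the fitted grand canonical partition function below is a toy stand-in (a simple even function of θ = μ_I/T, not a lattice fit), and mpmath's adaptive quadrature at elevated working precision plays the role of the high-precision Fourier transform, since Z_n = (1/2π)∫ Z_GC(θ)e^{−inθ}dθ:

    from mpmath import mp, quad, exp, cos, pi, mpf

    mp.dps = 40  # elevated precision keeps the oscillatory integrals stable

    # Toy stand-in for the fitted grand canonical partition function at
    # imaginary chemical potential theta = mu_I/T (even in theta, as charge
    # conjugation requires; not an actual lattice fit).
    def Z_GC(theta):
        return exp(mpf(5) * cos(theta))

    # Fourier coefficients give the canonical partition functions:
    #   Z_n = (1/(2*pi)) * integral_{-pi}^{pi} Z_GC(theta)*exp(-i*n*theta) dtheta,
    # which reduces to a cosine transform for the even toy Z_GC above.
    for n in range(4):
        Zn = quad(lambda th, n=n: Z_GC(th) * cos(n * th), [-pi, pi]) / (2 * pi)
        print(f"Z_{n} = {Zn}")

High working precision matters here because the Z_n decay rapidly with n while the integrand oscillates, so cancellation would swamp a double precision quadrature.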
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes together with ICD-9. Unfortunately, CPT changes drastically year to year: codes are retired and replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision using a gold standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and the successful grouping of retired with non-retired codes.
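The placement idea, filing a code under the most specific grouper whose numeric range contains it, can be sketched with a toy hierarchy; the ranges and the retired code below are hypothetical, not the actual BioPortal CPT structure:

    # Simplified sketch of the placement idea; the ranges and the retired code
    # are hypothetical, not the actual BioPortal CPT hierarchy.
    GROUPERS = {
        "Surgery":                   (10000, 69999),
        "Cardiovascular procedures": (33010, 37799),
        "Coronary bypass grouper":   (33510, 33536),
    }

    def place_retired_code(code):
        """File a code under the most specific (narrowest) containing range."""
        containing = [(hi - lo, name)
                      for name, (lo, hi) in GROUPERS.items()
                      if lo <= code <= hi]
        if not containing:
            raise ValueError(f"CPT {code} falls outside the known hierarchy")
        return min(containing)[1]

    print(place_retired_code(33520))  # -> "Coronary bypass grouper"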
Numerical simulation of polishing U-tube based on solid-liquid two-phase
NASA Astrophysics Data System (ADS)
Li, Jun-ye; Meng, Wen-qing; Wu, Gui-ling; Hu, Jing-lei; Wang, Bao-zuo
2018-03-01
Abrasive flow machining is an advanced technology for the ultra-precision machining of small-hole parts and parts with complex cavities, offering high efficiency, high quality and low cost, so it plays an important role in many areas of precision machining. Based on the theory of solid-liquid two-phase flow coupling, a solid-liquid two-phase MIXTURE model is used to simulate the abrasive flow polishing process on the inner surface of a U-tube, and the temperature, turbulent viscosity and turbulent dissipation rate during abrasive flow machining of the U-tube were compared and analyzed under different inlet pressures. In this paper, the influence of different inlet pressures on the surface quality of the workpiece during abrasive flow machining is studied and discussed, which provides a theoretical basis for research on the abrasive flow machining process.
Technology of focus detection for 193nm projection lithographic tool
NASA Astrophysics Data System (ADS)
Di, Chengliang; Yan, Wei; Hu, Song; Xu, Feng; Li, Jinglong
2012-10-01
With the shrinking printing wavelength and increasing numerical aperture of lithographic tools, the depth of focus (DOF) shows a rapid downward trend, reaching a scale of several hundred nanometers, while the repeatable accuracy of focusing and leveling must be one-tenth of the DOF, approximately several tens of nanometers. Given this requirement, this article first introduces several focusing technologies and compares their advantages and disadvantages. It then derives the accuracy of the dual-grating focusing method through theoretical calculation. The dual-grating focusing method based on photoelastic modulation is divided into coarse focusing and precise focusing for analysis, establishing an image-processing model for coarse focusing and a photoelastic modulation model for accurate focusing. Finally, the focusing algorithm is simulated with MATLAB. In conclusion, the dual-grating focusing method offers high precision, high efficiency and non-contact measurement of the focal plane, meeting the demands of focusing in 193 nm projection lithography.
The use of a cubesat to validate technological bricks in space
NASA Astrophysics Data System (ADS)
Rakotonimbahy, E.; Vives, S.; Dohlen, K.; Savini, G.; Iafolla, V.
2017-11-01
In the framework of the FP7 program FISICA (Far Infrared Space Interferometer Critical Assessment), we are developing a cubesat platform which will be used for the in-space validation of two technological bricks relevant for FIRI. The first brick is a high-precision accelerometer which could be used in a future space mission as a fundamental element of the dynamic control loop of the interferometer. The second brick is a miniaturized version of an imaging multi-aperture telescope. Ultimately, such an instrument could be composed of numerous space-borne mirror segments flying in precise formation on baselines of hundreds or thousands of meters, providing high-resolution glimpses of distant worlds. We propose to build the very first space-borne demonstrator of such an instrument, one that will fit into the limited resources of a single cubesat. In this paper, we describe the detailed design of the cubesat hosting the two payloads.
NASA Astrophysics Data System (ADS)
Vasilyan, Suren; Rivero, Michel; Schleichert, Jan; Halbedel, Bernd; Fröhlich, Thomas
2016-04-01
In this paper, we present an application realizing high-precision horizontally directed force measurements on the order of several tens of nN in combination with high dead loads of about 10 N. The set-up is developed on the basis of two identical state-of-the-art electromagnetic force compensation (EMFC) high-precision balances. The measurement resolution of horizontally directed single-axis quasi-dynamic forces is 20 nN over the working range of ±100 μN. The set-up operates in two different measurement modes: in the open-loop mode the mechanical deflection of the proportional lever is an indication of the acting force, whereas in the closed-loop mode it is the applied electric current to the coil inside the EMFC balance that compensates the deflection of the lever back to the zero offset position. The estimated loading frequency (cutoff frequency) of the set-up is about 0.18 Hz in the open-loop mode and 0.7 Hz in the closed-loop mode. One practical application the set-up is suitable for is flow rate measurement of electrolytes of low electrical conductivity by the contactless technique of Lorentz force velocimetry. Based on a previously developed set-up which uses a single EMFC balance, experimental, theoretical and numerical analyses of the thermo-mechanical properties of the supporting structure are presented.
The development of an alignment turning system for precision lens cells
NASA Astrophysics Data System (ADS)
Huang, Chien-Yao; Ho, Cheng-Fang; Wang, Jung-Hsing; Chung, Chien-Kai; Chen, Jun-Cheng; Chang, Keng-Shou; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Chen, Fong-Zhi
2017-08-01
In general, drop-in and cell-mounted assembly are used for standard and high-performance optical systems, respectively. The optical performance is limited by the residual centration error and position accuracy of conventional assembly. Recently, poker-chip assembly with high-precision lens barrels, which can overcome the limitations of conventional assembly, has been widely applied to ultra-high-performance optical systems. ITRC also develops poker-chip assembly solutions for high numerical aperture objective lenses and lithography projection lenses. In order to achieve high-precision lens cells for poker-chip assembly, an alignment turning system (ATS) was developed. The ATS includes measurement, alignment and turning modules. The measurement module, including a non-contact displacement sensor and an autocollimator, can measure the centration errors of the top and bottom surfaces of a lens, respectively. The alignment module, comprising tilt and translation stages, can align the optical axis of the lens to the rotating axis of the vertical lathe. The key specifications of the ATS are a maximum lens diameter of 400 mm and radial and axial runout of the rotary table < 2 μm. The cutting performance of the ATS is surface roughness Ra < 1 μm, flatness < 2 μm, and parallelism < 5 μm. After the measurement, alignment and turning processes on our ATS, the centration error of a lens cell 200 mm in diameter can be controlled to within 10 arcsec. This paper also presents the thermal expansion of the hydrostatic rotary table. A poker-chip-assembly lens cell with three sub-cells was accomplished with an average transmission centration error of 12.45 arcsec by newly trained technicians. The results show that the ATS can achieve high assembly efficiency for precision optical systems.
Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born
2012-01-01
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
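The reasoning behind recommending SPDP over SPSP can be illustrated with a toy accumulation, making no assumption about AMBER's internals: the same single-precision "force contributions" are summed once in a float32 accumulator and once in a float64 accumulator, then compared with a double precision reference:

    import numpy as np

    rng = np.random.default_rng(4)
    # 200,000 "force contributions" computed in single precision (SP)...
    forces_sp = (rng.standard_normal(200_000) * 1e-3).astype(np.float32)

    # ...accumulated either in single precision (as in SPSP) or in double
    # precision (as in SPDP).
    acc_spsp = np.float32(0.0)
    for f in forces_sp:
        acc_spsp += f                               # float32 rounding compounds
    acc_spdp = np.sum(forces_sp, dtype=np.float64)  # SP terms, DP accumulator

    ref = np.sum(forces_sp.astype(np.float64))      # double precision reference
    print(f"SPSP accumulation error: {abs(float(acc_spsp) - ref):.3e}")
    print(f"SPDP accumulation error: {abs(acc_spdp - ref):.3e}")

The per-term rounding is identical in both cases; only the accumulator differs, which is why the mixed SPDP model recovers near-DPDP results at SP cost.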
Cymatics for the cloaking of flexural vibrations in a structured plate
Misseroni, D.; Colquitt, D. J.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.
2016-01-01
Based on rigorous theoretical findings, we present a proof-of-concept design for a structured square cloak enclosing a void in an elastic lattice. We implement high-precision fabrication and experimental testing of an elastic invisibility cloak for flexural waves in a mechanical lattice. This is accompanied by verifications and numerical modelling performed through finite element simulations. The primary advantage of our square lattice cloak, over other designs, is the straightforward implementation and the ease of construction. The elastic lattice cloak, implemented experimentally, shows high efficiency. PMID:27068339
Development and evaluation of a hybrid averaged orbit generator
NASA Technical Reports Server (NTRS)
Mcclain, W. D.; Long, A. C.; Early, L. W.
1978-01-01
A rapid orbit generator based on a first-order application of the Generalized Method of Averaging has been developed for the Research and Development (R&D) version of the Goddard Trajectory Determination System (GTDS). The evaluation of the averaged equations of motion can use both numerically averaged and recursively evaluated, analytically averaged perturbation models. These equations are numerically integrated to obtain the secular and long-period motion. Factors affecting efficient orbit prediction are discussed and guidelines are presented for treatment of each major perturbation. Guidelines for obtaining initial mean elements compatible with the theory are presented. An overview of the orbit generator is presented and comparisons with high precision methods are given.
Mazzocco, Michèle M. M.; Feigenson, Lisa; Halberda, Justin
2015-01-01
Many children have significant mathematical learning disabilities (MLD, or dyscalculia) despite adequate schooling. We hypothesize that MLD partly results from a deficiency in the Approximate Number System (ANS) that supports nonverbal numerical representations across species and throughout development. Here we show that ninth grade students with MLD have significantly poorer ANS precision than students in all other mathematics achievement groups (low-, typically-, and high-achieving), as measured by psychophysical assessments of ANS acuity (w) and of the mappings between ANS representations and number words (cv). This relationship persists even when controlling for domain-general abilities. Furthermore, this ANS precision does not differentiate low- from typically-achieving students, suggesting an ANS deficit that is specific to MLD. PMID:21679173
The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules
ERIC Educational Resources Information Center
Prentice, J. S. C.
2011-01-01
We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
Aerial imaging with manned aircraft for precision agriculture
USDA-ARS?s Scientific Manuscript database
Over the last two decades, numerous commercial and custom-built airborne imaging systems have been developed and deployed for diverse remote sensing applications, including precision agriculture. More recently, unmanned aircraft systems (UAS) have emerged as a versatile and cost-effective platform f...
Airborne and satellite remote sensors for precision agriculture
USDA-ARS?s Scientific Manuscript database
Remote sensing provides an important source of information to characterize soil and crop variability for both within-season and after-season management despite the availability of numerous ground-based soil and crop sensors. Remote sensing applications in precision agriculture have been steadily inc...
2013-01-01
Background High resolution melting analysis (HRM) is a rapid and cost-effective technique for the characterisation of PCR amplicons. Because the reverse genetics of segmented influenza A viruses allows the generation of numerous influenza A virus reassortants within a short time, methods for the rapid selection of the correct recombinants are very useful. Methods PCR primer pairs covering the single nucleotide polymorphism (SNP) positions of two different influenza A H5N1 strains were designed. Reassortants of the two different H5N1 isolates were used as a model to prove the suitability of HRM for the selection of the correct recombinants. Furthermore, two different cycler instruments were compared. Results Both cycler instruments generated comparable average melting peaks, which allowed the easy identification and selection of the correct cloned segments or reassorted viruses. Conclusions HRM is a highly suitable method for the rapid and precise characterisation of cloned influenza A genomes. PMID:24028349
Xu, Zhenli; Ma, Manman; Liu, Pei
2014-07-01
We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes with an inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved through the self energy of test ions, obtained by solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. In particular, toward the numerical challenge of solving the high-dimensional DH equation, we develop an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating charge diffusion in electrolytes between two electrodes, for which the effects of dielectrics and correlation are investigated by comparing the results with the predictions of the classical PNP theory. We find that, at interface separations comparable to the Bjerrum length, the results of the modified equations differ significantly from the classical PNP predictions, mostly due to the dielectric effect. It is also shown that when the ion self energy is of weak or moderate strength, the WKB approximation presents high accuracy, compared to precise finite-difference results.
A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications
NASA Astrophysics Data System (ADS)
Robin, J.; Tanter, M.; Pernot, M.
2017-09-01
Time reversal cavities (TRC) have been proposed as an efficient approach to 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device's gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity from multiple scattering in a multiple scattering medium. It was validated numerically and experimentally using a 2D TRC and numerically using a 3D TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D TRC, which were then confirmed by numerical simulations.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
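To make the precision-reduction idea concrete, here is a minimal Python sketch (not the authors' FPGA implementation) that emulates a reduced number of mantissa bits in a one-scale Lorenz '95-type integration; the bit width, forcing, time step, and rounding rule are all illustrative assumptions:

```python
import numpy as np

def round_mantissa(x, bits):
    """Emulate reduced floating-point precision by keeping `bits` mantissa bits."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2**bits) / 2**bits, e)

def lorenz95_rhs(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (one-scale variant)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step_rk4(x, dt, bits):
    f = lambda y: round_mantissa(lorenz95_rhs(y), bits)
    k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
    return round_mantissa(x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0, bits)

x = np.full(40, 8.0); x[0] += 0.01     # slightly perturbed rest state
for _ in range(50):                    # short run, as suggested in the paper
    x = step_rk4(x, dt=0.005, bits=12) # 12 mantissa bits: an assumed level
```

Comparing such a run against a full-precision run of the same model mimics the paper's procedure of locating the minimal precision at which rounding errors stay below model error.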
NASA Astrophysics Data System (ADS)
Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.
2018-03-01
Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to improve the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model combining wavelet packet transformation (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were used to confirm the efficiency of the proposed model design and of the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.
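As a simplified stand-in for the first stage of the proposed pipeline, the sketch below de-noises a mock displacement record with PyWavelets; it uses a plain discrete wavelet transform rather than the paper's wavelet packet transform, and the wavelet choice, decomposition level, threshold rule, and signal parameters are assumptions:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

t = np.arange(0, 60, 0.1)                              # mock 10 Hz displacement record
clean = 0.01 * np.sin(2 * np.pi * 0.8 * t)             # assumed structural oscillation
noisy = clean + 0.005 * np.random.randn(len(t))
denoised = wavelet_denoise(noisy)
```

In the paper's scheme, the de-noised series would then feed an NN predictor before the oscillation properties are extracted.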
A Novel Gravity Compensation Method for High Precision Free-INS Based on “Extreme Learning Machine”
Zhou, Xiao; Yang, Gongliu; Cai, Qingzhong; Wang, Jing
2016-01-01
In recent years, with the emergence of high-precision inertial sensors (accelerometers and gyros), gravity compensation has become a major factor influencing navigation accuracy in inertial navigation systems (INS), especially for high-precision INS. This paper presents preliminary results concerning the effect of gravity disturbance on INS. It then proposes a novel gravity compensation method for high-precision INS, which estimates the gravity disturbance along the track using the extreme learning machine (ELM) method based on gravity data measured on the geoid, upward-continues the gravity disturbance to the altitude of the INS, and then feeds the obtained gravity disturbance into the error equations of the INS to restrain INS error propagation. The estimation accuracy of the gravity disturbance data is verified by numerical tests. The root mean square error (RMSE) of the ELM estimation method can be improved by 23% and 44% compared with the bilinear interpolation method in plain and mountain areas, respectively. To further validate the proposed gravity compensation method, field experiments with an experimental vehicle were carried out in two regions: Test 1 in a plain area and Test 2 in a mountain area. The field experiment results also prove that the proposed gravity compensation method can significantly improve positioning accuracy. During the 2-h field experiments, the positioning accuracy improved by 13% and 29% in Tests 1 and 2, respectively, when the navigation scheme was compensated by the proposed gravity compensation method. PMID:27916856
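The ELM itself is simple enough to sketch: a random, fixed hidden layer followed by a linear least-squares solve for the output weights. The following Python toy is not the authors' implementation, and the synthetic field standing in for measured gravity data is made up:

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, rng=np.random.default_rng(0)):
    """Train an extreme learning machine: random hidden layer + least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical use: predict gravity disturbance [mGal] from position (lat, lon)
X = np.random.uniform([30.0, 110.0], [31.0, 111.0], size=(500, 2))
y = 20.0 * np.sin(X[:, 0] * 8) * np.cos(X[:, 1] * 5)   # synthetic stand-in field
W, b, beta = elm_fit(X, y)
estimate = elm_predict(X[:5], W, b, beta)
```

Because only the output layer is solved for, training reduces to one linear solve, which is what makes ELM attractive for on-track estimation.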
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford
2018-04-01
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
From LIDAR Scanning to 3d FEM Analysis for Complex Surface and Underground Excavations
NASA Astrophysics Data System (ADS)
Chun, K.; Kemeny, J.
2017-12-01
Light detection and ranging (LIDAR) is a prevalent remote-sensing technology in the geological fields due to its high precision and ease of use. One major application is to use the detailed geometrical information of underground structures as a basis for generating three-dimensional numerical models for FEM analysis. To date, however, straightforward techniques for reconstructing numerical models from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating LIDAR scanning with finite element numerical analysis, specifically converting LIDAR 3D point clouds of objects containing complex surface geometry into finite element models. This methodology was applied to a stability analysis of the Kartchner Caverns in Arizona. Numerical simulations were performed using the finite element code ABAQUS. The results indicate that the proposed LIDAR-based workflow is effective and provides a reference for similar engineering projects in practice.
Results from Binary Black Hole Simulations in Astrophysics Applications
NASA Technical Reports Server (NTRS)
Baker, John G.
2007-01-01
Present and planned gravitational wave observatories are opening a new astronomical window to the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits have allowed tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments.
NASA Astrophysics Data System (ADS)
Rutkowski, Lucile; Masłowski, Piotr; Johansson, Alexandra C.; Khodabakhsh, Amir; Foltynowicz, Aleksandra
2018-01-01
Broadband precision spectroscopy is indispensable for providing high fidelity molecular parameters for spectroscopic databases. We have recently shown that mechanical Fourier transform spectrometers based on optical frequency combs can measure broadband high-resolution molecular spectra undistorted by the instrumental line shape (ILS) and with a highly precise frequency scale provided by the comb. The accurate measurement of the power of the comb modes interacting with the molecular sample was achieved by acquiring single-burst interferograms with nominal resolution matched to the comb mode spacing. Here we describe in detail the experimental and numerical steps needed to achieve sub-nominal resolution and retrieve ILS-free molecular spectra, i.e. with ILS-induced distortion below the noise level. We investigate the accuracy of the transition line centers retrieved by fitting to the absorption lines measured using this method. We verify the performance by measuring an ILS-free cavity-enhanced low-pressure spectrum of the 3ν1 + ν3 band of CO2 around 1575 nm with line widths narrower than the nominal resolution. We observe and quantify collisional narrowing of absorption line shape, for the first time with a comb-based spectroscopic technique. Thus retrieval of line shape parameters with accuracy not limited by the Voigt profile is now possible for entire absorption bands acquired simultaneously.
Precision shock tuning on the national ignition facility.
Robey, H F; Celliers, P M; Kline, J L; Mackinnon, A J; Boehly, T R; Landen, O L; Eggert, J H; Hicks, D; Le Pape, S; Farley, D R; Bowers, M W; Krauter, K G; Munro, D H; Jones, O S; Milovich, J L; Clark, D; Spears, B K; Town, R P J; Haan, S W; Dixit, S; Schneider, M B; Dewald, E L; Widmann, K; Moody, J D; Döppner, T D; Radousky, H B; Nikroo, A; Kroll, J J; Hamza, A V; Horner, J B; Bhandarkar, S D; Dzenitis, E; Alger, E; Giraldez, E; Castro, C; Moreno, K; Haynam, C; LaFortune, K N; Widmayer, C; Shaw, M; Jancaitis, K; Parham, T; Holunga, D M; Walters, C F; Haid, B; Malsbury, T; Trummer, D; Coffee, K R; Burr, B; Berzins, L V; Choate, C; Brereton, S J; Azevedo, S; Chandrasekaran, H; Glenzer, S; Caggiano, J A; Knauer, J P; Frenje, J A; Casey, D T; Johnson, M Gatu; Séguin, F H; Young, B K; Edwards, M J; Van Wonterghem, B M; Kilkenny, J; MacGowan, B J; Atherton, J; Lindl, J D; Meyerhofer, D D; Moses, E
2012-05-25
Ignition implosions on the National Ignition Facility [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)] are underway with the goal of compressing deuterium-tritium fuel to a sufficiently high areal density (ρR) to sustain a self-propagating burn wave required for fusion power gain greater than unity. These implosions are driven with a very carefully tailored sequence of four shock waves that must be timed to very high precision to keep the fuel entropy and adiabat low and ρR high. The first series of precision tuning experiments on the National Ignition Facility has now been performed; these experiments use optical diagnostics to directly measure the strength and timing of all four shocks inside a hohlraum-driven, cryogenic liquid-deuterium-filled capsule. The results of these experiments are presented, demonstrating a significant decrease in adiabat over previously untuned implosions. The impact of the improved shock timing is confirmed in related deuterium-tritium layered capsule implosions, which show the highest fuel compression (ρR ≈ 1.0 g/cm²) measured to date, exceeding the previous record [V. Goncharov et al., Phys. Rev. Lett. 104, 165001 (2010)] by more than a factor of 3. The experiments also clearly reveal an issue with the fourth shock velocity, which is observed to be 20% slower than predictions from numerical simulation.
Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C
2012-06-01
Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations, and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivity of the measurements taken at the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.
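For intuition, the fitting step can be sketched as matching the standard steady-state Gaussian transverse profile of a continuous line source in uniform flow to measured concentrations; the solution form is textbook, but the geometry, noise level, and parameter values below are made up and this is not the authors' evaluation code:

```python
import numpy as np
from scipy.optimize import curve_fit

x_obs = 0.5                                   # observation distance downstream [m] (assumed)

def transverse_profile(y, alpha_t, m):
    """Steady-state 2D solution for a continuous line source in uniform flow:
    C(x, y) = m / sqrt(4*pi*alpha_t*x) * exp(-y^2 / (4*alpha_t*x))."""
    return m / np.sqrt(4 * np.pi * alpha_t * x_obs) * np.exp(-y**2 / (4 * alpha_t * x_obs))

y = np.linspace(-0.05, 0.05, 41)              # transverse sampling positions [m]
c_meas = transverse_profile(y, 1.5e-5, 1.0) + 0.01 * np.random.randn(y.size)

(alpha_t_fit, m_fit), pcov = curve_fit(transverse_profile, y, c_meas, p0=[1e-5, 1.0])
print(f"fitted transverse dispersivity: {alpha_t_fit:.2e} m")
```

Comparing such fitted values against the dispersivities prescribed in a synthetic simulation is what allows the fitting error to be quantified.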
Accurate computation of gravitational field of a tesseroid
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2018-02-01
We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. It then evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching between the central and single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method extends to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, namely whether it is outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The correct digits are roughly doubled when employing quadruple precision computation. The new method provides a reliable procedure to compute the topographic gravitational field, especially near, on, and below the surface. It could also serve as a reliable reference to complement and elaborate the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
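The numerical differentiation step can be illustrated with a small Python sketch that switches between the central and one-sided second-order formulas; the switching criterion and step size here are simplified assumptions, and a point-mass potential stands in for the numerically integrated tesseroid potential:

```python
import numpy as np

def grad_potential(V, r, h=1e-4, boundary=None):
    """Differentiate a numerically integrated potential V at r.
    Uses the second-order central formula away from the surface and a
    second-order one-sided formula near it (the switching rule here is a
    simplified stand-in for the paper's criterion)."""
    if boundary is None or abs(r - boundary) > 2 * h:
        return (V(r + h) - V(r - h)) / (2 * h)                   # central, O(h^2)
    return (-3 * V(r) + 4 * V(r + h) - V(r + 2 * h)) / (2 * h)   # one-sided, O(h^2)

V = lambda r: 1.0 / r                            # point-mass stand-in potential
print(grad_potential(V, 2.0), -1.0 / 2.0**2)     # compare with the exact -1/r^2
```

Each extra differentiation costs digits, which is consistent with the paper's observed drop from 14-15 digits for the potential to 6-8 digits for the gradient tensor.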
A mechanically tunable and efficient ceramic probe for MR-microscopy at 17 Tesla
NASA Astrophysics Data System (ADS)
Kurdjumov, Sergei; Glybovski, Stanislav; Hurshkainen, Anna; Webb, Andrew; Abdeddaim, Redha; Ciobanu, Luisa; Melchakova, Irina; Belov, Pavel
2017-09-01
In this contribution we propose and study numerically a new probe (radiofrequency coil) for magnetic resonance microscopy at a field of 17 T. The probe is based on two coupled donut resonators made of a high-permittivity, low-loss ceramic, excited by a non-resonant inductively coupled loop attached to a coaxial cable. Full-wave numerical simulation showed that the probe can be precisely tuned to the Larmor frequency of protons (723 MHz) by adjusting the gap between the two resonators. Moreover, the impedance of the probe can be matched by varying the distance from one of the resonators to the loop. As a result, a compact and mechanically tunable resonant probe was demonstrated for 17 Tesla applications using no lumped capacitors for tuning and matching. The new probe was numerically compared to a conventional solenoidal probe, showing better efficiency.
Lourenco, Stella F; Bonny, Justin W
2017-07-01
A growing body of evidence suggests that non-symbolic representations of number, which humans share with nonhuman animals, are functionally related to uniquely human mathematical thought. Other research suggesting that numerical and non-numerical magnitudes not only share analog format but also form part of a general magnitude system raises questions about whether the non-symbolic basis of mathematical thinking is unique to numerical magnitude. Here we examined this issue in 5- and 6-year-old children using comparison tasks of non-symbolic number arrays and cumulative area as well as standardized tests of math competence. One set of findings revealed that scores on both magnitude comparison tasks were modulated by ratio, consistent with shared analog format. Moreover, scores on these tasks were moderately correlated, suggesting overlap in the precision of numerical and non-numerical magnitudes, as expected under a general magnitude system. Another set of findings revealed that the precision of both types of magnitude contributed shared and unique variance to the same math measures (e.g. calculation and geometry), after accounting for age and verbal competence. These findings argue against an exclusive role for non-symbolic number in supporting early mathematical understanding. Moreover, they suggest that mathematical understanding may be rooted in a general system of magnitude representation that is not specific to numerical magnitude but that also encompasses non-numerical magnitude. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Henzl, Vladimir; Daub, Brian; French, Jennifer; Matthews, June; Kovash, Michael; Wender, Stephen; Famiano, Michael; Koehler, Katrina; Yuly, Mark
2010-11-01
The determination of the light response of many organic scintillators to various types of radiation has been the subject of numerous experimental and theoretical studies in the past. While the data on light response to particles with energies above 1 MeV are precise and abundant, information on the light response to very low energy particles (i.e., below 1 MeV) is scarce or missing entirely. In this study we measured the light response of a BC-418 scintillator to protons with energies from 100 keV to 10 MeV. The experiment was performed at the Weapons Neutron Research Facility at LANSCE, Los Alamos. The neutron beam from a spallation source irradiates an active target made from BC-418 plastic scintillator. The recoil protons detected in the active target are measured in coincidence with elastically scattered incident neutrons detected by an adjacent liquid scintillator. The time of flight of the incident neutron and knowledge of the scattering geometry allow for a kinematically complete and high-precision measurement of the light response as a function of proton energy.
Analysis of precision in chemical oscillators: implications for circadian clocks
NASA Astrophysics Data System (ADS)
d'Eysmond, Thomas; De Simone, Alessandro; Naef, Felix
2013-10-01
Biochemical reaction networks often exhibit spontaneous self-sustained oscillations. An example is the circadian oscillator that lies at the heart of daily rhythms in behavior and physiology in most organisms including humans. While the period of these oscillators evolved so that it resonates with the 24 h daily environmental cycles, the precision of the oscillator (quantified via the Q factor) is another relevant property of these cell-autonomous oscillators. Since this quantity can be measured in individual cells, it is of interest to better understand how this property behaves across mathematical models of these oscillators. Current theoretical schemes for computing the Q factors show limitations for both high-dimensional models and in the vicinity of Hopf bifurcations. Here, we derive low-noise approximations that lead to numerically stable schemes also in high-dimensional models. In addition, we generalize normal form reductions that are appropriate near Hopf bifurcations. Applying our approximations to two models of circadian clocks, we show that while the low-noise regime is faithfully recapitulated, increasing the level of noise leads to species-dependent precision. We emphasize that subcomponents of the oscillator gradually decouple from the core oscillator as noise increases, which allows us to identify the subnetworks responsible for robust rhythms.
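As a rough illustration of what the Q factor measures (not the authors' computational scheme, which relies on low-noise approximations and normal form reductions), the following Python sketch estimates Q for a synthetic phase-diffusing oscillator; the period, diffusion constant, and run length are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n, T = 0.1, 100_000, 24.0        # time step [h], samples, period [h]
D = 0.002                            # assumed phase diffusion constant [1/h]

# Phase-diffusing oscillator: x(t) = cos(2*pi*t/T + sqrt(2D)*W(t)); its
# autocorrelation envelope decays as exp(-D*t), so tau = 1/D and Q = pi*tau/T.
t = dt * np.arange(n)
phase = 2 * np.pi * t / T + np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))
x = np.cos(phase)

f = np.fft.rfft(x - x.mean(), 2 * n)                   # FFT-based autocorrelation
acf = np.fft.irfft(f * np.conj(f))[:n] / np.arange(n, 0, -1)

lags = (np.arange(1, 20) * T / dt).astype(int)         # lags at whole periods
tau = -1.0 / np.polyfit(lags * dt, np.log(acf[lags] / acf[0]), 1)[0]
print("Q =", np.pi * tau / T)                          # exact value: pi/(D*T), about 65
```

In single-cell recordings, the same envelope-decay fit applied to measured reporter traces yields the precision quantity the paper studies.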
Whiteley, Greg S; Derry, Chris; Glasbey, Trevor; Fahey, Paul
2015-06-01
To investigate the reliability of commercial ATP bioluminometers and to document precision and variability measurements using known, quantitated standard materials. Four commercially branded ATP bioluminometers and their consumables were subjected to a series of controlled studies with quantitated materials in multiple repetitions of dilution series. The individual dilutions were applied directly to ATP swabs. To assess precision and reproducibility, each dilution step was tested in triplicate or quadruplicate and the RLU reading from each test point was recorded. Results across the multiple dilution series were normalized using the coefficient of variation. The results for pure ATP and bacterial ATP from suspensions of Staphylococcus epidermidis and Pseudomonas aeruginosa are presented graphically. The data indicate that precision and reproducibility are poor across all brands tested. The standard deviation was as high as 50% of the mean for all brands, and in the field, users are not given any indication of this level of imprecision. The variability of commercial ATP bioluminometers and their consumables is unacceptably high in the current technical configuration. The advantage of speed of response is undermined by instrument imprecision expressed in the numerical scale of relative light units (RLU).
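For reference, the normalization used to compare instruments is straightforward; a minimal sketch with hypothetical triplicate RLU readings:

```python
import numpy as np

def coefficient_of_variation(rlu):
    """CV = sample std / mean, the normalization used to compare RLU dilution series."""
    rlu = np.asarray(rlu, dtype=float)
    return rlu.std(ddof=1) / rlu.mean()

# Hypothetical triplicate RLU readings at one dilution step on one instrument
replicates = [1520.0, 2210.0, 1130.0]
print(f"CV = {coefficient_of_variation(replicates):.0%}")   # ~34%: high imprecision
```

A CV approaching 50%, as reported across the tested brands, means a single RLU reading carries little quantitative meaning on its own.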
Biotemplated Morpho Butterfly Wings for Tunable Structurally Colored Photocatalysts.
Rodríguez, Robin E; Agarwal, Sneha P; An, Shun; Kazyak, Eric; Das, Debashree; Shang, Wen; Skye, Rachael; Deng, Tao; Dasgupta, Neil P
2018-02-07
Morpho sulkowskyi butterfly wings contain naturally occurring hierarchical nanostructures that produce structural coloration. The high aspect ratio and surface area of these wings make them attractive nanostructured templates for applications in solar energy and photocatalysis. However, biomimetic approaches to replicate their complex structural features and integrate functional materials into their three-dimensional framework are highly limited in precision and scalability. Herein, a biotemplating approach is presented that precisely replicates Morpho nanostructures by depositing nanocrystalline ZnO coatings onto wings via low-temperature atomic layer deposition (ALD). This study demonstrates the ability to precisely tune the natural structural coloration while also integrating multifunctionality by imparting photocatalytic activity onto fully intact Morpho wings. Optical spectroscopy and finite-difference time-domain numerical modeling demonstrate that ALD ZnO coatings can rationally tune the structural coloration across the visible spectrum. These structurally colored photocatalysts exhibit an optimal coating thickness to maximize photocatalytic activity, which is attributed to trade-offs between light absorption and catalytic quantum yield with increasing coating thickness. These multifunctional photocatalysts present a new approach to integrating solar energy harvesting into visually attractive surfaces that can be integrated into building facades or other macroscopic structures to impart aesthetic appeal.
Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix
2017-11-20
Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1989-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least squares norm are reported.
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1992-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.
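The moving-grid idea can be illustrated with a minimal equidistribution sketch in Python (a generic de Boor-style construction, not the least-squares-optimal grids computed in these reports); the arc-length monitor and the Burgers-like test profile are assumptions:

```python
import numpy as np

def equidistribute(x, u, n_new):
    """Move grid points so each cell carries an equal share of the monitor
    integral, here M = sqrt(1 + u_x^2) (the common arc-length monitor)."""
    ux = np.gradient(u, x)
    monitor = np.sqrt(1.0 + ux**2)
    cumulative = np.concatenate(
        ([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, cumulative[-1], n_new)
    return np.interp(targets, cumulative, x)   # invert the cumulative monitor

x = np.linspace(0.0, 1.0, 401)
u = np.tanh((x - 0.5) / 0.01)                  # steep Burgers-like front
x_moving = equidistribute(x, u, 41)            # points cluster near x = 0.5
```

The clustering of points around the front is the qualitative behavior an optimal moving grid should exhibit for the viscous Burgers' equation.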
Ultra-Light Precision Membrane Optics
NASA Technical Reports Server (NTRS)
Moore, Jim; Gunter, Kent; Patrick, Brian; Marty, Dave; Bates, Kevin; Gatlin, Romona; Clayton, Bill; Rood, Bob; Brantley, Whitt (Technical Monitor)
2001-01-01
SRS Technologies and NASA Marshall Space Flight Center have conducted a research effort to explore the possibility of developing ultra-lightweight membrane optics for future imaging applications. High precision optical flats and spherical mirrors were produced under this research effort. The thin film mirrors were manufactured using surface replication casting of CP1™, a polyimide material developed specifically for UV hardness and thermal stability. In the course of this program, numerous polyimide films were cast with surface finishes better than 1.5 nanometers rms and thickness variation of less than 63 nanometers. Precision membrane optical flats were manufactured demonstrating better than 1/13 wave figure error when measured at 633 nanometers. The areal density of these films is 0.037 kilograms per square meter. Several 0.5-meter spherical mirrors were also manufactured. These mirrors had excellent surface finish (1.5 nanometers rms) and figure error on the order of tens of microns. This places their figure error within the demonstrated correctability of advanced wavefront correction technologies such as real-time holography.
NASA Astrophysics Data System (ADS)
Coe, P. A.; Howell, D. F.; Nickerson, R. B.
2004-11-01
ATLAS is the largest particle detector under construction at CERN, Geneva. Frequency scanning interferometry (FSI), also known as absolute distance interferometry, will be used to monitor shape changes of the SCT (semiconductor tracker), a particle tracker in the inaccessible, high-radiation environment at the centre of ATLAS. Geodetic grids with several hundred fibre-coupled interferometers (30 mm to 1.5 m long) will be measured simultaneously. These lengths will be measured by tuning two lasers and comparing the resulting phase shifts in grid line interferometers (GLIs) with phase shifts in a reference interferometer. The novel, inexpensive GLI design uses diverging beams to reduce sensitivity to misalignment, albeit with weaker signals. One-micrometre-precision length measurements of grid lines will allow 10 µm precision tracker shape corrections to be fed into the ATLAS particle tracking analysis. The technique was demonstrated by measuring a 400 mm interferometer to better than 400 nm and a 1195 mm interferometer to better than 250 nm. Precise measurements were possible, even with poor-quality signals, using numerical analysis of thousands of intensity samples. Errors due to drifts in interferometer length were substantially reduced using two lasers tuned in opposite directions, and the precision was further improved by linking measurements made at widely separated laser frequencies.
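The measurement principle can be sketched in a few lines: during a laser tuning sweep, each interferometer's phase shift is proportional to its optical path length, so an unknown grid line length follows from the ratio of its phase shift to that of the reference interferometer. The numbers and the 4π (retroreflected-path) phase convention below are illustrative assumptions:

```python
import numpy as np

C = 299_792_458.0        # speed of light [m/s]
L_REF = 0.75             # assumed known reference interferometer length [m]

def gli_length(dphi_gli, dphi_ref):
    """FSI principle: a frequency sweep dnu changes each interferometer's phase
    by dphi = 4*pi*L*dnu/c, so the unknown length follows from the ratio of
    phase shifts and the sweep range cancels out."""
    return L_REF * dphi_gli / dphi_ref

dnu = 60e9                                # assumed tuning range of one laser [Hz]
dphi_ref = 4 * np.pi * L_REF * dnu / C    # simulated reference phase shift
dphi_gli = 4 * np.pi * 0.400 * dnu / C    # a 400 mm grid line interferometer
print(gli_length(dphi_gli, dphi_ref))     # -> 0.400 m
```

Because only the ratio of phase shifts enters, the laser sweep itself need not be calibrated absolutely, which is what makes the reference interferometer central to the scheme.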
NASA Astrophysics Data System (ADS)
Chuvashov, I. N.
2010-12-01
The features of high-precision numerical simulation of Earth satellite motion using parallel computing are discussed, using as an example the implementation of the software complex "Numerical model of the motion of satellite systems" on the "Skiff Cyberia" cluster. It is shown that the use of a 128-bit word length allows consideration of weak perturbations from high-order harmonics in the expansion of the geopotential, as well as the effect of geopotential strain harmonics arising from tidal perturbations associated with the influence of the moon and sun on the solid Earth and its oceans.
A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
Interfacial gauge methods for incompressible fluid dynamics
Saye, Robert
2016-01-01
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of “gauge freedom” to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567
2002-12-01
In aerospace applications, vibration sources are numerous, such as: launch loading; man-induced accelerations, like on the Shuttle or space station; solar ... However, the lack of significant tracking errors during times when other actuators were stationary, and the fact that the local maximum tracking...
Drilling Precise Orifices and Slots
NASA Technical Reports Server (NTRS)
Richards, C. W.; Seidler, J. E.
1983-01-01
The reaction control thruster injector requires precisely machined orifices and slots. The tooling setup consists of a rotary table, a numerical control system, and a torque-sensitive drill press. This setup is used to drill the oxidizer orifices; an electric discharge machine drills the fuel-feed orifices. The device automates production of identical parts, so that several are completed in less time than previously required.
Calculation of precision satellite orbits with nonsingular elements /VOP formulation/
NASA Technical Reports Server (NTRS)
Velez, C. E.; Cefola, P. J.; Long, A. C.; Nimitz, K. S.
1974-01-01
Review of some results obtained in an effort to develop efficient, high-precision trajectory computation processes for artificial satellites by optimum selection of the form of the equations of motion of the satellite and the numerical integration method. In particular, consideration is given to a Gaussian variation-of-parameters (VOP) formulation expressed in terms of equinoctial orbital elements, which partially decouples the motion of the orbital frame from the motion within the orbital frame. The performance of the resulting orbit generators is then compared with the popular classical Cowell/Gauss-Jackson formulation/integrator pair for two distinctly different orbit types, namely the near-geosynchronous orbit of the ATS satellite and the near-circular 1000 km orbit of the GEOS-C satellite.
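For illustration, here is a minimal Python sketch of the standard classical-to-equinoctial element conversion underlying such VOP formulations (direct orbits only; the retrograde factor and the paper's exact conventions are omitted):

```python
import numpy as np

def classical_to_equinoctial(a, e, i, raan, argp, M):
    """Classical -> equinoctial elements (angles in radians). The equinoctial
    set (a, h, k, p, q, lambda) is nonsingular for e = 0 and i = 0, which is
    what makes it attractive for high-precision VOP integration."""
    h = e * np.sin(argp + raan)
    k = e * np.cos(argp + raan)
    p = np.tan(i / 2.0) * np.sin(raan)
    q = np.tan(i / 2.0) * np.cos(raan)
    lam = M + argp + raan              # mean longitude
    return a, h, k, p, q, lam

# Near-geosynchronous, near-circular example (illustrative values only)
print(classical_to_equinoctial(42164e3, 1e-4, 1e-5, 0.3, 1.2, 0.7))
```

The slowly varying equinoctial elements are exactly what lets a VOP integrator take much larger steps than a Cowell integration of the raw Cartesian state.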
Self-position estimation using terrain shadows for precise planetary landing
NASA Astrophysics Data System (ADS)
Kuga, Tomoki; Kojima, Hirohisa
2018-07-01
In recent years, the investigation of moons and planets has attracted increasing attention in several countries. Furthermore, recently developed landing systems are now expected to reach more scientifically interesting areas close to hazardous terrain, requiring precise landing capabilities within 100 m of the target point. To achieve this, terrain-relative navigation (capable of estimating the position of a lander relative to the target point on the ground surface) is actively being studied as an effective method for achieving highly accurate landings. This paper proposes a self-position estimation method using shadows on the terrain, based on edge extraction with image processing algorithms. The effectiveness of the proposed method is validated through numerical simulations using images generated from a digital elevation model of simulated terrains.
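A crude Python sketch of the shadow edge extraction step (a generic threshold-plus-Sobel stand-in; the paper's actual pipeline and parameters are not specified here):

```python
import numpy as np
from scipy import ndimage

def shadow_edges(image, shadow_percentile=10, sigma=2.0):
    """Two-step stand-in: threshold the darkest pixels as candidate shadows,
    then extract their boundaries with a Sobel gradient filter."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    shadow_mask = smoothed < np.percentile(smoothed, shadow_percentile)
    gx = ndimage.sobel(shadow_mask.astype(float), axis=0)
    gy = ndimage.sobel(shadow_mask.astype(float), axis=1)
    return np.hypot(gx, gy) > 0.5          # edge pixels of shadow regions

terrain_image = np.random.rand(128, 128)   # mock rendered descent-camera frame
edges = shadow_edges(terrain_image)
```

Matching such extracted shadow edges against shadows predicted from a digital elevation model and the known sun direction is what would constrain the lander's position estimate.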
NASA Astrophysics Data System (ADS)
Hu, Mengsu; Wang, Yuan; Rutqvist, Jonny
2015-06-01
One major challenge in modeling groundwater flow within heterogeneous geological media is that of modeling arbitrarily oriented or intersected boundaries and inner material interfaces. The Numerical Manifold Method (NMM) has recently emerged as a promising method for such modeling, in its ability to handle boundaries, its flexibility in constructing physical cover functions (continuous or with gradient jump), its meshing efficiency with a fixed mathematical mesh (covers), its convenience for enhancing approximation precision, and its integration precision, achieved by simplex integration. In this paper, we report on developing and comparing two new approaches for boundary constraints using the NMM, namely a continuous approach with jump functions and a discontinuous approach with Lagrange multipliers. In the discontinuous Lagrange multiplier method (LMM), the material interfaces are regarded as discontinuities which divide mathematical covers into different physical covers. We define and derive stringent forms of Lagrange multipliers to link the divided physical covers, thus satisfying the continuity requirement of the refraction law. In the continuous Jump Function Method (JFM), the material interfaces are regarded as inner interfaces contained within physical covers. We briefly define jump terms to represent the discontinuity of the head gradient across an interface to satisfy the refraction law. We then make a theoretical comparison between the two approaches in terms of global degrees of freedom, treatment of multiple material interfaces, treatment of small area, treatment of moving interfaces, the feasibility of coupling with mechanical analysis and applicability to other numerical methods. The newly derived boundary-constraint approaches are coded into a NMM model for groundwater flow analysis, and tested for precision and efficiency on different simulation examples. We first test the LMM for a Dirichlet boundary and then test both LMM and JFM for an idealized heterogeneous model, comparing the numerical results with analytical solutions. Then we test both approaches for a heterogeneous model and compare the results of hydraulic head and specific discharge. We show that both approaches are suitable for modeling material boundaries, considering high accuracy for the boundary constraints, the capability to deal with arbitrarily oriented or complexly intersected boundaries, and their efficiency using a fixed mathematical mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhardwaj, Shubhendu; Sensale-Rodriguez, Berardi; Xing, Huili Grace
A rigorous theoretical and computational model is developed for the plasma-wave propagation in high electron mobility transistor structures with electron injection from a resonant tunneling diode at the gate. We discuss the conditions in which low-loss and sustainable plasmon modes can be supported in such structures. The developed analytical model is used to derive the dispersion relation for these plasmon modes. A non-linear full-wave hydrodynamic numerical solver is also developed using a finite difference time domain algorithm. The developed analytical solutions are validated via the numerical solution. We also verify previous observations that were based on a simplified transmission line model. It is shown that at high levels of negative differential conductance, plasmon amplification is indeed possible. The proposed rigorous models can enable accurate design and optimization of practical resonant tunnel diode-based plasma-wave devices for terahertz sources, mixers, and detectors, by allowing a precise representation of their coupling when integrated with other electromagnetic structures.
Applications of RNA Indexes for Precision Oncology in Breast Cancer.
Ma, Liming; Liang, Zirui; Zhou, Hui; Qu, Lianghu
2018-05-09
Precision oncology aims to offer the most appropriate treatments to cancer patients mainly based on their individual genetic information. Genomics has provided numerous valuable data on driver mutations and risk loci; however, it remains a formidable challenge to transform these data into therapeutic agents. Transcriptomics describes the multifarious expression patterns of both mRNAs and non-coding RNAs (ncRNAs), which facilitates the deciphering of genomic codes. In this review, we take breast cancer as an example to demonstrate the applications of these rich RNA resources in precision medicine exploration. These include the use of mRNA profiles in triple-negative breast cancer (TNBC) subtyping to inform corresponding candidate targeted therapies; current advancements and achievements of high-throughput RNA interference (RNAi) screening technologies in breast cancer; and microRNAs as functional signatures for defining cell identities and regulating the biological activities of breast cancer cells. We summarize the benefits of transcriptomic analyses in breast cancer management and propose that unscrambling the core signaling networks of cancer may be an important task of multiple-omic data integration for precision oncology. Copyright © 2018 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian
2003-01-01
The closed-loop stability of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The perturbation of controller coefficients resulting from the use of a finite word length (FWL) block-floating-point representation scheme was analyzed. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and the precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced, and a method for computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.
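A minimal sketch of block-floating-point quantization, the representation scheme the paper analyzes; the bit width, the exponent rule, and the example coefficients below are illustrative assumptions:

```python
import numpy as np

def block_float_quantize(x, mantissa_bits=8):
    """Quantize a block of values with one shared exponent and fixed-point
    mantissas: the whole block is scaled by 2**exponent, then each mantissa
    is rounded to `mantissa_bits` bits."""
    x = np.asarray(x, dtype=float)
    exponent = int(np.ceil(np.log2(np.max(np.abs(x)))))   # shared block exponent
    scale = 2.0 ** (mantissa_bits - exponent)
    return np.round(x * scale) / scale, exponent

coeffs = np.array([0.9135, -0.0271, 12.6, 0.0008])        # illustrative controller gains
quantized, shared_exp = block_float_quantize(coeffs, mantissa_bits=8)
print(quantized - coeffs)                                 # coefficient perturbations
```

The printed perturbations are exactly the kind of coefficient errors whose effect on closed-loop pole locations the FWL stability measure is designed to bound.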
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, A.; Acero, J.; Alberdi, B.
High-precision coil current control, stability and ripple content are very important aspects of a stellarator design. The TJ-II coils will be supplied by network-commutated current converters, and therefore the coil currents will contain harmonics which have to be kept to a very low level. An analytical investigation as well as numerous simulations with EMTP, SABER® and other software have been performed in order to predict the harmonic currents and to verify compliance with the specified maximum levels. The calculations and the results are presented.
Nanophotonic particle simulation and inverse design using artificial neural networks.
Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin
2018-06-01
We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
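A toy PyTorch sketch of the two stages described above, with an analytic mock "simulator" standing in for the multilayer-nanoparticle scattering calculation; the network size, training schedule, and spectrum function are all assumptions, not the authors' setup:

```python
import torch
import torch.nn as nn

def toy_spectrum(t):
    """Differentiable mock simulator: layer thicknesses -> 64-point spectrum."""
    wl = torch.linspace(0.4, 0.8, 64)
    return torch.sin(10 * wl[None, :] * t.sum(dim=1, keepdim=True)) ** 2

# Stage 1: train a surrogate network on (design -> spectrum) pairs
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    t = torch.rand(256, 4) * 0.1                  # random designs [um]
    loss = ((net(t) - toy_spectrum(t)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: inverse design by back-propagating to the *input* thicknesses,
# keeping the trained network weights fixed (only `t` is optimized here)
target = toy_spectrum(torch.full((1, 4), 0.06))   # desired spectrum
t = torch.full((1, 4), 0.05, requires_grad=True)
design_opt = torch.optim.Adam([t], lr=1e-3)
for _ in range(500):
    loss = ((net(t) - target) ** 2).mean()
    design_opt.zero_grad(); loss.backward(); design_opt.step()
```

The key point mirrored from the paper is that once the surrogate is trained, the gradient with respect to the design is analytical (via backpropagation), so no finite-difference calls to the slow simulator are needed.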
NASA Astrophysics Data System (ADS)
Merzlaya, Anastasia;
2017-01-01
The heavy-ion programme of the NA61/SHINE experiment at the CERN SPS is expanding to allow precise measurements of exotic particles with decay lengths of a few hundred microns. A Vertex Detector for open charm measurements at the SPS is being constructed by the NA61/SHINE Collaboration to meet the challenges of high spatial resolution of secondary vertices and high track-registration efficiency. This task is addressed by the application of coordinate-sensitive CMOS Monolithic Active Pixel Sensors with an extremely low material budget in the new Vertex Detector. A small-acceptance version of the Vertex Detector is being tested this year; it will later be expanded to a large-acceptance version. Simulation studies will be presented. A method of track reconstruction in the inhomogeneous magnetic field of the Vertex Detector was developed and implemented. Numerical calculations show the possibility of high-precision measurements of strange and multi-strange particles, as well as heavy flavours such as charmed particles, in heavy-ion collisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David
In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard! If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digits of accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the authors take the reader on a wide-ranging tour of modern numerical mathematics, with enough background material so that even readers with little or no training in numerical analysis can follow. Here is a list of just a few of the topics visited: numerical quadrature (i.e., numerical integration), series summation, sequence extrapolation, contour integration, Fourier integrals, high-precision arithmetic, interval arithmetic, symbolic computing, numerical linear algebra, perturbation theory, Euler-Maclaurin summation, global minimization, eigenvalue methods, evolutionary algorithms, matrix preconditioning, random walks, special functions, elliptic functions, Monte-Carlo methods, and numerical differentiation.
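As a flavor of the high-precision tools the book demonstrates, here is a minimal mpmath example (a generic integral, not one of the ten challenge problems):

```python
from mpmath import mp, quad, sin, exp

mp.dps = 60   # sixty decimal digits of working precision

# mpmath's default tanh-sinh (double exponential) quadrature converges
# rapidly for this smooth integrand; the exact value is 1/2.
value = quad(lambda x: exp(-x) * sin(x), [0, mp.inf])
print(value)
```

Raising mp.dps and re-running is the standard way to confirm that all printed digits are trustworthy, in the spirit of the error-bounding techniques the book advocates.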
Mazzarella, Luca
2018-01-01
On 8 and 9 February 2018, the IFOM-IEO campus in Milan hosted the Milan summit on Precision Medicine, which gathered clinical and translational research experts from academia, industry and regulatory bodies to discuss the state of the art of precision medicine in Europe. The meeting was pervaded by a generalised feeling of excitement for a field that is perceived to be technologically mature for the transition into clinical routine but still hampered by numerous obstacles of a methodological, ethical, regulatory and possibly cultural nature. Through lively discussions, the attendees tried to identify realistic ways to implement a technology-rich precision approach to cancer patients.
Alessandrini, Marco; Chaudhry, Mamoonah; Dodgen, Tyren M; Pepper, Michael S
2016-10-01
In a move indicative of the enthusiastic support of precision medicine, the U.S. President Barack Obama announced the Precision Medicine Initiative in January 2015. The global precision medicine ecosystem is, thus, receiving generous support from the United States ($215 million), and numerous other governments have followed suit. In the context of precision medicine, drug treatment and prediction of its outcomes have been important for nearly six decades in the field of pharmacogenomics. The field offers an elegant solution for minimizing the effects and occurrence of adverse drug reactions (ADRs). The Clinical Pharmacogenetics Implementation Consortium (CPIC) plays an important role in this context, and it aims at specifically guiding the translation of clinically relevant and evidence-based pharmacogenomics research. In this forward-looking analysis, we make particular reference to several of the CPIC guidelines and their role in guiding the treatment of highly relevant diseases, namely cardiovascular disease, major depressive disorder, cancer, and human immunodeficiency virus, with a view to predicting and managing ADRs. In addition, we provide a list of the top 10 crosscutting opportunities and challenges facing the fields of precision medicine and pharmacogenomics, which have broad applicability independent of the drug class involved. Many of these opportunities and challenges pertain to infrastructure, study design, policy, and science culture in the early 21st century. Ultimately, rational pharmacogenomics study design and the acquisition of comprehensive phenotypic data that proportionately match the genomics data should be an imperative as we move forward toward global precision medicine.
The instanton method and its numerical implementation in fluid mechanics
NASA Astrophysics Data System (ADS)
Grafke, Tobias; Grauer, Rainer; Schäfer, Tobias
2015-08-01
A precise characterization of structures occurring in turbulent fluid flows at high Reynolds numbers is one of the last open problems of classical physics. In this review we discuss recent developments related to the application of instanton methods to turbulence. Instantons are saddle point configurations of the underlying path integrals. They are equivalent to minimizers of the related Freidlin-Wentzell action and known to be able to characterize rare events in such systems. While there is an impressive body of work concerning their analytical description, this review focuses on the question of how to compute these minimizers numerically. In a short introduction we present the relevant mathematical and physical background before discussing the stochastic Burgers equation in detail. We present algorithms to compute instantons numerically via an efficient solution of the corresponding Euler-Lagrange equations. A second focus is the discussion of a recently developed numerical filtering technique that allows one to extract instantons from direct numerical simulations. We then present modifications of the algorithms to make them efficient when applied to two- or three-dimensional (2D or 3D) fluid dynamical problems. We illustrate these ideas using the 2D Burgers equation and the 3D Navier-Stokes equations.
Acosta, Luis Enrique; de Lacy, M Clara; Ramos, M Isabel; Cano, Juan Pedro; Herrera, Antonio Manuel; Avilés, Manuel; Gil, Antonio José
2018-04-27
The aim of this paper is to study the behavior of an earth fill dam, analyzing the deformations determined by high-precision geodetic techniques and those obtained by the Finite Element Method (FEM). A large number of control points were established around the area of the dam, and their displacements were measured over several periods. In this study, high-precision leveling and GNSS (Global Navigation Satellite System) techniques were used to monitor vertical and horizontal displacements, respectively. Seven surveys were carried out: February and July 2008, March and July 2013, August 2014, September 2015 and September 2016. Deformations were predicted taking into account the general characteristics of an earth fill dam. A comparative evaluation of the results derived from predicted (FEM) and observed deformations shows the differences on average being 20 cm for vertical displacements and 6 cm for horizontal displacements at the crest. These differences are probably due to the simplifications assumed during the FEM modeling process: critical sections are considered homogeneous along their length, and the properties of the materials were established according to the general characteristics of an earth fill dam, taken from the normative and from similar studies in the country. This could also be due to the geodetic control points being anchored in the superficial layer of the slope when the construction of the dam was finished.
Using confidence intervals to evaluate the focus alignment of spectrograph detector arrays.
Sawyer, Travis W; Hawkins, Kyle S; Damento, Michael
2017-06-20
High-resolution spectrographs extract detailed spectral information of a sample and are frequently used in astronomy, laser-induced breakdown spectroscopy, and Raman spectroscopy. These instruments employ dispersive elements such as prisms and diffraction gratings to spatially separate different wavelengths of light, which are then detected by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) detector array. Precise alignment along the optical axis (focus position) of the detector array is critical to maximize the instrumental resolution; however, traditional approaches of scanning the detector through focus lack a quantitative measure of precision, limiting the repeatability and relying on one's experience. Here we propose a method to evaluate the focus alignment of spectrograph detector arrays by establishing confidence intervals to measure the alignment precision. We show that propagation of uncertainty can be used to estimate the variance in an alignment, thus providing a quantitative and repeatable means to evaluate the precision and confidence of an alignment. We test the approach by aligning the detector array of a prototype miniature echelle spectrograph. The results indicate that the procedure effectively quantifies alignment precision, enabling one to objectively determine when an alignment has reached an acceptable level. This quantitative approach also provides a foundation for further optimization, including automated alignment. Furthermore, the procedure introduced here can be extended to other alignment techniques that rely on numerically fitting data to a model, providing a general framework for evaluating the precision of alignment methods.
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...
2018-02-26
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or the freezing curve used by some codes.
NASA Astrophysics Data System (ADS)
Skibinski, Jakub; Caban, Piotr; Wejrzanowski, Tomasz; Kurzydlowski, Krzysztof J.
2014-10-01
In the present study, numerical simulations of the epitaxial growth of gallium nitride in the AIX-200/4RF-S Metal Organic Vapor Phase Epitaxy reactor are addressed. Epitaxial growth means crystal growth that progresses while inheriting the laminar structure and orientation of the substrate crystals. One of the technological problems is obtaining a homogeneous growth rate over the main deposit area. Since there are many factors influencing the reaction over the crystal area, such as temperature, pressure, gas flow and reactor geometry, it is difficult to design an optimal process. Because it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during crystal growth, modeling is the only way to understand the process precisely. Numerical simulations make it possible to understand the epitaxial process by calculating the heat and mass transfer distribution during the growth of gallium nitride. Including chemical reactions in the numerical model makes it possible to calculate the growth rate on the substrate and to estimate the optimal process conditions for obtaining the most homogeneous product.
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
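Working PSLQ implementations are available in high-precision libraries; the snippet below uses mpmath's pslq routine (not the authors' original code) to recover the defining relation of the golden ratio. High working precision is set first, since integer-relation detection is only meaningful for accurately known inputs.

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 50                      # 50 decimal digits of working precision

phi = (1 + sqrt(5)) / 2          # golden ratio, satisfying phi^2 - phi - 1 = 0
x = [mpf(1), phi, phi**2]

# Search for integers a with a[0]*x[0] + a[1]*x[1] + a[2]*x[2] = 0
print(pslq(x))                   # [1, 1, -1], up to overall sign
```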
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism (gradual passage opening, wall friction and leakage) for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which was not observed in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
Carbon-14 wiggle-match dating of peat deposits: advantages and limitations
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes
2004-02-01
Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates with the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations. We assess whether wiggle-matching is also a feasible strategy for these parts of the 14C calibration curve. High-precision chronologies, such as obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.
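A minimal numerical wiggle-match can be sketched as a chi-square scan: the dated series, assumed here to have a known constant accumulation rate, is slid along the calendar axis and scored against a calibration curve. The arrays cal_age, cal_c14, and cal_err are placeholders for a real calibration curve such as IntCal; this illustrates the idea, not the authors' implementation.

```python
import numpy as np

def wiggle_match(sample_c14, sample_err, offsets_yr,
                 cal_age, cal_c14, cal_err):
    """Slide the series along the calendar axis; return best fit and chi2 profile.

    offsets_yr: calendar-age offsets of each dated level from the top sample,
    implied by depth and an assumed constant accumulation rate.
    """
    tops = np.arange(cal_age.min(), cal_age.max() - offsets_yr.max())
    chi2 = np.empty(len(tops))
    for k, top in enumerate(tops):
        ages = top + offsets_yr
        curve = np.interp(ages, cal_age, cal_c14)   # curve 14C age at each level
        err = np.interp(ages, cal_age, cal_err)
        chi2[k] = np.sum((sample_c14 - curve) ** 2
                         / (sample_err ** 2 + err ** 2))
    return tops[np.argmin(chi2)], chi2
```

The width of the chi-square minimum is what yields the precision estimate: a flat minimum over a featureless stretch of the calibration curve signals that wiggle-matching adds little there.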
Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin
2011-01-01
Many children have significant mathematical learning disabilities (MLD, or dyscalculia) despite adequate schooling. The current study hypothesizes that MLD partly results from a deficiency in the Approximate Number System (ANS) that supports nonverbal numerical representations across species and throughout development. In this study of 71 ninth graders, it is shown that students with MLD have significantly poorer ANS precision than students in all other mathematics achievement groups (low, typically, and high achieving), as measured by psychophysical assessments of ANS acuity (w) and of the mappings between ANS representations and number words (cv). This relation persists even when controlling for domain-general abilities. Furthermore, this ANS precision does not differentiate low-achieving from typically achieving students, suggesting an ANS deficit that is specific to MLD. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
Precisely cyclic sand: self-organization of periodically sheared frictional grains.
Royer, John R; Chaikin, Paul M
2015-01-06
The disordered static structure and chaotic dynamics of frictional granular matter have occupied scientists for centuries, yet there are few organizational principles or guiding rules for this highly hysteretic, dissipative material. We show that cyclic shear of a granular material leads to dynamic self-organization into several phases with different spatial and temporal order. Using numerical simulations, we present a phase diagram in strain-friction space that shows chaotic dispersion, crystal formation, vortex patterns, and most unusually a disordered phase in which each particle precisely retraces its unique path. However, the system is not reversible. Rather, the trajectory of each particle, and the entire frictional, many-degrees-of-freedom system, organizes itself into a limit cycle absorbing state. Of particular note is the fact that the cyclic states are spatially disordered, whereas the ordered states are chaotic.
The Navy Precision Optical Interferometer: an update
NASA Astrophysics Data System (ADS)
Armstrong, J. T.; Baines, Ellyn K.; Schmitt, Henrique R.; Restaino, Sergio R.; Clark, James H.; Benson, James A.; Hutter, Donald J.; Zavala, Robert T.; van Belle, Gerard T.
2016-08-01
We describe the current status of the Navy Precision Optical Interferometer (NPOI), including developments since the last SPIE meeting. The NPOI group has added stations as far as 250m from the array center and added numerous infrastructure improvements. Science programs include stellar diameters and limb darkening, binary orbits, Be star disks, exoplanet host stars, and progress toward high-resolution stellar surface imaging. Technical and infrastructure projects include on-sky demonstrations of baseline bootstrapping with six array elements and of the VISION beam combiner, control system updates, integration of the long delay lines, and updated firmware for the Classic beam combiner. Our plans to add up to four 1.8 m telescopes are no longer viable, but we have recently acquired separate funding for adding three 1 m AO-equipped telescopes and an infrared beam combiner to the array.
Spatial control of recollision wave packets with attosecond precision.
Kitzler, Markus; Lezius, Matthias
2005-12-16
We propose orthogonally polarized two-color laser pulses to steer tunneling electrons with attosecond precision around the ion core. We numerically demonstrate that the angles of birth and recollision, the recollision energy, and the temporal structure of the recolliding wave packet can be controlled without stabilization of the carrier-envelope phase of the laser, and that the wave packet's properties can be described by classical relations for a point charge. This establishes unique mapping between parameters of the laser field and attributes of the recolliding wave packet. The method is capable of probing ionic wave packet dynamics with attosecond resolution from an adjustable direction and might be used as an alternative to aligning molecules. Shaping the properties of the recollision wave packet by controlling the laser field may also provide new routes for improvement of attosecond pulse generation via high harmonic radiation.
NASA Astrophysics Data System (ADS)
Silaev, A. A.; Romanov, A. A.; Vvedenskii, N. V.
2018-03-01
In the numerical solution of the time-dependent Schrödinger equation by grid methods, an important problem is the reflection and wrap-around of the wave packets at the grid boundaries. Non-optimal absorption of the wave function can lead to large artifacts in the results of numerical simulations. We propose a new method for the construction of complex absorbing potentials for wave suppression at the grid boundaries. The method is based on the use of a multi-hump imaginary potential which contains a sequence of smooth and symmetric humps whose widths and amplitudes are optimized for wave absorption in different spectral intervals. We show that this can ensure a high efficiency of absorption in a wide range of de Broglie wavelengths, which includes wavelengths comparable to the width of the absorbing layer. Therefore, this method can be used for high-precision simulations of various phenomena where strong spreading of the wave function takes place, including the phenomena accompanying the interaction of strong fields with atoms and molecules. The efficiency of the proposed method is demonstrated in the calculation of the spectrum of high-order harmonics generated during the interaction of hydrogen atoms with an intense infrared laser pulse.
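For orientation, the snippet below shows the standard way an imaginary (complex absorbing) potential enters a split-step solution of the free time-dependent Schrödinger equation, here with a single smooth hump at each grid edge rather than the optimized multi-hump sequence the paper proposes; all grid and packet parameters are illustrative.

```python
import numpy as np

n, L, dt = 1024, 200.0, 0.05
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

width = 20.0                                        # absorbing-layer width
edge = np.maximum(np.abs(x) - (L / 2 - width), 0.0) / width
eta = 0.5 * np.sin(0.5 * np.pi * edge) ** 2         # one smooth hump per edge

psi = np.exp(-x**2 / 8 + 2j * x)                    # packet moving right, k ~ 2
for _ in range(2000):
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    psi *= np.exp(-dt * eta)                        # imaginary-potential damping
print(np.sum(np.abs(psi) ** 2) * dx)                # norm decays as the packet exits
```

In the multi-hump construction, several such humps with different widths and amplitudes are chained so that each spectral band meets an efficiently absorbing layer.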
Network-induced chaos in integrate-and-fire neuronal ensembles.
Zhou, Douglas; Rangan, Aaditya V; Sun, Yi; Cai, David
2009-09-01
It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulsed-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which can overcome difficulties encountered by traditional methods for these nonsmooth dynamical systems with degeneracy induced by, e.g., refractoriness of neurons.
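The paper's stable algorithm for nonsmooth integrate-and-fire dynamics is specific to that setting and is not reproduced here; for reference, the classical Benettin two-trajectory estimate of the largest Lyapunov exponent of a smooth map looks like the following sketch.

```python
import numpy as np

def largest_lyapunov(step, x0, n_steps, d0=1e-8):
    """Benettin estimate of the largest Lyapunov exponent of a map `step`."""
    x = np.asarray(x0, dtype=float)
    v = np.random.randn(*x.shape)
    y = x + d0 * v / np.linalg.norm(v)      # nearby companion trajectory
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)          # renormalize the separation
    return log_sum / n_steps                # exponent per application of `step`

# e.g. the logistic map at r = 4 has exponent ln 2 ~ 0.693
print(largest_lyapunov(lambda x: 4 * x * (1 - x), np.array([0.3]), 100_000))
```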
NASA Astrophysics Data System (ADS)
Calabia, Andres; Jin, Shuanggen
2017-02-01
The thermospheric mass density variations and the thermosphere-ionosphere coupling during geomagnetic storms are not clear due to the lack of observables and large uncertainty in the models. Although accelerometers on board Low-Earth-Orbit (LEO) satellites can measure non-gravitational accelerations and thereby yield thermospheric mass density variations with unprecedented detail, their measurements are not always available (e.g., for the March 2013 geomagnetic storm). In order to cover accelerometer data gaps of the Gravity Recovery and Climate Experiment (GRACE), we estimate thermospheric mass densities by numerical differentiation of GRACE-determined precise orbit ephemeris (POE) for the period 2011-2016. Our results show good correlation with accelerometer-based mass densities, and a better estimation than the NRLMSISE00 empirical model. Furthermore, we statistically analyze the differences from accelerometer-based densities, and study the March 2013 geomagnetic storm response. The thermospheric density enhancements at the polar regions on 17 March 2013 are clearly represented by the POE-based measurements. Although our results show that density variations correlate better with the Dst and K-derived geomagnetic indices overall, the auroral electrojet activity index AE as well as the merging electric field Em show better agreement at high latitudes for the March 2013 geomagnetic storm. On the other hand, low-latitude variations are better represented by the Dst index. With the increasing resolution and accuracy of Precise Orbit Determination (POD) products and LEO satellites, the straightforward technique of determining non-gravitational accelerations and thermospheric mass densities through numerical differentiation of POE promises good applications for the upper atmosphere research community.
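The core numerical step, twice-differentiating the orbit ephemeris and removing modeled gravitational accelerations, can be sketched as below. Here gravity_model is a placeholder for a real gravity field model, and production processing would first fit or smooth the ephemeris rather than difference raw positions.

```python
import numpy as np

def nongrav_accel(positions, dt, gravity_model):
    """positions: (N, 3) array of orbit positions sampled every dt seconds."""
    # Central second difference: a_i ~ (r_{i+1} - 2 r_i + r_{i-1}) / dt^2
    total = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt**2
    grav = np.array([gravity_model(r) for r in positions[1:-1]])
    return total - grav
```

The non-gravitational residual is what a drag model then converts into thermospheric mass density.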
Haffert, S Y
2016-08-22
Current wavefront sensors for high resolution imaging have either a large dynamic range or a high sensitivity. A new kind of wavefront sensor is developed which can have both: the Generalised Optical Differentiation wavefront sensor. This new wavefront sensor is based on the principles of optical differentiation by amplitude filters. We have extended the theory behind linear optical differentiation and generalised it to nonlinear filters. We used numerical simulations and laboratory experiments to investigate the properties of the generalised wavefront sensor. With this we created a new filter that can decouple the dynamic range from the sensitivity. These properties make it suitable for adaptive optic systems where a large range of phase aberrations have to be measured with high precision.
NASA Astrophysics Data System (ADS)
Berkels, Benjamin; Wirth, Benedikt
2017-09-01
Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.
Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo
2015-01-01
Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited to massive particles, able to solve quickly and precisely for their orbits in highly dynamical backgrounds. The new refinement criterion forces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation has the advantage of not altering the physical evolution of the MBHs, while accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
Interfacial gauge methods for incompressible fluid dynamics
Saye, R.
2016-06-10
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena.
Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations
NASA Astrophysics Data System (ADS)
Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.
2016-12-01
Multi-constellation GNSS (e.g., GPS, GLONASS, Galileo, and BeiDou) brings great opportunities and challenges for the real-time retrieval of atmospheric parameters in support of numerical weather prediction (NWP) nowcasting and severe-weather monitoring. In this study, observations from the different GNSS are combined for atmospheric parameter retrieval based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradient (especially high-resolution gradient estimates) and slant total delay (STD), are carefully analyzed and evaluated using VLBI, radiosonde, water vapor radiometer, and numerical weather model data to independently validate the performance of each GNSS and to demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. The results show that multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability and better distribution, which would benefit atmospheric sounding systems, especially nowcasting of extreme weather.
Gravitational geons in asymptotically anti-de Sitter spacetimes
NASA Astrophysics Data System (ADS)
Martinon, Grégoire; Fodor, Gyula; Grandclément, Philippe; Forgács, Peter
2017-06-01
We report on numerical constructions of fully non-linear geons in asymptotically anti-de Sitter (AdS) spacetimes in four dimensions. Our approach is based on 3 + 1 formalism and spectral methods in a gauge combining maximal slicing and spatial harmonic coordinates. We are able to construct several families of geons seeded by different families of spherical harmonics. We can reach unprecedentedly high amplitudes, with mass of order ∼1/2 of the AdS length, and with deviations of the order of 50% compared to third order perturbative approaches. The consistency of our results with numerical resolution is carefully checked and we give extensive precision monitoring techniques. All global quantities, such as mass and angular momentum, are computed using two independent frameworks that agree with each other at the 0.1% level. We also provide strong evidence for the existence of ‘excited’ (i.e. with one radial node) geon solutions of Einstein equations in asymptotically AdS spacetimes by constructing them numerically.
Interferometric correction system for a numerically controlled machine
Burleson, Robert R.
1978-01-01
An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, for example, for a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool which is being moved by command pulses to a positioning system to position the tool. The correction system compares the commanded position as indicated by a command pulse train applied to the positioning system with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position is leading in comparison to the commanded position, pulses are deleted from the pulse train where the advance error exceeds the preselected error magnitude to correct the position error of the tool relative to the commanded position.
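A hedged sketch of the described add/delete logic follows; read_interferometer and the pulse bookkeeping are illustrative placeholders for the patented hardware, and a real controller would filter the error signal.

```python
def corrected_pulse_counts(command_pulses, read_interferometer,
                           pulse_size, max_error):
    """Yield the number of pulses to forward for each commanded pulse."""
    commanded = 0.0
    for _ in command_pulses:
        commanded += pulse_size          # position implied by the command train
        error = commanded - read_interferometer()
        if error > max_error:            # tool lags: add a pulse to catch up
            yield 2
        elif error < -max_error:         # tool leads: delete this pulse
            yield 0
        else:
            yield 1
```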
Measurement accuracy of FBG used as a surface-bonded strain sensor installed by adhesive.
Xue, Guangzhe; Fang, Xinqiu; Hu, Xiukun; Gong, Libin
2018-04-10
Material and dimensional properties of surface-bonded fiber Bragg gratings (FBGs) can distort strain measurement, thereby lowering the measurement accuracy. To accurately assess measurement precision and correct the obtained strain, a new model that considers the reinforcement effects of the adhesive and the measured object is proposed in this study and is verified numerically to be sufficiently accurate. Meanwhile, a theoretical strain correction factor is obtained, which numerical and experimental results show to be significantly sensitive to the recoating material and the bonding length. It is also concluded that a short grating length, as well as a thin but large-area (preferably covering the whole FBG) adhesive, can enhance the correction precision.
NASA Astrophysics Data System (ADS)
Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.
1996-05-01
A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation by using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data, and data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
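The windowing comparison at the heart of such a model is easy to reproduce in outline. The snippet below, with an arbitrary test tone and record length, applies Hanning, Blackman, and Gaussian windows before locating the spectral peak; the Gaussian width sigma is an assumed value.

```python
import numpy as np

fs, n = 1000.0, 4096                      # sample rate (Hz), record length
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 123.4 * t)    # tone deliberately between FFT bins

sigma = n / 8.0
windows = {
    "hanning": np.hanning(n),
    "blackman": np.blackman(n),
    "gaussian": np.exp(-0.5 * ((np.arange(n) - n / 2) / sigma) ** 2),
}
for name, w in windows.items():
    spectrum = np.abs(np.fft.rfft(signal * w))
    peak_hz = np.argmax(spectrum) * fs / n    # peak to the nearest bin
    print(f"{name:8s} peak at {peak_hz:7.2f} Hz")
```

Sub-bin precision then comes from interpolating around the windowed peak, which is where the leakage properties of each window matter.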
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
Comment on "Modified quantum-speed-limit bounds for open quantum dynamics in quantum channels"
NASA Astrophysics Data System (ADS)
Mirkin, Nicolás; Toscano, Fabricio; Wisniacki, Diego A.
2018-04-01
In a recent paper [Phys. Rev. A 95, 052118 (2017), 10.1103/PhysRevA.95.052118], the authors claim that our criticism, in Phys. Rev. A 94, 052125 (2016), 10.1103/PhysRevA.94.052125, of some quantum speed limit bounds for open quantum dynamics that appeared recently in the literature is invalid. According to the authors, the problem with our analysis would be generated by an artifact of the finite-precision numerical calculations. We analytically show here that it is not possible to have any inconsistency associated with the numerical precision of the calculations. Therefore, our criticism of the quantum speed limit bounds remains valid.
Is "Two" a Plural Marker in Early Child Language?
ERIC Educational Resources Information Center
Barner, David; Lui, Toni; Zapf, Jennifer
2012-01-01
Is "two" ever a plural marker in child language? By some accounts, children bootstrap the distinction between the words "one" and "two" by observing their use with singular-plural marking ("one ball/two balls"). Others argue that the numeral "two" marks plurality before children begin using numerals to denote precise quantities. We tested the…
HARM processing techniques for MEMS and MOEMS devices using bonded SOI substrates and DRIE
NASA Astrophysics Data System (ADS)
Gormley, Colin; Boyle, Anne; Srigengan, Viji; Blackstone, Scott C.
2000-08-01
Silicon-on-Insulator (SOI) MEMS devices (1) are rapidly gaining popularity in realizing numerous solutions for MEMS, especially in the optical and inertia application fields. BCO recently developed a DRIE trench etch, utilizing the Bosch process, and refill process for high voltage dielectric isolation integrated circuits on thick SOI substrates. In this paper we present our most recently developed DRIE processes for MEMS and MOEMS devices. These advanced etch techniques are initially described and their integration with silicon bonding demonstrated. This has enabled process flows that are currently being utilized to develop optical router and filter products for fiber optics telecommunications and high precision accelerometers.
Ultrasonically Assisted Cutting of Bio-tissues in Microtomy
NASA Astrophysics Data System (ADS)
Wang, Dong; Roy, Anish; Silberschmidt, Vadim V.
Modern-day histology of bio-tissues for supporting stratified medicine diagnoses requires high-precision cutting to ensure high quality extremely thin specimens used in analysis. Additionally, the cutting quality is significantly affected by a wide variety of soft and hard tissues in the samples. This paper deals with development of a next generation of microtome employing introduction of controlled ultrasonic vibration to realise a hybrid cutting process of bio-tissues. The study is based on a combination of advanced experimental and numerical (finite-element) studies of multi-body dynamics of a cutting system. The quality of cut samples produced with the prototype is compared with the state-of-the-art.
Fast and precise processing of material by means of an intensive electron beam
NASA Astrophysics Data System (ADS)
Beisswenger, S.
1984-07-01
For engraving a picture-carrying screen of cells into the copper surface of gravure cylinders, an electron beam system was developed. Numerical computations of the power density in the image planes of the electron beam determined the design of the electron-optical assembly. A highly stable electron beam of high power density is generated by a ribbon-like cathode. A system of magnetic lenses is used for fast control of the engraving processes and for dynamic changing of the electron-optical demagnification. The electron beam engraving system is capable of engraving up to 150,000 gravure cells per second.
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Guardincerri, E.; Roy, M.; Dichter, M.
2015-12-01
As part of the CO2 reservoir muon imaging project headed by the Pacific Northwest National Laboratory (PNNL) under the U.S. Department of Energy Subsurface Technology and Engineering Research, Development, and Demonstration (SubTER) initiative, Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) plan to leverage the recently decommissioned and easily accessible Tunnel Vault on LANL property to test the complementary modeling strengths of muon radiography and high-precision gravity surveys. This tunnel extends roughly 300 feet into the hillside, with a maximum depth below the surface of approximately 300 feet. We will deploy LANL's Mini Muon Tracker (MMT), a detector consisting of 576 drift tubes arranged in alternating parallel planes of orthogonally oriented tubes. This detector is capable of precise determination of trajectories for incoming muons with angular resolution of a few milliradians. We will deploy the MMT at several locations within the tunnel, to obtain numerous crossing muon trajectories and permit a 3D tomographic image of the overburden to be built. In the same project, UNM will use a Scintrex digital gravimeter to collect high-precision gravity data from a dense grid on the hill slope above the tunnel as well as within the tunnel itself. This will provide both direct and differential gravity readings for density modeling of the overburden. By leveraging detailed geologic knowledge of the canyon and the lithology overlying the tunnel, as well as the structural elements, elevations and blueprints of the tunnel itself, we will evaluate the muon and gravity data both independently and in a simultaneous, joint inversion to build a combined 3D density model of the overburden.
NASA Astrophysics Data System (ADS)
Ramezani, Jahandar; Clyde, William; Wang, Tiantian; Johnson, Kirk; Bowring, Samuel
2016-04-01
Reversals in the Earth's magnetic polarity are geologically abrupt events of global magnitude, which makes them ideal timelines for stratigraphic correlation across a variety of depositional environments, especially where diagnostic marine fossils are absent. Accurate and precise calibration of the Geomagnetic Polarity Timescale (GPTS) is thus essential to the reconstruction of Earth history and to resolving the mode and tempo of biotic and environmental change in deep time. The Late Cretaceous - Paleocene GPTS is of particular interest as it encompasses a critical period of Earth history marked by the Cretaceous greenhouse climate, the peak of dinosaur diversity, the end-Cretaceous mass extinction and its paleoecological aftermath. Absolute calibration of the GPTS has traditionally been based on sea-floor spreading magnetic anomaly profiles combined with local magnetostratigraphic sequences for which a numerical age model could be established by interpolation between an often limited number of 40Ar/39Ar dates from intercalated volcanic ash deposits. Although the Neogene part of the GPTS has been adequately calibrated using cyclostratigraphy-based astrochronological schemes, the application of these approaches to pre-Neogene parts of the timescale has been complicated, given the uncertainties of the orbital models and the chaotic behavior of the solar system this far back in time. Here we present refined chronostratigraphic frameworks based on high-precision U-Pb geochronology of ash beds from the Western Interior Basin of North America and the Songliao Basin of Northeast China that place tight temporal constraints on the Late Cretaceous to Paleocene GPTS, either directly or by testing their astrochronological underpinnings. Further application of high-precision radioisotope geochronology and calibrated astrochronology promises a complete and robust Cretaceous-Paleogene GPTS, entirely independent of sea-floor magnetic anomaly profiles.
Approximate number sense correlates with math performance in gifted adolescents.
Wang, Jinjing Jenny; Halberda, Justin; Feigenson, Lisa
2017-05-01
Nonhuman animals, human infants, and human adults all share an Approximate Number System (ANS) that allows them to imprecisely represent number without counting. Among humans, people differ in the precision of their ANS representations, and these individual differences have been shown to correlate with symbolic mathematics performance in both children and adults. For example, children with specific math impairment (dyscalculia) have notably poor ANS precision. However, it remains unknown whether ANS precision contributes to individual differences only in populations of people with lower or average mathematical abilities, or whether this link also is present in people who excel in math. Here we tested non-symbolic numerical approximation in 13- to 16-year-old gifted children enrolled in a program for talented adolescents (the Center for Talented Youth). We found that in this high achieving population, ANS precision significantly correlated with performance on the symbolic math portion of two common standardized tests (SAT and ACT) that typically are administered to much older students. This relationship was robust even when controlling for age, verbal performance, and reaction times in the approximate number task. These results suggest that the Approximate Number System is linked to symbolic math performance even at the top levels of math performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F
2012-09-14
We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
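The compression idea can be illustrated in two dimensions: a smooth two-variable function tabulated on a grid is stored through its truncated SVD, the same principle the authors apply to the six-dimensional pair function. The grid and tolerance below are arbitrary.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 200)
f = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # smooth two-variable function

u, s, vt = np.linalg.svd(f, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
approx = (u[:, :rank] * s[:rank]) @ vt[:rank]

# prints the retained rank and the maximum pointwise error of the compression
print(rank, np.abs(f - approx).max())
```

A small retained rank relative to the grid size is exactly the storage saving that makes the six-dimensional computation feasible.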
NASA Astrophysics Data System (ADS)
Zuo, Heng E.; Yao, Youwei; Chalifoux, Brandon D.; DeTienne, Michael D.; Heilmann, Ralf K.; Schattenburg, Mark L.
2017-08-01
Slumping (or thermal-shaping) of thin glass sheets onto high precision mandrels was used successfully by NASA Goddard Space Flight Center to fabricate the NuSTAR telescope. But this process requires long thermal cycles and produces mid-range spatial frequency errors due to the anti-stick mandrel coatings. Over the last few years, we have designed and tested non-contact horizontal slumping of round flat glass sheets floating on thin layers of nitrogen between porous air-bearings using fast position control algorithms and precise fiber sensing techniques during short thermal cycles. We recently built a finite element model with ADINA to simulate the viscoelastic behavior of glass during the slumping process. The model utilizes fluid-structure interaction (FSI) to understand the deformation and motion of glass under the influence of air flow. We showed that for the 2D axisymmetric model, experimental and numerical approaches have comparable results. We also investigated the impact of bearing permeability on the resulting shape of the wafers. A novel vertical slumping set-up is also under development to eliminate the undesirable influence of gravity. Progress towards generating mirrors for good angular resolution and low mid-range spatial frequency errors is reported.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
An accurate real-time model of maglev planar motor based on compound Simpson numerical integration
NASA Astrophysics Data System (ADS)
Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi
2017-05-01
To realize high-speed and precise control of a maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover, and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this approach is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.
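For reference, a generic compound (composite) Simpson rule is sketched below with an arbitrary test integrand; the paper's actual integrand would be the Lorentz force density over the corner coil segment.

```python
import numpy as np

def compound_simpson(f, a, b, n):
    """Integrate f on [a, b] with n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # odd-index nodes get weight 4, interior even-index nodes get weight 2
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# Example: integrates sin on [0, pi]; the exact value is 2.
print(compound_simpson(np.sin, 0.0, np.pi, 64))
```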
Back-support large laser mirror unit: mounting modeling and analysis
NASA Astrophysics Data System (ADS)
Wang, Hui; Zhang, Zheng; Long, Kai; Liu, Tianye; Li, Jun; Liu, Changchun; Xiong, Zhao; Yuan, Xiaodong
2018-01-01
In a high-power laser system, the surface wavefront of large optics is closely linked to the optics' structural design and mounting method. The back-support transport mirror design is presently being investigated in China's high-power laser system as a means to hold the optical component firmly while minimizing the distortion of its reflecting surface. We have proposed a comprehensive analytical framework integrating numerical modeling and precise metrology for evaluating the mirror's mounting performance, treating the surface distortion as a key decision variable. The combination of numerical simulation and field tests demonstrates that the comprehensive analytical framework provides a detailed and accurate approach to evaluate the performance of the transport mirror. It is also verified that the back-support transport mirror is effectively compatible with state-of-the-art optical quality specifications. This study will pave the way for future research to solidify the design of back-support large laser optics in China's next-generation inertial confinement fusion facility.
(3+1)D hydrodynamic simulation of relativistic heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schenke, Bjoern; Jeon, Sangyong; Gale, Charles
2010-07-15
We present MUSIC, an implementation of the Kurganov-Tadmor algorithm for relativistic 3+1 dimensional fluid dynamics in heavy-ion collision scenarios. This Riemann-solver-free, second-order, high-resolution scheme is characterized by a very small numerical viscosity and its ability to treat shocks and discontinuities very well. We also incorporate a sophisticated algorithm for the determination of the freeze-out surface using a three-dimensional triangulation of the hypersurface. Implementing a recent lattice-based equation of state, we compute p_T spectra and pseudorapidity distributions for Au+Au collisions at √s = 200 GeV and present results for the anisotropic flow coefficients v_2 and v_4 as a function of both p_T and pseudorapidity η. We were able to determine v_4 with high numerical precision, finding that it does not strongly depend on the choice of initial condition or equation of state.
Overview of the new capabilities of TORIC-v6 and comparison with TORIC-v5
NASA Astrophysics Data System (ADS)
Bilato, R.; Brambilla, M.; Bertelli, N.
2016-10-01
Since its release, version 5 (v5) of the full-wave TORIC code, characterized by an optimized parallelized solver for its routine use in the TRANSP package, has been improved in many technical respects, e.g., the plasma-vacuum transition and the full-spectrum antenna modeling. For the WPCD benchmark cases, good agreement between the new version, v6, and v5 is found. The major improvement, however, has been made in interfacing TORIC-v6 with the Fokker-Planck SSFPQL solver to account for the back-reaction of ICRF and NBI heating on the wave propagation and absorption. Special algorithms have been developed for SSFPQL to ensure numerical precision at high pitch-angle resolution and to evaluate the generalized dispersion function directly from the numerical solution. Care has been taken to automate the non-linear loop between TORIC-v6 and SSFPQL. In v6 the description of wave absorption at high harmonics has been revised and applied to DEMO. For high-harmonic regimes, a comparison with AORSA is ongoing.
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions.
The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Ran; Du, Jiulin, E-mail: jiulindu@aliyun.com
2015-08-15
We study the time behavior of the Fokker–Planck equation in Zwanzig's rule (the backward Itô rule) based on the Langevin equation of Brownian motion with an anomalous diffusion in a complex medium. The diffusion coefficient is a function in momentum space and follows a generalized fluctuation–dissipation relation. We obtain the precise time-dependent analytical solution of the Fokker–Planck equation; at long times the solution approaches a stationary power-law distribution in nonextensive statistics. As a test, we have numerically demonstrated the accuracy and validity of the time-dependent solution. - Highlights: • The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion is found. • The anomalous diffusion satisfies a generalized fluctuation–dissipation relation. • At long times the time-dependent solution approaches a power-law distribution in nonextensive statistics. • Numerically we have demonstrated the accuracy and validity of the time-dependent solution.
Testing and Validating Gadget2 for GPUs
NASA Astrophysics Data System (ADS)
Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.
2013-01-01
We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.
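The single- versus extended-precision question can be illustrated with a tiny experiment: naive sequential accumulation drifts visibly in float32, which is the kind of numerical-fidelity effect a GPU N-body validation must bound. The array size below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.standard_normal(100_000)

naive32 = np.float32(0.0)
for v in values.astype(np.float32):
    naive32 += v                          # sequential float32 accumulation

exact = values.sum(dtype=np.float64)      # float64 (pairwise) reference
print(abs(naive32 - exact) / abs(exact))  # relative drift of the float32 sum
```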
A wavefront orientation method for precise numerical determination of tsunami travel time
NASA Astrophysics Data System (ADS)
Fine, I. V.; Thomson, R. E.
2013-04-01
We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wave front to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.
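An illustrative grid implementation of the underlying Huygens idea is sketched below: each cell's arrival time is relaxed from its eight neighbours over repeated sweeps until convergence. This is the generic eight-point scheme whose anisotropy the paper's wavefront orientation method is designed to reduce; the sweep count and speed field are placeholders.

```python
import numpy as np

def travel_time(speed, src_j, src_i, h, n_sweeps=8):
    """Arrival times on a grid of wave speeds `speed`, cell size h, source cell."""
    ny, nx = speed.shape
    t = np.full((ny, nx), np.inf)
    t[src_j, src_i] = 0.0
    nbrs = [(dj, di) for dj in (-1, 0, 1) for di in (-1, 0, 1)
            if (dj, di) != (0, 0)]              # the eight-point stencil
    for _ in range(n_sweeps):                   # repeated relaxation sweeps
        for j in range(ny):
            for i in range(nx):
                for dj, di in nbrs:
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        dist = h * np.hypot(dj, di)
                        pace = 2.0 / (speed[j, i] + speed[jj, ii])
                        t[j, i] = min(t[j, i], t[jj, ii] + dist * pace)
    return t

speed = np.full((40, 40), 200.0)            # uniform 200 m/s tsunami speed
t = travel_time(speed, 0, 0, h=10_000.0)    # 10 km cells, source at a corner
```

In practice the authors additionally use the current orientation of the advancing front to correct the direction error of the finite stencil; a priority-queue (Dijkstra-style) ordering would also replace the fixed sweeps.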
"Big data" in economic history.
Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan
2018-03-01
Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.
Direct computational approach to lattice supersymmetric quantum mechanics
NASA Astrophysics Data System (ADS)
Kadoh, Daisuke; Nakayama, Katsumasa
2018-07-01
We study lattice supersymmetric models numerically using the transfer matrix approach. This method consists only of deterministic processes and has no statistical uncertainties. We improve it by performing a scale transformation of variables such that the Witten index is correctly reproduced from the lattice model, and the other prescriptions are shown in detail. Compared to previous Monte Carlo results, we can estimate the effective masses, the SUSY Ward identity and the cut-off dependence of the results with high precision. Such information is useful in improving lattice formulations of supersymmetric models.
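For a feel of the transfer matrix approach in its simplest (non-supersymmetric) form, the sketch below extracts energy gaps of one-dimensional lattice quantum mechanics by diagonalizing a discretized Euclidean transfer matrix; the lattice spacing, grid, and harmonic potential are assumed test values, and the paper's supersymmetric construction and scale transformation are not reproduced.

```python
import numpy as np

def energy_gaps(V, a, x, n_gaps=3):
    """Gaps E_n - E_0 from the transfer matrix T ~ exp(-a*H) on grid x."""
    dx = x[1] - x[0]
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * a))   # kinetic kernel
    W = np.exp(-0.5 * a * V(x))                             # potential half-steps
    T = dx * W[:, None] * K * W[None, :] / np.sqrt(2 * np.pi * a)
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]              # e^{-a E_n}, descending
    return -np.log(lam[1:1 + n_gaps] / lam[0]) / a

x = np.linspace(-8.0, 8.0, 400)
# Harmonic oscillator: the gaps approach 1, 2, 3 as the spacing a -> 0
print(energy_gaps(lambda q: 0.5 * q**2, a=0.05, x=x))
```

Because everything is a deterministic linear-algebra operation, there are no statistical error bars, which is exactly the property the authors exploit.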
Nanophotonic particle simulation and inverse design using artificial neural networks
Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max
2018-01-01
We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
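A minimal sketch of the inverse-design step may help: a small network stands in for the trained scattering surrogate, and the design vector is updated by back-propagating the spectral error to the input. The architecture, sizes, and random weights below are placeholders; in the paper's setting the weights would come from training on simulated spectra.

```python
# Inverse design through a differentiable surrogate: optimize the *input*
# (e.g., shell thicknesses) of a fixed network by analytic back-propagation.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 4)), np.zeros(32)   # 4 design parameters in
W2, b2 = rng.normal(size=(8, 32)), np.zeros(8)    # 8 spectral samples out

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

target = rng.normal(size=8)                       # desired spectrum (placeholder)
x = rng.normal(size=4)                            # initial design guess
for _ in range(500):
    y, h = forward(x)
    dy = y - target                               # d(loss)/dy for 0.5*||y - t||^2
    dh = (W2.T @ dy) * (1.0 - h**2)               # back-prop through tanh
    dx = W1.T @ dh                                # analytic gradient w.r.t. input
    x -= 0.05 * dx                                # gradient-descent design update
```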
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolic, M.; Samolov, A.; Popovic, S.
2013-03-14
A tomographic numerical method based on the two-dimensional Radon formula for a cylindrical cavity has been employed for obtaining spatial distributions of the argon excited levels. The spectroscopy measurements were taken at different positions and directions to observe populations of excited species in the plasmoid region and the corresponding excitation temperatures. Excited argon states are concentrated near the tube walls, confirming the assumption that the post-discharge plasma is dominantly sustained by a travelling surface wave. An automated optical measurement system has been developed for reconstruction of local plasma parameters of the plasmoid structure formed in an argon supersonic flowing microwave discharge. The system carries out angle and distance measurements using a rotating flat mirror, two high-precision stepper motors operated by a microcontroller-based system, and several sensors for precise feedback control.
Autonomous Pointing Control of a Large Satellite Antenna Subject to Parametric Uncertainty
Wu, Shunan; Liu, Yufei; Radice, Gianmarco; Tan, Shujun
2017-01-01
With the development of satellite mobile communications, large antennas are now widely used. Precise pointing of the antenna's optical axis is essential for many space missions. This paper addresses the challenging problem of high-precision autonomous pointing control of a large satellite antenna. The pointing dynamics are first derived. A proportional-derivative feedback controller and a structural filter to perform pointing maneuvers and suppress antenna vibrations are then presented. An adaptive controller is proposed to estimate the actual system frequencies in the presence of modal parameter uncertainty. To reduce periodic errors, modified controllers, which combine the proposed adaptive controller with an active disturbance rejection filter, are then developed. The system stability and robustness are analyzed and discussed in the frequency domain. Numerical results are finally provided and demonstrate that the proposed controllers have good autonomy and robustness. PMID:28287450
Streamline integration as a method for two-dimensional elliptic grid generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.
We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary-aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.
A Metalens with a Near-Unity Numerical Aperture.
Paniagua-Domínguez, Ramón; Yu, Ye Feng; Khaidarov, Egor; Choi, Sumin; Leong, Victor; Bakker, Reuben M; Liang, Xinan; Fu, Yuan Hsing; Valuckas, Vytautas; Krivitsky, Leonid A; Kuznetsov, Arseniy I
2018-03-14
The numerical aperture (NA) of a lens determines its ability to focus light and its resolving capability. A large NA is a very desirable quality for applications requiring small light-matter interaction volumes or large angular collection. Traditionally, a large-NA lens based on light refraction requires precision bulk optics that end up being expensive specialty items. In contrast, metasurfaces allow the lens designer to circumvent those issues, producing high-NA lenses in an ultraflat fashion. However, so far, these have been limited to numerical apertures on the same order of magnitude as traditional optical components, with experimentally reported NA values of <0.9. Here we demonstrate, both numerically and experimentally, a new approach that results in a diffraction-limited flat lens with a near-unity numerical aperture (NA > 0.99) and subwavelength thickness (∼λ/3), operating with unpolarized light at 715 nm. To demonstrate its imaging capability, the designed lens is applied in a confocal configuration to map color centers in subdiffractive diamond nanocrystals. This work, based on diffractive elements that can efficiently bend light at angles as large as 82°, represents a step beyond traditional optical elements and existing flat optics, circumventing the efficiency drop associated with the standard phase-mapping approach.
A Metalens with a Near-Unity Numerical Aperture
NASA Astrophysics Data System (ADS)
Paniagua-Domínguez, Ramón; Yu, Ye Feng; Khaidarov, Egor; Choi, Sumin; Leong, Victor; Bakker, Reuben M.; Liang, Xinan; Fu, Yuan Hsing; Valuckas, Vytautas; Krivitsky, Leonid A.; Kuznetsov, Arseniy I.
2018-03-01
The numerical aperture (NA) of a lens determines its ability to focus light and its resolving capability. A large NA is a very desirable quality for applications requiring small light-matter interaction volumes or large angular collection. Traditionally, a large-NA lens based on light refraction requires precision bulk optics that end up being expensive specialty items. In contrast, metasurfaces allow the lens designer to circumvent those issues, producing high-NA lenses in an ultra-flat fashion. However, so far, these have been limited to numerical apertures on the same order as traditional optical components, with experimentally reported values of NA < 0.9. Here we demonstrate, both numerically and experimentally, a new approach that results in a diffraction-limited flat lens with a near-unity numerical aperture (NA > 0.99) and sub-wavelength thickness (∼λ/3), operating with unpolarized light at 715 nm. To demonstrate its imaging capability, the designed lens is applied in a confocal configuration to map color centers in sub-diffractive diamond nanocrystals. This work, based on diffractive elements able to efficiently bend light at angles as large as 82°, represents a step beyond traditional optical elements and existing flat optics, circumventing the efficiency drop associated with the standard phase-mapping approach.
Study on longitudinal force simulation of heavy-haul train
NASA Astrophysics Data System (ADS)
Chang, Chongyi; Guo, Gang; Wang, Junbiao; Ma, Yingming
2017-04-01
A longitudinal dynamics model of heavy-haul trains and the air brake model used in longitudinal train dynamics (LTD) are established. The dry-friction hysteretic damping characteristic of steel-friction draft gears is simulated by the equation that describes the suspension forces in truck leaf springs. The draft gear model incorporates the dynamic loading force, the viscous friction of the steel friction elements, and the damping force; on this basis, a numerical model of the draft gears is put forward. The LTD equations are strongly non-linear. To solve the response of this strongly non-linear system, a high-precision equilibrium-iteration method based on the Newmark-β method is presented and numerical analysis is carried out. Longitudinal dynamic forces of a 20,000-tonne heavy-haul train were measured in tests, and the models and solution method are verified by the test results.
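For reference, a minimal linear Newmark-β step (average-acceleration variant, γ = 1/2, β = 1/4) is sketched below; the paper wraps a scheme of this kind in equilibrium iterations to handle the nonlinear draft-gear forces, and the matrices and loading here are illustrative placeholders.

```python
# Linear implicit Newmark-beta time stepping for M*a + C*v + K*d = F.
import numpy as np

def newmark(M, C, K, F, d0, v0, dt, beta=0.25, gamma=0.5):
    d, v = d0.copy(), v0.copy()
    a = np.linalg.solve(M, F[0] - C @ v - K @ d)          # initial acceleration
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    out = [d.copy()]
    for k in range(1, F.shape[0]):
        rhs = (F[k]
               + M @ (d / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * d + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        d_new = np.linalg.solve(Keff, rhs)                 # implicit displacement
        v_new = (gamma / (beta * dt) * (d_new - d)
                 + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a)
        a = (d_new - d) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        d, v = d_new, v_new
        out.append(d.copy())
    return np.array(out)

# Example: two-vehicle segment joined by a stiff coupler (illustrative numbers).
M = np.diag([80e3, 80e3]); K = np.array([[5e6, -5e6], [-5e6, 5e6]]); C = 0.02 * K
F = np.zeros((200, 2)); F[:, 0] = 50e3                    # constant drawbar pull
hist = newmark(M, C, K, F, np.zeros(2), np.zeros(2), dt=0.01)
```

The average-acceleration choice is unconditionally stable for linear systems, which is why it serves as a natural backbone for the equilibrium iterations on the nonlinear gear forces.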
A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM
NASA Astrophysics Data System (ADS)
Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui
2014-12-01
Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze the LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that the source spacing has a marked influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the operating frequency can improve the resolution of both low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy and is suitable for simulating the response of resistivity LWD tools to guide geosteering.
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri
2018-04-01
In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second-order accuracy on smooth flows, to maintain an essentially non-oscillatory behavior on discontinuous profiles, to remain robust by ensuring physical admissibility of the numerical solution, and to be precise where appropriate.
NASA Astrophysics Data System (ADS)
Chen, Shun-Tong; Chang, Chih-Hsien
2013-12-01
This study presents a novel approach to the fabrication of a biomedical mold for producing convex-platform PMMA (poly-methyl-meth-acrylate) slides for counting cells. These slides allow for the microscopic examination of urine sediment cells. Manufacturing of such slides incorporates three important procedures: (1) the development of a tabletop high-precision dual-spindle CNC (computerized numerical control) machine tool; (2) the formation of a boron-doped polycrystalline composite diamond (BD-PCD) wheel-tool on the machine tool developed in procedure (1); and (3) the cutting of a multi-groove biomedical-mold array using the formed diamond wheel-tool in situ on the developed machine. The machine incorporates a hybrid working platform providing wheel-tool thinning using spark erosion to cut, polish, and deburr microgrooves on NAK80 steel directly. Given the electrical conductivity of BD-PCD, the diamond wheel-tool is thinned to a thickness of 5 µm by rotary wire electrical discharge machining. The thinned wheel-tool can grind microgrooves 10 µm wide. An embedded design, which inserts a close-fitting precision core into the biomedical mold to create a 50 µm step difference (concave inward) between the core and the mold, is also proposed and realized. The perpendicular dual spindles and precision rotary stage allow for biomedical-mold machining without the need to unload and reposition materials until all tasks are completed. A PMMA biomedical slide with a plurality of juxtaposed counting chambers is formed and its usefulness verified.
LORAN-C LATITUDE-LONGITUDE CONVERSION AT SEA: PROGRAMMING CONSIDERATIONS.
McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.
1985-01-01
Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to help with new computer program checkout.
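To make the precision comparison concrete, the sketch below (an illustration, not the report's routines) evaluates the same haversine arc length in double and single precision; the discrepancy that emerges is the kind of effect the report quantifies as computer precision is reduced.

```python
# Haversine great-circle distance evaluated at two floating-point precisions.
import numpy as np

def haversine(lat1, lon1, lat2, lon2, dtype=np.float64, R=6371.0e3):
    lat1, lon1, lat2, lon2 = (dtype(np.radians(v)) for v in (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / dtype(2)) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / dtype(2)) ** 2)
    return dtype(2) * dtype(R) * np.arcsin(np.sqrt(a))

d64 = haversine(41.0, -71.0, 41.001, -70.999)              # short offshore leg
d32 = haversine(41.0, -71.0, 41.001, -70.999, np.float32)  # reduced word length
print(d64, d32, abs(d64 - d32))                            # precision-induced gap
```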
An OKQPSK modem incorporating numerically controlled carrier synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oetken, R.E.
1988-04-04
The feasibility of incorporating numerically controlled oscillators (NCOs) in communication-related applications is evaluated. NCO generation of sinusoids may prove useful in systems requiring precise frequency control, tuning linearity, and orthogonality versus frequency. An OKQPSK modem operating at a data rate of 200 kb/s was fabricated. The modem operates in a back-to-back hardwired channel and thus does not incorporate carrier or symbol timing recovery. Spectra of the NCO-generated sinusoids are presented along with waveforms from the modulation and demodulation process. Generation of sinusoids in the digital domain is a viable alternative to analog oscillators. Implementation of an NCO should be considered when frequency allocation, tuning bandwidth, or frequency-hopped transmission requires precise frequency synthesis. 24 figs.
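A minimal phase-accumulator NCO sketch (illustrative parameters, not the report's hardware) shows where the precise frequency control and tuning linearity come from: the output frequency is linear in the integer tuning word, with resolution f_clk/2^32.

```python
# Numerically controlled oscillator: an M-bit phase accumulator advanced by a
# tuning word, with the top bits addressing a sine lookup table.
import numpy as np

ACC_BITS, LUT_BITS = 32, 10
LUT = np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS)

def nco(f_out, f_clk, n_samples):
    # Frequency resolution is f_clk / 2**ACC_BITS, hence the fine, linear
    # tuning that analog oscillators lack.
    tuning_word = int(round(f_out / f_clk * 2**ACC_BITS))
    phase = np.cumsum(np.full(n_samples, tuning_word, dtype=np.uint64)) % 2**ACC_BITS
    return LUT[(phase >> (ACC_BITS - LUT_BITS)).astype(np.int64)]

samples = nco(f_out=455e3, f_clk=10e6, n_samples=4096)
```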
Virial Coefficients and Equations of State for Hard Polyhedron Fluids.
Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C
2017-10-24
Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them to B8. Using virial coefficients as inputs, we show the convergence properties for four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
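For orientation, a truncated virial series of the kind used as input to these approximants can be evaluated directly; the compressibility factor is Z = P/(ρkT) = 1 + B2·ρ + B3·ρ² + ⋯, and the coefficient values below are placeholders, not the paper's data.

```python
# Truncated virial equation of state evaluated from a coefficient list.
import numpy as np

def compressibility(rho, B):
    """Z(rho) from virial coefficients B = [B2, B3, ...], units consistent with rho."""
    return 1.0 + sum(Bn * rho ** (n + 1) for n, Bn in enumerate(B))

rho = np.linspace(0.0, 0.4, 9)
print(compressibility(rho, B=[4.0, 10.0, 18.0]))  # placeholder hard-body values
```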
Recurrent star-spot activity and differential rotation in KIC 11560447
NASA Astrophysics Data System (ADS)
Özavcı, I.; Şenavcı, H. V.; Işık, E.; Hussain, G. A. J.; O'Neal, D.; Yılmaz, M.; Selam, S. O.
2018-03-01
We present a detailed analysis of surface inhomogeneities on the K1-type subgiant component of the rapidly rotating eclipsing binary KIC 11560447, using high-precision Kepler light curves spanning nearly 4 yr, which corresponds to about 2800 orbital revolutions. We determine the system parameters precisely, using high-resolution spectra from the 2.1-m Otto Struve Telescope at the McDonald Observatory. We apply the maximum entropy method to reconstruct the relative longitudinal spot occupancy. Our numerical tests show that the procedure can recover large-scale random distributions of individually unresolved spots, and it can track the phase migration of up to three major spot clusters. By determining the drift rates of various spotted regions in orbital longitude, we suggest a way to constrain surface differential rotation and we show that the results are consistent with periodograms. The K1IV star exhibits two mildly preferred longitudes of emergence, indications of solar-like differential rotation, and a 0.5-1.3-yr recurrence period in star-spot emergence, accompanied by a secular increase in the axisymmetric component of spot occupancy.
NASA Astrophysics Data System (ADS)
Koenig, Karsten; Riemann, Iris; Krauss, Oliver; Fritzsche, Wolfgang
2002-04-01
Nanojoule and sub-nanojoule 80 MHz femtosecond laser pulses at 750-850 nm from a compact titanium:sapphire laser have been used for highly precise nanoprocessing of DNA as well as of intracellular and intratissue compartments. In particular, a mean power between 15 mW and 100 mW, 170 fs pulse width, submicron spacing of illumination spots, and microsecond beam dwell times per spot have been used for multiphoton-mediated nanoprocessing of human chromosomes, brain tissue, and ocular intrastromal tissue. By focusing the laser beam with the high-numerical-aperture focusing optics of the laser scan system femt-O-cut and of modified multiphoton scanning microscopes to diffraction-limited spots and TW/cm² light intensities, precise submicron holes and cuts have been produced by single-spot exposure and line scans. A minimum FWHM cut size below 70 nm was achieved during the partial dissection of human chromosome 3. Complete chromosome dissection could be performed with FWHM cut sizes below 200 nm. Intracellular chromosome dissection was possible. Intratissue processing at depths of 50-100 µm and deeper, with a precision of about 1 µm, including cuts through the nucleus of a single intratissue cell without destructive photodisruption effects on surrounding tissue layers, has been demonstrated in brain and eye tissues. The femt-O-cut system includes a diagnostic system for optical tomography with submicron resolution based on multiphoton-excited autofluorescence imaging (MAI) and second harmonic generation. This system was used to localize the intracellular and intratissue targets and to control the effects of nanoprocessing. These studies show that, in contrast to conventional approaches to material processing with amplified femtosecond laser systems and µJ pulse energies, nanoprocessing of materials including biotissues can be performed with nJ and sub-nJ high-repetition-rate femtosecond laser pulses from turn-key compact lasers without collateral damage. Potential applications include highly precise cell and embryo surgery, gene diagnostics and gene therapy, intrastromal refractive surgery, cancer therapy, and brain surgery.
Understanding the many-body expansion for large systems. I. Precision considerations
NASA Astrophysics Data System (ADS)
Richard, Ryan M.; Lao, Ka Un; Herbert, John M.
2014-07-01
Electronic structure methods based on low-order "n-body" expansions are an increasingly popular means to defeat the highly nonlinear scaling of ab initio quantum chemistry calculations, taking advantage of the inherently distributable nature of the numerous subsystem calculations. Here, we examine how the finite precision of these subsystem calculations manifests in applications to large systems, in this case, a sequence of water clusters ranging in size up to (H_2O)_{47}. Using two different computer implementations of the n-body expansion, one fully integrated into a quantum chemistry program and the other written as a separate driver routine for the same program, we examine the reproducibility of total binding energies as a function of cluster size. The combinatorial nature of the n-body expansion amplifies subtle differences between the two implementations, especially for n ⩾ 4, leading to total energies that differ by as much as several kcal/mol between two implementations of what is ostensibly the same method. This behavior can be understood based on a propagation-of-errors analysis applied to a closed-form expression for the n-body expansion, which is derived here for the first time. Discrepancies between the two implementations arise primarily from the Coulomb self-energy correction that is required when electrostatic embedding charges are implemented by means of an external driver program. For reliable results in large systems, our analysis suggests that script- or driver-based implementations should read binary output files from an electronic structure program, in full double precision, or better yet be fully integrated in a way that avoids the need to compute the aforementioned self-energy. Moreover, four-body and higher-order expansions may be too sensitive to numerical thresholds to be of practical use in large systems.
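To make the combinatorial structure concrete, a minimal sketch of the low-order many-body expansion is given below; the subsystem energies would come from an electronic-structure code, and the placeholder dictionary keyed by fragment index sets is an illustrative assumption, not the paper's implementation.

```python
# Many-body expansion: E ~ sum_i E_i + sum_{i<j} dE_ij + sum_{i<j<k} dE_ijk.
from itertools import combinations

def mbe_energy(E, n_frag, order=3):
    """Total energy from subsystem energies E[frozenset of fragment indices]."""
    e1 = {i: E[frozenset({i})] for i in range(n_frag)}
    total = sum(e1.values())
    d2 = {}
    for i, j in combinations(range(n_frag), 2):
        d2[(i, j)] = E[frozenset({i, j})] - e1[i] - e1[j]   # 2-body correction
        total += d2[(i, j)]
    if order >= 3:
        for i, j, k in combinations(range(n_frag), 3):      # 3-body correction
            total += (E[frozenset({i, j, k})]
                      - d2[(i, j)] - d2[(i, k)] - d2[(j, k)]
                      - e1[i] - e1[j] - e1[k])
    return total
```

The number of correction terms grows combinatorially with the expansion order, which is exactly why small roundoff differences between two implementations are amplified for n ⩾ 4.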
Understanding the many-body expansion for large systems. I. Precision considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard, Ryan M.; Lao, Ka Un; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu
2014-07-07
Electronic structure methods based on low-order “n-body” expansions are an increasingly popular means to defeat the highly nonlinear scaling of ab initio quantum chemistry calculations, taking advantage of the inherently distributable nature of the numerous subsystem calculations. Here, we examine how the finite precision of these subsystem calculations manifests in applications to large systems, in this case, a sequence of water clusters ranging in size up to (H_2O)_{47}. Using two different computer implementations of the n-body expansion, one fully integrated into a quantum chemistry program and the other written as a separate driver routine for the same program, we examine the reproducibility of total binding energies as a function of cluster size. The combinatorial nature of the n-body expansion amplifies subtle differences between the two implementations, especially for n ⩾ 4, leading to total energies that differ by as much as several kcal/mol between two implementations of what is ostensibly the same method. This behavior can be understood based on a propagation-of-errors analysis applied to a closed-form expression for the n-body expansion, which is derived here for the first time. Discrepancies between the two implementations arise primarily from the Coulomb self-energy correction that is required when electrostatic embedding charges are implemented by means of an external driver program. For reliable results in large systems, our analysis suggests that script- or driver-based implementations should read binary output files from an electronic structure program, in full double precision, or better yet be fully integrated in a way that avoids the need to compute the aforementioned self-energy. Moreover, four-body and higher-order expansions may be too sensitive to numerical thresholds to be of practical use in large systems.
Goble, Daniel J; Khan, Ehran; Baweja, Harsimran S; O'Connor, Shawn M
2018-04-11
Changes in postural sway measured via force plate center of pressure (COP) have been associated with many aspects of human motor ability. A previous study validated the accuracy and precision of a relatively new, low-cost, and portable force plate called the Balance Tracking System (BTrackS); that work compared a laboratory-grade force plate with BTrackS during human-like dynamic sway conditions generated by an inverted pendulum device. The present study sought to extend previous validation attempts for BTrackS using a more traditional point of application (POA) approach. A force of ∼155 N was applied with a hex-nose plunger, under computer numerical control (CNC) guidance, five times to each of 21 points on five different BTrackS Balance Plate (BBP) devices. Results showed excellent agreement (ICC > 0.999) between the POAs and the COP measured by the BBP devices, as well as high accuracy (<1% average percent error) and precision (<0.1 cm average standard deviation of residuals). The ICC between BBP devices was exceptionally high (ICC > 0.999), providing evidence of almost perfect inter-device reliability. Taken together, these results provide an important static corollary to the previously obtained dynamic COP results from inverted pendulum testing of the BBP.
A numerical method of detecting singularity
NASA Technical Reports Server (NTRS)
Laporte, M.; Vignes, J.
1978-01-01
A numerical method is reported which determines a value C for the degree of conditioning of a matrix. This value is C = 0 for a singular matrix and takes progressively larger values for matrices that are increasingly well conditioned. The value reaches C = C_max (where C_max is set by the precision of the computer) when the matrix is perfectly well conditioned.
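A modern analogue (an illustration; the report's C is defined differently) is the reciprocal of the matrix condition number, which behaves the same way: zero for a singular matrix and larger for better-conditioned ones.

```python
# Conditioning index from the 2-norm condition number (SVD-based).
import numpy as np

def conditioning_index(A):
    c = np.linalg.cond(A)          # ratio of largest to smallest singular value
    return 0.0 if np.isinf(c) else 1.0 / c

print(conditioning_index(np.eye(3)))                       # 1.0: perfectly conditioned
print(conditioning_index(np.array([[1., 2.], [2., 4.]])))  # ~0: numerically singular
```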
Computer numerical control grinding of spiral bevel gears
NASA Technical Reports Server (NTRS)
Scott, H. Wayne
1991-01-01
The development of Computer Numerical Control (CNC) spiral bevel gear grinding has paved the way for major improvement in the production of precision spiral bevel gears. The object of the program was to decrease the setup, maintenance of setup, and pattern development time by 50 percent of the time required on conventional spiral bevel gear grinders. Details of the process are explained.
NASA Astrophysics Data System (ADS)
Kazmi, Zaheer Abbas; Konagai, Kazuo; Kyokawa, Hiroyuki; Tetik, Cigdem
On April 11, 2011, the Iwaki region of Fukushima prefecture was jolted by the Fukushima-Prefecture Hamadoori Earthquake. Surface ruptures were observed along the causative Idosawa and Yunotake normal faults. In addition to numerous small slope failures, a coherent landslide and the building structures of Tabito Junior High School, bisected by the Idosawa Fault, were found along the causative faults. A precise digital elevation model of the coherent landslide was obtained through ground and airborne LiDAR surveys. Measurements of the perimeters of the gymnasium building and the swimming pool of Tabito Junior High School show that the ground is undergoing slow, steady and continual deformation.
S-Wave Dispersion Relations: Exact Left Hand E-Plane Discontinuity from the Born Series
NASA Technical Reports Server (NTRS)
Bessis, D.; Temkin, A.
1999-01-01
We show, for a superposition of Yukawa potentials, that the left-hand cut discontinuity in the complex E plane of the (S-wave) scattering amplitude is given exactly, in an interval depending on n, by the discontinuity of the Born series truncated at order n. This also establishes an inverse and unexpected correspondence between the Born series at positive high energies and at negative low energies. We can thus construct a viable dispersion relation (DR) for the partial (S-)wave amplitude. The high numerical precision achievable by the DR is demonstrated for the exponential potential at zero scattering energy. We also briefly discuss the extension of our results to field theory.
Stably Fluorescent Cell Line of Human Ovarian Epithelial Cancer Cells SK-OV-3ip-red.
Konovalova, E V; Shulga, A A; Chumakov, S P; Khodarovich, Yu M; Woo, Eui-Jeon; Deev, S M
2017-11-01
A stable red-fluorescing line of human ovarian epithelial cancer cells, SK-OV-3ip-red, was generated expressing the gene coding for the protein TurboFP635 (Katushka), which fluoresces in the far-red spectral region with excitation and emission peaks at 588 and 635 nm, respectively. Fluorescence of the SK-OV-3ip-red line remained high during long-term cell culturing and after cryogenic freezing. The obtained cell line SK-OV-3ip-red can serve as the basis for a model of a scattered tumor with numerous/extended metastases and be used both for testing anticancer drugs that inhibit metastasis growth and for non-invasive monitoring of growth dynamics with high precision.
Chalcogenide molded freeform optics for mid-infrared lasers
NASA Astrophysics Data System (ADS)
Chenard, Francois; Alvarez, Oseas; Yi, Allen
2017-05-01
High-precision chalcogenide molded micro-lenses were produced to collimate mid-infrared Quantum Cascade Lasers (QCLs). Molded cylindrical micro-lens prototypes with aspheric contour (acylindrical), high numerical aperture (NA 0.8) and small focal length (f<2 mm) were fabricated to collimate the QCL fast-axis beam. Another innovative freeform micro-lens has an input acylindrical surface to collimate the fast axis and an orthogonal output acylindrical surface to collimate the slow axis. The thickness of the freeform lens is such that the output fast- and slow-axis beams are circular. This paper presents results on the chalcogenide molded freeform micro-lens designed to collimate and circularize QCL at 4.6 microns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Barnes, Daniel C
2012-01-01
Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility, and dramatically improves the solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations are employed with the aid of an analytical performance model, the roofline model. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with a double-precision CPU JFNK field solver, overall performance gains of about 100x vs. the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.
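The precision split can be illustrated in a few lines (a toy sketch, not the authors' CUDA code): the particle state and push arithmetic stay in single precision, while quantities fed back to the field solver are accumulated in double precision.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(1_000_000).astype(np.float32)           # positions (float32 "GPU" state)
v = rng.standard_normal(1_000_000).astype(np.float32)  # velocities
E, dt = np.float32(0.01), np.float32(0.1)              # toy uniform field, timestep

for _ in range(10):                                    # single-precision mover
    v += E * dt
    x = (x + v * dt) % np.float32(1.0)                 # periodic domain [0, 1)

# Moments returned to the (double-precision) solver are reduced in float64,
# so single-precision rounding does not accumulate over a million particles.
mean_velocity = np.mean(v, dtype=np.float64)
```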
Ronald E. McRoberts; Geoffrey R. Holden; Mark D. Nelson; Greg C. Liknes; Dale D. Gormanson
2006-01-01
Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities, to counties, to states or provinces. Because of numerous factors, sample sizes are often insufficient to estimate attributes as precisely as is desired, unless the estimation process is enhanced using ancillary data. Classified satellite imagery has...
Thompson, Clarissa A.; Morris, Bradley J.; Sidney, Pooja G.
2017-01-01
Do children spontaneously represent spatial-numeric features of a task, even when it does not include printed numbers (Mix et al., 2016)? Sixty first grade students completed a novel spatial estimation task by seeking and finding pages in a 100-page book without printed page numbers. Children were shown pages 1 through 6 and 100, and then were asked, “Can you find page X?” Children’s precision of estimates on the page finder task and a 0-100 number line estimation task was calculated with the Percent Absolute Error (PAE) formula (Siegler and Booth, 2004), in which lower PAE indicated more precise estimates. Children’s numerical knowledge was further assessed with: (1) numeral identification (e.g., What number is this: 57?), (2) magnitude comparison (e.g., Which is larger: 54 or 57?), and (3) counting on (e.g., Start counting from 84 and count up 5 more). Children’s accuracy on these tasks was correlated with their number line PAE. Children’s number line estimation PAE predicted their page finder PAE, even after controlling for age and accuracy on the other numerical tasks. Children’s estimates on the page finder and number line tasks appear to tap a general magnitude representation. However, the page finder task did not correlate with numeral identification and counting-on performance, likely because these tasks do not measure children’s magnitude knowledge. Our results suggest that the novel page finder task is a useful measure of children’s magnitude knowledge, and that books have similar spatial-numeric affordances as number lines and numeric board games. PMID:29312084
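For concreteness, the PAE measure used above works out as follows (a worked example with hypothetical numbers, not the study's data):

```python
def pae(estimate, target, scale=100):
    """Percent Absolute Error: |estimate - target| / scale * 100 (Siegler and Booth, 2004)."""
    return abs(estimate - target) / scale * 100

print(pae(62, 57))   # a child who opens to page 62 when asked for page 57 -> 5.0
```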
Universality of the logarithmic velocity profile restored
NASA Astrophysics Data System (ADS)
Luchini, Paolo
2017-11-01
The logarithmic velocity profile of wall-bounded turbulent flow, despite its widespread adoption in research and in teaching, exhibits discrepancies with both experiments and numerical simulations that have been repeatedly observed in the literature; serious doubts ensued about its precise form and universality, leading to the formulation of alternate theories and hindering ongoing experimental efforts to measure von Kármán's constant. By comparing different geometries of pipe, plane-channel and plane-Couette flow, here we show that such discrepancies can be physically interpreted, and analytically accounted for, through an equally universal higher-order correction caused by the pressure gradient. Inclusion of this term produces a tenfold increase in the adherence of the predicted profile to existing experiments and numerical simulations in all three geometries. Universality of the logarithmic law then emerges beyond doubt and a satisfactorily simple formulation is established. Among the consequences of this formulation is a strongly increased confidence that the Reynolds number of present-day direct numerical simulations is actually high enough to uncover asymptotic behaviour, but research efforts are still needed in order to increase their accuracy.
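For reference, the leading-order law under discussion is, in wall units,

```latex
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B,
```

with κ the von Kármán constant and B the additive intercept; the paper's point is that a universal higher-order term proportional to the pressure gradient must be added to this form before κ can be measured consistently across pipe, channel, and Couette data (the exact coefficient is given in the paper and is not reproduced here).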
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
To address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline derivation and parallel processing, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many invalid operations can be avoided by offline derivation using a block matrix technique. For further efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a purely numerical approach the method needs no precision-losing transformation or approximation of system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theory is needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
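For context, a single dense Kalman predict/update step is sketched below; the optimizations described above amount to precomputing, offline, which blocks of these products are zero, symmetric, or reusable (the matrices here are generic placeholders, not the SINS/GPS model).

```python
# One textbook Kalman filter step: predict with (F, Q), update with (H, R, z).
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    # Predict: in SINS/GPS, F and Q are sparse/block-structured, so most
    # entries of these products can be derived offline and never computed.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: P is symmetric, so only one triangle truly needs computing.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```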
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
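The failure mode and the fix are easy to demonstrate (illustrative values, not the land surface model itself): a slow tendency underflows against a large single-precision state, but survives when the state is split into a small high-precision reference part plus a low-precision anomaly.

```python
import numpy as np

T32 = np.float32(285.0)            # deep-soil temperature held in single precision
dT = np.float32(1.0e-6)            # slow per-step heating tendency
for _ in range(100_000):
    T32 += dT                      # each increment rounds to zero against 285.0
print(T32)                         # still exactly 285.0: the slow trend is lost

T_ref = np.float64(285.0)          # high-precision reference part of the state
anom = np.float32(0.0)             # low-precision anomaly carries the trend
for _ in range(100_000):
    anom += dT                     # increments are resolved relative to ~0.1
print(T_ref + np.float64(anom))    # ~285.1: the trend is preserved
```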
On importance assessment of aging multi-state system
NASA Astrophysics Data System (ADS)
Frenkel, Ilia; Khvatskin, Lev; Lisnianski, Anatoly
2017-01-01
Modern high-tech equipment requires precise temperature control and effective cooling below the ambient temperature. Greater cooling efficiency allows equipment to be operated for longer periods without overheating, providing a greater return on investment and increased equipment availability. This paper presents an application of the Lz-transform method to importance assessment of an aging multi-state water-cooling system used in an Israeli hospital. The water-cooling system consists of three principal sub-systems: chillers, a heat exchanger, and pumps. The performance of the system and the sub-systems is measured by their produced cooling capacity. The heat exchanger is an aging component. A straightforward Markov method applied to this problem would require building a system model with a very large number of states and solving the corresponding system of differential equations. The Lz-transform method, used here to calculate the importance of the system elements, drastically simplifies the solution. A numerical example is presented to illustrate the approach.
Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow
NASA Astrophysics Data System (ADS)
Ulerich, Rhys; Moser, Robert D.
2012-11-01
To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls have been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on the bulk velocity and the sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high-quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
The CFS-PML in numerical simulation of ATEM
NASA Astrophysics Data System (ADS)
Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi
2017-01-01
In time-domain simulation of the airborne transient electromagnetic method (ATEM), reflections from the truncated boundary can introduce large errors into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been shown to absorb low-frequency incident waves better and to greatly reduce late-time reflections. In this paper, we apply the CFS-PML to three-dimensional time-domain numerical simulation of ATEM to achieve high precision. The expression of the divergence equation in the CFS-PML is established, and its explicit iteration scheme based on the finite-difference method and the recursive convolution technique is deduced. Finally, we use a uniform half-space model and an anomalous-body model to test the validity of the method. Results show that the CFS-PML reduces the average relative error to 2.87% and improves the accuracy of anomaly recognition.
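For reference, the standard CFS-PML complex coordinate-stretching factor (the conventional form in the literature; the paper's notation may differ) is

```latex
s_w = \kappa_w + \frac{\sigma_w}{\alpha_w + i\,\omega\,\varepsilon_0}, \qquad w \in \{x, y, z\},
```

where the nonzero α_w moves the pole off the real frequency axis; this is what improves the absorption of the slowly diffusing, low-frequency fields characteristic of ATEM and motivates the recursive-convolution implementation in the time domain.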
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.; ,
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase-change latent-heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions through the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can serve as a test case for evaluating the precision of domain models based on finite elements or finite differences.
EDUCATION ENHANCES THE ACUITY OF THE NON-VERBAL APPROXIMATE NUMBER SYSTEM
Piazza, Manuela; Pica, Pierre; Izard, Véronique; Spelke, Elizabeth; Dehaene, Stanislas
2015-01-01
All humans share a universal, evolutionarily ancient approximate number system (ANS) that estimates and combines the number of objects in sets with ratio-limited precision. Inter-individual variability in the acuity of the ANS correlates with mathematical achievement, but the causes of this correlation have never been established. We acquired psychophysical measures of ANS acuity in child and adult members of an indigene group in the Amazon, the Mundurucu, who have a very restricted numerical lexicon and highly variable access to mathematical education. By comparing Mundurucu subjects with or without access to schooling, we demonstrate that education significantly enhances the acuity with which sets of concrete objects are estimated. These results speak in favor of an important effect of culture and education on basic number perception. We hypothesize that symbolic and non-symbolic numerical thinking mutually enhance one another over the course of mathematics instruction. PMID:23625879
Critical points of the O(n) loop model on the martini and the 3-12 lattices
NASA Astrophysics Data System (ADS)
Ding, Chengxiang; Fu, Zhe; Guo, Wenan
2012-06-01
We derive the critical line of the O(n) loop model on the martini lattice as a function of the loop weight n, based on the critical points on the honeycomb lattice conjectured by Nienhuis [Phys. Rev. Lett. 49, 1062 (1982)]. In the limit n→0 we prove the connective constant μ=1.7505645579⋯ of self-avoiding walks on the martini lattice. A finite-size scaling analysis based on transfer matrix calculations is also performed. The numerical results coincide with the theoretical predictions with very high accuracy. Using similar numerical methods, we also study the O(n) loop model on the 3-12 lattice. We obtain similarly precise agreement with the critical points given by Batchelor [J. Stat. Phys. 92, 1203 (1998)].
Beran, Michael J; Parrish, Audrey E
2016-08-01
A key issue in understanding the evolutionary and developmental emergence of numerical cognition is to learn what mechanism(s) support perception and representation of quantitative information. Two such systems have been proposed, one for dealing with approximate representation of sets of items across an extended numerical range and another for highly precise representation of only small numbers of items. Evidence for the first system is abundant across species and in many tests with human adults and children, whereas the second system is primarily evident in research with children and in some tests with non-human animals. A recent paper (Choo & Franconeri, Psychonomic Bulletin & Review, 21, 93-99, 2014) with adult humans also reported "superprecise" representation of small sets of items in comparison to large sets of items, which would provide more support for the presence of a second system in human adults. We first presented capuchin monkeys with a test similar to that of Choo and Franconeri in which small or large sets with the same ratios had to be discriminated. We then presented the same monkeys with an expanded range of comparisons in the small number range (all comparisons of 1-9 items) and the large number range (all comparisons of 10-90 items in 10-item increments). Capuchin monkeys showed no increased precision for small over large sets in making these discriminations in either experiment. These data indicate a difference in the performance of monkeys to that of adult humans, and specifically that monkeys do not show improved discrimination performance for small sets relative to large sets when the relative numerical differences are held constant.
The lunar libration: comparisons between various models - a model fitted to LLR observations
NASA Astrophysics Data System (ADS)
Chapront, J.; Francou, G.
2005-09-01
We consider four libration models: three numerical models built by JPL (ephemerides for the libration in DE245, DE403 and DE405) and an analytical model improved with numerical complements fitted to recent LLR observations. The analytical solution uses three angular variables (ρ1, ρ2, τ) which represent the deviations with respect to Cassini's laws. After having referred the models to a unique reference frame, we study the differences between the models, which depend on the gravitational and tidal parameters of the Moon as well as on the amplitudes and frequencies of the free librations. It appears that the differences vary widely depending on the above quantities. They correspond to displacements of a few meters on the lunar surface, whereas LLR distances are precise at the centimeter level. Taking advantage of the lunar libration theory built by Moons (1984) and improved by Chapront et al. (1999), we are able to establish four solutions and to represent their differences by Fourier series after a numerical substitution of the gravitational constants and free libration parameters. The results are confirmed by frequency analyses performed separately. Using DE245 as a basic reference ephemeris, we approximate the differences between the analytical and numerical models with Poisson series. The analytical solution - improved with numerical complements in the form of Poisson series - is valid over several centuries with an internal precision better than 5 centimeters.
Numerical and experimental analyses of lighting columns in terms of passive safety
NASA Astrophysics Data System (ADS)
Jedliński, Tomasz Ireneusz; Buśkiewicz, Jacek
2018-01-01
Modern lighting columns have a very beneficial influence on road safety. The columns are now designed to keep the driver safe in the event of a collision. The following work compares experimental results of a vehicle impact on a lighting column with FEM simulations performed using the Ansys LS-DYNA program. Given the high cost of experiments and the time-consuming research process, such software is a very useful tool in the development of pole structures designed to absorb the kinetic energy of the vehicle in a precisely prescribed way.
Terao, Takamichi
2010-08-01
We propose a numerical method to calculate interior eigenvalues and the corresponding eigenvectors of nonsymmetric matrices. Based on a subspace projection onto an expanded Ritz subspace, it becomes possible to obtain eigenvalues and eigenvectors with sufficiently high precision. This method overcomes the difficulties of the traditional nonsymmetric Lanczos algorithm and improves the accuracy of the obtained interior eigenvalues and eigenvectors. Using this algorithm, we investigate three-dimensional metamaterial composites consisting of positive and negative refractive index materials, and demonstrate that the finite-difference frequency-domain algorithm is applicable to the analysis of these metamaterial composites.
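For comparison, the standard workhorse for interior eigenpairs of nonsymmetric matrices is shift-invert Arnoldi, sketched below with SciPy; the paper's expanded-Ritz-subspace projection is a different algorithm aimed at the same difficulty, and the matrix and shift here are illustrative.

```python
# Shift-invert Arnoldi: eigenvalues of A nearest an interior shift sigma.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigs

A = sparse_random(500, 500, density=0.02, format="csc", random_state=2)
vals, vecs = eigs(A, k=4, sigma=0.3)   # transforms the interior of the spectrum
print(np.sort_complex(vals))           # to the dominant part of (A - sigma*I)^-1
```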
Strategies for Near Real Time Estimation of Precipitable Water Vapor
NASA Technical Reports Server (NTRS)
Bar-Sever, Yoaz E.
1996-01-01
Traditionally used for high precision geodesy, the GPS system has recently emerged as an equally powerful tool in atmospheric studies, in particular, climatology and meteorology. There are several products of GPS-based systems that are of interest to climatologists and meteorologists. One of the most useful is the GPS-based estimate of the amount of Precipitable Water Vapor (PWV) in the troposphere. Water vapor is an important variable in the study of climate changes and atmospheric convection (Yuan et al., 1993), and is of crucial importance for severe weather forecasting and operational numerical weather prediction (Kuo et al., 1993).
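For context, the standard retrieval behind such estimates (the well-known formulation of Bevis et al., 1992, stated here as background rather than taken from this abstract) converts the GPS zenith wet delay (ZWD) into PWV by a nearly constant factor:

```latex
\mathrm{PWV} = \Pi \cdot \mathrm{ZWD}, \qquad
\Pi = \frac{10^{6}}{\rho_w R_v \left( k_3 / T_m + k_2' \right)} \approx 0.15,
```

where ρ_w is the density of liquid water, R_v the specific gas constant of water vapor, k_2' and k_3 atmospheric refractivity constants, and T_m the weighted mean temperature of the atmosphere; most of the retrieval uncertainty enters through T_m.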
Photobiomolecular deposition of metallic particles and films
Hu, Zhong-Cheng
2005-02-08
The method of the invention is based on the unique electron-carrying function of a photocatalytic unit such as the photosynthesis system I (PSI) reaction center of the protein-chlorophyll complex isolated from chloroplasts. The method employs a photo-biomolecular metal deposition technique for precisely controlled nucleation and growth of metallic clusters/particles, e.g., platinum, palladium, and their alloys, etc., as well as for thin-film formation above the surface of a solid substrate. The photochemically mediated technique offers numerous advantages over traditional deposition methods including quantitative atom deposition control, high energy efficiency, and mild operating condition requirements.
Photobiomolecular metallic particles and films
Hu, Zhong-Cheng
2003-05-06
The method of the invention is based on the unique electron-carrying function of a photocatalytic unit such as the photosynthesis system I (PSI) reaction center of the protein-chlorophyll complex isolated from chloroplasts. The method employs a photo-biomolecular metal deposition technique for precisely controlled nucleation and growth of metallic clusters/particles, e.g., platinum, palladium, and their alloys, etc., as well as for thin-film formation above the surface of a solid substrate. The photochemically mediated technique offers numerous advantages over traditional deposition methods including quantitative atom deposition control, high energy efficiency, and mild operating condition requirements.
Knudsen Cell Studies of Ti-Al Thermodynamics
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Copland, Evan H.; Mehrotra, Gopal M.; Auping, Judith; Gray, Hugh R. (Technical Monitor)
2002-01-01
In this paper we describe the Knudsen cell technique for measurement of thermodynamic activities in alloys. Numerous experimental details must be adhered to in order to obtain useful experimental data. These include introduction of an in-situ standard, precise temperature measurement, elimination of thermal gradients, and precise cell positioning. Our first design is discussed and some sample data on Ti-Al alloys is presented. The second modification and associated improvements are also discussed.
Manufacturing Technology Research Needs of the Gear Industry
1987-12-31
Recoverable fragments from a garbled scan: section headings "2.2.6.8 Availability of Skilled Craftsmen" and "2.2.6.9 Management Shortcomings within the U.S. Precision Gear Industry"; as equipment becomes more sophisticated, workers are running numerically controlled computer equipment requiring an understanding of math; reduce the inefficiencies of the job shop environment by managing the gear business as a backward integration of the assembly line; develop and maintain employee...
High-precision Non-Contact Measurement of Creep of Ultra-High Temperature Materials for Aerospace
NASA Technical Reports Server (NTRS)
Rogers, Jan R.; Hyers, Robert
2008-01-01
For high-temperature applications (greater than 2,000 C) such as solid rocket motors, hypersonic aircraft, nuclear electric/thermal propulsion for spacecraft, and more efficient jet engines, creep becomes one of the most important design factors to be considered. Conventional creep-testing methods, in which the specimen and test apparatus are in contact with each other, are limited to temperatures of approximately 1,700 C. Development of alloys for higher-temperature applications is limited by the availability of testing methods at temperatures above 2,000 C. Development of alloys for applications requiring a long service life at temperatures as low as 1,500 C, such as the next generation of jet turbine superalloys, is limited by the difficulty of accelerated testing at temperatures above 1,700 C. For these reasons, a new non-contact creep-measurement technique is needed for higher-temperature applications. A new non-contact method for creep measurement of ultra-high-temperature metals and ceramics has been developed and validated. Using the electrostatic levitation (ESL) facility at NASA Marshall Space Flight Center, a spherical sample is rotated quickly enough to cause creep deformation due to centrifugal acceleration. Very accurate measurement of the deformed shape through digital image analysis allows the stress exponent n to be determined very precisely from a single test, rather than from numerous conventional tests. Validation tests on single-crystal niobium spheres showed excellent agreement with conventional tests at 1,985 C; however, the non-contact method provides much greater precision while using only about 40 milligrams of material. This method is being applied to materials including metals and ceramics for non-eroding throats in solid rockets and next-generation superalloys for turbine engines. Recent advances in the method and the current state of these new measurements will be presented.
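The stress exponent n referred to above is that of the standard power-law (Norton) creep relation, quoted here as background rather than from the abstract itself:

```latex
\dot{\varepsilon} = A\,\sigma^{n}\,\exp\!\left(-\frac{Q}{RT}\right),
```

with A a material constant, σ the applied stress, Q the activation energy, R the gas constant, and T the absolute temperature. Plausibly, a single levitated test suffices because the centrifugal stress varies continuously across the spinning specimen, so one sample probes a whole range of stresses at once.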
Eibenberger, Karin; Eibenberger, Bernhard; Rucci, Michele
2016-08-01
The precise measurement of eye movements is important for investigating vision, oculomotor control and vestibular function. The magnetic scleral search coil technique is one of the most precise measurement techniques for recording eye movements, with very high spatial (≈ 1 arcmin) and temporal (>kHz) resolution. The technique is based on measuring the voltage induced in a search coil by a large magnetic field. This search coil is embedded in a contact lens worn by a human subject. The measured voltage is in direct relationship to the orientation of the eye in space. This requires a magnetic field with high homogeneity in the center, since otherwise the field inhomogeneity would give the false impression of a rotation of the eye due to a translational movement of the head. To circumvent this problem, a bite bar typically restricts head movement to a minimum. However, the need often emerges to precisely record eye movements under natural viewing conditions. To this end, one needs a magnetic field that is uniform over a large area. In this paper, we present numerical finite-element simulations of the magnetic flux density of different coil geometries that could be used for search coil recordings. Based on the results, we built a 2.2 × 2.2 × 2.2 meter coil frame with a set of 3 × 4 coils to generate a 3D magnetic field and compared the measured flux density with our simulation results. In agreement with the simulations, the system yields a highly uniform field enabling high-resolution recordings of eye movements.
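Field homogeneity can be explored with closed-form models before committing to full finite-element runs. The sketch below uses circular loops in a Helmholtz-like arrangement; the paper's frame uses square coils, and circular loops are chosen here only because the on-axis Biot-Savart result is closed-form. All dimensions are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_axial_field(z, R, I, z0):
    """On-axis flux density of a circular current loop (Biot-Savart result)."""
    return MU0 * I * R**2 / (2.0 * (R**2 + (z - z0)**2) ** 1.5)

# Helmholtz-like pair: two loops of radius R separated by R flatten the center field.
R, I = 1.1, 10.0                     # m, A (illustrative values)
z = np.linspace(-0.1, 0.1, 201)      # probe the central 20 cm
B = loop_axial_field(z, R, I, -R/2) + loop_axial_field(z, R, I, +R/2)
print(f"relative non-uniformity over +/-10 cm: {(B.max()-B.min())/B.mean():.2e}")
```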
High-speed laser microsurgery of alert fruit flies for fluorescence imaging of neural activity
Sinha, Supriyo; Liang, Liang; Ho, Eric T. W.; Urbanek, Karel E.; Luo, Liqun; Baer, Thomas M.; Schnitzer, Mark J.
2013-01-01
Intravital microscopy is a key means of monitoring cellular function in live organisms, but surgical preparation of a live animal for microscopy often is time-consuming, requires considerable skill, and limits experimental throughput. Here we introduce a spatially precise (<1-µm edge precision), high-speed (<1 s), largely automated, and economical protocol for microsurgical preparation of live animals for optical imaging. Using a 193-nm pulsed excimer laser and the fruit fly as a model, we created observation windows (12- to 350-µm diameters) in the exoskeleton. Through these windows we used two-photon microscopy to image odor-evoked Ca2+ signaling in projection neuron dendrites of the antennal lobe and Kenyon cells of the mushroom body. The impact of a laser-cut window on fly health appears to be substantially less than that of conventional manual dissection, for our imaging durations of up to 18 h were ∼5–20 times longer than prior in vivo microscopy studies of hand-dissected flies. This improvement will facilitate studies of numerous questions in neuroscience, such as those regarding neuronal plasticity or learning and memory. As a control, we used phototaxis as an exemplary complex behavior in flies and found that laser microsurgery is sufficiently gentle to leave it intact. To demonstrate that our techniques are applicable to other species, we created microsurgical openings in nematodes, ants, and the mouse cranium. In conjunction with emerging robotic methods for handling and mounting flies or other small organisms, our rapid, precisely controllable, and highly repeatable microsurgical techniques should enable automated, high-throughput preparation of live animals for optical experimentation. PMID:24167298
NASA Astrophysics Data System (ADS)
Altsybeyev, V. V.
2016-12-01
The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle-tracking method with so-called gun iteration is used for simulations of the beam dynamics. For the space-charge-limited emission problem, we suggest a Gauss-law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for a cylindrical bipolar diode and for a diode with an elliptical emitter are presented.
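For orientation, the classic closed-form benchmark for space-charge-limited emission is the planar Child-Langmuir law; a Gauss-law emission model generalizes the same idea by adjusting the emitted charge so that the normal field at the emitter surface vanishes. A sketch of the planar limit only, not the paper's curvilinear model:

```python
import numpy as np
from scipy.constants import epsilon_0, elementary_charge, electron_mass

def child_langmuir_j(V, d):
    """Space-charge-limited current density (A/m^2) of a planar diode with
    gap d (m) and voltage V (V): J = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2."""
    return (4.0 * epsilon_0 / 9.0) * np.sqrt(2.0 * elementary_charge / electron_mass) \
           * V**1.5 / d**2

print(f"J = {child_langmuir_j(50e3, 0.01):.3e} A/m^2")  # 50 kV across a 1 cm gap
```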
Fermi Gamma-Ray Space Telescope: Science Highlights for the First 8 Months
NASA Technical Reports Server (NTRS)
Moiseev, Alexander
2010-01-01
The Fermi Gamma-ray Space Telescope was launched on June 11, 2008 and since August 2008 has successfully been conducting routine science observations of high energy phenomena in the gamma-ray sky. A number of exciting discoveries have been made during its first year of operation, including blazar flares, high-energy gamma-ray bursts, and numerous new gamma-ray sources of different types, among them pulsars and Active Galactic Nuclei (AGN). Fermi-LAT also performed accurate measurement of the diffuse gamma radiation, which clarifies the GeV excess reported by EGRET almost 10 years ago, high precision measurement of the high energy electron spectrum, and other observations. An overview of the observatory status and recent results as of April 30, 2009, is presented. Key words: gamma-ray astronomy, cosmic rays, gamma-ray burst, pulsar, blazar, diffuse gamma-radiation
Stach, Thomas; Anselmi, Chiara
2015-12-23
Understanding the evolution of divergent developmental trajectories requires detailed comparisons of embryologies at appropriate levels. Cell lineages, the accurate visualization of cleavage patterns, tissue fate restrictions, and morphogenetic movements that occur during the development of individual embryos are currently available for few disparate animal taxa, encumbering evolutionarily meaningful comparisons. Tunicates, considered to be close relatives of vertebrates, are marine invertebrates whose fossil record dates back to 525 million years ago. Life-history strategies across this subphylum are radically different, and include biphasic ascidians with free swimming larvae and a sessile adult stage, and the holoplanktonic larvaceans. Despite considerable progress, notably on the molecular level, the exact extent of evolutionary conservation and innovation during embryology remains obscure. Here, using the innovative technique of bifocal 4D-microscopy, we demonstrate exactly which characteristics in the cell lineages of the ascidian Phallusia mammillata and the larvacean Oikopleura dioica were conserved and which were altered during evolution. Our accurate cell lineage trees in combination with detailed three-dimensional representations clearly identify conserved correspondence in relative cell position, cell identity, and fate restriction in several lines from all prospective larval tissues. At the same time, we precisely pinpoint differences observable at all levels of development. These differences comprise fate restrictions, tissue types, complex morphogenetic movement patterns, numerous cases of heterochronous acceleration in the larvacean embryo, and differences in bilateral symmetry. Our results demonstrate in extraordinary detail the multitude of developmental levels amenable to evolutionary innovation, including subtle changes in the timing of fate restrictions as well as dramatic alterations in complex morphogenetic movements. We anticipate that the precise spatial and temporal cell lineage data will moreover serve as a high-precision guide to devise experimental investigations of other levels, such as molecular interactions between cells or changes in gene expression underlying the documented structural evolutionary changes. Finally, the quantitative amount of digital high-precision morphological data will enable and necessitate software-based similarity assessments as the basis of homology hypotheses.
Molecular transport through capillaries made with atomic-scale precision
NASA Astrophysics Data System (ADS)
Radha, B.; Esfandiar, A.; Wang, F. C.; Rooney, A. P.; Gopinadhan, K.; Keerthi, A.; Mishchenko, A.; Janardanan, A.; Blake, P.; Fumagalli, L.; Lozada-Hidalgo, M.; Garaj, S.; Haigh, S. J.; Grigorieva, I. V.; Wu, H. A.; Geim, A. K.
2016-10-01
Nanometre-scale pores and capillaries have long been studied because of their importance in many natural phenomena and their use in numerous applications. A more recent development is the ability to fabricate artificial capillaries with nanometre dimensions, which has enabled new research on molecular transport and led to the emergence of nanofluidics. But surface roughness in particular makes it challenging to produce capillaries with precisely controlled dimensions at this spatial scale. Here we report the fabrication of narrow and smooth capillaries through van der Waals assembly, with atomically flat sheets at the top and bottom separated by spacers made of two-dimensional crystals with a precisely controlled number of layers. We use graphene and its multilayers as archetypal two-dimensional materials to demonstrate this technology, which produces structures that can be viewed as if individual atomic planes had been removed from a bulk crystal to leave behind flat voids of a height chosen with atomic-scale precision. Water transport through the channels, ranging in height from one to several dozen atomic planes, is characterized by unexpectedly fast flow (up to 1 metre per second) that we attribute to high capillary pressures (about 1,000 bar) and large slip lengths. For channels that accommodate only a few layers of water, the flow exhibits a marked enhancement that we associate with an increased structural order in nanoconfined water. Our work opens up an avenue to making capillaries and cavities with sizes tunable to ångström precision, and with permeation properties further controlled through a wide choice of atomically flat materials available for channel walls.
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) that operates on position and acceleration data gathered from an electro-hydraulic servo shaking table. It is a batch estimator based on wavelet filter banks, which extract the high- and low-frequency content of the data. The proposed complementary estimator combines two velocity estimates, acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, by feeding a fixed moving-horizon window of data to the wavelet filter. Because it uses wavelet filters, it can be implemented in a parallel procedure. With this method, velocity is estimated without the high noise of numerical differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. The approach allows velocity sensing with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We compare this method with Kalman and Butterworth filters in terms of stability and delay, and benchmark them by integrating the estimated velocity over long periods to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
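The underlying idea of any complementary velocity estimator is to take the low-frequency content from the differentiated position signal and the high-frequency content from the integrated acceleration signal. The sketch below uses a classical first-order time-domain crossover rather than the paper's wavelet filter banks, with illustrative signal parameters:

```python
import numpy as np

def complementary_velocity(pos, acc, dt, tau=0.05):
    """Blend low-frequency velocity from differentiated position with
    high-frequency velocity from integrated acceleration.
    tau is the crossover time constant (s), a stand-in for the wavelet split."""
    alpha = tau / (tau + dt)
    v_est = np.zeros_like(pos)
    v_diff = np.gradient(pos, dt)          # noisy but drift-free reference
    for k in range(1, len(pos)):
        # propagate with acceleration, pull toward the differentiated position
        v_est[k] = alpha * (v_est[k-1] + acc[k] * dt) + (1 - alpha) * v_diff[k]
    return v_est

# toy signal: 2 Hz sinusoidal motion with noisy position and acceleration sensors
dt = 1e-3
t = np.arange(0, 2, dt)
pos = np.sin(2*np.pi*2*t) + 1e-4*np.random.randn(t.size)
acc = -(2*np.pi*2)**2*np.sin(2*np.pi*2*t) + 0.5*np.random.randn(t.size)
v = complementary_velocity(pos, acc, dt)
```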
Spectral/ hp element methods: Recent developments, applications, and perspectives
NASA Astrophysics Data System (ADS)
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C^0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
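The exponential error decay that motivates p-refinement is easy to reproduce in one dimension. A small sketch, using a Chebyshev expansion of a smooth function as a stand-in for the modified-basis expansions used in spectral/hp codes:

```python
import numpy as np

f = lambda x: np.exp(np.sin(2 * x))   # smooth target function on [-1, 1]

for p in (4, 8, 16, 32):
    # interpolate at p+1 Chebyshev nodes and fit a degree-p Chebyshev expansion
    nodes = np.cos(np.pi * (np.arange(p + 1) + 0.5) / (p + 1))
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), p)
    x = np.linspace(-1, 1, 2001)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - f(x)))
    print(f"p = {p:2d}  max error = {err:.2e}")   # error drops exponentially in p
```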
Zhang, Yi; Sun, Weiguo; Fu, Jia; Fan, Qunchao; Ma, Jie; Xiao, Liantuan; Jia, Suotang; Feng, Hao; Li, Huidong
2014-01-03
The algebraic method (AM) proposed by Sun et al. is improved into a variational AM (VAM) to offset possible experimental errors and to adapt to the individual energy-expansion nature of different molecular systems. The VAM is used to study the full vibrational spectra {Eυ} and the dissociation energies De of the (4)HeH(+)-X(1)Σ(+), (7)Li2-1(3)Δg, Na2-C(1)Πu, NaK-7(1)Π, Cs2-B(1)Πu and (79)Br2-β1g((3)P2) diatomic electronic states. The results not only precisely reproduce all known experimental vibrational energies, but also predict correct dissociation energies and all unknown high-lying levels that may not be given by the original AM, other numerical methods, or experiments. The analyses and techniques suggested here might be useful for other numerical simulations and theoretical fits using known data that may carry inevitable errors. Copyright © 2013. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Seeberger, Pia; Vidal, Julien
2017-08-01
Formation entropy of point defects is one of the last crucial elements required to fully describe the temperature dependence of point-defect formation. However, while many attempts have been made to compute it for very complicated systems, few works have assessed the effects of finite-size errors and numerical precision on this quantity. Large discrepancies can be found in the literature for a system as primitive as the silicon vacancy. In this work, we propose a systematic study of the formation entropy of the silicon vacancy in its three stable charge states: neutral, +2 and -2, for supercells no smaller than 432 atoms. A rationalization of the formation entropy is presented, highlighting the importance of finite-size errors and the difficulty of computing such quantities due to the high numerical requirements. It is proposed that the direct calculation of the formation entropy of V_Si using first-principles methods will be plagued by a very high computational workload (or large numerical errors) and finite-size-dependent results.
Tilt angle measurement with a Gaussian-shaped laser beam tracking
NASA Astrophysics Data System (ADS)
Šarbort, Martin; Řeřucha, Šimon; Jedlička, Petr; Lazar, Josef; Číp, Ondrej
2014-05-01
We have addressed the challenge of carrying out the angular tilt stabilization of a laser guiding mirror which is intended to route a laser beam with a high energy density. Such an application requires good angular accuracy as well as a large operating range, long-term stability and absolute positioning. We have designed an instrument for such high-precision angular tilt measurement based on a triangulation method in which a laser beam with a Gaussian profile is reflected off the stabilized mirror and detected by an image sensor. As the angular deflection of the mirror causes a change of the beam spot position, the principal task is to measure the position on the image chip surface. We have employed a numerical analysis of the Gaussian intensity pattern which uses a nonlinear regression algorithm. The feasibility and performance of the method were tested by numerical modeling as well as experimentally. The experimental results indicate that the assembled instrument achieves a measurement error of 0.13 microradian in the range +/-0.65 degrees over a period of one hour. This corresponds to a dynamic range of 1:170 000.
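Sub-pixel spot localization by nonlinear regression can be sketched directly: fit a two-dimensional Gaussian with an offset to the camera frame and read off the fitted centre. The example below uses a synthetic frame and a circular Gaussian model; the instrument's actual model and parameters are not specified here:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, A, x0, y0, w, B):
    """Circular Gaussian beam spot of 1/e^2 radius w plus background B."""
    x, y = xy
    return (A * np.exp(-2.0 * ((x - x0)**2 + (y - y0)**2) / w**2) + B).ravel()

# synthetic camera frame: spot at (61.3, 40.7) px, 1/e^2 radius 12 px, shot noise
x, y = np.meshgrid(np.arange(128), np.arange(96))
img = gauss2d((x, y), 200, 61.3, 40.7, 12.0, 10).reshape(96, 128)
img += np.random.normal(0, 2, img.shape)

p0 = (img.max(), 64, 48, 10, 0)  # rough initial guess for the regression
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
print(f"spot centre: ({popt[1]:.3f}, {popt[2]:.3f}) px")  # sub-pixel estimate
```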
The potamochemical symphony: new progress in the high-frequency acquisition of stream chemical data
NASA Astrophysics Data System (ADS)
Floury, Paul; Gaillardet, Jérôme; Gayer, Eric; Bouchez, Julien; Tallec, Gaëlle; Ansart, Patrick; Koch, Frédéric; Gorge, Caroline; Blanchouin, Arnaud; Roubaty, Jean-Louis
2017-12-01
Our understanding of hydrological and chemical processes at the catchment scale is limited by our capacity to record the full breadth of the information carried by river chemistry, both in terms of sampling frequency and precision. Here, we present a proof-of-concept study of a "lab in the field" called the River Lab (RL), based on the idea of permanently installing a suite of laboratory instruments in the field next to a river. Housed in a small shed, this set of instruments performs analyses at a frequency of one every 40 min for major dissolved species (Na+, K+, Mg2+, Ca2+, Cl-, SO42-, NO3-) through continuous sampling and filtration of the river water using automated ion chromatographs. The RL was deployed in the Orgeval Critical Zone Observatory, France, for over a year of continuous analyses. Results show that the RL is able to capture long-term fine chemical variations with no drift and a precision significantly better than conventionally achieved in the laboratory (up to ±0.5 % for all major species for over a day and up to 1.7 % over 2 months). The RL is able to capture the abrupt changes in dissolved species concentrations during a typical 6-day rain event, as well as daily oscillations during a hydrological low-flow period of summer drought. Using the measured signals as a benchmark, we numerically assess the effects of a lower sampling frequency (typical of conventional field sampling campaigns) and of a lower precision (typically reached in the laboratory) on the hydrochemical signal. The high-resolution, high-precision measurements made possible by the RL open new perspectives for understanding critical zone hydro-bio-geochemical cycles. Finally, the RL also offers a solution for management agencies to monitor water quality in quasi-real time.
Precision comparison of the power spectrum in the EFTofLSS with simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foreman, Simon; Senatore, Leonardo; Perrier, Hideki, E-mail: sfore@stanford.edu, E-mail: senatore@stanford.edu, E-mail: hideki.perrier@unige.ch
2016-05-01
We study the prediction of the dark matter power spectrum at two-loop order in the Effective Field Theory of Large Scale Structures (EFTofLSS) using high precision numerical simulations. In our universe, short distance non-linear fluctuations, not under perturbative control, affect long distance fluctuations through an effective stress tensor that needs to be parametrized in terms of counterterms that are functions of the long distance fluctuating fields. We find that at two-loop order it is necessary to include three counterterms: a linear term in the overdensity, δ, a quadratic term, δ², and a higher derivative term, ∂²δ. After the inclusion of these three terms, the EFTofLSS at two-loop order matches simulation data up to k ≅ 0.34 h Mpc⁻¹ at redshift z = 0, up to k ≅ 0.55 h Mpc⁻¹ at z = 1, and up to k ≅ 1.1 h Mpc⁻¹ at z = 2. At these wavenumbers, the cosmic variance of the simulation is at least as small as 10⁻³, providing for the first time a high precision comparison between theory and data. The actual reach of the theory is affected by theoretical uncertainties associated with not having included higher order terms in perturbation theory, for which we provide an estimate, and by potentially overfitting the data, which we also try to address. Since in the EFTofLSS the coupling constants associated with the counterterms are unknown functions of time, we show how a simple parametrization gives a sensible description of their time dependence. Overall, the k-reach of the EFTofLSS is much larger than that of previous analytical techniques, showing that the amount of cosmological information amenable to high-precision analytical control might be much larger than previously believed.
Contribution from individual nearby sources to the spectrum of high-energy cosmic-ray electrons
NASA Astrophysics Data System (ADS)
Sedrati, R.; Attallah, R.
2014-04-01
In the last few years, very important data on high-energy cosmic-ray electrons and positrons from high-precision space-borne and ground-based experiments have attracted a great deal of interest. These particles represent a unique probe for studying local cosmic-ray accelerators because they lose energy very rapidly. These energy losses reduce the lifetime so drastically that high-energy cosmic-ray electrons can reach the Earth only from rather local astrophysical sources. This work aims at calculating, by means of Monte Carlo simulation, the contribution from some known nearby astrophysical sources to the cosmic-ray electron/positron spectra at high energy (≥ 10 GeV). The background to the electron energy spectrum from distant sources is determined with the help of the GALPROP code. The obtained numerical results are compared with a set of experimental data.
Modeling and FE Simulation of Quenchable High Strength Steels Sheet Metal Hot Forming Process
NASA Astrophysics Data System (ADS)
Liu, Hongsheng; Bao, Jun; Xing, Zhongwen; Zhang, Dejin; Song, Baoyu; Lei, Chengxi
2011-08-01
High strength steel (HSS) sheet metal hot forming is investigated by means of numerical simulations. With regard to a reliable numerical process design, knowledge of the thermal and thermo-mechanical properties is essential. In this article, tensile tests are performed to examine the flow stress of the HSS 22MnB5 at different strains, strain rates, and temperatures. A constitutive model based on a phenomenological approach is developed to describe the thermo-mechanical properties of 22MnB5 by fitting the experimental data. A 2D coupled thermo-mechanical finite element (FE) model is developed to simulate the HSS sheet metal hot forming process for a U-channel part. The ABAQUS/Explicit model is used to conduct the hot forming stage simulations, and the ABAQUS/Implicit model is used for accurately predicting the springback that occurs at the end of the hot forming stage. Material modeling and FE numerical simulations are carried out to investigate the effect of the processing parameters on the hot forming process. The processing parameters have a significant influence on the microstructure of the U-channel part. The springback after the hot forming stage is the main factor impairing the shape precision of the hot-formed part. A mechanism of springback is proposed and verified through numerical simulations and tensile loading-unloading tests. Creep strain is found in the tensile loading-unloading test under isothermal conditions and has a distinct effect on springback. According to the numerical and experimental results, it can be concluded that springback is mainly caused by different cooling rates and the nonhomogeneous shrinkage of the material during the hot forming process, with creep strain being the main factor influencing the amount of springback.
Research on the impact factors of GRACE precise orbit determination by dynamic method
NASA Astrophysics Data System (ADS)
Guo, Nan-nan; Zhou, Xu-hua; Li, Kai; Wu, Bin
2018-07-01
With the successful use of GPS-only-based POD (precise orbit determination), more and more satellites carry onboard GPS receivers to meet their orbit accuracy requirements. GPS provides continuous observations at high precision and has become an indispensable means of obtaining the orbits of LEO satellites. Precise orbit determination of LEO satellites plays an important role in their applications. Numerous factors must be considered in POD processing. In this paper, several factors that affect precise orbit determination are analyzed, namely the satellite altitude, the time-variable Earth gravity field, the GPS satellite clock error and accelerometer observations. The GRACE satellites provide an ideal platform to study the influence of these factors on precise orbit determination using zero-difference GPS data. The effects of these factors on the accuracy of the dynamic orbit are quantitatively analyzed using GRACE observations from 2005 to 2011 with the SHORDE software. The study indicates that: (1) as the altitude of the GRACE satellites decreased from 480 km to 460 km over seven years, the 3D (three-dimensional) position accuracy of the GRACE orbits is about 3-4 cm based on long data spans; (2) the accelerometer data improve the 3D position accuracy of GRACE by about 1 cm; (3) the accuracy of the zero-difference dynamic orbit is about 6 cm with GPS satellite clock error products at a 5 min sampling interval, and can be raised to 4 cm if clock error products with a 30 s sampling interval are adopted; and (4) the time-variable part of the Earth gravity field model improves the 3D position accuracy of GRACE by about 0.5-1.5 cm. Based on this study, we quantitatively analyze the factors that affect precise orbit determination of LEO satellites. This study plays an important role in improving the accuracy of LEO satellite orbit determination.
High-precision numerical integration of equations in dynamics
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
An important requirement for the process of solving differential equations in Dynamics, such as the equations of the motion of celestial bodies and, in particular, the motion of cosmic robotic systems, is high accuracy over large time intervals. One of the most effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
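For a polynomial right-hand side, the Taylor coefficients follow from a simple recurrence, which is what makes the method cheap at high order. A minimal sketch for the harmonic oscillator x' = y, y' = -x, a stand-in for the celestial-mechanics systems discussed, with illustrative step size and order:

```python
import numpy as np

def taylor_step(x, y, h, order=20):
    """One Taylor-series step for x' = y, y' = -x (polynomial right-hand side).
    The coefficient recurrences follow directly from the ODE:
    x_{k+1} = y_k / (k+1),  y_{k+1} = -x_k / (k+1)."""
    X, Y = [x], [y]
    for k in range(order):
        X.append(Y[k] / (k + 1))
        Y.append(-X[k] / (k + 1))
    hp = h ** np.arange(order + 1)     # powers of the step size
    return np.dot(X, hp), np.dot(Y, hp)

x, y, h = 1.0, 0.0, 0.25
for _ in range(400):                   # integrate to t = 100
    x, y = taylor_step(x, y, h)
print(f"error at t=100: {abs(x - np.cos(100.0)):.2e}")
```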
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
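The flavor of these strategies can be sketched on a synthetic scale-free graph: a deterministic degree-ranked baseline against a degree-dependent random selection with a repair pass. The probabilistic rule below is illustrative only and is not necessarily the paper's optimized form:

```python
import random
import networkx as nx

def greedy_degree_ds(G):
    """Deterministic baseline: add nodes in decreasing-degree order until
    every node is dominated (in the set or adjacent to a member)."""
    dominated, ds = set(), set()
    for v in sorted(G, key=G.degree, reverse=True):
        if len(dominated) == G.number_of_nodes():
            break
        if v not in dominated or not (set(G[v]) <= dominated):
            ds.add(v)
            dominated |= {v} | set(G[v])
    return ds

def probabilistic_ds(G, gamma=0.3, seed=0):
    """Degree-dependent random selection followed by a repair pass; the
    selection probability used here is illustrative."""
    rng = random.Random(seed)
    kmax = max(dict(G.degree).values())
    ds = {v for v in G if rng.random() < gamma * G.degree(v) / kmax}
    dominated = set(ds) | {u for v in ds for u in G[v]}
    for v in G:                        # repair: cover any leftover nodes
        if v not in dominated:
            ds.add(v)
            dominated |= {v} | set(G[v])
    return ds

G = nx.barabasi_albert_graph(2000, 3, seed=1)
print(len(greedy_degree_ds(G)), len(probabilistic_ds(G)))
```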
Integrated optics interferometer for high precision displacement measurement
NASA Astrophysics Data System (ADS)
Persegol, Dominique; Collomb, Virginie; Minier, Vincent
2017-11-01
We present the design and fabrication aspects of an integrated optics interferometer used in the optical head of a compact and lightweight displacement sensor developed for spatial applications. The process for fabricating the waveguides of the optical chip is a double thermal ion exchange of silver and sodium in a silicate glass. This two-step process is adapted to the fabrication of high numerical aperture buried waveguides having negligible losses for bending radii as low as 10 mm. The optical head of the sensor is composed of a reference arm, a sensing arm, and an interferometer which generates a one-dimensional fringe pattern allowing multiphase detection. Four waveguides placed at the output of the interferometer deliver four ideally 90° phase-shifted signals.
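With four 90°-shifted outputs, the interferometric phase, and hence the displacement, follows from a quadrature combination that also cancels the common DC offset. A minimal sketch with synthetic signals (the signal amplitudes and offset are illustrative):

```python
import numpy as np

def phase_from_quadrature(s0, s90, s180, s270):
    """Recover the interferometric phase from four 90-degree-shifted outputs.
    Differencing opposite channels cancels the common DC offset."""
    return np.arctan2(s90 - s270, s0 - s180)

# illustrative detector outputs for a displacement sweep
phi = np.linspace(0, 6 * np.pi, 500)               # true optical phase
offset, amp = 1.0, 0.5
outs = [offset + amp * np.cos(phi - k * np.pi / 2) for k in range(4)]
phi_est = np.unwrap(phase_from_quadrature(*outs))  # continuous phase -> displacement
print(f"max phase error: {np.max(np.abs(phi_est - phi)):.2e} rad")
```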
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk
2014-01-20
This paper proposes a technique that can simultaneously retrieve distributions of temperature, concentration of chemical species, and pressure based on broad bandwidth, frequency-agile tomographic absorption spectroscopy. The technique holds particular promise for the study of dynamic combusting flows. A proof-of-concept numerical demonstration is presented, using representative phantoms to model conditions typically prevailing in near-atmospheric or high pressure flames. The simulations reveal both the feasibility of the proposed technique and its robustness. Our calculations indicate precisions of ∼70 K at flame temperatures and ∼0.05 bars at high pressure from reconstructions featuring as much as 5% Gaussian noise in the projections.
Hall, William A; Bergom, Carmen; Thompson, Reid F; Baschnagel, Andrew M; Vijayakumar, Srinivasan; Willers, Henning; Li, X Allen; Schultz, Christopher J; Wilson, George D; West, Catharine M L; Capala, Jacek; Coleman, C Norman; Torres-Roca, Javier F; Weidhaas, Joanne; Feng, Felix Y
2018-06-01
To summarize important talking points from a 2016 symposium focusing on real-world challenges to advancing precision medicine in radiation oncology, and to help radiation oncologists navigate the practical challenges of precision radiation oncology. The American Society for Radiation Oncology, the American Association of Physicists in Medicine, and the National Cancer Institute cosponsored a meeting on precision medicine in radiation oncology. In June 2016, numerous scientists, clinicians, and physicists convened at the National Institutes of Health to discuss challenges and future directions toward personalized radiation therapy. Various breakout sessions were held to discuss particular components and approaches to the implementation of personalized radiation oncology. This article summarizes the genomically guided radiation therapy breakout session. A summary of existing genomic data enabling personalized radiation therapy, ongoing clinical trials, current challenges, and future directions was collected. The group attempted to provide both a current overview of data that radiation oncologists could use to personalize therapy, along with data that are anticipated in the coming years. It seems apparent from the provided review that a considerable opportunity exists to truly bring genomically guided radiation therapy into clinical reality. Genomically guided radiation therapy is a necessity that must be embraced in the coming years. Incorporating these data into treatment recommendations will provide radiation oncologists with a substantial opportunity to improve outcomes for numerous cancer patients. More research focused on this topic is needed to bring genomic signatures into routine standard of care. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10⁵ simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
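The expansion itself is a Neumann series for (A + B)⁻¹ around the analytic part A. A schematic sketch with a toy covariance, where B is estimated from simulated realizations as in the shape-noise example (dimensions, noise levels, and the covariance shapes are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_sims = 20, 400

# A: analytic, noise-free part (e.g. shape noise); B: part estimated from sims
A = np.diag(np.linspace(1.0, 2.0, d))
L = 0.1 * rng.standard_normal((d, d))
B = L @ L.T                                   # "true" simulation-only covariance

sims = rng.multivariate_normal(np.zeros(d), B, size=n_sims)
B_hat = np.cov(sims, rowvar=False)            # noisy sample estimate of B

Ainv = np.linalg.inv(A)                       # exact, since A is analytic
# second-order expansion of (A + B)^{-1} around A, using the estimated B
prec = Ainv - Ainv @ B_hat @ Ainv + Ainv @ B_hat @ Ainv @ B_hat @ Ainv
ref = np.linalg.inv(A + B)
err = np.linalg.norm(prec - ref) / np.linalg.norm(ref)
print(f"relative error of expanded precision matrix: {err:.2e}")
```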
Numerical Simulations of the Digital Microfluidic Manipulation of Single Microparticles.
Lan, Chuanjin; Pal, Souvik; Li, Zhen; Ma, Yanbao
2015-09-08
Single-cell analysis techniques have been developed as a valuable bioanalytical tool for elucidating cellular heterogeneity at genomic, proteomic, and cellular levels. Cell manipulation is an indispensable process for single-cell analysis. Digital microfluidics (DMF) is an important platform for conducting cell manipulation and single-cell analysis in a high-throughput fashion. However, the manipulation of single cells in DMF has not been quantitatively studied so far. In this article, we investigate the interaction of a single microparticle with a liquid droplet on a flat substrate using numerical simulations. The droplet is driven by capillary force generated from the wettability gradient of the substrate. Considering the Brownian motion of microparticles, we utilize many-body dissipative particle dynamics (MDPD), an off-lattice mesoscopic simulation technique, in this numerical study. The manipulation processes (including pickup, transport, and drop-off) of a single microparticle with a liquid droplet are simulated. Parametric studies are conducted to investigate the effects of droplet size, wettability gradient, wetting properties of the microparticle, and particle-substrate friction coefficients on the manipulation processes. The numerical results show that the pickup, transport, and drop-off processes can be precisely controlled by these parameters. On the basis of the numerical results, a trap-free delivery of a hydrophobic microparticle to a destination on the substrate is demonstrated in the numerical simulations. The numerical results not only provide a fundamental understanding of interactions among the microparticle, the droplet, and the substrate but also demonstrate a new technique for the trap-free immobilization of single hydrophobic microparticles in the DMF design. Finally, our numerical method also provides a powerful design and optimization tool for the manipulation of microparticles in DMF systems.
Communication: Analysing kinetic transition networks for rare events.
Stevenson, Jacob D; Wales, David J
2014-07-28
The graph transformation approach is a recently proposed method for computing mean first passage times, rates, and committor probabilities for kinetic transition networks. Here we compare the performance to existing linear algebra methods, focusing on large, sparse networks. We show that graph transformation provides a much more robust framework, succeeding when numerical precision issues cause the other methods to fail completely. These are precisely the situations that correspond to rare event dynamics for which the graph transformation was introduced.
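The graph transformation eliminates intermediate states one by one, renormalizing branching probabilities and waiting times so that mean first passage times are preserved. A minimal sketch for a discrete-time chain; the renormalization rules are the standard ones, while the example network is invented for illustration:

```python
import numpy as np

def gt_mfpt(P, tau, source, target):
    """Mean first passage time source -> target via graph transformation:
    eliminate each intermediate node x, renormalizing the branching
    probabilities and waiting times of the surviving nodes."""
    P, tau = P.astype(float).copy(), tau.astype(float).copy()
    n = len(tau)
    alive = np.ones(n, dtype=bool)
    for x in range(n):
        if x in (source, target):
            continue
        alive[x] = False
        denom = 1.0 - P[x, x]
        idx = np.where(alive)[0]
        for i in idx:
            pix = P[i, x]
            if pix == 0.0:
                continue
            tau[i] += pix * tau[x] / denom          # extra waiting time through x
            P[i, idx] += pix * P[x, idx] / denom    # reroute branching via x
            P[i, x] = 0.0
    return tau[source] / (1.0 - P[source, source])  # absorb remaining self-loop

# three-state chain 0 <-> 1 -> 2 (target absorbing), unit waiting times
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
print(gt_mfpt(P, np.ones(3), source=0, target=2))   # exact answer: 4.0
```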
High-intensity focused ultrasound (HIFU) array system for image-guided ablative therapy (IGAT)
NASA Astrophysics Data System (ADS)
Kaczkowski, Peter J.; Keilman, George W.; Cunitz, Bryan W.; Martin, Roy W.; Vaezy, Shahram; Crum, Lawrence A.
2003-06-01
Recent interest in using High Intensity Focused Ultrasound (HIFU) for surgical applications such as hemostasis and tissue necrosis has stimulated the development of image-guided systems for non-invasive HIFU therapy. Seeking an all-ultrasound therapeutic modality, we have developed a clinical HIFU system comprising an integrated applicator that permits precisely registered HIFU therapy delivery and high quality ultrasound imaging using two separate arrays, a multi-channel signal generator and RF amplifier system, and a software program that provides the clinician with a graphical overlay of the ultrasound image and therapeutic protocol controls. Electronic phasing of a 32 element 2 MHz HIFU annular array allows adjusting the focus within the range of about 4 to 12 cm from the face. A central opening in the HIFU transducer permits mounting a commercial medical imaging scanhead (ATL P7-4) that is held in place within a special housing. This mechanical fixture ensures precise coaxial registration between the HIFU transducer and the image plane of the imaging probe. Recent enhancements include development of an acoustic lens using numerical simulations for use with a 5-element array. Our image-guided therapy system is very flexible and enables exploration of a variety of new HIFU therapy delivery and monitoring approaches in the search for safe, effective, and efficient treatment protocols.
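Electronic focusing of an annular array amounts to firing each ring with a delay that equalizes the travel time to the desired focal point. A small sketch with illustrative geometry (32 rings, an assumed 1540 m/s sound speed; not the device's actual layout):

```python
import numpy as np

C_TISSUE = 1540.0  # m/s, nominal speed of sound in soft tissue (assumption)

def focus_delays(radii, focal_depth):
    """Per-element firing delays that focus an annular array at focal_depth
    on axis: elements with the longest path to the focus fire first."""
    path = np.sqrt(focal_depth**2 + radii**2)   # element-to-focus distance
    return (path.max() - path) / C_TISSUE       # equalize arrival times

radii = np.linspace(0.002, 0.03, 32)            # 32 rings out to 3 cm radius
for F in (0.04, 0.08, 0.12):                    # spanning the 4-12 cm focal range
    d = focus_delays(radii, F)
    print(f"F = {F*100:.0f} cm: delay span {np.ptp(d)*1e6:.2f} us")
```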
Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis
NASA Astrophysics Data System (ADS)
Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro
2017-04-01
The 2011 Great East Japan Earthquake (GEJE) has shown that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating land structures and the damage to them requires highly precise evaluation of three-dimensional fluid motion - an expensive process. Our research goals were thus to develop a STOC-CADMAS system (Arikawa and Tomita, 2016) coupled with structural analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure as hydrostatic and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily covers the domain from the epicenter to the shallow region. STOC-IC solves for pressure based on a Poisson equation to account for shallower, more complex topography, while slightly reducing computation cost by setting the water surface from an equation of continuity; it calculates the area near a port. CS3D solves a Navier-Stokes equation and sets the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR performs the structural analysis, including the geotechnical analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay. Almost all breakwaters were washed away, which was similar to the damage observed at Kamaishi Bay. REFERENCES T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of J.S.C.E., Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.
mr: A C++ library for the matching and running of the Standard Model parameters
NASA Astrophysics Data System (ADS)
Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.
2016-09-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 517613 No. of bytes in distributed program, including test data, etc.: 2358729 Distribution format: tar.gz Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3] Nature of problem: The running parameters of the Standard Model renormalized in the MS bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux and with Mathematica 10 under Mac OS X. Running time: less than 1 second References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arxiv:1110.3397 [cs.MS
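The flavor of the evolution step can be illustrated with the one-loop QCD running of the strong coupling, which mr performs to far higher loop order; the sketch below is a toy version and does not reproduce the library's two-loop matching or three-loop running:

```python
import numpy as np
from scipy.integrate import solve_ivp

def run_alpha_s(alpha_mz, mz=91.1876, mu_end=1.0e4, nf=5):
    """One-loop QCD running of alpha_s from MZ to mu_end (GeV):
    d alpha / d ln(mu^2) = -b0 * alpha^2, with b0 = (33 - 2*nf) / (12*pi)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
    rhs = lambda t, a: -b0 * a**2                 # t = ln(mu^2)
    sol = solve_ivp(rhs, [np.log(mz**2), np.log(mu_end**2)], [alpha_mz],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

print(f"alpha_s(10 TeV) ~ {run_alpha_s(0.1181):.4f}")  # one-loop toy estimate
```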
Analysis of mixing in high-explosive fireballs using small-scale pressurised spheres
NASA Astrophysics Data System (ADS)
Courtiaud, S.; Lecysyn, N.; Damamme, G.; Poinsot, T.; Selle, L.
2018-02-01
After the detonation of an oxygen-deficient homogeneous high explosive, a phase of turbulent combustion, called afterburning, takes place at the interface between the rich detonation products and air. Its modelling is instrumental for the accurate prediction of the performance of these explosives. Because of the high temperature of detonation products, the chemical reactions are mixing-driven. Modelling afterburning thus relies on the precise description of the mixing process inside fireballs. This work presents a joint numerical and experimental study of a non-reacting reduced-scale set-up, which uses the compressed balloon analogy and does not involve the detonation of a high explosive. The set-up produces a flow similar to the one caused by a spherical detonation and allows focusing on the mixing process. The numerical work is composed of 2D and 3D LES simulations of the set-up. It is shown that grid independence can be reached by imposing perturbations at the edge of the fireball. The results compare well with the existing literature and give new insights on the mixing process inside fireballs. In particular, they highlight the fact that the mixing layer development follows an energetic scaling law but remains sensitive to the density ratio between the detonation products and air.
Validity of the "Laplace Swindle" in Calculation of Giant-Planet Gravity Fields
NASA Astrophysics Data System (ADS)
Hubbard, William B.
2014-11-01
Jupiter and Saturn have large rotation-induced distortions, providing an opportunity to constrain interior structure via precise measurement of external gravity. Anticipated high-precision gravity measurements close to the surfaces of Jupiter (Juno spacecraft) and Saturn (Cassini spacecraft), possibly detecting zonal harmonics to J10 and beyond, will place unprecedented requirements on gravitational modeling via the theory of figures (TOF). It is not widely appreciated that the traditional TOF employs a formally nonconvergent expansion attributed to Laplace. This suspect expansion is intimately related to the standard zonal harmonic (J-coefficient) expansion of the external gravity potential. It can be shown (Hubbard, Schubert, Kong, and Zhang: Icarus, in press) that both Jupiter and Saturn are in the domain where Laplace's "swindle" works exactly, or at least as well as necessary. More highly-distorted objects such as rapidly spinning asteroids may not be in this domain, however. I present a numerical test for the validity and precision of TOF via polar "audit points". I extend the audit-point test to objects rotating differentially on cylinders, obtaining zonal harmonics to J20 and beyond. Models with only low-order differential rotation do not exhibit dramatic effects in the shape of the zonal harmonic spectrum. However, a model with Jupiter-like zonal winds exhibits a break in the zonal harmonic spectrum above about J10, and generally follows the more shallow Kaula power rule at higher orders. This confirms an earlier result obtained by a different method (Hubbard: Icarus 137, 357-359, 1999).
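The external potential being tested is the standard zonal-harmonic expansion. A small evaluation sketch; the J values below are approximate Jupiter-like numbers for illustration only:

```python
import numpy as np
from scipy.special import eval_legendre

def zonal_potential(r, theta, gm, a, J):
    """External gravity potential with zonal harmonics J[n] (n starting at 2):
    U = -(GM/r) * (1 - sum_n J_n * (a/r)**n * P_n(cos theta))."""
    u = 1.0
    for n, Jn in enumerate(J, start=2):
        u -= Jn * (a / r)**n * eval_legendre(n, np.cos(theta))
    return -gm / r * u

# illustrative Jupiter-like numbers (GM in km^3/s^2, a = equatorial radius in km)
GM, a = 1.26687e8, 71492.0
J = [1.4697e-2, 0.0, -5.87e-4]          # approximate J2, J3, J4
print(zonal_potential(1.1 * a, np.pi / 2, GM, a, J))   # equatorial point at 1.1 a
```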
Precise Orbit Determination Of Low Earth Satellites At AIUB Using GPS And SLR Data
NASA Astrophysics Data System (ADS)
Jaggi, A.; Bock, H.; Thaller, D.; Sosnica, K.; Meyer, U.; Baumann, C.; Dach, R.
2013-12-01
An ever increasing number of low Earth orbiting (LEO) satellites is, or will be, equipped with retro-reflectors for Satellite Laser Ranging (SLR) and on-board receivers to collect observations from Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the Russian GLONASS and the European Galileo systems in the future. At the Astronomical Institute of the University of Bern (AIUB) LEO precise orbit determination (POD) using either GPS or SLR data is performed for a wide range of applications for satellites at different altitudes. For this purpose the classical numerical integration techniques, as also used for dynamic orbit determination of satellites at high altitudes, are extended by pseudo-stochastic orbit modeling techniques to efficiently cope with potential force model deficiencies for satellites at low altitudes. Accuracies of better than 2 cm may be achieved by pseudo-stochastic orbit modeling for satellites at very low altitudes such as for the GPS-based POD of the Gravity field and steady-state Ocean Circulation Explorer (GOCE).
NASA Astrophysics Data System (ADS)
Baldi, Alfonso; Jacquot, Pierre
2003-05-01
Graphite-epoxy laminates are subjected to the "incremental hole-drilling" technique in order to investigate the residual stresses acting within each layer of the composite samples. In-plane speckle interferometry is used to measure the displacement field created by each drilling increment around the hole. Our approach features two particularities: (1) we rely on the precise repositioning of the samples in the optical set-up after each new boring step, performed by means of a high-precision, numerically controlled milling machine in the workshop; (2) for each increment, we acquire three displacement fields, along the length, along the width of the samples, and at 45°, using a single symmetrical double-beam illumination and a rotary stage holding the specimens. The experimental protocol is described in detail and the experimental results are presented, including a comparison with strain gages. Speckle interferometry appears to be a suitable method to respond to the increasing demand for residual stress determination in composite samples.
Kinematics of a New High Precision Three Degree-of-Freedom Parallel Manipulator
NASA Technical Reports Server (NTRS)
Tahmasebi, Farhad
2005-01-01
Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three-DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most sixteen assembly configurations for the manipulator. In addition, it is shown that the sixteen solutions are eight pairs of configurations reflected with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.
Performance Analysis for the New g-2 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stratakis, Diktys; Convery, Mary; Crmkovic, J.
2016-06-01
The new g-2 experiment at Fermilab aims to measure the muon anomalous magnetic moment to a precision of ±0.14 ppm - a fourfold improvement over the 0.54 ppm precision obtained in the g-2 BNL E821 experiment. Achieving this goal requires the delivery of highly polarized 3.094 GeV/c muons with a narrow ±0.5% Δp/p acceptance to the g-2 storage ring. In this study, we describe a muon capture and transport scheme that should meet this requirement. First, we present the conceptual design of our proposed scheme wherein we describe its basic features. Then, we detail its performance numerically by simulating the pion production in the (g-2) production target, the muon collection by the downstream beamline optics, as well as the beam polarization and spin-momentum correlation up to the storage ring. The sensitivity of the performance of our proposed channel to key parameters such as magnet apertures and magnet positioning errors is analyzed.
Precise on-machine extraction of the surface normal vector using an eddy current sensor array
NASA Astrophysics Data System (ADS)
Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun
2016-11-01
To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on the measurement results, and of the relative positions of the EC sensors, is included. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into a computer numerical control (CNC) machine tool spindle and/or a robot end-effector. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted on specified test pieces, such as an inclined plane and cylindrical and spherical surfaces, using the developed approach and system.
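With a triangular sensor layout, the core geometric step is simple: the three stand-off readings define three surface points in the probe frame, and the plane through them gives the local normal. A minimal sketch with an invented sensor geometry; the calibration of inclination and coupling effects, central to the paper, is omitted:

```python
import numpy as np

def surface_normal(sensor_xy, gaps):
    """Estimate the local surface normal from three eddy-current gap readings.
    sensor_xy: (3, 2) in-plane sensor positions on the probe; gaps: (3,)
    measured stand-off distances. The three surface points define a plane."""
    pts = np.column_stack([sensor_xy, -np.asarray(gaps)])  # points in probe frame
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    n /= np.linalg.norm(n)
    return n if n[2] > 0 else -n       # orient the normal toward the probe

# equilateral sensor triangle of 20 mm circumradius (illustrative geometry)
ang = np.deg2rad([90, 210, 330])
xy = 0.02 * np.column_stack([np.cos(ang), np.sin(ang)])
print(surface_normal(xy, gaps=[0.0010, 0.0012, 0.0011]))
```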
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
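The processing chain - PCA as a pseudo-linear filter followed by an ANN position map - can be sketched with a toy forward model in place of real magnetic-field measurements; the sensor response and geometry below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# toy forward model: 9 sensor outputs as smooth nonlinear functions of position x
x = np.linspace(0.0, 0.1, 600)[:, None]                  # 10 cm travel range
centers = np.linspace(0.0, 0.1, 9)                       # assumed sensor locations
fields = 1.0 / (1.0 + ((x - centers) / 0.02)**2)         # (600, 9) array response
fields += 0.01 * rng.standard_normal(fields.shape)       # measurement noise

# PCA reduces the 9-dimensional sensor space before the ANN position mapping
model = make_pipeline(PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                   random_state=0))
model.fit(fields, x.ravel())
rmse = np.sqrt(np.mean((model.predict(fields) - x.ravel())**2))
print(f"training RMSE: {rmse*1e3:.3f} mm")
```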
Theory of chaotic orbital variations confirmed by Cretaceous geological evidence
NASA Astrophysics Data System (ADS)
Ma, Chao; Meyers, Stephen R.; Sageman, Bradley B.
2017-02-01
Variations in the Earth’s orbit and spin vector are a primary control on insolation and climate; their recognition in the geological record has revolutionized our understanding of palaeoclimate dynamics, and has catalysed improvements in the accuracy and precision of the geological timescale. Yet the secular evolution of the planetary orbits beyond 50 million years ago remains highly uncertain, and the chaotic dynamical nature of the Solar System predicted by theoretical models has yet to be rigorously confirmed by well constrained (radioisotopically calibrated and anchored) geological data. Here we present geological evidence for a chaotic resonance transition associated with interactions between the orbits of Mars and the Earth, using an integrated radioisotopic and astronomical timescale from the Cretaceous Western Interior Basin of what is now North America. This analysis confirms the predicted chaotic dynamical behaviour of the Solar System, and provides a constraint for refining numerical solutions for insolation, which will enable a more precise and accurate geological timescale to be produced.
Representation of numerical magnitude in math-anxious individuals.
Colomé, Àngels
2018-01-01
Larger distance effects in high math-anxious individuals (HMA) performing comparison tasks have previously been interpreted as indicating less precise magnitude representation in this population. A recent study by Dietrich, Huber, Moeller, and Klein limited the effects of math anxiety to symbolic comparison, in which they found larger distance effects for HMA, despite equivalent size effects. However, the question of whether distance effects in symbolic comparison reflect the properties of the magnitude representation or decisional processes is currently under debate. This study was designed to further explore the relation between math anxiety and magnitude representation through three different tasks. HMA and low math-anxious individuals (LMA) performed a non-symbolic comparison, in which no group differences were found. Furthermore, we did not replicate previous findings in an Arabic digit comparison, in which HMA individuals showed equivalent distance effects to their LMA peers. Lastly, there were no group differences in a counting Stroop task. Altogether, an explanation of math anxiety differences in terms of less precise magnitude representation is not supported.
NASA Astrophysics Data System (ADS)
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated-mean-squared-error prediction algorithm while showing the same level of prediction accuracy.
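A generic fuzzy c-means pass conveys the idea of such an indicator: points with ambiguous membership lie between clusters and can mark regions where the surrogate is poorly supported. A sketch under that reading; the paper's actual indicator and clustering inputs may differ:

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns membership matrix U (n, c) and centers.
    Low maximum membership flags points far from all existing clusters."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random fuzzy memberships
    for _ in range(iters):
        W = U**m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :])**(2/(m-1)), axis=2)
    return U, centers

# candidate sampling points in a 2-D parameter space (illustrative)
X = np.random.default_rng(1).uniform(size=(200, 2))
U, centers = fuzzy_cmeans(X)
ambiguity = 1.0 - U.max(axis=1)          # high value -> point between clusters
print(X[np.argmax(ambiguity)])           # a natural candidate for an added sample
```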
NASA Technical Reports Server (NTRS)
Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg
2011-01-01
The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.
Theory of chaotic orbital variations confirmed by Cretaceous geological evidence.
Ma, Chao; Meyers, Stephen R; Sageman, Bradley B
2017-02-22
Variations in the Earth's orbit and spin vector are a primary control on insolation and climate; their recognition in the geological record has revolutionized our understanding of palaeoclimate dynamics, and has catalysed improvements in the accuracy and precision of the geological timescale. Yet the secular evolution of the planetary orbits beyond 50 million years ago remains highly uncertain, and the chaotic dynamical nature of the Solar System predicted by theoretical models has yet to be rigorously confirmed by well constrained (radioisotopically calibrated and anchored) geological data. Here we present geological evidence for a chaotic resonance transition associated with interactions between the orbits of Mars and the Earth, using an integrated radioisotopic and astronomical timescale from the Cretaceous Western Interior Basin of what is now North America. This analysis confirms the predicted chaotic dynamical behaviour of the Solar System, and provides a constraint for refining numerical solutions for insolation, which will enable a more precise and accurate geological timescale to be produced.
MassTRIX: mass translator into pathways.
Suhre, Karsten; Schmitt-Kopplin, Philippe
2008-07-01
Recent technical advances in mass spectrometry (MS) have brought the field of metabolomics to a point where large numbers of metabolites from numerous prokaryotic and eukaryotic organisms can now be easily and precisely detected. The challenge today lies in the correct annotation of these metabolites on the basis of their accurate measured masses. Assignment of bulk chemical formula is generally possible, but without consideration of the biological and genomic context, concrete metabolite annotations remain difficult and uncertain. MassTRIX responds to this challenge by providing a hypothesis-driven approach to high precision MS data annotation. It presents the identified chemical compounds in their genomic context as differentially colored objects on KEGG pathway maps. Information on gene transcription or differences in the gene complement (e.g. samples from different bacterial strains) can be easily added. The user can thus interpret the metabolic state of the organism in the context of its potential and, in the case of submitted transcriptomics data, real enzymatic capacities. The MassTRIX web server is freely accessible at http://masstrix.org.
Chowdhury, M A K; Sharif Ullah, A M M; Anwar, Saqib
2017-09-12
Ti6Al4V alloys are difficult-to-cut materials that have extensive applications in the automotive and aerospace industries. A great deal of effort has been made to develop and improve the machining operations of Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high precision holes in a workpiece made of Ti6Al4V alloy using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design of experiments procedure. The effects of the machining conditions on each performance parameter have been determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty, and is effective in quantifying the uncertainty underlying physical quantities when there is a limited number of data points, which is the case here. Lastly, the optimal machining conditions have been identified using these possibility distributions.
[A quick algorithm of dynamic spectrum photoelectric pulse wave detection based on LabVIEW].
Lin, Ling; Li, Na; Li, Gang
2010-02-01
Dynamic spectrum (DS) detection is attractive among the numerous noninvasive blood component detection methods because it eliminates the main interference from individual discrepancies and measurement conditions. DS is a kind of spectrum extracted from the photoelectric pulse wave and closely related to arterial blood. It can be used for noninvasive examination of blood component concentrations. The key issues in DS detection are high detection precision and high operation speed. The measurement precision can be improved by applying over-sampling and lock-in amplification to the pick-up of the photoelectric pulse wave in DS detection. In the present paper, the theoretical expression for the over-sampling and lock-in amplification method is deduced first. Then, in order to overcome the problems of the large data volume and heavy computation load brought by this technique, a quick algorithm based on LabVIEW, together with a method of calling external C code, is presented for the pick-up of the photoelectric pulse wave. Experimental verification was conducted in the LabVIEW environment. The results show that the presented method greatly increases the operation speed and largely reduces the required data memory.
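To illustrate the lock-in amplification idea used in the pick-up stage, here is a generic single-frequency digital sketch; it is not the paper's LabVIEW implementation, and the sampling rate and reference frequency are made-up values:

```python
import numpy as np

def lock_in(signal, fs, f_ref):
    """Recover the amplitude of the component of `signal` at f_ref (Hz).

    Multiplying by quadrature references and averaging rejects noise at
    other frequencies; over-sampling (high fs, long record) sharpens this.
    """
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase component
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature component
    return 2.0 * np.hypot(i, q)

fs, f0 = 10_000.0, 50.0                  # sample rate and reference (made up)
t = np.arange(10_000) / fs               # 1 s record, integer number of periods
sig = 0.2 * np.cos(2 * np.pi * f0 * t + 0.3) + np.random.normal(0.0, 1.0, t.size)
print(lock_in(sig, fs, f0))              # ~0.2 recovered despite heavy noise
```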
Prompt and Precise Prototyping
NASA Technical Reports Server (NTRS)
2003-01-01
For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.
Parallel high-precision orbit propagation using the modified Picard-Chebyshev method
NASA Astrophysics Data System (ADS)
Koblick, Darin C.
2012-03-01
The modified Picard-Chebyshev method, when run in parallel, is thought to be more accurate and faster than the most efficient sequential numerical integration techniques when applied to orbit propagation problems. Previous experiments have shown that the modified Picard-Chebyshev method can have up to a one-order-of-magnitude speedup over the 12
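As background on the method's structure: in Picard iteration, every node of the discretized trajectory is rebuilt from the previous iterate independently of the others, which is what makes the method parallelizable. The toy sketch below uses a uniform grid and the trapezoid rule as illustrative stand-ins for the actual Chebyshev nodes and quadrature:

```python
import numpy as np

def picard(f, x0, t, iters=20):
    """Toy Picard iteration for x' = f(t, x), x(t[0]) = x0, on a fixed grid.

    Each sweep rebuilds the whole trajectory from the previous iterate, so
    all nodes can be updated in parallel (the property the modified
    Picard-Chebyshev method exploits with Chebyshev machinery instead of
    the uniform grid and trapezoid rule used here).
    """
    x = np.full_like(t, x0, dtype=float)
    for _ in range(iters):
        g = f(t, x)
        # cumulative trapezoid: integral of f from t[0] to every node
        x = x0 + np.concatenate(([0.0],
                np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
    return x

t = np.linspace(0.0, 1.0, 101)
x = picard(lambda t, x: x, 1.0, t)   # x' = x  ->  x(1) = e
print(abs(x[-1] - np.e))             # ~2e-5, limited by the trapezoid rule
```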
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2012-10-01
A subroutine for a very-high-precision numerical solution of a class of ordinary differential equations is provided. For a given evaluation point and equation parameters the memory requirement scales linearly with precision P, and the number of algebraic operations scales roughly linearly with P when P becomes sufficiently large. We discuss results from extensive tests of the code, and how one, for a given evaluation point and equation parameters, may estimate precision loss and computing time in advance. Program summary: Program title: seriesSolveOde1. Catalogue identifier: AEMW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 991. No. of bytes in distributed program, including test data, etc.: 488116. Distribution format: tar.gz. Programming language: C++. Computer: PCs or higher performance computers. Operating system: Linux and MacOS. RAM: Few to many megabytes (problem dependent). Classification: 2.7, 4.3. External routines: CLN — Class Library for Numbers [1] built with the GNU MP library [2], and GSL — GNU Scientific Library [3] (only for time measurements). Nature of problem: The differential equation $-s^2\left(\frac{d^2}{dz^2}+\frac{1-\nu_+-\nu_-}{z}\frac{d}{dz}+\frac{\nu_+\nu_-}{z^2}\right)\psi(z)+\frac{1}{z}\sum_{n=0}^{N}v_n z^n\,\psi(z)=0$ is solved numerically to very high precision. The evaluation point z and some or all of the equation parameters may be complex numbers; some or all of them may be represented exactly in terms of rational numbers. Solution method: The solution $\psi(z)$, and optionally $\psi'(z)$, is evaluated at the point z by executing the recursion $A_{m+1}(z)=\frac{s^{-2}}{(m+1+\nu-\nu_+)(m+1+\nu-\nu_-)}\sum_{n=0}^{N}V_n(z)\,A_{m-n}(z)$, $\psi_{m+1}(z)=\psi_m(z)+A_{m+1}(z)$, to sufficiently large m. Here $\nu$ is either $\nu_+$ or $\nu_-$, and $V_n(z)=v_n z^{n+1}$. The recursion is initialized by $A_{-n}(z)=\delta_{n,0}\,z^{\nu}$ for $n=0,1,\ldots,N$; $\psi_0(z)=A_0(z)$. Restrictions: No solution is computed if $z=0$, or $s=0$, or if $\nu=\nu_-$ (assuming $\mathrm{Re}\,\nu_+\geq\mathrm{Re}\,\nu_-$) with $\nu_+-\nu_-$ an integer, except when $\nu_+-\nu_-=1$ and $v_0=0$ (i.e. when $z=0$ is an ordinary point of $z\psi(z)$). Additional comments: The code of the main algorithm is in the file seriesSolveOde1.cc, which "#include"s the file checkForBreakOde1.cc. These routines, and the programs using them, must "#include" the file seriesSolveOde1.cc. Running time: On a Linux PC that is a few years old, evaluating the ground state wavefunction of the anharmonic oscillator (with the eigenvalue known in advance; cf. Eq. (6)) at $y=\sqrt{10}$ takes about 2 ms to an accuracy of P=200 decimal digits, and about 40 min at an accuracy of P=100000 decimal digits. References: [1] B. Haible and R.B. Kreckel, CLN — Class Library for Numbers, http://www.ginac.de/CLN/ [2] T. Granlund and collaborators, GMP — The GNU Multiple Precision Arithmetic Library, http://gmplib.org/ [3] M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078, http://www.gnu.org/software/gsl/
Recent progress in translational cystic fibrosis research using precision medicine strategies.
Cholon, Deborah M; Gentzsch, Martina
2018-03-01
Significant progress has been achieved in developing precision therapies for cystic fibrosis; however, highly effective treatments that target the ion channel, CFTR, are not yet available for many patients. As numerous CFTR therapeutics are currently in the clinical pipeline, reliable screening tools capable of predicting drug efficacy to support individualized treatment plans and translational research are essential. The utilization of bronchial, nasal, and rectal tissues from individual cystic fibrosis patients for drug testing using in vitro assays such as electrophysiological measurements of CFTR activity and evaluation of fluid movement in spheroid cultures, has advanced the prediction of patient-specific responses. However, for precise prediction of drug effects, in vitro models of CFTR rescue should incorporate the inflamed cystic fibrosis airway environment and mimic the complex tissue structures of airway epithelia. Furthermore, novel assays that monitor other aspects of successful CFTR rescue such as restoration of mucus characteristics, which is important for predicting mucociliary clearance, will allow for better prognoses of successful therapies in vivo. Additional cystic fibrosis treatment strategies are being intensively explored, such as development of drugs that target other ion channels, and novel technologies including pluripotent stem cells, gene therapy, and gene editing. The multiple therapeutic approaches available to treat the basic defect in cystic fibrosis combined with relevant precision medicine models provide a framework for identifying optimal and sustained treatments that will benefit all cystic fibrosis patients. Copyright © 2017 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Dieriam, Todd A.
1990-01-01
Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and guidance necessary for a vehicle with a lift-to-drag ratio of 0.5 to 1.5 to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift-to-drag ratio and low atmospheric density at Mars results in a large phugoid motion involving the dynamic pressure, which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and an analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and the results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.
Differential porosimetry and permeametry for random porous media.
Hilfer, R; Lemmer, A
2015-07-01
Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
NASA Astrophysics Data System (ADS)
Dossmann, Yvan; Paci, Alexandre; Auclair, Francis; Floor, Jochem
2010-05-01
Internal tides are suggested to play a major role in sustaining the global oceanic circulation [1][5]. Although the exact origin of the energy conversions occurring in stratified fluids is questioned [2], it is clear that the diapycnal energy transfers provided by the energy cascade of internal gravity waves generated at tidal frequencies in regions of steep bathymetry are strongly linked to the general circulation energy balance. Therefore a precise quantification of the energy supply by internal waves is a crucial step in forecasting climate, since it improves our understanding of the underlying physical processes. We focus on an academic case of internal waves generated over an oceanic ridge in a linearly stratified fluid. In order to accurately quantify the diapycnal energy transfers caused by internal wave dynamics, we adopt a complementary approach involving both laboratory and numerical experiments. The laboratory experiments are conducted in a 4 m long tank of the CNRM-GAME fluid mechanics laboratory, well known for its large stratified water flume (e.g. Knigge et al. [3]). The horizontal oscillation, at precisely controlled frequency, of a Gaussian ridge immersed in a linearly stratified fluid generates internal gravity waves. The ridge, of e-folding width 3.6 cm, is 10 cm high and spans 50 cm. We use PIV and synthetic Schlieren measurement techniques to retrieve the high resolution velocity and stratification anomaly fields in the 2D vertical plane across the ridge. These experiments give us access to real and exhaustive measurements of a wide range of internal wave regimes by varying the precisely controlled experimental parameters. To complete this work, we carry out direct numerical simulations with the same parameters (forcing amplitude and frequency, initial stratification, boundary conditions) as the laboratory experiments. The model used is a non-hydrostatic version of the numerical model Symphonie [4]. Our purpose is not only to test the dynamics and energetics of the numerical model, but also to advance the analysis based on combined wavelet and empirical orthogonal function methods. In particular, we focus on the study of the transient regime of internal wave generation near the ridge. Our analyses of the experimental fields show that, for fixed background stratification and topography, the evolution of the stratification anomaly strongly depends on the forcing frequency. The duration of the transient regime, as well as the amplitude reached in the stationary state, vary significantly with the parameter ω/N (where ω is the forcing frequency and N is the background Brunt-Väisälä frequency). We also observe that, for particular forcing frequencies, for which the ridge slope matches the critical slope of the first harmonic mode, internal waves are excited both at the fundamental and the first harmonic frequency. Associated energy transfers are finally evaluated both experimentally and numerically, enabling us to highlight the similarities and discrepancies between the laboratory experiments and the numerical simulations. References: [1] Munk W. and C. Wunsch (1998): Abyssal recipes II: energetics of tidal and wind mixing, Deep-Sea Res. 45, 1977-2010. [2] Tailleux R. (2009): On the energetics of stratified turbulent mixing, irreversible thermodynamics, Boussinesq models and the ocean heat engine controversy, J. Fluid Mech. 638, 339-382. [3] Knigge C., D. Etling, A. Paci and O. Eiff (2010): Laboratory experiments on mountain-induced rotors, Quarterly Journal of the Royal Meteorological Society, in press. [4] Auclair F., C. Estournel, J. Floor, C. N'Guyen and P. Marsaleix (2009): A non-hydrostatic, energy conserving algorithm for regional ocean modelling. Under revision. [5] Wunsch, C. & R. Ferrari (2004): Vertical mixing, energy and the general circulation of the oceans. Annu. Rev. Fluid Mech., 36:281-314.
Engineering of Machine tool’s High-precision electric drives
NASA Astrophysics Data System (ADS)
Khayatov, E. S.; Korzhavin, M. E.; Naumovich, N. I.
2018-03-01
The article shows that in mechanisms with numerical program control, high process quality can be achieved only in systems that adjust the working element's position with high accuracy, and this requires extending the torque regulation range. In particular, the use of synchronous reluctance machines with independent excitation control makes it possible to substantially increase the torque overload capability in the sequential excitation circuit. Using mathematical and physical modeling methods, it is shown that in an electric drive with a synchronous reluctance machine with independent excitation in a circuit with sequential excitation, the torque regulation range can be significantly extended. This is achieved by the effect of sequential excitation, which makes it possible to compensate for the transverse armature reaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Dae Jung; Lee, Dong-Hun; Kim, Kihong
We study theoretically the linear mode conversion between electromagnetic waves and Langmuir waves in warm, stratified, and unmagnetized plasmas, using a numerically precise calculation based on the invariant imbedding method. We verify that the principle of reciprocity for the forward and backward mode conversion coefficients holds precisely regardless of temperature. We also find that the temperature dependence of the mode conversion coefficient is substantially stronger than that previously reported. Depending on the wave frequency and the incident angle, the mode conversion coefficient is found to increase or decrease with the increase of temperature.
Floating point arithmetic in future supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.
1989-01-01
Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
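To make the recommended format concrete, the sketch below (a modern Python illustration, obviously not code from this report) unpacks the one sign bit, 11 exponent bits, and 52 mantissa bits of a 64-bit IEEE 754 double:

```python
import struct

def decompose_double(x: float):
    """Split a 64-bit IEEE 754 double into sign, unbiased exponent, mantissa."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern
    sign = bits >> 63                    # 1 sign bit
    exponent = (bits >> 52) & 0x7FF      # 11 exponent bits, bias 1023
    mantissa = bits & ((1 << 52) - 1)    # 52 fraction (mantissa) bits
    return sign, exponent - 1023, mantissa

# 1.5 = +1.1 (binary) * 2^0: sign 0, exponent 0, fraction 0.5 -> 2**51
print(decompose_double(1.5))  # (0, 0, 2251799813685248)
```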
NASA Astrophysics Data System (ADS)
Fantino, E.; Casotto, S.
2009-07-01
Four widely used algorithms for the computation of the Earth’s gravitational potential and its first-, second- and third-order gradients are examined: the traditional increasing degree recursion in associated Legendre functions and its variant based on the Clenshaw summation, plus the methods of Pines and Cunningham-Metris, which are free from the singularities that distinguish the first two methods at the geographic poles. All four methods are reorganized with the lumped coefficients approach, which in the cases of Pines and Cunningham-Metris requires a complete revision of the algorithms. The characteristics of the four methods are studied and described, and numerical tests are performed to assess and compare their precision, accuracy, and efficiency. In general the performance levels of all four codes exhibit large improvements over previously published versions. From the point of view of numerical precision, away from the geographic poles Clenshaw and Legendre offer an overall better quality. Furthermore, Pines and Cunningham-Metris are affected by an intrinsic loss of precision at the equator and suffer from additional deterioration when the gravity gradients components are rotated into the East-North-Up topocentric reference system.
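For orientation, here is a minimal sketch of the first of the four algorithms, the increasing-degree recursion for associated Legendre functions; the functions are unnormalized and Condon-Shortley-free here, and the published codes additionally apply full normalization and the lumped-coefficients reorganization, which this sketch omits:

```python
import numpy as np

def assoc_legendre(nmax, t):
    """Unnormalized associated Legendre functions P[n, m] at t = cos(theta),
    computed with the traditional increasing-degree recursion."""
    u = np.sqrt(1.0 - t * t)                  # sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    for m in range(1, nmax + 1):              # sectorial terms P[m, m]
        P[m, m] = (2 * m - 1) * u * P[m - 1, m - 1]
    for m in range(nmax):                     # first off-diagonal P[m+1, m]
        P[m + 1, m] = (2 * m + 1) * t * P[m, m]
    for m in range(nmax + 1):                 # increasing-degree recursion
        for n in range(m + 2, nmax + 1):
            P[n, m] = ((2 * n - 1) * t * P[n - 1, m]
                       - (n + m - 1) * P[n - 2, m]) / (n - m)
    return P

# sanity check: P[2, 0] should equal (3 t^2 - 1) / 2
print(assoc_legendre(2, 0.5)[2, 0], (3 * 0.25 - 1) / 2)
```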
Clamping characteristics study on different types of clamping unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiao, Zhiwei; Liu, Haichao; Xie, Pengcheng
2015-05-22
Plastic products are becoming more and more widely used in aerospace, IT, digital electronics and many other fields. With the development of technology, the requirement for product precision is getting higher and higher, and the type and working performance of the clamping unit play a decisive role in product precision. The clamping characteristics of different types of clamping unit are discussed in this article, using the finite element analysis software ABAQUS to study the clamping uniformity and to examine the repeatability precision of the clamping force. The results show that, compared with the toggled three-platen clamping unit, the clamping characteristics of the internal-circulation two-platen clamping unit are better: its mold cavity deformation and the forces on the bars and mold parting surface are more uniform, and its clamping uniformity and repeatability precision are also better.
Boundary regularized integral equation formulation of the Helmholtz equation in acoustics.
Sun, Qiang; Klaseboer, Evert; Khoo, Boo-Cheong; Chan, Derek Y C
2015-01-01
A boundary integral formulation for the solution of the Helmholtz equation is developed in which all traditional singular behaviour in the boundary integrals is removed analytically. The numerical precision of this approach is illustrated with calculation of the pressure field owing to radiating bodies in acoustic wave problems. This method facilitates the use of higher order surface elements to represent boundaries, resulting in a significant reduction in the problem size with improved precision. Problems with extreme geometric aspect ratios can also be handled without diminished precision. When combined with the CHIEF method, uniqueness of the solution of the exterior acoustic problem is assured without the need to solve hypersingular integrals.
High precision analytical description of the allowed β spectrum shape
NASA Astrophysics Data System (ADS)
Hayen, Leendert; Severijns, Nathal; Bodek, Kazimierz; Rozpedzik, Dagmara; Mougeot, Xavier
2018-01-01
A fully analytical description of the allowed β spectrum shape is given in view of ongoing and planned measurements. Its study forms an invaluable tool in the search for physics beyond the standard electroweak model and the weak magnetism recoil term. Contributions stemming from finite size corrections, mass effects, and radiative corrections are reviewed. Particular focus is placed on atomic and chemical effects, where the existing description is extended and analytically provided. The effects of QCD-induced recoil terms are discussed, and cross-checks were performed for different theoretical formalisms. Special attention was given to a comparison of the treatment of nuclear structure effects in different formalisms. Corrections were derived for both Fermi and Gamow-Teller transitions, and methods of analytical evaluation thoroughly discussed. In its integrated form, calculated f values were in agreement with the most precise numerical results within the aimed-for precision. The need for an accurate evaluation of weak magnetism contributions was stressed, and the possible significance of the oft-neglected induced pseudoscalar interaction was noted. Together with improved atomic corrections, an analytical description was presented of the allowed β spectrum shape accurate to a few parts in 10⁻⁴ down to 1 keV for low to medium Z nuclei, thereby extending the work by previous authors by nearly an order of magnitude.
Cosmological neutrino simulations at extreme scale
Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...
2017-08-01
Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world's largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
Precision control of recombinant gene transcription for CHO cell synthetic biology.
Brown, Adam J; James, David C
2016-01-01
The next generation of mammalian cell factories for biopharmaceutical production will be genetically engineered to possess both generic and product-specific manufacturing capabilities that may not exist naturally. Introduction of entirely new combinations of synthetic functions (e.g. novel metabolic or stress-response pathways), and retro-engineering of existing functional cell modules will drive disruptive change in cellular manufacturing performance. However, before we can apply the core concepts underpinning synthetic biology (design, build, test) to CHO cell engineering we must first develop practical and robust enabling technologies. Fundamentally, we will require the ability to precisely control the relative stoichiometry of numerous functional components we simultaneously introduce into the host cell factory. In this review we discuss how this can be achieved by design of engineered promoters that enable concerted control of recombinant gene transcription. We describe the specific mechanisms of transcriptional regulation that affect promoter function during bioproduction processes, and detail the highly-specific promoter design criteria that are required in the context of CHO cell engineering. The relative applicability of diverse promoter development strategies are discussed, including re-engineering of natural sequences, design of synthetic transcription factor-based systems, and construction of synthetic promoters. This review highlights the potential of promoter engineering to achieve precision transcriptional control for CHO cell synthetic biology. Copyright © 2015. Published by Elsevier Inc.
Engineering the Mechanical Properties of Polymer Networks with Precise Doping of Primary Defects.
Chan, Doreen; Ding, Yichuan; Dauskardt, Reinhold H; Appel, Eric A
2017-12-06
Polymer networks are extensively utilized across numerous applications ranging from commodity superabsorbent polymers and coatings to high-performance microelectronics and biomaterials. For many applications, desirable properties are known; however, achieving them has been challenging. Additionally, the accurate prediction of elastic modulus has been a long-standing difficulty owing to the presence of loops. By tuning the prepolymer formulation through precise doping of monomers, specific primary network defects can be programmed into an elastomeric scaffold, without alteration of their resulting chemistry. The addition of these monomers that respond mechanically as primary defects is used both to understand their impact on the resulting mechanical properties of the materials and as a method to engineer the mechanical properties. Indeed, these materials exhibit identical bulk and surface chemistry, yet vastly different mechanical properties. Further, we have adapted the real elastic network theory (RENT) to the case of primary defects in the absence of loops, thus providing new insights into the mechanism for material strength and failure in polymer networks arising from primary network defects, and to accurately predict the elastic modulus of the polymer system. The versatility of the approach we describe and the fundamental knowledge gained from this study can lead to new advancements in the development of novel materials with precisely defined and predictable chemical, physical, and mechanical properties.
Precise Determination of the Baseline Between the TerraSAR-X and TanDEM-X Satellites
NASA Astrophysics Data System (ADS)
Koenig, Rolf; Rothacher, Markus; Michalak, Grzegorz; Moon, Yongjin
TerraSAR-X, launched on June 15, 2007, and TanDEM-X, to be launched in September 2009, both carry the Tracking, Occultation and Ranging (TOR) category A payload instrument package. The TOR consists of a high-precision dual-frequency GPS receiver, called the Integrated GPS Occultation Receiver (IGOR), for precise orbit determination and atmospheric sounding, and a laser retro-reflector (LRR) serving as a target for the global Satellite Laser Ranging (SLR) ground station network. The TOR is supplied by the GeoForschungsZentrum Potsdam (GFZ), Germany, and the Center for Space Research (CSR), Austin, Texas. The objective of the German/US collaboration is twofold: provision of atmospheric profiles for use in numerical weather prediction and climate studies from the occultation data, and precision SAR data processing based on precise orbits and atmospheric products. For the scientific objectives of the TanDEM-X mission, i.e., bi-static SAR together with TerraSAR-X, the dual-frequency GPS receiver is of vital importance for the millimeter-level determination of the baseline, or distance, between the two spacecraft. The paper discusses the feasibility of generating millimeter baselines using the example of GRACE, where for validation the distance between the two GRACE satellites is directly available from the micrometer-level intersatellite link measurements. The distance between the GRACE satellites is some 200 km; the distance in the TerraSAR-X/TanDEM-X formation will be some 200 meters. The proposed approach is therefore also subjected to a simulation of the foreseen TerraSAR-X/TanDEM-X formation. The effects of varying space environmental conditions, of possible phase center variations, of multipath, and of the varying centers of mass of the spacecraft are evaluated and discussed.
Fabrication of micro-lens array on convex surface by means of micro-milling
NASA Astrophysics Data System (ADS)
Zhang, Peng; Du, Yunlong; Wang, Bo; Shan, Debin
2014-08-01
In order to broaden the application of micro-milling technology and to fabricate ultra-precision optical surfaces with complex microstructures, primary experimental research on micro-milling of a complex microstructure array is carried out in this paper. A complex microstructure array surface with varying parameters is designed, and the mathematical model of the surface is set up and simulated. For fabrication of the designed microstructure array surface, a micro three-axis ultra-precision milling machine tool is developed; an aerostatic guideway driven directly by a linear motor is adopted to guarantee sufficient machine stiffness, and a novel numerical control strategy, with linear encoders of 5 nm resolution used as the feedback of the control system, is employed to ensure extremely high motion control accuracy. With the help of CAD/CAM technology, convex micro-lens arrays on convex spherical surfaces with different scales are fabricated on polyvinyl chloride (PVC) and pure copper using a micro tungsten carbide ball end milling tool on the ultra-precision micro-milling machine. Excellent nanometer-level micro-movement performance of the axes is proved by motion control experiments. The fabricated surfaces closely match the design; the characteristic scale of the microstructure is less than 200 μm and the accuracy is better than 1 μm. This proves that ultra-precision micro-milling based on a micro ultra-precision machine tool is a suitable and practical method for the micro-manufacture of microstructure array surfaces on different kinds of materials, and with the development of micro milling cutters, ultra-precision micro-milling of complex microstructure surfaces will be achievable in the future.
ADRC for spacecraft attitude and position synchronization in libration point orbits
NASA Astrophysics Data System (ADS)
Gao, Chen; Yuan, Jianping; Zhao, Yakun
2018-04-01
This paper addresses the problem of spacecraft attitude and position synchronization in libration point orbits between a leader and a follower. Using dual quaternions, the dimensionless relative coupled dynamical model is derived with computational efficiency and accuracy in mind. Then a model-independent dimensionless cascade pose-feedback active disturbance rejection controller is designed for the spacecraft attitude and position tracking control problem, considering parameter uncertainties and external disturbances. Numerical simulations for the final approach phase in spacecraft rendezvous and docking and in formation flying are performed, and the results show high-precision tracking and satisfactory convergence rates under bounded control torque and force, which validates the proposed approach.
Fiber optic light collection system for scanning-tunneling-microscope-induced light emission.
Watkins, Neil J; Long, James P; Kafafi, Zakya H; Mäkinen, Antti J
2007-05-01
We report a compact light collection scheme suitable for retrofitting a scanning tunneling microscope (STM) for STM-induced light emission experiments. The approach uses a pair of optical fibers with large core diameters and high numerical apertures to maximize light collection efficiency and to moderate the mechanical precision required for alignment. Bench tests indicate that the efficiency reduction is almost entirely due to reflective losses at the fiber ends, while losses due to fiber misalignment have virtually been eliminated. Photon-map imaging with nanometer features is demonstrated on a stepped Au(111) surface with signal rates exceeding 10⁴ counts/s.
PATTERNS IN BIOMEDICAL DATA-HOW DO WE FIND THEM?
Basile, Anna O; Verma, Anurag; Byrska-Bishop, Marta; Pendergrass, Sarah A; Darabos, Christian; Lester Kirchner, H
2017-01-01
Given the exponential growth of biomedical data, researchers are faced with numerous challenges in extracting and interpreting information from these large, high-dimensional, incomplete, and often noisy data. To facilitate addressing this growing concern, the "Patterns in Biomedical Data-How do we find them?" session of the 2017 Pacific Symposium on Biocomputing (PSB) is devoted to exploring pattern recognition using data-driven approaches for biomedical and precision medicine applications. The papers selected for this session focus on novel machine learning techniques as well as applications of established methods to heterogeneous data. We also feature manuscripts aimed at addressing the current challenges associated with the analysis of biomedical data.
Fiber-coupled thermal microscope for solid materials based on thermoreflectance method
NASA Astrophysics Data System (ADS)
Miyake, Shugo; Hatori, Kimihito; Ohtsuki, Tetsuya; Awano, Takaaki; Sekine, Makoto
2018-06-01
Measurement of the thermal properties of solid-state materials, including high- and low-thermal-conductivity materials in electronic devices, is very important to improve thermal design. The thermoreflectance method is well known as a powerful technique for measuring a wide range of thermal conductivity. However, in order to precisely determine the thermoreflectance signal, the alignment between two laser beams should be perfectly coaxial, similar to that in the numerical calculation model. In this paper, a developed fiber-coupled thermal microscope based on the thermoreflectance method is demonstrated, which we use to determine the frequency dependence of the temperature responses of silicon, sapphire, zirconium, and Pyrex glass samples.
Modified GMDH-NN algorithm and its application for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Song, Shufang; Wang, Lu
2017-11-01
Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used among the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
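As context for what variance-based GSA computes, the sketch below estimates first-order Sobol' indices with the standard pick-freeze (Saltelli) Monte Carlo estimator on the Ishigami test function; this is a generic illustration of the indices that a metamodel such as GMDH-HDMR approximates more cheaply, not the authors' algorithm:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami function, a standard GSA benchmark with known Sobol' indices."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))   # total output variance

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick" column i from B, "freeze" the rest
    S_i = np.mean(fB * (ishigami(ABi) - fA)) / var   # Saltelli (2010) estimator
    print(f"S_{i + 1} ~ {S_i:.3f}")      # known values: ~0.314, ~0.442, ~0.000
```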
NASA Technical Reports Server (NTRS)
Tsai, C.; Szabo, B. A.
1973-01-01
An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some consideration is given to implementation of the constraint method into general purpose computer programs such as NASTRAN.
Calculating Trajectories And Orbits
NASA Technical Reports Server (NTRS)
Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerarld R.;
1989-01-01
The Double-Precision Trajectory Analysis Program, DPTRAJ, and the Orbit Determination Program, ODP, have been developed and improved over the years to provide a highly reliable and accurate navigation capability for deep-space missions like Voyager. Each is a collection of programs working together to provide the desired computational results. DPTRAJ, ODP, and the supporting utility programs are capable of handling massive amounts of data and performing the various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. They have been used extensively in support of NASA's Voyager project. DPTRAJ-ODP is available in two machine versions. The UNIVAC version, NPO-15586, is written in FORTRAN V, SFTRAN, and ASSEMBLER. The VAX/VMS version, NPO-17201, is written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.
Black hole spectroscopy: Systematic errors and ringdown energy estimates
NASA Astrophysics Data System (ADS)
Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav
2018-02-01
The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l =m =2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ , m ). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
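To make the quoted waveform model concrete, a toy two-mode ringdown (a sum of exponentially damped sinusoids) can be written in a few lines; the amplitudes, frequencies, damping times, and phases below are illustrative placeholders rather than values fitted to numerical relativity, and the power-law tail is omitted:

```python
import numpy as np

def ringdown(t, modes):
    """Superpose quasinormal modes given as (amplitude, frequency [Hz],
    damping time [s], phase) tuples."""
    h = np.zeros_like(t)
    for A, f, tau, phi in modes:
        h += A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)
    return h

t = np.linspace(0.0, 0.05, 2048)
# fundamental l = m = 2 mode plus a faster-damped first overtone (toy numbers)
h = ringdown(t, [(1.0, 250.0, 4.0e-3, 0.0),
                 (0.5, 240.0, 1.3e-3, 0.8)])
```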
Smart reconfigurable parabolic space antenna for variable electromagnetic patterns
NASA Astrophysics Data System (ADS)
Kalra, Sahil; Datta, Rituparna; Munjal, B. S.; Bhattacharya, Bishakh
2018-02-01
An application of a reconfigurable parabolic space antenna for satellites is discussed in this paper. The present study focuses on shape morphing of a flexible parabolic antenna actuated with Shape Memory Alloy (SMA) wires. The antenna is able to transmit signals to the desired footprint on Earth with a desired gain value. An SMA wire based actuation system with a locking device is developed for precise control of the antenna shape. The locking device is able to hold the structure in the deformed configuration during power cutoff. The maximum controllable deflection at any point using such an actuation system is about 25 mm, with a precision of ±100 μm. In order to control the shape of the antenna in a closed feedback loop, a Proportional, Integral and Derivative (PID) based controller is developed using LabVIEW (NI), and experiments are performed. Numerical modeling and analysis of the structure are carried out using the finite element software ABAQUS. For data reduction and fast computation, the stiffness matrix generated by ABAQUS is condensed by the Guyan reduction technique, and shape optimization is performed using the Non-dominated Sorting Genetic Algorithm (NSGA-II). The agreement between the numerical results and the experimental set-up shows the efficacy of our method. Thereafter, electromagnetic (EM) simulations of the deformed shape are carried out using the electromagnetic field simulator HFSS (High Frequency Structure Simulator). The proposed design is envisaged to be very effective for multipurpose satellite applications in future missions of the Indian Space Research Organization (ISRO).
Numerosity and number signs in deaf Nicaraguan adults
Flaherty, Molly; Senghas, Ann
2012-01-01
What abilities are entailed in being numerate? Certainly, one is the ability to hold the exact quantity of a set in mind, even as it changes, and even after its members can no longer be perceived. Is counting language necessary to track and reproduce exact quantities? Previous work with speakers of languages that lack number words involved participants only from non-numerate cultures. Deaf Nicaraguan adults all live in a richly numerate culture, but vary in counting ability, allowing us to experimentally differentiate the contribution of these two factors. Thirty deaf and 10 hearing participants performed 11 one-to-one matching and counting tasks. Results suggest that immersion in a numerate culture is not enough to make one fully numerate. A memorized sequence of number symbols is required, though even an unconventional, iconic system is sufficient. Additionally, we find that within a numerate culture, the ability to track precise quantities can be acquired in adulthood. PMID:21899832
Photon-photon scattering at the high-intensity frontier
NASA Astrophysics Data System (ADS)
Gies, Holger; Karbstein, Felix; Kohlfürst, Christian; Seegert, Nico
2018-04-01
The tremendous progress in high-intensity laser technology and the establishment of dedicated high-field laboratories in recent years have paved the way towards a first observation of quantum vacuum nonlinearities at the high-intensity frontier. We advocate a particularly prospective scenario, where three synchronized high-intensity laser pulses are brought into collision, giving rise to signal photons, whose frequency and propagation direction differ from the driving laser pulses, thus providing various means to achieve an excellent signal to background separation. Based on the theoretical concept of vacuum emission, we employ an efficient numerical algorithm which allows us to model the collision of focused high-intensity laser pulses in unprecedented detail. We provide accurate predictions for the numbers of signal photons accessible in experiment. Our study is the first to predict the precise angular spread of the signal photons, and paves the way for a first verification of quantum vacuum nonlinearity in a well-controlled laboratory experiment at one of the many high-intensity laser facilities currently coming online.
NASA Astrophysics Data System (ADS)
Huang, Jie; Li, Piao; Yao, Weixing
2018-05-01
A loosely coupled fluid-structural-thermal numerical method is introduced in this paper for thermal protection system (TPS) gap thermal control analysis. The aerodynamic heating and the structural thermal response are analyzed by computational fluid dynamics (CFD) and numerical heat transfer (NHT) methods, respectively. An interpolation algorithm based on the control surface is adopted for the data exchange on the coupled surface. In order to verify the precision of the loosely coupled method, a circular tube example was analyzed, and the computed wall temperature agrees well with the test result. TPS gap thermal control performance was then successfully studied using the loosely coupled method. The gap heat flux is mainly distributed in the small region at the top of the gap, which is the high temperature region. Moreover, the TPS gap temperature and the power of the active cooling system (CCS) calculated by the traditional uncoupled method are notably higher than those calculated by the coupled method. The reason is that the uncoupled method does not consider the coupling between aerodynamic heating and the structural thermal response, whereas the coupled method does, so TPS gap thermal control performance can be analyzed more accurately by the coupled method.
Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...
2016-03-29
Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
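A minimal sketch of the classic pairwise update behind such formulas follows: merging the (count, mean, second central moment) summaries of two partitions without ever forming sum(x**2). The paper's arbitrary-order, weighted, and compound variants generalize this second-order case:

```python
import numpy as np

def merge_moments(n_a, mean_a, M2_a, n_b, mean_b, M2_b):
    """Merge (count, mean, second central moment M2) of two data partitions.

    Pairwise/incremental update; numerically stable because it avoids the
    catastrophic cancellation of the textbook sum(x^2) - n*mean^2 formula.
    """
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
    return n, mean, M2

x = np.random.normal(1e6, 1.0, 100_000)          # large offset stresses stability
a, b = x[:60_000], x[60_000:]
n, mean, M2 = merge_moments(a.size, a.mean(), ((a - a.mean())**2).sum(),
                            b.size, b.mean(), ((b - b.mean())**2).sum())
print(M2 / n, np.var(x))                         # the two variances agree
```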
Numerical simulation of fluid flow and heat transfer in enhanced copper tube
NASA Astrophysics Data System (ADS)
Rahman, M. M.; Zhen, T.; Kadir, A. K.
2013-06-01
An inner grooved tube is enhanced with grooves that increase the inner surface area. Due to its high heat transfer efficiency, it is used widely in power generation, air conditioning and many other applications. A heat exchanger is one example that uses inner grooved tubes to enhance the heat transfer rate. Precision in the production of inner grooved copper tube is very important because the various tube parameters affect the tube's performance. Therefore, it is necessary to carry out analysis to optimize tube performance prior to production in order to avoid unnecessary loss. The analysis can be carried out either through experimentation or numerical simulation. However, experimental study is costly and takes a long time to gather the necessary information, so numerical simulation was conducted instead. Firstly, the model of the inner grooved tube was generated using SOLIDWORKS. Then it was imported into GAMBIT for healing, followed by meshing and the setting of boundary types and zones. Next, simulation was done in FLUENT, where all the boundary conditions were set. The simulation results were observed and compared with published experimental results. They showed a heat transfer enhancement in the range of 649.66% to 917.22% for the inner grooved tube compared to the plain tube.
Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.
Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing
2016-10-01
The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
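For reference, the sums an NUFFT evaluates can be written down directly. The brute-force O(MN) sketch below is purely illustrative of the uneven-sampling transform; the paper's point is that an NUFFT reaches the same values in roughly O(N log N) without the lossy spectrum interpolation of conventional FFT-based approaches:

```python
import numpy as np

def nudft(c, x, freqs):
    """Evaluate F(f) = sum_k c_k * exp(-2*pi*i*f*x_k) at arbitrary `freqs`.

    Naive O(M*N) reference implementation of the nonuniform discrete
    Fourier transform; an NUFFT computes the same sums fast via gridding
    plus an ordinary FFT, to controlled precision.
    """
    return np.exp(-2j * np.pi * np.outer(freqs, x)) @ c

x = np.sort(np.random.rand(256))        # nonuniform sample locations
c = np.random.randn(256) + 0j           # field samples at those locations
F = nudft(c, x, np.arange(-32, 32))     # spectrum at any set of frequencies
```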
Numerical study on 3D composite morphing actuators
NASA Astrophysics Data System (ADS)
Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru
2015-04-01
There are a number of actuators using the deformation of electroactive polymers (EAP), but few papers have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis approach for the large scale deformation and motion of a 3D half-dome shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the placement of the active EAP strips is a key parameter. The Simulia/Abaqus Static and Implicit analysis code, whose main feature is high precision contact analysis capability among structures, is used, focusing on the whole process of the membrane touching and wrapping around an object. The unidirectional properties of the EAP coupon actuator are used as the input data set for the material properties in the simulation and for the verification of our numerical model, where the verification is made by comparison with the existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.
Novel methodologies for spectral classification of exon and intron sequences
NASA Astrophysics Data System (ADS)
Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.
2012-12-01
Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence, in which the choice of nucleotide-to-numeric mapping affects how well its biological properties can be preserved and reflected from the nucleotide domain to the numerical domain. Digital spectral analysis of nucleotide sequences reveals a period-3 power spectral value which is more prominent in an exon sequence than in an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine an appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows. Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window lengths of 9, 15, and 24 bases) offers a possible speed gain over the non-windowed 4-sequence Voss representation, which increases as sequence length increases. A winner threshold value (chosen as the best among two defined threshold values and one other threshold value) offers top precision for classifying an unknown sequence of specified fixed lengths. An interpolated winner threshold value applicable to an unknown sequence of arbitrary length can be estimated from the winner threshold values of fixed-length sequences with comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
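As a concrete reference point for the period-3 measure the article builds on, the sketch below computes the period-3 spectral content of a sequence using the classical 4-sequence Voss (binary indicator) representation. The peak-over-mean normalization is one common choice, not necessarily the article's, and the paper's actual contributions (the K-Quaternary Code I mapping and the winner-threshold selection) are not reproduced here.

```python
import numpy as np

def period3_score(seq: str) -> float:
    """Period-3 spectral content of a nucleotide sequence using the
    4-sequence Voss (binary indicator) representation.  Exons typically
    score higher than introns; choosing the decision threshold is the
    hard part that the article's winner-threshold methods address."""
    n = len(seq)
    k = n // 3                      # DFT bin corresponding to period 3
    spec = np.zeros(n)
    for base in "ACGT":
        u = np.array([c == base for c in seq.upper()], dtype=float)
        spec += np.abs(np.fft.fft(u)) ** 2
    return spec[k] / spec[1:].mean()  # peak relative to mean power

rng = np.random.default_rng(0)
random_seq = "".join(rng.choice(list("ACGT"), 300))
print(period3_score("ATG" * 100))   # strongly periodic: large score
print(period3_score(random_seq))    # background: score near 1
```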
Numerical cognition is resilient to dramatic changes in early sensory experience.
Kanjlia, Shipra; Feigenson, Lisa; Bedny, Marina
2018-06-20
Humans and non-human animals can approximate large visual quantities without counting. The approximate number representations underlying this ability are noisy, with the amount of noise proportional to the quantity being represented. Numerate humans also have access to a separate system for representing exact quantities using number symbols and words; it is this second, exact system that supports most of formal mathematics. Although numerical approximation abilities and symbolic number abilities are distinct in representational format and in their phylogenetic and ontogenetic histories, they appear to be linked throughout development--individuals who can more precisely discriminate quantities without counting are better at math. The origins of this relationship are debated. On the one hand, symbolic number abilities may be directly linked to, perhaps even rooted in, numerical approximation abilities. On the other hand, the relationship between the two systems may simply reflect their independent relationships with visual abilities. To test this possibility, we asked whether approximate number and symbolic math abilities are linked in congenitally blind individuals who have never experienced visual sets or used visual strategies to learn math. Congenitally blind and blindfolded sighted participants completed an auditory numerical approximation task, as well as a symbolic arithmetic task and non-math control tasks. We found that the precision of approximate number representations was identical across congenitally blind and sighted groups, suggesting that the development of the Approximate Number System (ANS) does not depend on visual experience. Crucially, the relationship between numerical approximation and symbolic math abilities is preserved in congenitally blind individuals. These data support the idea that the Approximate Number System and symbolic number abilities are intrinsically linked, rather than indirectly linked through visual abilities. Copyright © 2018. Published by Elsevier B.V.
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-04-07
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.
Nonlinear Scaling Laws for Parametric Receiving Arrays. Part II. Numerical Analysis
1976-06-30
Section 3U subroutine write-up, JPL, May 1969. 2. F. T. Krogh, "On testing a subroutine for the numerical integration of ordinary differential ... which is entirely double precision. See their write-ups for minor differences in usage. 12.1.1.5. Remarks: the ordinary differential equations may be ... of the dependent variables, or values of auxiliary functions. Only the first two of these features are described in this write-up. See Reference 1.
NASA Astrophysics Data System (ADS)
Song, Qi; Song, Y. D.; Cai, Wenchuan
2011-09-01
Although the backstepping control design approach has been widely utilised in many practical systems, little effort has been made to apply this useful method to train systems. The main purpose of this paper is to apply this popular control design technique to speed and position tracking control of high-speed trains. By integrating adaptive control with backstepping control, we develop a control scheme that is able to address not only the traction and braking dynamics ignored in most existing methods, but also the uncertain friction and aerodynamic drag forces arising from uncertain resistance coefficients. As such, the resultant control algorithms are able to achieve high-precision train position and speed tracking under varying railway operating conditions, as validated by theoretical analysis and numerical simulations.
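To make the control structure concrete, here is a minimal adaptive-backstepping sketch for a point-mass train with unknown Davis-type resistance a + b*v + c*v^2. The mass, gains, coefficients, and reference profile are illustrative assumptions, not values from the paper; the adaptation law follows from the standard Lyapunov cancellation argument.

```python
import numpy as np

# Toy adaptive-backstepping position/speed tracker for a point-mass train
# with unknown resistance r(v) = a + b*v + c*v^2 (all numbers illustrative).
m = 5.0e5                                   # train mass [kg]
theta = np.array([8e3, 80.0, 6.0])          # true (a, b, c), unknown to controller
k1, k2 = 0.5, 0.5                           # backstepping gains
Gamma = np.diag([1e2, 1.0, 1e-2])           # adaptation gains
dt, T = 0.01, 600.0

x = v = xr = 0.0
theta_hat = np.zeros(3)                     # resistance-coefficient estimate
for step in range(int(T / dt)):
    t = step * dt
    vr = min(60.0, 0.2 * t)                 # reference speed [m/s]
    ar = 0.2 if vr < 60.0 else 0.0          # reference acceleration
    xr += vr * dt                           # reference position

    z1 = x - xr                             # position error
    alpha = vr - k1 * z1                    # virtual speed command
    z2 = v - alpha                          # speed error
    alpha_dot = ar - k1 * (v - vr)
    phi = np.array([1.0, v, v * v])         # resistance regressor
    u = theta_hat @ phi + m * (alpha_dot - z1 - k2 * z2)  # traction/brake force
    # Lyapunov-based adaptation: cancels the unknown-parameter term so that
    # V_dot = -k1*z1^2 - k2*z2^2 <= 0.
    theta_hat += -Gamma @ phi * z2 * dt

    v += (u - theta @ phi) / m * dt         # train dynamics (Euler step)
    x += v * dt

print(f"final errors: position {x - xr:+.3e} m, speed {v - vr:+.3e} m/s")
```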
NASA Technical Reports Server (NTRS)
Lee, M. A.; Lerche, I.
1974-01-01
This study illustrates how the presence of a high-intensity pulse of radiation can distort its own passage through a plane differentially shearing medium. It is demonstrated that the distortion is a sensitive function of the precise, and detailed, variation of the medium's refractive index by considering a couple of simple examples which are worked out numerically. In view of the high-intensity pulses observed from pulsars (approximately 10 to the 30th ergs per pulse), it is believed that the present calculations are of more than academic interest in helping unravel the fundamental properties of pulse production in, and propagation through, differentially sheared media - such as pulsars' magnetospheres within the so-called speed-of-light circle.
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
NASA Technical Reports Server (NTRS)
Axdahl, Erik L.
2015-01-01
Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
Raman lidar for hydrogen gas concentration monitoring and future radioactive waste management.
Liméry, Anasthase; Cézard, Nicolas; Fleury, Didier; Goular, Didier; Planchat, Christophe; Bertrand, Johan; Hauchecorne, Alain
2017-11-27
A multi-channel Raman lidar has been developed, allowing for the first time simultaneous and high-resolution profiling of hydrogen gas and water vapor. The lidar measures vibrational Raman scattering in the UV (355 nm) domain. It works in a high-bandwidth photon counting regime using fast SiPM detectors and takes into account the spectral overlap between the hydrogen and water vapor Raman spectra. Measurements of concentration profiles of H2 and H2O are demonstrated along a 5-meter-long open gas cell with 1-meter resolution at a range of 85 meters. The instrument precision is investigated by numerical simulation to anticipate the potential performance at longer range. This lidar could find applications in the French project Cigéo for monitoring radioactive waste disposal cells.
High Temperature Superconducting Bearings for Lunar Telescope Mounts
NASA Technical Reports Server (NTRS)
Lamb, Mark; BuiMa, Ki; Cooley, Rodger; Mackey, Daniel; Meng, Ruling; Chu, Ching Wu; Chu, Wei Kan; Chen, Peter C.; Wilson, Thomas
1995-01-01
A telescope to be installed on the lunar surface in the near future must work in a cold and dusty vacuum environment for long periods without on-site human maintenance. To track stars, the drive mechanism must be capable of exceedingly fine steps and repeatability. Further, the use of lightweight telescopes, attractive for obvious economic reasons, makes stable support and rotation even more demanding. Conventional contact bearings and gear drives have numerous failure modes under such a restrictive and harsh environment. However, hybrid superconducting magnetic bearings (HSMB) fit in naturally. These bearings are stable, light, passive, and essentially frictionless, allowing high precision electronic positioning control. By passive levitation, the HSMB does not wear out and requires neither maintenance nor power. A prototype illustrating the feasibility of this application is presented.
NASA Astrophysics Data System (ADS)
Shibuya, Masato; Takada, Akira; Nakashima, Toshiharu
2016-04-01
In optical lithography, high-performance exposure tools are indispensable to obtain not only fine patterns but also precise pattern widths. Since an accurate theoretical method is necessary to predict these values, some pioneering and valuable studies have been proposed. However, there might be some ambiguity or lack of consensus regarding the treatment of diffraction by the object, the incoming inclination factor onto the image plane in scalar imaging theory, and the paradoxical phenomenon of the inclined entrance plane wave onto the image in vector imaging theory. We have reconsidered imaging theory in detail and also phenomenologically resolved the paradox. By comparing theoretical aerial image intensity with experimental pattern width for a one-dimensional pattern, we have validated our theoretical considerations.
Identification of an Extremely 180-Rich Presolar Silicate Grain in Acfer 094
NASA Technical Reports Server (NTRS)
Nguyen, A. N.; Messenger, S.
2009-01-01
Presolar silicate grains have been abundantly identified since their first discovery less than a decade ago [1,2,3]. The O isotopic compositions of both silicate and oxide stardust indicate the vast majority (>90%) condensed around O-rich asymptotic giant branch (AGB) stars. Though both presolar phases have average sizes of 300 nm, grains larger than 1 μm are extremely uncommon for presolar silicates. Thus, while numerous isotopic systems have been measured in presolar oxide grains [4], very few isotopic analyses for presolar silicates exist outside of O and Si [2,5]. And still, these measurements suffer from isotopic dilution with surrounding matrix material [6]. We conduct a search for presolar silicates in the primitive carbonaceous chondrite Acfer 094 and in some cases obtain high spatial resolution, high precision isotopic ratios.
True Numerical Cognition in the Wild.
Piantadosi, Steven T; Cantlon, Jessica F
2017-04-01
Cognitive and neural research over the past few decades has produced sophisticated models of the representations and algorithms underlying numerical reasoning in humans and other animals. These models make precise predictions for how humans and other animals should behave when faced with quantitative decisions, yet primarily have been tested only in laboratory tasks. We used data from wild baboons' troop movements recently reported by Strandburg-Peshkin, Farine, Couzin, and Crofoot (2015) to compare a variety of models of quantitative decision making. We found that the decisions made by these naturally behaving wild animals rely specifically on numerical representations that have key homologies with the psychophysics of human number representations. These findings provide important new data on the types of problems human numerical cognition was designed to solve and constitute the first robust evidence of true numerical reasoning in wild animals.
NASA Astrophysics Data System (ADS)
Lyu, Pin; Chen, Wenli; Li, Hui; Shen, Lian
2017-11-01
In recent studies, Yang, Meneveau & Shen (Physics of Fluids, 2014; Renewable Energy, 2014) developed a hybrid numerical framework for the simulation of offshore wind farms. The framework consists of simulation of nonlinear surface waves using a high-order spectral method, large-eddy simulation of wind turbulence on a wave-surface-fitted curvilinear grid, and an actuator disk model for wind turbines. In the present study, several more precise wind turbine models, including the actuator line model, an actuator disk model with rotation, and a nacelle model, are introduced into the computation. Besides offshore wind turbines on fixed piles, the new computational framework has the capability to investigate the interaction among wind, waves, and floating wind turbines. In this study, onshore, offshore fixed-pile, and offshore floating wind farms are compared in terms of flow field statistics and wind turbine power extraction rate. The authors gratefully acknowledge financial support from the China Scholarship Council (No. 201606120186) and the Institute on the Environment of the University of Minnesota.
Distribution of Plasmoids in Post-Coronal Mass Ejection Current Sheets
NASA Astrophysics Data System (ADS)
Bhattacharjee, A.; Guo, L.; Huang, Y.
2013-12-01
Recently, the fragmentation of a current sheet in the high-Lundquist-number regime caused by the plasmoid instability has been proposed as a possible mechanism for fast reconnection. In this work, we investigate this scenario by comparing the distribution of plasmoids obtained from Large Angle and Spectrometric Coronagraph (LASCO) observational data of a coronal mass ejection event with a resistive magnetohydrodynamic simulation of a similar event. The LASCO/C2 data are analyzed using visual inspection, whereas the numerical data are analyzed using both visual inspection and a more precise topological method. Contrasting the observational data with numerical data analyzed with both methods, we identify a major limitation of the visual inspection method, due to the difficulty in resolving smaller plasmoids. This result raises questions about reports of log-normal distributions of plasmoids and other coherent features in the recent literature. Based on nonlinear scaling relations of the plasmoid instability, we infer a lower bound on the current sheet width, assuming the underlying mechanism of current sheet broadening is resistive diffusion.
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
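A minimal version of the greedy expansion step reads as follows. It uses the LFM-style local fitness k_in/(k_in + k_out)^alpha at one fixed resolution alpha, whereas the algorithm described above obtains the critical resolution levels analytically rather than by scanning alpha; the graph and seed are illustrative.

```python
import networkx as nx

def fitness(G, C, alpha=1.0):
    """Local community fitness: internal degree over total degree."""
    kin = 2 * G.subgraph(C).number_of_edges()
    kout = sum(1 for u in C for v in G[u] if v not in C)
    return kin / (kin + kout) ** alpha

def grow_community(G, seed, alpha=1.0):
    """Greedily expand a community from a seed while fitness increases
    (a simplified LFM-style expansion at a single resolution level)."""
    C = {seed}
    while True:
        shell = {nb for u in C for nb in G[u]} - C
        best_gain, best_node = 0.0, None
        for n in shell:
            gain = fitness(G, C | {n}, alpha) - fitness(G, C, alpha)
            if gain > best_gain:
                best_gain, best_node = gain, n
        if best_node is None:
            return C
        C.add(best_node)

G = nx.karate_club_graph()
print(sorted(grow_community(G, seed=0, alpha=1.0)))
```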
Reference results for time-like evolution up to O(α_s^3)
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Carrazza, Stefano; Nocera, Emanuele R.
2015-03-01
We present high-precision numerical results for time-like Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution in the factorisation scheme, for the first time up to next-to-next-to-leading order accuracy in quantum chromodynamics. First, we scrutinise the analytical expressions of the splitting functions available in the literature, in both x and N space, and check their mutual consistency. Second, we implement time-like evolution in two publicly available, entirely independent and conceptually different numerical codes, in x and N space respectively: the already existing APFEL code, which has been updated with time-like evolution, and the new MELA code, which has been specifically developed to perform the study in this work. Third, by means of a model for fragmentation functions, we provide results for the evolution in different factorisation schemes, for different ratios between renormalisation and factorisation scales and at different final scales. Our results are collected in the format of benchmark tables, which could be used as a reference for global determinations of fragmentation functions in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivashinsky, G.I.
1993-01-01
During the period under review, significant progress has been made in studying the intrinsic dynamics of premixed flames and the problems of flame-flow interaction. (1) A weakly nonlinear model for Bunsen burner stabilized flames was proposed and employed for the simulation of three-dimensional polyhedral flames -- one of the most graphic manifestations of thermal-diffusive instability in premixed combustion. (2) A high-precision large-scale numerical simulation of Bunsen burner tip structure was conducted. The results obtained supported the earlier conjecture that the tip opening observed in low Lewis number systems is a purely optical effect not involving either flame extinction or leakage of unburned fuel. (3) A one-dimensional model describing a reaction wave moving through a unidirectional periodic flow field is proposed and studied numerically. For long-wavelength fields the system exhibits a peculiar non-uniqueness of possible propagation regimes. The transition from one regime to another occurs in a manner of hysteresis.
Comparisons between stellar models and reliability of the theoretical models
NASA Astrophysics Data System (ADS)
Lebreton, Yveline; Montalbán, Josefina
2010-07-01
The high quality of the asteroseismic data provided by space missions such as CoRoT (Michel et al. in The CoRoT Mission, ESA Spec. Publ. vol. 1306, p. 39, 2006) or expected from new operating missions such as Kepler (Christensen-Dalsgaard et al. in Commun. Asteroseismol. 150:350, 2007) requires the capacity of stellar evolution codes to provide accurate models whose numerical precision is better than the expected observational errors (i.e. below 0.1 μHz on the frequencies in the case of CoRoT). We present a review of some thorough comparisons of stellar models produced by different evolution codes, involved in the CoRoT/ESTA activities (Monteiro in Evolution and Seismic Tools for Stellar Astrophysics, 2009). We examine the numerical aspects of the computations as well as the effects of different implementations of the same physics on the global quantities, physical structure and oscillations properties of the stellar models. We also discuss a few aspects of the input physics.
Critical exponents of the explosive percolation transition
NASA Astrophysics Data System (ADS)
da Costa, R. A.; Dorogovtsev, S. N.; Goltsev, A. V.; Mendes, J. F. F.
2014-04-01
In a new type of percolation phase transition, which was observed in a set of nonequilibrium models, each new connection between vertices is chosen from a number of possibilities by an Achlioptas-like algorithm. This causes preferential merging of small components and delays the emergence of the percolation cluster. First simulations led to the conclusion that the percolation cluster in this irreversible process is born discontinuously, by a discontinuous phase transition, which resulted in the term "explosive percolation transition." We have shown that this transition is actually continuous (second order), though with an anomalously small critical exponent of the percolation cluster. Here we propose an efficient numerical method enabling us to find the critical exponents and other characteristics of this second-order transition for a representative set of explosive percolation models with different numbers of choices. The method is based on gluing together the numerical solutions of evolution equations for the cluster size distribution and power-law asymptotics. For each of the models, with high precision, we obtain critical exponents and the critical point.
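For readers who want to see the underlying process, the sketch below simulates a product-rule (Achlioptas-like) model with two edge choices using a union-find structure; it reproduces the sharp but, per the abstract, ultimately continuous growth of the largest cluster. This is a plain simulation, not the paper's evolution-equation method, and the system size is an arbitrary choice.

```python
import random

# Product-rule percolation on N vertices: of two random candidate edges,
# keep the one with the smaller product of component sizes.  This delays
# the transition and makes it look discontinuous at finite N.
N = 100_000
parent = list(range(N))
size = [1] * N
largest = 1

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]       # path halving
        i = parent[i]
    return i

random.seed(1)
for m in range(1, N + 1):
    e1 = (random.randrange(N), random.randrange(N))
    e2 = (random.randrange(N), random.randrange(N))
    p1 = size[find(e1[0])] * size[find(e1[1])]
    p2 = size[find(e2[0])] * size[find(e2[1])]
    a, b = e1 if p1 <= p2 else e2           # keep the smaller-product edge
    ra, rb = find(a), find(b)
    if ra != rb:                            # merge the two components
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]
        largest = max(largest, size[ra])
    if m % (N // 4) == 0:
        print(f"t = {m / N:.2f}: largest cluster fraction = {largest / N:.4f}")
```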
Precisely Tailored DNA Nanostructures and their Theranostic Applications.
Zhu, Bing; Wang, Lihua; Li, Jiang; Fan, Chunhai
2017-12-01
A critical challenge in nanotechnology is the limited precision and controllability of structural parameters, which brings about concerns in uniformity, reproducibility and performance. Self-assembled DNA nanostructures, as a newly emerged type of nano-biomaterial, possess low-nanometer precision, excellent programmability and addressability. They can precisely arrange various molecules and materials to form spatially ordered complexes, resulting in unambiguous physical or chemical properties. Because of these properties, DNA nanostructures have shown great promise in numerous biomedical theranostic applications. In this account, we briefly review the history of and advances in the construction of DNA nanoarchitectures and superstructures with accurate structural parameters. We focus on recent progress in exploiting these DNA nanostructures as platforms for quantitative biosensing, intracellular diagnosis, imaging, and smart drug delivery. We also discuss key challenges in practical applications. © 2017 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of fecal egg counting methods in four livestock species.
Paras, Kelsey L; George, Melissa M; Vidyashankar, Anand N; Kaplan, Ray M
2018-06-15
Gastrointestinal nematode parasites are important pathogens of all domesticated livestock species. Fecal egg counts (FEC) are routinely used for evaluating anthelmintic efficacy and for making targeted anthelmintic treatment decisions. Numerous FEC techniques exist and vary in precision and accuracy. These performance characteristics are especially important when performing fecal egg count reduction tests (FECRT). The objective of this study was to compare the accuracy and precision of three commonly used FEC methods and determine if differences existed among livestock species. In this study, we evaluated the modified-Wisconsin, 3-chamber (high-sensitivity) McMaster, and Mini-FLOTAC methods in cattle, sheep, horses, and llamas in three phases. In the first phase, we performed an egg-spiking study to assess the egg recovery rate and accuracy of the different FEC methods. In the second phase, we examined clinical samples from four different livestock species and completed multiple replicate FEC using each method. In the last phase, we assessed the cheesecloth straining step as a potential source of egg loss. In the egg-spiking study, the Mini-FLOTAC recovered 70.9% of the eggs, which was significantly higher than either the McMaster (P = 0.002) or Wisconsin (P = 0.002). In the clinical samples from ruminants, Mini-FLOTAC consistently yielded the highest EPG, revealing a significantly higher level of egg recovery (P < 0.0001). For horses and llamas, both McMaster and Mini-FLOTAC yielded significantly higher EPG than Wisconsin (P < 0.0001, P < 0.0001, P < 0.001, and P = 0.024). Mini-FLOTAC was the most accurate method and was the most precise test for both species of ruminants. The Wisconsin method was the most precise for horses and McMaster was more precise for llama samples. We compared the Wisconsin and Mini-FLOTAC methods using a modified technique where both methods were performed using either the Mini-FLOTAC sieve or cheesecloth. The differences in the estimated mean EPG on log scale between the Wisconsin and mini-FLOTAC methods when cheesecloth was used (P < 0.0001) and when cheesecloth was excluded (P < 0.0001) were significant, providing strong evidence that the straining step is an important source of error. The high accuracy and precision demonstrated in this study for the Mini-FLOTAC, suggest that this method can be recommended for routine use in all host species. The benefits of Mini-FLOTAC will be especially relevant when high accuracy is important, such as when performing FECRT. Copyright © 2018 Elsevier B.V. All rights reserved.
Energy conserving numerical methods for the computation of complex vortical flows
NASA Astrophysics Data System (ADS)
Allaneau, Yves
One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that as their size becomes smaller and smaller, it becomes increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists took inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, the biomimicry design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study precisely the motion of its wings. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to needing an adequate numerical scheme, a high fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogoroff scale. Capturing these eddies requires a mesh with on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However, our simulations still required meshes containing tens of millions of degrees of freedom. Such computations can only be done in reasonable amounts of time by spreading the work over multiple CPUs via domain decomposition. Further speed-up efforts were made by implementing a version of the code for GPUs using Nvidia's CUDA programming language. Finally, we searched for optimal wing motions by coupling our computational fluid dynamics code with the optimization package SNOPT. The wing motion was parameterized by a few angles describing the local curvature and the twisting of the wing. These were expressed in terms of truncated Fourier series, the Fourier coefficients being our optimization parameters. With this approach we were able to obtain propulsive efficiencies of around 50% (thrust power/power input).
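The kinetic-energy idea is easiest to see in the p=0 (finite volume) limit mentioned above. The sketch below applies it to the 1D periodic Burgers equation, where the skew-symmetric flux (u_L² + u_L u_R + u_R²)/6 conserves the discrete energy Σu²/2 semi-discretely; this is an analogy under simplified assumptions, not the thesis' compressible DG scheme, and all grid and time-step values are illustrative.

```python
import numpy as np

# Kinetic-energy-preserving flux in its simplest setting: a p=0 finite
# volume, periodic discretization of Burgers' equation u_t + (u^2/2)_x = 0
# with F_{i+1/2} = (u_i^2 + u_i*u_{i+1} + u_{i+1}^2)/6, which conserves
# sum(u^2)/2 up to time-integration error.
N, dt, nsteps = 256, 1e-3, 800
dx = 2 * np.pi / N
x = np.arange(N) * dx
u = np.sin(x) + 0.5

def rhs(u):
    ur = np.roll(u, -1)                      # right neighbor u_{i+1}
    F = (u * u + u * ur + ur * ur) / 6.0     # flux at interface i+1/2
    return -(F - np.roll(F, 1)) / dx         # -(F_{i+1/2} - F_{i-1/2})/dx

E0 = 0.5 * np.sum(u ** 2) * dx
for _ in range(nsteps):                      # classical RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print("relative energy drift:", 0.5 * np.sum(u ** 2) * dx / E0 - 1)
```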
Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.
Hao, Tianyong; Liu, Hongfang; Weng, Chunhua
2016-05-17
To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community.
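A toy rendition of steps 2 through 4 (numeric, unit, and operator extraction plus variable association) shows the flavor of the pipeline. The synonym table and unit list below are illustrative stand-ins, since the real system relies on UMLS semantics, context-based filtering, and full unit normalization.

```python
import re

# Toy Valx-style extraction of "<variable> <op> <number> <unit>" lab-test
# comparisons from eligibility-criteria text.  Synonyms/units are stand-ins.
SYNONYMS = {"glycated hemoglobin": "HbA1c", "hba1c": "HbA1c",
            "fasting plasma glucose": "glucose", "glucose": "glucose"}
# Longest-first alternation so multi-word variables win over substrings.
VAR_ALT = "|".join(re.escape(s) for s in
                   sorted(SYNONYMS, key=len, reverse=True))
PATTERN = re.compile(
    rf"(?P<var>{VAR_ALT})\s*(?P<op>>=|<=|>|<|=)\s*"
    rf"(?P<num>\d+(?:\.\d+)?)\s*(?P<unit>%|mmol/l|mg/dl)?",
    re.IGNORECASE)

def extract(text):
    """Return (normalized variable, operator, value, unit) tuples."""
    return [(SYNONYMS[m["var"].lower()], m["op"], float(m["num"]),
             (m["unit"] or "").lower())
            for m in PATTERN.finditer(text)]

print(extract("Inclusion: HbA1c >= 7.5% and fasting plasma glucose < 7.8 mmol/L"))
# [('HbA1c', '>=', 7.5, '%'), ('glucose', '<', 7.8, 'mmol/l')]
```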
The Faculty of Language Integrates the Two Core Systems of Number.
Hiraiwa, Ken
2017-01-01
Only humans possess the faculty of language that allows an infinite array of hierarchically structured expressions (Hauser et al., 2002; Berwick and Chomsky, 2015). Similarly, humans have a capacity for infinite natural numbers, while all other species seem to lack such a capacity (Gelman and Gallistel, 1978; Dehaene, 1997). Thus, the origin of this numerical capacity and its relation to language have been of much interdisciplinary interest in developmental and behavioral psychology, cognitive neuroscience, and linguistics (Dehaene, 1997; Hauser et al., 2002; Pica et al., 2004). Hauser et al. (2002) and Chomsky (2008) hypothesize that a recursive generative operation that is central to the computational system of language (called Merge) can give rise to the successor function in a set-theoretic fashion, from which capacities for discretely infinite natural numbers may be derived. However, a careful look at two domains in language, grammatical number and numerals, reveals no trace of the successor function. Following behavioral and neuropsychological evidence that there are two core systems of number cognition innately available, a core system of representation of large, approximate numerical magnitudes and a core system of precise representation of distinct small numbers (Feigenson et al., 2004), I argue that grammatical number reflects the core system of precise representation of distinct small numbers alone. In contrast, numeral systems arise from integrating the pre-existing two core systems of number and the human language faculty. To the extent that my arguments are correct, linguistic representations of number, grammatical number, and numerals do not incorporate anything like the successor function.
A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems
NASA Astrophysics Data System (ADS)
Cortes, J.; Debussche, A.; Toumi, I.
1998-12-01
Many interesting and challenging physical mechanisms are concerned with the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool to study the relevance of pressure momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition at minor computational cost. A Roe scheme is successfully implemented and some numerical tests are presented.
Lemurs and macaques show similar numerical sensitivity.
Jones, Sarah M; Pearson, John; DeWind, Nicholas K; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M
2014-05-01
We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order.
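The Weber-fraction modeling referred to above is commonly done with the standard ANS psychophysics model, in which the probability of correctly choosing the larger of n1 < n2 is Phi((n2 - n1)/(w*sqrt(n1² + n2²))). The sketch below fits w by maximum likelihood on simulated trials (not the study's data) to show the procedure; all trial counts and the true w are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Simulate two-array numerosity comparisons under the standard ANS model
# and recover the Weber fraction w by maximum likelihood.
rng = np.random.default_rng(0)
w_true, trials = 0.35, 2000
n1 = rng.integers(2, 20, trials)
n2 = n1 + rng.integers(1, 15, trials)           # ensure n2 > n1
p = norm.cdf((n2 - n1) / (w_true * np.hypot(n1, n2)))
correct = rng.random(trials) < p                # simulated responses

def nll(w):
    """Negative log-likelihood of the responses given Weber fraction w."""
    q = norm.cdf((n2 - n1) / (w * np.hypot(n1, n2)))
    q = np.clip(q, 1e-9, 1 - 1e-9)
    return -np.sum(np.where(correct, np.log(q), np.log(1 - q)))

fit = minimize_scalar(nll, bounds=(0.05, 2.0), method="bounded")
print("recovered w =", round(fit.x, 3))         # close to 0.35
```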
NASA Technical Reports Server (NTRS)
Krishnamoorthy, S.; Ramaswamy, B.; Joo, S. W.
1995-01-01
A thin film draining on an inclined plate has been studied numerically using the finite element method. The three-dimensional governing equations of continuity, momentum and energy with a moving boundary are integrated in an arbitrary Lagrangian-Eulerian frame of reference. The kinematic equation is solved to precisely update the interface location. Rivulet formation based on an instability mechanism has been simulated using full-scale computation. Comparisons with long-wave theory are made to validate the numerical scheme. A detailed analysis of two- and three-dimensional nonlinear wave formation and spontaneous rupture forming rivulets under the influence of combined thermocapillary and surface-wave instabilities is performed.
Multicritical points for spin-glass models on hierarchical lattices.
Ohzeki, Masayuki; Nishimori, Hidetoshi; Berker, A Nihat
2008-06-01
The locations of multicritical points on many hierarchical lattices are numerically investigated by renormalization-group analysis. The results are compared with an analytical conjecture derived by using duality, gauge symmetry, and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture that gives more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a different point of view coming from the renormalization group and succeeds in deriving answers highly consistent with the numerical data.
3D Printed Programmable Release Capsules.
Gupta, Maneesh K; Meng, Fanben; Johnson, Blake N; Kong, Yong Lin; Tian, Limei; Yeh, Yao-Wen; Masters, Nina; Singamaneni, Srikanth; McAlpine, Michael C
2015-08-12
The development of methods for achieving precise spatiotemporal control over chemical and biomolecular gradients could enable significant advances in areas such as synthetic tissue engineering, biotic-abiotic interfaces, and bionanotechnology. Living organisms guide tissue development through highly orchestrated gradients of biomolecules that direct cell growth, migration, and differentiation. While numerous methods have been developed to manipulate and implement biomolecular gradients, integrating gradients into multiplexed, three-dimensional (3D) matrices remains a critical challenge. Here we present a method to 3D print stimuli-responsive core/shell capsules for programmable release of multiplexed gradients within hydrogel matrices. These capsules are composed of an aqueous core, which can be formulated to maintain the activity of payload biomolecules, and a poly(lactic-co-glycolic) acid (PLGA, an FDA approved polymer) shell. Importantly, the shell can be loaded with plasmonic gold nanorods (AuNRs), which permits selective rupturing of the capsule when irradiated with a laser wavelength specifically determined by the lengths of the nanorods. This precise control over space, time, and selectivity allows for the ability to pattern 2D and 3D multiplexed arrays of enzyme-loaded capsules along with tunable laser-triggered rupture and release of active enzymes into a hydrogel ambient. The advantages of this 3D printing-based method include (1) highly monodisperse capsules, (2) efficient encapsulation of biomolecular payloads, (3) precise spatial patterning of capsule arrays, (4) "on the fly" programmable reconfiguration of gradients, and (5) versatility for incorporation in hierarchical architectures. Indeed, 3D printing of programmable release capsules may represent a powerful new tool to enable spatiotemporal control over biomolecular gradients.
Linearized lattice Boltzmann method for micro- and nanoscale flow and heat transfer.
Shi, Yong; Yap, Ying Wan; Sader, John E
2015-07-01
Ability to characterize the heat transfer in flowing gases is important for a wide range of applications involving micro- and nanoscale devices. Gas flows away from the continuum limit can be captured using the Boltzmann equation, whose analytical solution poses a formidable challenge. An efficient and accurate numerical simulation of the Boltzmann equation is thus highly desirable. In this article, the linearized Boltzmann Bhatnagar-Gross-Krook equation is used to develop a hierarchy of thermal lattice Boltzmann (LB) models based on half-space Gaussian-Hermite (GH) quadrature ranging from low to high algebraic precision, using double distribution functions. Simplified versions of the LB models in the continuum limit are also derived, and are shown to be consistent with existing thermal LB models for noncontinuum heat transfer reported in the literature. Accuracy of the proposed LB hierarchy is assessed by simulating thermal Couette flows for a wide range of Knudsen numbers. Effects of the underlying quadrature schemes (half-space GH vs full-space GH) and continuum-limit simplifications on computational accuracy are also elaborated. The numerical findings in this article provide direct evidence of improved computational capability of the proposed LB models for modeling noncontinuum flows and heat transfer at small length scales.
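As a small sanity check on the Gauss-Hermite construction the hierarchy is built from: the 3-point full-space rule with the probabilists' weight exp(-x²/2) reproduces the familiar D1Q3 lattice. The half-space rules used for the higher members of the hierarchy require custom moment matching and are not shown here.

```python
import numpy as np

# Full-space Gauss-Hermite quadrature (probabilists' weight exp(-x^2/2)):
# the 3-point rule yields the D1Q3 lattice with speeds -sqrt(3), 0, sqrt(3)
# and weights 1/6, 2/3, 1/6 after normalizing by the Gaussian integral.
nodes, weights = np.polynomial.hermite_e.hermegauss(3)
weights /= np.sqrt(2 * np.pi)     # integral of exp(-x^2/2) over the line
print(nodes)    # [-1.732...  0.  1.732...]
print(weights)  # [0.1666...  0.6666...  0.1666...]
```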
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
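A Monte Carlo version of the question the paper answers analytically can be set up in a few lines: simulate pixelated, background-corrupted images of a Gaussian profile and look at the spread of the resulting standard deviation estimates. All numbers below (PSF width, pixel size, photon budget, background level) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Precision of standard deviation estimates from single noisy, pixelated
# 1D intensity profiles (illustrative parameters).
rng = np.random.default_rng(1)
sigma, pix = 150.0, 80.0            # PSF sd and pixel size [nm]
n_photons, bg = 3000, 2.0           # photons per image, bg photons/pixel
edges = np.arange(-800.0, 801.0, pix)
centers = 0.5 * (edges[:-1] + edges[1:])

est = []
for _ in range(500):                # 500 simulated images
    photons = rng.normal(0.0, sigma, n_photons)
    counts, _ = np.histogram(photons, edges)
    counts = counts + rng.poisson(bg, counts.size) - bg  # noisy bg, mean-subtracted
    w = np.clip(counts, 0, None).astype(float)
    mu = np.sum(w * centers) / w.sum()
    est.append(np.sqrt(np.sum(w * (centers - mu) ** 2) / w.sum()))
# Note: pixelation inflates the variance estimate by roughly pix^2/12,
# one of the systematic effects treated analytically in the paper.
print("mean sd estimate:", np.mean(est), "| precision (sd):", np.std(est))
```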
NASA Astrophysics Data System (ADS)
Xu, Y.; Jones, A. D.; Rhoades, A.
2017-12-01
Precipitation is a key component of hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict the long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true-world and modeled-world information, we assess the errors lying in the physics of the numerical models. This work provides unique insight into the performance of numerical climate models, and can be used to guide improvement of precipitation prediction.
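Schematically, the two-statistical-model comparison could look like the following: the same network is trained once on "true world" and once on "modeled world" predictors, and predictors are then perturbed one at a time to see how much output spread each accounts for. The data here are synthetic and the architecture is an assumption; real predictors would be the climate feature metrics plus topography.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins: observed predictors vs. biased model-world predictors.
X_true = rng.normal(size=(2000, 5))
X_model = X_true + rng.normal(0.0, 0.3, X_true.shape)
y = 2 * X_true[:, 0] - X_true[:, 3] ** 2 + rng.normal(0.0, 0.1, 2000)

def fit(X, y):
    net = make_pipeline(StandardScaler(),
                        MLPRegressor((32, 32), max_iter=2000, random_state=0))
    return net.fit(X, y)

net_true, net_model = fit(X_true, y), fit(X_model, y)

# Perturbation analysis: output variance attributable to each predictor.
for j in range(X_true.shape[1]):
    Xp = X_true.copy()
    Xp[:, j] += rng.normal(0.0, 1.0, len(Xp))   # perturb one predictor
    spread = np.var(net_true.predict(Xp) - net_true.predict(X_true))
    print(f"predictor {j}: output variance from perturbation = {spread:.3f}")
```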
NASA Technical Reports Server (NTRS)
Voss, P. B.; Stimpfle, R. M.; Cohen, R. C.; Hanisco, T. F.; Bonne, G. P.; Perkins, K. K.; Lanzendorf, E. J.; Anderson, J. G.; Salawitch, R. J.
2001-01-01
We examine inorganic chlorine (Cly) partitioning in the summer lower stratosphere using in situ ER-2 aircraft observations made during the Photochemistry of Ozone Loss in the Arctic Region in Summer (POLARIS) campaign. New steady state and numerical models estimate [ClONO2]/[HCl] using currently accepted photochemistry. These models are tightly constrained by observations with OH (parameterized as a function of solar zenith angle) substituting for modeled HO2 chemistry. We find that inorganic chlorine photochemistry alone overestimates observed [ClONO2]/[HCl] by approximately 55-60% at mid and high latitudes. On the basis of POLARIS studies of the inorganic chlorine budget, [ClO]/[ClONO2], and an intercomparison with balloon observations, the most direct explanation for the model-measurement discrepancy in Cly partitioning is an error in the reactions, rate constants, and measured species concentrations linking HCl and ClO (simulated [ClO]/[HCl] too high) in combination with a possible systematic error in the ER-2 ClONO2 measurement (too low). The high precision of our simulation (+/-15% 1-sigma for [ClONO2]/[HCl], which is compared with observations) increases confidence in the observations, photolysis calculations, and laboratory rate constants. These results, along with other findings, should lead to improvements in both the accuracy and precision of stratospheric photochemical models.
User's Manual for Downscaler Fusion Software
Recently, a series of 3 papers has been published in the statistical literature that details the use of downscaling to obtain more accurate and precise predictions of air pollution across the conterminous U.S. This downscaling approach combines CMAQ gridded numerical model output...
Gram-Schmidt Orthogonalization by Gauss Elimination.
ERIC Educational Resources Information Center
Pursell, Lyle; Trimble, S. Y.
1991-01-01
Described is the hand-calculation method for the orthogonalization of a given set of vectors through the integration of Gaussian elimination with existing algorithms. Although not numerically preferable, this method adds increased precision as well as organization to the solution process. (JJK)
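The trick is compact in matrix form: since classical Gram-Schmidt is the LU/LDL^T factorization of the Gram matrix A^T A, row-reducing the augmented matrix [A^T A | A^T] without pivoting leaves the orthogonalized (unnormalized) vectors in the rows of the right block. A small numpy demonstration, with an arbitrary test matrix:

```python
import numpy as np

def gram_schmidt_by_elimination(A):
    """Orthogonalize the columns of A by Gaussian elimination on the
    augmented matrix [A^T A | A^T]: eliminating below the diagonal of the
    Gram block applies S^{-T} to A^T, leaving the orthogonal (but not
    normalized) vectors in the rows of the right block."""
    n = A.shape[1]
    M = np.hstack([A.T @ A, A.T]).astype(float)
    for i in range(n):
        for j in range(i + 1, n):
            M[j] -= (M[j, i] / M[i, i]) * M[i]
    return M[:, n:]                 # rows = orthogonalized vectors

A = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 1.]])
Q = gram_schmidt_by_elimination(A)
print(np.round(Q @ Q.T, 10))        # diagonal => rows mutually orthogonal
```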
Song, Kenan; Zhang, Yiying; Meng, Jiangsha; Green, Emily C.; Tajaddod, Navid; Li, Heng; Minus, Marilyn L.
2013-01-01
Among the many potential applications of carbon nanotubes (CNT), its usage to strengthen polymers has been paid considerable attention due to the exceptional stiffness, excellent strength, and the low density of CNT. This has provided numerous opportunities for the invention of new material systems for applications requiring high strength and high modulus. Precise control over processing factors, including preserving intact CNT structure, uniform dispersion of CNT within the polymer matrix, effective filler–matrix interfacial interactions, and alignment/orientation of polymer chains/CNT, contribute to the composite fibers’ superior properties. For this reason, fabrication methods play an important role in determining the composite fibers’ microstructure and ultimate mechanical behavior. The current state-of-the-art polymer/CNT high-performance composite fibers, especially in regards to processing–structure–performance, are reviewed in this contribution. Future needs for material by design approaches for processing these nano-composite systems are also discussed. PMID:28809290
Dipole Excitation With A Paul Ion Trap Mass Spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacAskill, J. A.; Madzunkov, S. M.; Chutjian, A.
Preliminary results are presented for the use of an auxiliary radiofrequency (rf) excitation voltage in combination with a high purity, high voltage rf generator to perform dipole excitation within a high precision Paul ion trap. These results show the effects of the excitation frequency over a continuous frequency range on the resultant mass spectra from the Paul trap, with particular emphasis on ion ejection times, ion signal intensity, and peak shapes. Ion ejection times are found to decrease continuously with variations in dipole frequency about several resonant values and show remarkable symmetries. Signal intensities vary in a complex fashion with numerous resonant features and are driven to zero at specific frequency values. Observed intensity variations depict dipole excitations that target ions of all masses as well as individual masses. Substantial increases in mass resolution are obtained, with resolving powers for nitrogen increasing from 114 to 325.
NASA Astrophysics Data System (ADS)
Bottom, Michael; Muirhead, Philip S.; Swift, Jonathan J.; Zhao, Ming; Gardner, Paul; Plavchan, Peter P.; Riddle, Reed L.; Herzig, Erich; Johnson, John A.; Wright, Jason T.; McCrady, Nate; Wittenmyer, Robert A.
2014-08-01
We present the science motivation, design, and on-sky test data of a high-throughput fiber coupling unit suitable for automated 1-meter class telescopes. The optical and mechanical design of the fiber coupling is detailed and we describe a flexible controller software designed specifically for this unit. The system performance is characterized with a set of numerical simulations, and we present on-sky results that validate the performance of the controller and the expected throughput of the fiber coupling. This unit was designed specifically for the MINERVA array, a robotic observatory consisting of multiple 0.7 m telescopes linked to a single high-resolution stabilized spectrograph for the purpose of exoplanet discovery using high-cadence radial velocimetry. However, this unit could easily be used for general astronomical purposes requiring fiber coupling or precise guiding.
Discrete square root filtering - A survey of current techniques.
NASA Technical Reports Server (NTRS)
Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.
1971-01-01
Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
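A minimal sketch of the idea behind square-root filtering, assuming a linear Kalman time update and NumPy (the surveyed implementations differ in detail): the covariance P is carried as a factor S with P = S Sᵀ, so conditioning is preserved because P itself is never formed.

```python
import numpy as np

def sqrt_time_update(S, F, sqrtQ):
    # Square-root covariance time update: P' = F P F^T + Q, carried on the
    # factor S (P = S S^T) via one QR factorization, never forming P itself.
    A = np.hstack([F @ S, sqrtQ])          # A A^T = F P F^T + Q
    R = np.linalg.qr(A.T, mode='r')        # A^T = Q R  =>  A A^T = R^T R
    return R.T                             # new factor S' with P' = S' S'^T

F = np.array([[1.0, 1.0], [0.0, 1.0]])     # constant-velocity model
S = np.eye(2)                              # factor of the prior covariance
S1 = sqrt_time_update(S, F, 0.1 * np.eye(2))
print(S1 @ S1.T)                           # equals F @ F.T + 0.01 * I
```

Because the factor has roughly half the condition number of the covariance it represents, it retains about twice the effective precision in ill-conditioned problems, which is the effect the survey quantifies.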
The Nonlinear Dynamic Response of an Elastic-Plastic Thin Plate under Impulsive Loading,
1987-06-11
Among those numerical methods, the finite element method is the most effective one. The method presented in this paper is an "influence function" numerical method ... its computational time is much less than that of the finite element method, and its precision is higher. II. Basic Assumptions and the Influence Function of a Simply Supported Plate ... 2. The influence function of a simply supported plate: the motion differential equation of a thin plate can be written as $D\nabla^4 w + \rho h \,\partial^2 w/\partial t^2 = q(t)$ (1)
NASA Astrophysics Data System (ADS)
Antunes, Pedro R. S.; Ferreira, Rui A. C.
2017-07-01
In this work we study boundary value problems associated to a nonlinear fractional ordinary differential equation involving left and right Caputo derivatives. We discuss the regularity of the solutions of such problems and, in particular, give precise necessary conditions so that the solutions are C1([0, 1]). Taking into account our analytical results, we address the numerical solution of those problems by the augmented-RBF method. Several examples illustrate the good performance of the numerical method.
Employing Tropospheric Numerical Weather Prediction Model for High-Precision GNSS Positioning
NASA Astrophysics Data System (ADS)
Alves, Daniele; Gouveia, Tayna; Abreu, Pedro; Magário, Jackes
2014-05-01
In the past few years, the need for high-accuracy positioning has been increasing, and spatial technologies have been widely used to meet it. The GNSS (Global Navigation Satellite System) has revolutionized geodetic positioning activities. Among the existing methods one can emphasize Precise Point Positioning (PPP) and network-based positioning. But to achieve high accuracy with these methods, mainly in real time, it is indispensable to model the atmosphere (ionosphere and troposphere) accordingly. For the troposphere there are empirical models (for example Saastamoinen and Hopfield), but when highly accurate results (errors of a few centimeters) are desired, these models may not be appropriate to the Brazilian reality. To minimize this limitation, NWP (Numerical Weather Prediction) models have emerged. In Brazil, CPTEC/INPE (Center for Weather Prediction and Climate Studies / National Institute for Space Research) provides a regional NWP model, currently used to produce Zenithal Tropospheric Delay (ZTD) predictions (http://satelite.cptec.inpe.br/zenital/). The current version, called the eta15km model, has a spatial resolution of 15 km and a temporal resolution of 3 hours. The main goal of this paper is to carry out experiments and analyses concerning the use of the tropospheric NWP model (eta15km model) in PPP and network-based positioning. For PPP, data from dozens of stations over the Brazilian territory, including the Amazon forest, were used. The results obtained with the NWP model were compared with the Hopfield model, and the NWP model presented the best results in all experiments. For network-based positioning, data from the GNSS/SP Network in São Paulo State, Brazil, were used. This network presents the best configuration in the country for this kind of positioning and is currently composed of twenty stations (http://www.fct.unesp.br/#!/pesquisa/grupos-de-estudo-e-pesquisa/gege//gnss-sp-network2789/). The results obtained employing the NWP model were also compared with the Hopfield model and were very promising. The theoretical concepts, experiments, results, and analyses are presented in this paper.
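For reference, the kind of empirical model the NWP predictions are compared against is compact; a sketch of the standard Saastamoinen zenith hydrostatic delay (values in meters; the Hopfield model used in the paper has a similar closed form):

```python
import numpy as np

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    # Zenith hydrostatic delay in meters from surface pressure (hPa),
    # latitude (rad), and height (m).
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 2.8e-7 * height_m)

# Roughly 2.3 m at sea level for standard pressure at a Brazilian latitude
print(saastamoinen_zhd(1013.25, np.deg2rad(-22.0), 0.0))
```

An NWP-based ZTD replaces such climatological closed forms with values derived from the actual pressure, temperature, and humidity fields, which is why it can outperform them, particularly in the tropics.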
Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)
2002-01-01
Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods such as finite-difference, and finite-volume for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.
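The key computational point, that the reconstruction reduces to universal, precomputable weights, can be illustrated in a one-dimensional analogue (a sketch, not the paper's tetrahedral partitioning): a degree-p polynomial is recovered from its averages over p+1 sub-cells of a reference cell, and the weight matrix is computed once and reused for every cell.

```python
import numpy as np

def sv_weights(breaks, pts):
    # breaks: the p+2 sub-cell boundaries of the reference cell; pts: the
    # evaluation points. Returns W with point values = W @ (sub-cell averages).
    p = len(breaks) - 2                    # p+1 sub-cells -> degree p
    k = np.arange(p + 1)
    a, b = breaks[:-1, None], breaks[1:, None]
    A = (b**(k + 1) - a**(k + 1)) / ((k + 1) * (b - a))  # averages of x^k
    V = pts[:, None]**k                    # monomial values at the points
    return V @ np.linalg.inv(A)            # universal reconstruction weights

W = sv_weights(np.linspace(0.0, 1.0, 4), np.array([0.0, 0.5, 1.0]))
u_avg = np.array([1.0, 7.0, 19.0]) / 27.0  # sub-cell averages of u(x) = x^2
print(W @ u_avg)                           # [0, 0.25, 1]: exact point values
```

Once W is tabulated for the reference cell, each reconstruction is just the few adds and multiplies the abstract describes.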
Tactile display landing safety and precision improvements for the Space Shuttle
NASA Astrophysics Data System (ADS)
Olson, John M.
A tactile display belt using 24 electro-mechanical tactile transducers (tactors) was used to determine if a modified tactile display system, known as the Tactile Situation Awareness System (TSAS) improved the safety and precision of a complex spacecraft (i.e. the Space Shuttle Orbiter) in guided precision approaches and landings. The goal was to determine if tactile cues enhance safety and mission performance through reduced workload, increased situational awareness (SA), and an improved operational capability by increasing secondary cognitive workload capacity and human-machine interface efficiency and effectiveness. Using both qualitative and quantitative measures such as NASA's Justiz Numerical Measure and Synwork1 scores, an Overall Workload (OW) measure, the Cooper-Harper rating scale, and the China Lake Situational Awareness scale, plus Pre- and Post-Flight Surveys, the data show that tactile displays decrease OW, improve SA, counteract fatigue, and provide superior warning and monitoring capacity for dynamic, off-nominal, high concurrent workload scenarios involving complex, cognitive, and multi-sensory critical scenarios. Use of TSAS for maintaining guided precision approaches and landings was generally intuitive, reduced training times, and improved task learning effects. Ultimately, the use of a homogeneous, experienced, and statistically robust population of test pilots demonstrated that the use of tactile displays for Space Shuttle approaches and landings with degraded vehicle systems, weather, and environmental conditions produced substantial improvements in safety, consistency, reliability, and ease of operations under demanding conditions. Recommendations for further analysis and study are provided in order to leverage the results from this research and further explore the potential to reduce the risk of spaceflight and aerospace operations in general.
Stability switches of arbitrary high-order consensus in multiagent networks with time delays.
Yang, Bo
2013-01-01
High-order consensus seeking, in which individual high-order dynamic agents share a consistent view of the objectives and the world in a distributed manner, has potentially broad applications in the field of cooperative control. This paper presents a stability switches analysis of arbitrary high-order consensus in multiagent networks with time delays. By employing a frequency domain method, we explicitly derive analytical equations that clarify a rigorous connection between the stability of general high-order consensus and system parameters such as the network topology, communication time delays, and feedback gains. In particular, our results provide a general and fairly precise notion of how increasing communication time delay causes stability switches of consensus. Furthermore, under communication constraints, the stability and robustness problems of consensus algorithms up to third order are discussed in detail to illustrate our central results. Numerical examples and simulation results for fourth-order consensus are provided to demonstrate the effectiveness of our theoretical results.
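A quick numerical illustration of a stability switch, for the first-order special case on an undirected path graph (a sketch with assumed parameters; the paper treats arbitrary order in the frequency domain, and the threshold pi/(2*lambda_max) below is the known first-order result):

```python
import numpy as np

def delayed_consensus_spread(L, tau, T=30.0, dt=1e-3, seed=0):
    # Euler integration of x'(t) = -L x(t - tau); returns the final
    # disagreement spread max(x) - min(x).
    n = L.shape[0]
    d = max(int(round(tau / dt)), 1)
    x = np.tile(np.random.default_rng(seed).standard_normal(n), (d + 1, 1))
    for _ in range(int(T / dt)):
        x_new = x[-1] - dt * (L @ x[0])    # x[0] holds the delayed state
        x = np.vstack([x[1:], x_new])
    return np.ptp(x[-1])

# Path graph on 3 nodes: lambda_max = 3, predicted switch at tau* = pi/6
L = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
print(delayed_consensus_spread(L, 0.4))    # below tau*: spread -> ~0
print(delayed_consensus_spread(L, 0.6))    # above tau*: oscillatory growth
```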
Observed and modeled mesoscale variability near the Gulf Stream and Kuroshio Extension
NASA Astrophysics Data System (ADS)
Schmitz, William J.; Holland, William R.
1986-08-01
Our earliest intercomparisons between western North Atlantic data and eddy-resolving two-layer quasi-geostrophic symmetric-double-gyre steady wind-forced numerical model results focused on the amplitudes and largest horizontal scales in patterns of eddy kinetic energy, primarily abyssal. Here, intercomparisons are extended to recent eight-layer model runs and new data which allow expansion of the investigation to the Kuroshio Extension and throughout much of the water column. Two numerical experiments are shown to have realistic zonal, vertical, and temporal eddy scales in the vicinity of the Kuroshio Extension in one case and the Gulf Stream in the other. Model zonal mean speeds are larger than observed, but vertical shears are in general agreement with the data. A longitudinal displacement between the maximum intensity in surface and abyssal eddy fields as observed for the North Atlantic is not found in the model results. The numerical simulations examined are highly idealized, notably with respect to basin shape, topography, wind-forcing, and of course dissipation. Therefore the zero-order agreement between modeled and observed basic characteristics of mid-latitude jets and their associated eddy fields suggests that such properties are predominantly determined by the physical mechanisms which dominate the models, where the fluctuations are the result of instability processes. The comparatively high vertical resolution of the model is needed to compare with new higher-resolution data as well as for dynamical reasons, although the precise number of layers required either kinematically or dynamically (or numerically) has not been determined; we estimate four to six when no attempt is made to account for bottom- or near-surface-intensified phenomena.
Magnitude knowledge: the common core of numerical development.
Siegler, Robert S
2016-05-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development. © 2016 John Wiley & Sons Ltd.
Vibrational dephasing in matter-wave interferometers
NASA Astrophysics Data System (ADS)
Rembold, A.; Schütz, G.; Röpke, R.; Chang, W. T.; Hwang, I. S.; Günther, A.; Stibor, A.
2017-03-01
Matter-wave interferometry is a highly sensitive tool to measure small perturbations in a quantum system. This property allows the creation of precision sensors for dephasing mechanisms such as mechanical vibrations. They are a challenge for phase measurements under perturbing conditions that cannot be perfectly decoupled from the interferometer, e.g. for mobile interferometric devices or vibrations with a broad frequency range. Here, we demonstrate a method based on second-order correlation theory in combination with Fourier analysis, to use an electron interferometer as a sensor that precisely characterizes the mechanical vibration spectrum of the interferometer. Using the high spatial and temporal single-particle resolution of a delay line detector, the data allow the original contrast and spatial periodicity of the interference pattern to be revealed from ‘washed-out’ matter-wave interferograms that have been vibrationally disturbed in the frequency region between 100 and 1000 Hz. Unlike with electromagnetic dephasing, due to excitations of higher harmonics and additional frequencies induced from the environment, the parts of the setup oscillate with frequencies that can differ from the applied ones. The developed numerical search algorithm is capable of determining those unknown oscillations and corresponding amplitudes. The technique can identify vibrational dephasing and decrease damping and shielding requirements in electron, ion, neutron, atom and molecule interferometers that generate a spatial fringe pattern on the detector plane.
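The core of the correlation idea can be sketched in a toy 1D simulation (assumed parameters throughout; the actual algorithm works on delay-line detector data and also searches for unknown vibration frequencies): the time-averaged pattern is washed out, yet position differences of detection pairs arriving within much less than one vibration period still carry the fringe periodicity.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2.0 * np.pi                      # fringe wavenumber (period 1, arb. units)
t = np.sort(rng.uniform(0.0, 10.0, 20000))      # single-particle arrival times
phase = 2.405 * np.sin(2.0 * np.pi * 40.0 * t)  # 40 Hz vibration; amplitude at
                                                # the first Bessel zero: washout
y = np.empty_like(t)
for i, ph in enumerate(phase):       # rejection-sample from 0.5*(1+cos(k*y+ph))
    while True:
        c = rng.uniform(0.0, 5.0)
        if rng.uniform() < 0.5 * (1.0 + np.cos(k * c + ph)):
            y[i] = c
            break
dt, dy = np.diff(t), np.diff(y)
close = dt < 1e-4                    # pairs sharing nearly the same vibration phase
print(np.mean(np.cos(k * dy[close])))                         # ~0.25: recovered
print(np.mean(np.cos(k * (y - y[rng.permutation(y.size)]))))  # ~0: washed out
```

Second-order correlation removes the common, unknown phase offset from each close pair, which is why the contrast survives the averaging.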
Leveraging Pattern Semantics for Extracting Entities in Enterprises
Tao, Fangbo; Zhao, Bo; Fuxman, Ariel; Li, Yang; Han, Jiawei
2015-01-01
Entity Extraction is a process of identifying meaningful entities from text documents. In enterprises, extracting entities improves enterprise efficiency by facilitating numerous applications, including search, recommendation, etc. However, the problem is particularly challenging on enterprise domains due to several reasons. First, the lack of redundancy of enterprise entities makes previous web-based systems like NELL and OpenIE not effective, since using only high-precision/low-recall patterns like those systems would miss the majority of sparse enterprise entities, while using more low-precision patterns in sparse setting also introduces noise drastically. Second, semantic drift is common in enterprises (“Blue” refers to “Windows Blue”), such that public signals from the web cannot be directly applied on entities. Moreover, many internal entities never appear on the web. Sparse internal signals are the only source for discovering them. To address these challenges, we propose an end-to-end framework for extracting entities in enterprises, taking the input of enterprise corpus and limited seeds to generate a high-quality entity collection as output. We introduce the novel concept of Semantic Pattern Graph to leverage public signals to understand the underlying semantics of lexical patterns, reinforce pattern evaluation using mined semantics, and yield more accurate and complete entities. Experiments on Microsoft enterprise data show the effectiveness of our approach. PMID:26705540
Detecting transit signatures of exoplanetary rings using SOAP3.0
NASA Astrophysics Data System (ADS)
Akinsanmi, B.; Oshagh, M.; Santos, N. C.; Barros, S. C. C.
2018-01-01
Context. It is theoretically possible for rings to have formed around extrasolar planets in a similar way to that in which they formed around the giant planets in our solar system. However, no such rings have been detected to date. Aims: We aim to test the possibility of detecting rings around exoplanets by investigating the photometric and spectroscopic ring signatures in high-precision transit signals. Methods: The photometric and spectroscopic transit signals of a ringed planet are expected to show deviations from those of a spherical planet. We used these deviations to quantify the detectability of rings. We present SOAP3.0, which is a numerical tool to simulate ringed planet transits and measure ring detectability based on amplitudes of the residuals between the ringed planet signal and the best-fit ringless model. Results: We find that it is possible to detect the photometric and spectroscopic signature of near edge-on rings, especially around planets with high impact parameter. A time resolution ≤7 min is required for the photometric detection, while 15 min is sufficient for the spectroscopic detection. We also show that future instruments like CHEOPS and ESPRESSO, with precisions that allow ring signatures to be well above their noise level, present good prospects for detecting rings.
Fragment approach to constrained density functional theory calculations using Daubechies wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan
2015-06-21
In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.
Luo, Ma-Ji; Chen, Guo-Hua; Ma, Yuan-Hao
2003-01-01
This paper presents a KIVA-3-code-based numerical model for three-dimensional transient intake flow in the intake port-valve-cylinder system of an internal combustion engine using a body-fitted technique; the model can be used in numerical studies of internal combustion engines with vertical and inclined valves and has higher calculation precision. A numerical simulation (of the intake process of a two-valve engine with a semi-sphere combustion chamber and a radial intake port) is provided for analysis of the velocity and pressure fields in different planes at different crank angles. The results revealed the formation of the tumble motion, the evolution of flow-field parameters, and the variation of tumble ratios as important information for the design of engine intake systems.
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
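The canonical example of such a method is Bayesian quadrature, where integration returns an uncertainty alongside a value; a self-contained sketch under a Gaussian-process prior with an RBF kernel (all parameter choices here are illustrative, not from the paper):

```python
import numpy as np
from math import erf, sqrt, pi

def bayes_quadrature(f, nodes, ell=0.2, jitter=1e-10):
    # GP prior with kernel k(x,x') = exp(-(x-x')^2 / (2 ell^2)); returns the
    # posterior mean and standard deviation of the integral of f over [0, 1].
    x = np.asarray(nodes, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))
    K += jitter * np.eye(x.size)
    # z_i = integral of k(s, x_i) over s in [0, 1], in closed form
    z = np.array([ell * sqrt(pi / 2) *
                  (erf((1 - xi) / (sqrt(2) * ell)) + erf(xi / (sqrt(2) * ell)))
                  for xi in x])
    # double kernel integral over [0,1]^2, done on a fine grid (adequate here)
    s = np.linspace(0.0, 1.0, 2001)
    kk = np.exp(-(s[:, None] - s[None, :])**2 / (2 * ell**2))
    zz = np.trapz(np.trapz(kk, s, axis=1), s)
    alpha = np.linalg.solve(K, f(x))
    mean = z @ alpha
    var = zz - z @ np.linalg.solve(K, z)
    return mean, sqrt(max(var, 0.0))

m, sd = bayes_quadrature(np.sin, np.linspace(0.05, 0.95, 8))
print(m, sd)   # true value is 1 - cos(1) ~ 0.4597; sd quantifies the error
```

The returned standard deviation is exactly the kind of numerical uncertainty the authors argue should accompany every computed quantity.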
Braun, Katharina; Böhnke, Frank; Stark, Thomas
2012-06-01
We present a complete geometric model of the human cochlea, including the segmentation and reconstruction of the fluid-filled chambers scala tympani and scala vestibuli, the lamina spiralis ossea and the vibrating structure (cochlear partition). Future fluid-structure coupled simulations require a reliable geometric model of the cochlea. The aim of this study was to present an anatomical model of the human cochlea which can be used for further numerical calculations. Using high resolution micro-computed tomography (µCT), we obtained images of a cut human temporal bone with a spatial resolution of 5.9 µm. Images were manually segmented to obtain the three-dimensional reconstruction of the cochlea. Due to the high resolution of the µCT data, a detailed examination of the geometry of the twisted cochlear partition near the oval and the round window as well as the precise illustration of the helicotrema was possible. After reconstruction of the lamina spiralis ossea, the cochlear partition and the curved geometry of the scala vestibuli and the scala tympani were presented. The obtained data sets were exported as stereolithography (STL) files. These files represent a complete framework for future numerical simulations of mechanical (acoustic) wave propagation on the cochlear partition in the form of mathematical mechanical cochlea models. Additional quantitative information concerning heights, lengths and volumes of the scalae was found and compared with previous results.
Wang, Jinjing Jenny; Odic, Darko; Halberda, Justin; Feigenson, Lisa
2016-07-01
From early in life, humans have access to an approximate number system (ANS) that supports an intuitive sense of numerical quantity. Previous work in both children and adults suggests that individual differences in the precision of ANS representations correlate with symbolic math performance. However, this work has been almost entirely correlational in nature. Here we tested for a causal link between ANS precision and symbolic math performance by asking whether a temporary modulation of ANS precision changes symbolic math performance. First, we replicated a recent finding that 5-year-old children make more precise ANS discriminations when starting with easier trials and gradually progressing to harder ones, compared with the reverse. Next, we show that this brief modulation of ANS precision influenced children's performance on a subsequent symbolic math task but not a vocabulary task. In a supplemental experiment, we present evidence that children who performed ANS discriminations in a random trial order showed intermediate performance on both the ANS task and the symbolic math task, compared with children who made ordered discriminations. Thus, our results point to a specific causal link from the ANS to symbolic math performance. Copyright © 2016 Elsevier Inc. All rights reserved.
PRECISION MANAGEMENT OF LOCALIZED PROSTATE CANCER
VanderWeele, David J.; Turkbey, Baris; Sowalsky, Adam G.
2017-01-01
Introduction: The vast majority of men who are diagnosed with prostate cancer die of other causes, highlighting the importance of determining which patients are at risk of death from prostate cancer. Precision management of prostate cancer patients includes distinguishing which men have potentially lethal disease and employing strategies for determining which treatment modality appropriately balances the desire to achieve a durable response while preventing unnecessary overtreatment. Areas covered: In this review, we highlight precision approaches to risk assessment and a context for the precision-guided application of definitive therapy. We focus on three dilemmas relevant to the diagnosis of localized prostate cancer: screening, the decision to treat, and postoperative management. Expert commentary: In the last five years, numerous precision tools have emerged with potential benefit to the patient. However, to achieve optimal outcomes, the decision to employ one or more of these tests must be considered in the context of prevailing conventional factors. Moreover, performance and interpretation of a molecular or imaging precision test remains practitioner-dependent. The next five years will witness the increased marriage of molecular and imaging biomarkers for improved multi-modal diagnosis and discrimination of disease that is aggressive versus truly indolent. PMID:28133630
Digital dermatitis in cattle: current bacterial and immunological findings
USDA-ARS?s Scientific Manuscript database
Globally, digital dermatitis is a leading form of lameness observed in production dairy cattle. While the precise etiology remains to be determined, the disease is clearly associated with infection by numerous Treponema species in addition to other anaerobic bacteria. Multiple treponeme phylotypes, ...
Electrode Models for Electric Current Computed Tomography
CHENG, KUO-SHENG; ISAACSON, DAVID; NEWELL, J. C.; GISSER, DAVID G.
2016-01-01
This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, 29.5 Ω · cm were studied. Values of “effective” contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω · cm2, respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an “effective” contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
Pulse energy dependence of subcellular dissection by femtosecond laser pulses
NASA Technical Reports Server (NTRS)
Heisterkamp, A.; Maxwell, I. Z.; Mazur, E.; Underwood, J. M.; Nickerson, J. A.; Kumar, S.; Ingber, D. E.
2005-01-01
Precise dissection of cells with ultrashort laser pulses requires a clear understanding of how the onset and extent of ablation (i.e., the removal of material) depends on pulse energy. We carried out a systematic study of the energy dependence of the plasma-mediated ablation of fluorescently-labeled subcellular structures in the cytoskeleton and nuclei of fixed endothelial cells using femtosecond, near-infrared laser pulses focused through a high-numerical-aperture objective lens (1.4 NA). We find that the energy threshold for photobleaching lies between 0.9 and 1.7 nJ. By comparing the changes in fluorescence with the actual material loss determined by electron microscopy, we find that the threshold for true material ablation is about 20% higher than the photobleaching threshold. This information makes it possible to use the fluorescence to determine the onset of true material ablation without resorting to electron microscopy. We confirm the precision of this technique by severing a single microtubule without disrupting the neighboring microtubules, less than 1 micrometer away. © 2005 Optical Society of America.
Computational simulation of weld microstructure and distortion by considering process mechanics
NASA Astrophysics Data System (ADS)
Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.
2009-05-01
Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.
Sensing qualitative events to control manipulation
NASA Astrophysics Data System (ADS)
Pook, Polly K.; Ballard, Dana H.
1992-11-01
Dexterous robotic hands have numerous sensors distributed over a flexible high-degree-of-freedom framework. Control of these hands often relies on a detailed task description that is either specified a priori or computed on-line from sensory feedback. Such controllers are complex and may use unnecessary precision. In contrast, one can incorporate plan cues that provide a contextual backdrop in order to simplify the control task. To demonstrate, a Utah/MIT dexterous hand mounted on a Puma 760 arm flips a plastic egg, using the finger tendon tensions as the sole control signal. The completion of each subtask, such as picking up the spatula, finding the pan, and sliding the spatula under the egg, is detected by sensing tension states. The strategy depends on the task context but does not require precise positioning knowledge. We term this qualitative manipulation to draw a parallel with qualitative vision strategies. The approach is to design closed-loop programs that detect significant events to control manipulation but ignore inessential details. The strategy is generalized by analyzing the robot state dynamics during teleoperated hand actions to reveal the essential features that control each action.
Zhang, Yan; Lee, Dong-Weon
2010-05-01
An integrated system made up of a double-hot-arm electro-thermal microactuator and a piezoresistor embedded at the base of the 'cold arm' is proposed. Electro-thermo-mechanical modeling and optimization are developed to elaborate the operation mechanism of the hybrid system through numerical simulations. For given materials, the geometry design mostly influences the performance of the sensor and actuator, which can be considered separately. That is because the heating energy induced by thermal expansion has less influence on the base area of the 'cold arm', which is where the stress is maximum. The piezoresistor is positioned there for large sensitivity, to monitor the in-plane movement of the system and characterize the actuator response precisely in real time. The force method is used to analyze the thermally induced mechanical expansion in the redundant structure. On the other hand, the integrated actuating mechanism is designed for high-speed imaging. Based on the simulation results, the actuator operates reliably at drive levels below 5 mA, and the stress sensitivity is about 40 MPa per micron.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Shaohua; School of Automation, Chongqing University, Chongqing 400044; Sun, Quanping
This paper addresses chaos control of the micro-electro-mechanical resonator by using adaptive dynamic surface technology with an extended state observer. To reveal the mechanism of the micro-electro-mechanical resonator, phase diagrams and corresponding time histories are given to study the nonlinear dynamics and chaotic behavior, and homoclinic and heteroclinic chaos, which relate closely to the appearance of chaos, are presented based on the potential function. To eliminate the effect of chaos, an adaptive dynamic surface control scheme with an extended state observer is designed to convert random motion into regular motion without precise system model parameters and measured variables. Putting a tracking differentiator into the chaos controller solves the 'explosion of complexity' of backstepping and the poor precision of first-order filters. Meanwhile, to obtain high performance, a neural network with an adaptive law is employed to approximate the unknown nonlinear function in the process of controller design. The boundedness of all the signals of the closed-loop system is proved in the theoretical analysis. Finally, numerical simulations are executed, and extensive results illustrate the effectiveness and robustness of the proposed scheme.
Influence of Ni on Martensitic Phase Transformations in NiTi Shape Memory Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frenzel, J.; George, Easo P; Dlouhy, A.
High-precision data on phase transformation temperatures in NiTi, including numerical expressions for the effect of Ni on M_S, M_F, A_S, A_F, and T_0, are obtained, and the reasons for the large experimental scatter observed in previous studies are discussed. Clear experimental evidence is provided confirming the predictions of Tang et al. 1999 regarding deviations from a linear relation between the thermodynamic equilibrium temperature and Ni concentration. In addition to affecting the phase transition temperatures, increasing Ni contents are found to decrease the width of thermal hysteresis and the heat of transformation. These findings are rationalized on the basis of the crystallographic data of Prokoshkin et al. 2004 and the theory of Ball and James. The results show that it is important to document carefully the details of the arc-melting procedure used to make shape memory alloys and that, if the effects of processing are properly accounted for, precise values for the Ni concentration of the NiTi matrix can be obtained.
Gene mutation-based and specific therapies in precision medicine.
Wang, Xiangdong
2016-04-01
Precision medicine has been initiated and gains more and more attention from preclinical and clinical scientists. A number of key elements or critical parts in precision medicine have been described and emphasized to establish a systems understanding of precision medicine. The principle of precision medicine is to treat patients on the basis of genetic alterations after gene mutations are identified, although questions and challenges still remain before clinical application. Therapeutic strategies of precision medicine should be considered according to gene mutation, after biological and functional mechanisms of mutated gene expression or epigenetics, or the correspondent protein, are clearly validated. It is time to explore and develop a strategy to target and correct mutated genes by direct elimination, restoration, correction or repair of mutated sequences/genes. Nevertheless, there are still numerous challenges to integrating widespread genomic testing into individual cancer therapies and into decision making for one or another treatment. There are wide-ranging and complex issues to be solved before precision medicine becomes clinical reality. Thus, the precision medicine can be considered as an extension and part of clinical and translational medicine, a new alternative of clinical therapies and strategies, and have an important impact on disease cures and patient prognoses. © 2015 The Author. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.
Quantitative aspects of inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Bulska, Ewa; Wagner, Barbara
2016-10-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K, to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
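A minimal autoregressive frequency estimator for a damped sinusoid, in the spirit of the methods compared (a sketch assuming a clean AR(2) signal; the study's algorithms and data handling are more involved):

```python
import numpy as np

def ar_freq_estimate(x, dt, order=2):
    # Fit the linear prediction x[n] = a1*x[n-1] + ... + a_p*x[n-p] by least
    # squares; the roots of the characteristic polynomial give the frequency
    # and damping rate of the dominant damped sinusoid.
    N = len(x)
    A = np.column_stack([x[order - 1 - j: N - 1 - j] for j in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], -a)))
    r = roots[np.argmax(np.abs(np.imag(roots)))]   # take the oscillatory root
    return np.abs(np.angle(r)) / (2 * np.pi * dt), -np.log(np.abs(r)) / dt

dt, n = 1e-3, np.arange(500)
x = np.exp(-5.0 * n * dt) * np.cos(2 * np.pi * 120.0 * n * dt)
print(ar_freq_estimate(x, dt))   # ~ (120.0 Hz, 5.0 1/s)
```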
Chaotic coordinates for the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, S. R., E-mail: shudson@pppl.gov; Suzuki, Y.
The theory of quadratic-flux-minimizing (QFM) surfaces is reviewed, and numerical techniques that allow high-order QFM surfaces to be efficiently constructed for experimentally relevant, non-integrable magnetic fields are described. As a practical example, the chaotic edge of the magnetic field in the Large Helical Device (LHD) is examined. A precise technique for finding the boundary surface is implemented, the hierarchy of partial barriers associated with the near-critical cantori is constructed, and a coordinate system, which we call chaotic coordinates, that is based on a selection of QFM surfaces is constructed that simplifies the description of the magnetic field, so that flux surfaces become “straight” and islands become “square.”
NASA Astrophysics Data System (ADS)
Tsalamengas, John L.
2018-07-01
We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.
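The second step, a Nyström-type discretization of a Volterra equation of the second kind, can be sketched with plain trapezoidal weights (the paper's variant is considerably more sophisticated): on a grid, y(t_i) = f(t_i) + a weighted sum of kernel terms, solved by forward marching.

```python
import numpy as np

def volterra2_trapezoid(f, K, t):
    # Solve y(t) = f(t) + int_0^t K(t, s) y(s) ds on the grid t, marching
    # forward and using trapezoidal quadrature on [t0, ti] at each step.
    n = len(t)
    y = np.empty(n)
    y[0] = f(t[0])
    for i in range(1, n):
        h = np.diff(t[:i + 1])
        w = np.zeros(i + 1)
        w[0], w[-1] = h[0] / 2.0, h[-1] / 2.0
        w[1:-1] = (h[:-1] + h[1:]) / 2.0
        rhs = f(t[i]) + np.dot(w[:-1], K(t[i], t[:i]) * y[:i])
        y[i] = rhs / (1.0 - w[-1] * K(t[i], t[i]))
    return y

t = np.linspace(0.0, 1.0, 201)
y = volterra2_trapezoid(lambda s: 1.0, lambda ti, s: np.ones_like(s), t)
print(y[-1])   # y = 1 + int_0^t y ds has solution e^t, so ~2.71828
```

Higher-order quadrature rules in place of the trapezoid are what push such schemes toward the machine-precision accuracy the abstract reports.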
P97/CDC-48: proteostasis control in tumor cell biology.
Fessart, Delphine; Marza, Esther; Taouji, Saïd; Delom, Frédéric; Chevet, Eric
2013-08-28
P97/CDC-48 is a prominent member of the highly evolutionarily conserved Walker-cassette-containing AAA+ ATPases. It has been involved in numerous cellular processes, ranging from the control of protein homeostasis to membrane trafficking, through the intervention of specific accessory proteins. Expression of p97/CDC-48 in cancers has been correlated with tumor aggressiveness and prognosis; however, the precise underlying molecular mechanisms remain to be characterized. Moreover, p97/CDC-48 inhibitors have been developed and are currently under intense investigation as anticancer drugs. Herein, we discuss the role of p97/CDC-48 in cancer development and its therapeutic potential in tumor cell biology. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The 3D Printing of the Paralyzed Vocal Fold: Added Value in Injection Laryngoplasty.
Hamdan, Abdul-Latif; Haddad, Ghassan; Haydar, Ali; Hamade, Ramsey
2017-08-18
Three-dimensional (3D) printing has had numerous applications in various disciplines, especially otolaryngology. We report the first case of a high-fidelity 3D-printed model of the vocal cords of a patient with unilateral vocal cord paralysis in need of injection laryngoplasty. A case report was carried out. A tailored, anatomically precise 3D-printed model for injection laryngoplasty has the potential to enhance preoperative planning, resident teaching, and patient education. A 3D-printed model of the paralyzed vocal cord has added value in the preoperative assessment of patients undergoing injection laryngoplasty. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of a Hamiltonian matrix used in calculations of electronic states. In the electronic states calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure of deriving a reduced Hamiltonian constructed using a small number of low-energy bases by renormalizing high-energy bases. We demonstrate numerically that the significant speedup of eigenstates evaluation is achieved without losing accuracy.
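A generic version of this renormalization is Löwdin downfolding (a sketch, not the paper's specific construction): the high-energy block is folded into an energy-dependent effective Hamiltonian on the low-energy bases.

```python
import numpy as np

def downfold(H, n_low, energy):
    # H_eff(E) = H_LL + H_LH (E I - H_HH)^{-1} H_HL : an eigenvalue E of H
    # whose eigenvector has weight in the low block satisfies H_eff(E) v = E v.
    HLL, HLH = H[:n_low, :n_low], H[:n_low, n_low:]
    HHL, HHH = H[n_low:, :n_low], H[n_low:, n_low:]
    n_high = H.shape[0] - n_low
    return HLL + HLH @ np.linalg.solve(energy * np.eye(n_high) - HHH, HHL)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
H = (M + M.T) / 2.0                      # a small symmetric test Hamiltonian
E0 = np.linalg.eigvalsh(H)[0]            # lowest exact eigenvalue
Heff = downfold(H, 3, E0)                # 3x3 effective matrix at E = E0
print(E0, np.linalg.eigvalsh(Heff))      # E0 reappears in the folded spectrum
```

In practice the energy dependence is handled self-consistently or linearized around a reference energy; the payoff is diagonalizing a much smaller matrix.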
Electronic structure probed with positronium: Theoretical viewpoint
NASA Astrophysics Data System (ADS)
Kuriplach, Jan; Barbiellini, Bernardo
2018-05-01
We inspect carefully how positronium can be used to study the electronic structure of materials. A recent combined experimental and computational study [A.C.L. Jones et al., Phys. Rev. Lett. 117, 216402 (2016)] has shown that the positronium affinity can be used to benchmark exchange-correlation approximations in copper. Here we investigate whether an improvement can be achieved by increasing the numerical precision of the calculations and by employing the strongly constrained and appropriately normed (SCAN) scheme, and we extend the study to other selected systems such as aluminum and high-entropy alloys. From the methodological viewpoint, the computations of the positronium affinity are further refined, and an alternative way of determining the electron chemical potential using charged supercells is examined.
lsjk—a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions
NASA Astrophysics Data System (ADS)
Kalmykov, M. Yu.; Sheplyakov, A.
2005-10-01
Generalized log-sine functions Ls_j^(k)(θ) appear in higher-order ɛ-expansions of different Feynman diagrams. We present an algorithm for the numerical evaluation of these functions for real arguments. This algorithm is implemented as a C++ library with arbitrary-precision arithmetics for integer 0 ⩽ k ⩽ 9 and j ⩾ 2. Some new relations and representations of the generalized log-sine functions are given.
Program summary
Title of program: lsjk
Catalogue number: ADVS
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVS
Program obtained from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing terms: GNU General Public License
Computers: all
Operating systems: POSIX
Programming language: C++
Memory required to execute: depending on the complexity of the problem, at least 32 MB RAM recommended
No. of lines in distributed program, including testing data, etc.: 41 975
No. of bytes in distributed program, including testing data, etc.: 309 156
Distribution format: tar.gz
Other programs called: the CLN library for arbitrary-precision arithmetics is required at version 1.1.5 or greater
External files needed: none
Nature of the physical problem: numerical evaluation of the generalized log-sine functions for real argument in the region 0 < θ < π; these functions appear in Feynman integrals
Method of solution: series representation for the real argument in the region 0 < θ < π
Restriction on the complexity of the problem: limited up to Ls_j^(9)(θ), where j is an arbitrary integer number; thus, all functions up to weight 12 in the region 0 < θ < π can be evaluated. The algorithm can be extended up to higher values of k (k > 9) without modification
Typical running time: depending on the complexity of the problem; see text below.
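A double-precision cross-check of the function being tabulated (a sketch using adaptive quadrature and the defining integral; the library itself evaluates series representations in arbitrary precision via CLN):

```python
import numpy as np
from scipy.integrate import quad

def ls(j, k, theta):
    # Generalized log-sine function:
    #   Ls_j^(k)(theta) = -int_0^theta x^k log^(j-1-k) |2 sin(x/2)| dx
    integrand = lambda x: x**k * np.log(np.abs(2.0 * np.sin(x / 2.0)))**(j - 1 - k)
    val, _ = quad(integrand, 0.0, theta)
    return -val

# Ls_2^(0)(pi/2) equals Catalan's constant, ~0.9159655942
print(ls(2, 0, np.pi / 2))
```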
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.
Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining
2017-04-21
Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Because of characteristics particular to pseudolite positioning systems, namely that the geometry of the stationary pseudolites is essentially invariant, that the indoor signal is easily interrupted, and that the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited for indoor pseudolite positioning. Considering the very low computational efficiency of conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a variance test of the least-squares adjustment is conducted to ensure the reliability of the resolved ambiguities. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of AFM and has a more elaborate search ability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguities gained from the proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
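The ambiguity-function objective that the IPSO maximizes can be sketched as follows (hypothetical array names: pl_pos for pseudolite coordinates, phase_cycles for carrier phases in cycles; a single epoch): the value approaches 1 when the candidate position makes every phase residual an integer number of cycles, so the integer ambiguities never need to be searched explicitly.

```python
import numpy as np

def afm_value(candidate, pl_pos, phase_cycles, wavelength):
    # Ambiguity function at a candidate position: |mean of unit phasors of the
    # fractional carrier-phase residuals|; maximal at the true position.
    ranges = np.linalg.norm(pl_pos - candidate, axis=1)
    residuals = phase_cycles - ranges / wavelength
    return np.abs(np.mean(np.exp(2j * np.pi * residuals)))
```

A swarm of candidate positions scored with this objective, rather than a dense grid, is what restores single-epoch practicality; the least-squares variance test then guards against accepting a false peak.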
Non-destructive evaluation of coating thickness using guided waves
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice
2015-04-01
Among existing strategies for non-destructive evaluation of coating thickness, ultrasonic methods based on the measurement of the Time-of-Flight (ToF) of high frequency bulk waves propagating through the thickness of a structure are widespread. However, these methods only provide a very localized measurement of the coating thickness, and the precision of the results is largely affected by the surface roughness, porosity, or multi-layered nature of the host structure. Moreover, since the measurement is very local, inspection of large surfaces can be time consuming. This article presents a robust methodology for coating thickness estimation based on the generation and measurement of guided waves. Guided waves have the advantage over ultrasonic bulk waves of being less sensitive to surface roughness and of measuring an average thickness over a wider area, thus reducing the time required to inspect large surfaces. The approach is based on an analytical multi-layer model and intercorrelation of reference and measured signals. The method is first assessed numerically for an aluminum plate, where it is demonstrated that coating thickness can be measured within a precision of 5 micrometers using the S0 mode at frequencies below 500 kHz. Then, an experimental validation is conducted, and results show that coating thicknesses in the range of 10 to 200 micrometers can be estimated to within 10 micrometers of the exact coating thickness on this type of structure.
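As a rough illustration of the intercorrelation step (a toy with an assumed synthetic tone burst and sampling rate, not the authors' analytical multi-layer model), the following sketch recovers a propagation delay from the peak of the cross-correlation between a reference and a measured signal:

```python
import numpy as np

# Synthetic reference S0 burst and a measured signal delayed by the
# (unknown) propagation time; all values are illustrative.
fs = 10e6                                # sampling rate (Hz)
t = np.arange(0, 200e-6, 1 / fs)
f0 = 300e3                               # excitation below 500 kHz, as in the paper
burst = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 20e-6) / 8e-6) ** 2)
delay_true = 55e-6
measured = np.interp(t - delay_true, t, burst, left=0.0, right=0.0)

# Intercorrelation: the lag of the correlation peak is the ToF estimate.
corr = np.correlate(measured, burst, mode="full")
lag = corr.argmax() - (len(burst) - 1)
print(f"estimated ToF: {lag / fs * 1e6:.2f} us")   # ~55 us
```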
Ambiguity and variability of database and software names in bioinformatics.
Duck, Geraint; Kovacevic, Aleksandar; Robertson, David L; Stevens, Robert; Nenadic, Goran
2015-01-01
There are numerous options available to achieve various tasks in bioinformatics, but until recently, there were no tools that could systematically identify mentions of databases and tools within the literature. In this paper we explore the variability and ambiguity of database and software name mentions and compare dictionary and machine learning approaches to their identification. Through the development and analysis of a corpus of 60 full-text documents manually annotated at the mention level, we report high variability and ambiguity in database and software mentions. On a test set of 25 full-text documents, a baseline dictionary look-up achieved an F-score of 46 %, highlighting not only variability and ambiguity but also the extensive number of new resources introduced. A machine learning approach achieved an F-score of 63 % (with precision of 74 %) and 70 % (with precision of 83 %) for strict and lenient matching respectively. We characterise the issues with various mention types and propose potential ways of capturing additional database and software mentions in the literature. Our analyses show that identification of mentions of databases and tools is a challenging task that cannot be achieved by relying on current manually-curated resource repositories. Although machine learning shows improvement and promise (primarily in precision), more contextual information needs to be taken into account to achieve a good degree of accuracy.
Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning
NASA Astrophysics Data System (ADS)
Bradley, Ben K.
Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and orbit propagation, yielding savings in computation time and memory. Orbit propagation and position transformation simulations are analyzed to generate a complete set of recommendations for performing the ITRS/GCRS transformation for a wide range of needs, encompassing real-time on-board satellite operations and precise post-processing applications. In addition, a complete derivation of the ITRS/GCRS frame transformation time-derivative is detailed for use in velocity transformations between the GCRS and ITRS and is applied to orbit propagation in the rotating ITRS. EOP interpolation methods and ocean tide corrections are shown to impact the ITRS/GCRS transformation accuracy at the level of 5 cm and 20 cm on the surface of the Earth and at the Global Positioning System (GPS) altitude, respectively. The precession-nutation and EOP simplifications yield maximum propagation errors of approximately 2 cm and 1 m after 15 minutes and 6 hours in low-Earth orbit (LEO), respectively, while reducing computation time and memory usage. Finally, for orbit propagation in the ITRS, a simplified scheme is demonstrated that yields propagation errors under 5 cm after 15 minutes in LEO. This approach is beneficial for orbit determination based on GPS measurements. We conclude with a summary of recommendations on EOP usage and bias-precession-nutation implementations for achieving a wide range of transformation and propagation accuracies at several altitudes.
This comprehensive set of recommendations allows satellite operators, astrodynamicists, and scientists to make informed decisions when choosing the best implementation for their application, balancing accuracy and computational complexity.
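For context on the force-evaluation counts that drive such efficiency comparisons, here is a minimal two-body RK4 propagator in Python that tallies force-model calls. The gravitational parameter and circular LEO state are standard values; a real propagator would add perturbations and the precise frame transformations discussed above:

```python
import numpy as np

MU = 3.986004418e14   # Earth's GM (m^3/s^2)
evals = 0

def accel(y):
    """Two-body point-mass acceleration; a stand-in for a full force model."""
    global evals
    evals += 1
    r = y[:3]
    return np.concatenate([y[3:], -MU * r / np.linalg.norm(r) ** 3])

def rk4_step(y, h):
    k1 = accel(y); k2 = accel(y + h / 2 * k1)
    k3 = accel(y + h / 2 * k2); k4 = accel(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Circular LEO initial state, 15-minute propagation with 10 s steps.
y = np.array([6778e3, 0, 0, 0, 7668.6, 0])
for _ in range(90):
    y = rk4_step(y, 10.0)
print(y[:3], f"{evals} force evaluations")   # 4 per step for explicit RK4
```

Explicit RK4 spends four expensive force evaluations per step; implicit collocation and multistep schemes like those compared in the dissertation amortize that cost.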
Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent
De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle
2018-01-01
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
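A single-threaded sketch of the low-precision part follows (a hypothetical fixed-point grid; the DMGC model additionally covers asynchronous multi-worker execution, which is not shown here):

```python
import numpy as np

def quantize(x, scale=2 ** -8):
    """Round to a low-precision fixed-point grid (8 fractional bits)."""
    return np.round(x / scale) * scale

# Least-squares regression via SGD with quantized gradients and weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=1000)

w = np.zeros(10)
lr = 0.05
for epoch in range(20):
    for i in rng.permutation(len(y)):
        g = (X[i] @ w - y[i]) * X[i]          # per-sample gradient
        w = quantize(w - lr * quantize(g))    # low-precision update
print(np.max(np.abs(w - w_true)))  # error floor set by the quantization grid
```

The final accuracy is limited by the grid spacing, which is exactly the precision/throughput trade-off that the DMGC parameter space classifies.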
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
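One standard way to combine low- and high-precision arithmetic for linear systems is iterative refinement: solve cheaply in low precision, then correct with residuals computed in high precision. A minimal NumPy sketch (illustrative only; a production version would factor the low-precision matrix once and reuse the factorization):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Iterative refinement: cheap float32 solves, float64 residuals."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                  # residual in full precision
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 500)) + 500 * np.eye(500)    # well conditioned
b = rng.normal(size=500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                       # near float64 accuracy
```

Most of the arithmetic happens in the cheaper, lower-power precision, while the high-precision residual restores full accuracy.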
Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.
2016-01-01
We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939
NASA Astrophysics Data System (ADS)
Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.
2017-10-01
Hamiltonian Truncation (a.k.a. Truncated Spectrum Approach) is an efficient numerical technique to solve strongly coupled QFTs in d = 2 spacetime dimensions. Further theoretical developments are needed to increase its accuracy and the range of applicability. With this goal in mind, here we present a new variant of Hamiltonian Truncation which exhibits smaller dependence on the UV cutoff than other existing implementations, and yields more accurate spectra. The key idea for achieving this consists in integrating out exactly a certain class of high energy states, which corresponds to performing renormalization at the cubic order in the interaction strength. We test the new method on the strongly coupled two-dimensional quartic scalar theory. Our work will also be useful for the future goal of extending Hamiltonian Truncation to higher dimensions d ≥ 3.
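The truncation idea carries over from a simple quantum-mechanical toy: diagonalize a Hamiltonian in a finite basis and watch the cutoff dependence. A sketch for the quartic anharmonic oscillator (illustrative only; the QFT case uses a multi-particle Fock basis in the same spirit):

```python
import numpy as np

def spectrum(g, nmax, nlev=3):
    """Lowest levels of H = H0 + g x^4 in a truncated oscillator basis."""
    n = np.arange(nmax)
    a = np.diag(np.sqrt(n[1:]), 1)          # annihilation operator
    x = (a + a.T) / np.sqrt(2)
    H = np.diag(n + 0.5) + g * x @ x @ x @ x
    return np.linalg.eigvalsh(H)[:nlev]

for nmax in (10, 20, 40, 80):               # raising the UV cutoff
    print(nmax, spectrum(1.0, nmax))
# The residual drift with nmax is the cutoff dependence that renormalizing
# (integrating out the high-energy tail) is designed to remove.
```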
Simultaneous quantification of flavonoids and triterpenoids in licorice using HPLC.
Wang, Yuan-Chuen; Yang, Yi-Shan
2007-05-01
Numerous bioactive compounds are present in licorice (Glycyrrhizae Radix), including flavonoids and triterpenoids. In this study, a reversed-phase high-performance liquid chromatography (HPLC) method for simultaneous quantification of three flavonoids (liquiritin, liquiritigenin and isoliquiritigenin) and four triterpenoids (glycyrrhizin, 18alpha-glycyrrhetinic acid, 18beta-glycyrrhetinic acid and 18beta-glycyrrhetinic acid methyl ester) from licorice was developed and then applied to quantify these seven compounds in 20 different licorice samples. Specifically, the reversed-phase HPLC was performed with a gradient mobile phase composed of 25 mM phosphate buffer (pH 2.5)-acetonitrile featuring gradient elution steps as follows: 0 min, 100:0; 10 min, 80:20; 50 min, 70:30; 73 min, 50:50; 110 min, 50:50; 125 min, 20:80; 140 min, 20:80, and peaks were detected at 254 nm. By using our technique, a rather good specificity was obtained with regard to the separation of these seven compounds. The regression coefficients for the linear equations for the seven compounds lay between 0.9978 and 0.9992. The limits of detection and quantification lay in the range of 0.044-0.084 and 0.13-0.25 microg/ml, respectively. The relative recovery rates for the seven compounds lay between 96.63+/-2.43 and 103.55+/-2.77%. Coefficients of variation for intra-day and inter-day precision lay in the range of 0.20-1.84 and 0.28-1.86%, respectively. Based upon our validation results, this analytical technique is a convenient method to simultaneously quantify numerous bioactive compounds derived from licorice, featuring good quantification parameters, accuracy and precision.
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
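For a feel of how a pure ranking can be turned into numbers, here is a sketch using rank-order centroid weights, a common rank-based scheme from multicriteria decision analysis; the paper's maximum-entropy rule differs in detail and handles a wider range of event types:

```python
import numpy as np

def rank_order_centroid(n):
    """ROC weights for items ranked 1..n (most to least likely)."""
    return np.array([sum(1.0 / j for j in range(i, n + 1)) / n
                     for i in range(1, n + 1)])

# An expert ranks five mutually exclusive threats from most to least likely;
# the ranking alone yields approximate probabilities that sum to one.
p = rank_order_centroid(5)
print(p, p.sum())   # [0.457 0.257 0.157 0.090 0.040] 1.0
```

The appeal for time-pressured elicitation is that the expert supplies only an ordering, which is fast and avoids demanding precise ratio-scale judgments.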
NASA Astrophysics Data System (ADS)
Carrico, T.; Langster, T.; Carrico, J.; Alfano, S.; Loucks, M.; Vallado, D.
The authors present several spacecraft rendezvous and close proximity maneuvering techniques modeled with a high-precision numerical integrator using full force models and closed loop control with a Fuzzy Logic intelligent controller to command the engines. The authors document and compare the maneuvers, fuel use, and other parameters. This paper presents an innovative application of an existing capability, already in use for operational satellites performing other maneuvers, to design, simulate, and analyze proximity maneuvers. The system has been extended to demonstrate the capability to develop closed loop control laws to maneuver spacecraft in close proximity to one another, including stand-off, docking, lunar landing, and other operations applicable to space situational awareness, space based surveillance, and operational satellite modeling. The fully integrated end-to-end trajectory ephemerides are available from the authors in electronic ASCII text by request. The benefits of this system include: a realistic physics-based simulation for the development and validation of control laws; a collaborative engineering environment for the design, development, and tuning of spacecraft control law parameters, the sizing of actuators (i.e., rocket engines), and sensor suite selection; an accurate simulation and visualization to communicate the complexity, criticality, and risk of spacecraft operations; a precise mathematical environment for research and development of future spacecraft maneuvering engineering tasks, operational planning, and forensic analysis; and a closed-loop, knowledge-based control example for proximity operations. This proximity operations modeling and simulation environment will provide a valuable adjunct to programs in military space control, space situational awareness, and civil space exploration engineering and decision making processes.
Śmietana, Mateusz; Myśliwiec, Marcin; Mikulic, Predrag; Witkowski, Bartłomiej S.; Bock, Wojtek J.
2013-01-01
This work presents an application of thin aluminum oxide (Al2O3) films obtained using atomic layer deposition (ALD) for fine tuning the spectral response and refractive-index (RI) sensitivity of long-period gratings (LPGs) induced in optical fibers. The technique allows for an efficient and well-controlled deposition at the monolayer level (resolution ∼ 0.12 nm) of excellent quality nano-films as required for optical sensors. The effect of Al2O3 deposition on the spectral properties of the LPGs is demonstrated experimentally and numerically. We correlated both the increase in Al2O3 thickness and the changes in optical properties of the film with the shift of the LPG resonance wavelength, and proved that similar films are deposited on fibers and oxidized silicon reference samples in the same process run. Since the thin overlay effectively changes the distribution of the cladding modes and thus also tunes the device's RI sensitivity, the tuning can be simply realized by varying the number of ALD cycles, which is proportional to the thickness of the high-refractive-index (n > 1.6 in the infrared spectral range) Al2O3 film. The advantage of this approach is the precision with which the film properties, and thus the RI sensitivity of the LPGs, can be determined. To the best of our knowledge, this is the first time that an ultra-precise method for overlay deposition has been applied to LPGs for RI tuning purposes and the results have been compared with numerical simulations based on the LP mode approximation.
NASA Astrophysics Data System (ADS)
Blair, J. B.; Rabine, D.; Hofton, M. A.; Citrin, E.; Luthcke, S. B.; Misakonis, A.; Wake, S.
2015-12-01
Full waveform laser altimetry has demonstrated its ability to capture highly-accurate surface topography and vertical structure (e.g. vegetation height and structure) even in the most challenging conditions. NASA's high-altitude airborne laser altimeter, LVIS (the Land, Vegetation, and Ice Sensor) has produced high-accuracy surface maps over a wide variety of science targets for the last 2 decades. Recently NASA has funded the transition of LVIS into a full-time NASA airborne Facility instrument to increase the amount and quality of the data, to decrease the end-user costs, and to expand the utilization and application of this unique sensor capability. Based heavily on the existing LVIS sensor design, the Facility LVIS instrument includes numerous improvements for reliability, resolution, real-time performance monitoring and science products, decreased operational costs, and improved data turnaround time and consistency. The development of this Facility instrument is proceeding well and it is scheduled to begin operations testing in mid-2016. A comprehensive description of the LVIS Facility capability will be presented along with several mission scenarios and science applications examples. The sensor improvements include increased spatial resolution (footprints as small as 5 m), increased range precision (sub-cm single shot range precision), expanded dynamic range, improved detector sensitivity, operational autonomy, real-time flight line tracking, and overall increased reliability and sensor calibration stability. The science customer mission planning and data product interface will be discussed. Science applications of the LVIS Facility include: cryosphere, terrestrial ecology and carbon cycle, hydrology, solid earth and natural hazards, and biodiversity.
Inhibition, Conflict Detection, and Number Conservation
ERIC Educational Resources Information Center
Lubin, Amélie; Simon, Grégory; Houdé, Olivier; De Neys, Wim
2015-01-01
The acquisition of number conservation is a critical step in children's numerical and mathematical development. Classic developmental studies have established that children's number conservation is often biased by misleading intuitions. However, the precise nature of these conservation errors is not clear. A key question is whether conservation…
Orthopositronium Lifetime: Analytic Results in O(α) and O(α³ ln α)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kniehl, Bernd A.; Kotikov, Anatoly V.; Veretin, Oleg L.
2008-11-07
We present the O(α) and O(α³ ln α) corrections to the total decay width of orthopositronium in closed analytic form, in terms of basic irrational numbers, which can be evaluated numerically to arbitrary precision.
NASA Astrophysics Data System (ADS)
Ding, Zhe; Li, Li; Hu, Yujin
2018-01-01
Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Such systems therefore often contain multiple damping models, which makes them difficult to analyze. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error, and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods with rather good efficiency, and that its stability condition is easy to satisfy in practice.
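The precise integration idea, computing the state transition matrix exp(Ah) through 2^N scaling and squaring while tracking only the small increment, then handling the forcing term with Gauss-Legendre quadrature, can be sketched as follows; the damped oscillator and step sizes are illustrative, not taken from the paper:

```python
import numpy as np

def expm_pim(A, h, N=20):
    """Precise integration: exp(A h) via 2^N scaling and squaring, tracking
    only the small increment T = exp(A dt) - I to preserve digits."""
    Adt = A * (h / 2 ** N)
    T = Adt + Adt @ Adt / 2 + Adt @ Adt @ Adt / 6 + Adt @ Adt @ Adt @ Adt / 24
    for _ in range(N):
        T = 2 * T + T @ T            # (I + T)^2 - I without forming I + T
    return np.eye(len(A)) + T

# Damped oscillator x'' + 2*zeta*w*x' + w^2*x = f(t), state z = [x, x'].
w, zeta, h = 2.0, 0.05, 0.01
A = np.array([[0.0, 1.0], [-w * w, -2 * zeta * w]])
Phi = expm_pim(A, h)

# Gauss-Legendre quadrature for the forced-response convolution over a step.
nodes, wts = np.polynomial.legendre.leggauss(3)
tau = (nodes + 1) * h / 2                      # map [-1, 1] -> [0, h]
Phis = [expm_pim(A, h - s) for s in tau]       # precomputed propagators

def step(z, t, f):
    conv = sum(wt * Ph @ np.array([0.0, f(t + s)])
               for s, wt, Ph in zip(tau, wts, Phis)) * h / 2
    return Phi @ z + conv

z, t = np.array([1.0, 0.0]), 0.0
for _ in range(1000):
    z = step(z, t, lambda s: np.sin(3 * s))
    t += h
print(z)   # response after 10 s of forced, damped motion
```

Accumulating the increment T rather than exp(A dt) itself is what keeps the scaling-and-squaring loop from losing precision to round-off.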
Sokol, Serguei; Millard, Pierre; Portais, Jean-Charles
2012-03-01
The problem of stationary metabolic flux analysis based on isotope labelling experiments first appeared in the early 1950s and was basically solved in the early 2000s. Several algorithms and software packages are available for this problem. However, the generic stochastic algorithms (simulated annealing or evolution algorithms) currently used in these software packages require a lot of time to achieve acceptable precision. For deterministic algorithms, a common drawback is the lack of convergence stability for ill-conditioned systems or when started from a random point. In this article, we present a new deterministic algorithm with significantly increased numerical stability and accuracy of flux estimation compared with commonly used algorithms. It requires relatively short CPU time (from several seconds to several minutes with a standard PC architecture) to estimate fluxes in the central carbon metabolism network of Escherichia coli. The software package influx_s implementing this algorithm is distributed under an OpenSource licence at http://metasys.insa-toulouse.fr/software/influx/. Supplementary data are available at Bioinformatics online.
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient in the calculation of higher order radiation effects and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
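The quasi-Monte Carlo ingredient can be illustrated with SciPy's qmc module and a toy integrand over the unit hypercube (not an actual Feynman sector integrand, and without the CUDA/GPU acceleration):

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand over [0,1]^3, standing in for a sector-decomposed integrand.
def integrand(x):
    return 1.0 / (1.0 + x.sum(axis=1)) ** 2

for m in (10, 14, 18):                          # 2^m Sobol points
    sob = qmc.Sobol(d=3, scramble=True, seed=7)
    x = sob.random_base2(m=m)
    print(2 ** m, integrand(x).mean())          # estimates converge quickly
```

For smooth integrands, scrambled low-discrepancy sequences typically converge much faster than plain Monte Carlo's N^(-1/2), which is what makes the direct numerical approach competitive.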
Research on numerical control system based on S3C2410 and MCX314AL
NASA Astrophysics Data System (ADS)
Ren, Qiang; Jiang, Tingbiao
2008-10-01
With the rapid development of micro-computer technology, embedded systems, CNC technology, and integrated circuits, a numerical control system with powerful functions can be realized with a few high-speed CPU and RISC (Reduced Instruction Set Computing) chips of small size and strong stability. In addition, real-time operating systems also make embedded implementations possible. Developing an NC system based on embedded technology can overcome some shortcomings of common PC-based CNC systems, such as wasted resources, low control precision, low frequency, and low integration. This paper discusses a hardware platform for an ENC (Embedded Numerical Control) system based on the embedded processor chip ARM (Advanced RISC Machines)-S3C2410 and the DSP (Digital Signal Processor)-MCX314AL, and introduces the process of developing the ENC system software. Finally, the MCX314AL driver is written for the embedded Linux operating system. Embedded Linux handles multitasking well and satisfies the real-time and reliability requirements of motion control. With embedded technology, the NC system makes the best use of resources and remains compact, providing a wealth of functions and superior performance at lower cost. ENC can thus be expected to be the direction of future development.
NASA Astrophysics Data System (ADS)
Langebach, R.; Haberstroh, Ch.
2010-04-01
In this paper a numerical investigation is presented that characterizes the free convective flow field and the resulting heat transfer mechanisms for a resistance temperature sensor in liquid and gaseous hydrogen at various cryogenic conditions. Motivation for this is the detection of stratification effects, e.g., inside a liquid hydrogen storage vessel. In this case, the local temperature measurement in a quiescent fluid requires a very high standard of precision despite an extremely poor thermal anchoring of the sensor. Due to electrical power dissipation a certain amount of heat has to be transferred from sensor to fluid. This can cause relevant measurement errors due to a slightly elevated sensor temperature. A commercial CFD code was employed to calculate the heat and mass transfer around the typical sensor geometry. The results were compared with existing heat transfer correlations from the literature. The magnitudes of the averaged heat transfer coefficients and the sensor over-heating as a function of power dissipation are presented in figures. From the numerical results, a new correlation for the averaged Nusselt number is derived that covers very low Rayleigh number flows. The correlation can be used to estimate sensor self-heating effects in similar situations.
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as a widely used approach for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on the ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on the CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
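A CPU reference version of the Jacobi-preconditioned CG solver, with the test matrix stored in the diagonal (DIA) format that the study found best suited to these banded systems (an illustrative 1D Poisson system, not the CUDA implementation):

```python
import numpy as np
from scipy.sparse import diags

def jacobi_pcg(A, b, tol=1e-6, maxit=2000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
    Minv = 1.0 / A.diagonal()
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1D finite-difference Poisson system in DIA (diagonal) storage.
n = 500
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="dia")
b = np.ones(n)
x, iters = jacobi_pcg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```

The DIA layout stores each band contiguously, which maps naturally onto coalesced GPU memory accesses for stencil matrices like this one.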
All-fiber high-power monolithic femtosecond laser at 1.59 µm with 63-fs pulse width
NASA Astrophysics Data System (ADS)
Hekmat, M. J.; Omoomi, M.; Gholami, A.; Yazdabadi, A. Bagheri; Abdollahi, M.; Hamidnejad, E.; Ebrahimi, A.; Normohamadi, H.
2018-01-01
In this research, adopting an alternative approach to ultra-short giant pulse generation motivated by difficulties with traditionally employed methods, an optimized Er/Yb co-doped double-clad fiber amplifier is applied to boost the average power of single-mode output pulses to a high level of 2 W at a 1.59-µm central wavelength. Output pulses of approximately 63-fs width at a 52-MHz repetition rate are obtained in an all-fiber monolithic laser configuration. The combination of parabolic pulse amplification for stretching the pulses with high-power amplification in Er/Yb co-doped active fibers for compressing and boosting the average power plays a crucial role in obtaining the desired results. The proposed configuration offers substantial advantages over previously reported designs, making it well-suited for high-power precision applications such as medical surgery. Detailed dynamics of pulse stretching and compression in active fibers with different GVD parameters are numerically and experimentally investigated.
Sun, Tao; Fezzaa, Kamel
2016-06-17
Here, a high-speed X-ray diffraction technique was recently developed at the 32-ID-B beamline of the Advanced Photon Source for studying highly dynamic, yet non-repeatable and irreversible, materials processes. In experiments, the microstructure evolution in a single material event is probed by recording a series of diffraction patterns with extremely short exposure time and high frame rate. Owing to the limited flux in a short pulse and the polychromatic nature of the incident X-rays, analysis of the diffraction data is challenging. Here, HiSPoD, a stand-alone Matlab-based software for analyzing the polychromatic X-ray diffraction data from polycrystalline samples, is described. With HiSPoD, researchers are able to perform diffraction peak indexing, extraction of one-dimensional intensity profiles by integrating a two-dimensional diffraction pattern, and, more importantly, quantitative numerical simulations to obtain precise sample structure information.
High-Accuracy Measurements of Total Column Water Vapor From the Orbiting Carbon Observatory-2
NASA Technical Reports Server (NTRS)
Nelson, Robert R.; Crisp, David; Ott, Lesley E.; O'Dell, Christopher W.
2016-01-01
Accurate knowledge of the distribution of water vapor in Earth's atmosphere is of critical importance to both weather and climate studies. Here we report on measurements of total column water vapor (TCWV) from hyperspectral observations of near-infrared reflected sunlight over land and ocean surfaces from the Orbiting Carbon Observatory-2 (OCO-2). These measurements are an ancillary product of the retrieval algorithm used to measure atmospheric carbon dioxide concentrations, with information coming from three highly resolved spectral bands. Comparisons to high-accuracy validation data, including ground-based GPS and microwave radiometer data, demonstrate that OCO-2 TCWV measurements have maximum root-mean-square deviations of 0.9-1.3 mm. Our results indicate that OCO-2 is the first space-based sensor to accurately and precisely measure the two most important greenhouse gases, water vapor and carbon dioxide, at high spatial resolution (1.3 × 2.3 km²) and that OCO-2 TCWV measurements may be useful in improving numerical weather predictions and reanalysis products.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
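The stabilization in question keeps a long matrix product in factored form so that widely separated scales never mix in a single matrix. A minimal QR-based sketch (one common choice; AFMC codes differ in details such as using SVD or column pivoting):

```python
import numpy as np

def stabilized_product(mats):
    """Product M_k ... M_1 kept as Q (orthogonal), D (diagonal scales),
    T (well scaled), the factored form used to stabilize AFMC propagators."""
    Q, R = np.linalg.qr(mats[0])
    D = np.abs(np.diag(R))
    T = R / D[:, None]
    for M in mats[1:]:
        Q, R = np.linalg.qr((M @ Q) * D)   # reorthogonalize at each step
        Dn = np.abs(np.diag(R))
        T = (R / Dn[:, None]) @ T
        D = Dn
    return Q, D, T

# Product of matrices with widely spread scales (mimics low temperature).
rng = np.random.default_rng(0)
mats = [np.diag(np.exp(rng.uniform(-8, 8, 20))) @ rng.normal(size=(20, 20))
        for _ in range(50)]
Q, D, T = stabilized_product(mats)
print(D.max() / D.min())   # huge scale spread lives in D, not in Q or T
```

Forming the product naively would overflow or lose the small scales entirely; the extra factorizations are exactly the floating-point overhead the paper's improved canonical-ensemble method reduces.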
Development of Light-Activated CRISPR Using Guide RNAs with Photocleavable Protectors.
Jain, Piyush K; Ramanan, Vyas; Schepers, Arnout G; Dalvie, Nisha S; Panda, Apekshya; Fleming, Heather E; Bhatia, Sangeeta N
2016-09-26
The ability to remotely trigger CRISPR/Cas9 activity would enable new strategies to study cellular events with greater precision and complexity. In this work, we have developed a method to photocage the activity of the guide RNA called "CRISPR-plus" (CRISPR-precise light-mediated unveiling of sgRNAs). The photoactivation capability of our CRISPR-plus method is compatible with the simultaneous targeting of multiple DNA sequences and supports numerous modifications that can enable guide RNA labeling for use in imaging and mechanistic investigations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Organomimetic clusters: Precision in 3D
NASA Astrophysics Data System (ADS)
Majewski, Marek B.; Howarth, Ashlee J.; Farha, Omar K.
2017-04-01
Biomimetic molecules that can be easily tailored offer numerous opportunities. Now, boron-based clusters have been shown to be excellent biomimetics. The ease with which the cluster surfaces can be modified stands to change how chemists might go about preparing materials for imaging, drug delivery and other applications.
Accuracy of Surgery Clerkship Performance Raters.
ERIC Educational Resources Information Center
Littlefield, John H.; And Others
1991-01-01
Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)
Data Sharing For Precision Medicine: Policy Lessons And Future Directions.
Blasimme, Alessandro; Fadda, Marta; Schneider, Manuel; Vayena, Effy
2018-05-01
Data sharing is a precondition of precision medicine. Numerous organizations have produced abundant guidance on data sharing. Despite such efforts, data are not being shared to a degree that can trigger the expected data-driven revolution in precision medicine. We set out to explore why. Here we report the results of a comprehensive analysis of data-sharing guidelines issued over the past two decades by multiple organizations. We found that the guidelines overlap on a restricted set of policy themes. However, we observed substantial fragmentation in the policy landscape across specific organizations and data types. This may have contributed to the current stalemate in data sharing. To move toward a more efficient data-sharing ecosystem for precision medicine, policy makers should explore innovative ways to cope with central policy themes such as privacy, consent, and data quality; focus guidance on interoperability, attribution, and public engagement; and promote data-sharing policies that can be adapted to multiple data types.
NASA Astrophysics Data System (ADS)
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃ 7 % and ≃ 10 % at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ± 8.6 % at z = 3 and ± 6.1 % at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
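To make the construction concrete, here is a heavily simplified toy in Python: a "continuum" built from two fixed basis vectors, synthetic absorption, and a Metropolis sampler over the coefficients with a crude likelihood. Everything here (basis, likelihood, step sizes) is invented for illustration and is much looser than the paper's PCA-plus-flux-PDF model:

```python
import numpy as np

rng = np.random.default_rng(3)
npix = 300
wave = np.linspace(0.0, 1.0, npix)
basis = np.vstack([np.ones(npix), wave - 0.5])   # stand-in "PCA" basis
c_true = np.array([2.0, -0.6])
continuum = c_true @ basis
flux = continuum * rng.uniform(0.6, 1.0, npix)   # crude "forest" absorption

def log_post(c, sigma=0.1):
    model = c @ basis
    if np.any(model <= 0):
        return -np.inf
    trans = flux / model                          # implied transmission
    if np.any(trans > 1.05):                      # flux must sit below continuum
        return -np.inf
    return -np.sum((trans - 0.8) ** 2) / (2 * sigma ** 2)

c = np.array([3.0, 0.0])                          # start above the flux envelope
lp = log_post(c)
chain = []
for _ in range(20000):                            # Metropolis sampling
    prop = c + 0.01 * rng.normal(size=2)
    lp_new = log_post(prop)
    if np.log(rng.random()) < lp_new - lp:
        c, lp = prop, lp_new
    chain.append(c)
print(np.mean(chain[5000:], axis=0), c_true)      # roughly recovers c_true
```

The chain explores the coefficient space while respecting the constraint that absorbed pixels lie below the continuum, which is the mechanism by which continuum uncertainty gets marginalized rather than fixed.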
Validation of UARS Microwave Limb Sounder Temperature and Pressure Measurements
NASA Technical Reports Server (NTRS)
Fishbein, E. F.; Cofield, R. E.; Froidevaux, L.; Jarnot, R. F.; Lungu, T.; Read, W. G.; Shippony, Z.; Waters, J. W.; McDermid, I. S.; McGee, T. J.;
1996-01-01
The accuracy and precision of the Upper Atmosphere Research Satellite (UARS) Microwave Limb Sounder (MLS) atmospheric temperature and tangent-point pressure measurements are described. Temperatures and tangent-point pressure (atmospheric pressure at the tangent height of the field of view boresight) are retrieved from a 15-channel 63-GHz radiometer measuring O2 microwave emissions from the stratosphere and mesosphere. The Version 3 data (first public release) contains scientifically useful temperatures from 22 to 0.46 hPa. Accuracy estimates are based on instrument performance, spectroscopic uncertainty and retrieval numerics, and range from 2.1 K at 22 hPa to 4.8 K at 0.46 hPa for temperature and from 200 m (equivalent log pressure) at 10 hPa to 300 m at 0.1 hPa. Temperature accuracy is limited mainly by uncertainty in instrument characterization, and tangent-point pressure accuracy is limited mainly by the accuracy of spectroscopic parameters. Precisions are around 1 K and 100 m. Comparisons are presented among temperatures from MLS, the National Meteorological Center (NMC) stratospheric analysis and lidar stations at Table Mountain, California, Observatory of Haute Provence (OHP), France, and Goddard Spaceflight Center, Maryland. MLS temperatures tend to be 1-2 K lower than NMC and lidar, but MLS is often 5-10 K lower than NMC in the winter at high latitudes, especially within the northern hemisphere vortex. Winter MLS and OHP (44 deg N) lidar temperatures generally agree and tend to be lower than NMC. Problems with Version 3 MLS temperatures and tangent-point pressures are identified, but the high precision of MLS radiances will allow improvements with better algorithms planned for the future.
Three-frequency BDS precise point positioning ambiguity resolution based on raw observables
NASA Astrophysics Data System (ADS)
Li, Pan; Zhang, Xiaohong; Ge, Maorong; Schuh, Harald
2018-02-01
All BeiDou navigation satellite system (BDS) satellites are transmitting signals on three frequencies, which brings new opportunities and challenges for high-accuracy precise point positioning (PPP) with ambiguity resolution (AR). This paper proposes an effective uncalibrated phase delay (UPD) estimation and AR strategy which is based on a raw PPP model. First, triple-frequency raw PPP models are developed. The observation model and stochastic model are designed and extended to accommodate the third frequency. Then, the UPDs are parameterized in raw frequency form and estimated with the high-precision, low-noise integer linear combinations of float ambiguities derived by ambiguity decorrelation. Third, with the UPDs corrected, the LAMBDA method is used for resolving the full or partial set of ambiguities that can be fixed. This method can be easily and flexibly extended to dual-, triple-, or more frequencies. To verify the effectiveness and performance of triple-frequency PPP AR, tests with real BDS data from 90 stations lasting for 21 days were performed in static mode. Data were processed with three strategies: BDS triple-frequency ambiguity-float PPP, and BDS triple-frequency PPP with dual-frequency (B1/B2) and with three-frequency AR, respectively. Numerous experimental results showed that, compared with the ambiguity-float solution, the performance in terms of convergence time and positioning biases can be significantly improved by AR. Among the three groups of solutions, the triple-frequency PPP AR achieved the best performance. Compared with dual-frequency AR, the additional third frequency clearly improved the position estimates during the initialization phase and in constrained environments where dual-frequency PPP AR is limited by the small number of visible satellites.
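The final fixing step can be illustrated with integer bootstrapping (sequential conditional rounding), a simpler relative of LAMBDA; LAMBDA additionally applies an integer decorrelating transform first, which greatly raises the success rate. The synthetic ambiguities and covariance below are illustrative:

```python
import numpy as np

def bootstrap_fix(a_float, Q):
    """Sequential conditional rounding of float ambiguities a_float with
    covariance Q, conditioning the remainder after each integer is fixed."""
    L = np.linalg.cholesky(Q)              # lower-triangular factor of Q
    a = a_float.astype(float).copy()
    z = np.zeros(len(a))
    for i in range(len(a)):
        z[i] = np.round(a[i])
        a[i + 1:] -= L[i + 1:, i] / L[i, i] * (a[i] - z[i])
    return z.astype(int)

rng = np.random.default_rng(4)
true_N = rng.integers(-20, 20, 6)
Q = 0.01 * np.eye(6) + 0.002               # small, correlated uncertainties
a_hat = true_N + np.linalg.cholesky(Q) @ rng.normal(size=6)
print(bootstrap_fix(a_hat, Q), true_N)     # integers recovered
```

Conditioning on each fixed integer shrinks the uncertainty of the remaining ambiguities, which is why the order of fixing, and the decorrelation that LAMBDA performs beforehand, matter so much in practice.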
NASA Astrophysics Data System (ADS)
Serio, C.; Blasi, M. G.; Liuzzi, G.; Masiello, G.; Venafra, S.
2017-02-01
IASI (Infrared Atmospheric Sounding Interferometer) is flying on the European MetOp series of weather satellites. Besides acquiring temperature and humidity data, IASI also observes the infrared emission of the main minor and trace atmospheric components with high precision. The retrieval of these gases would be highly beneficial to the efforts of scientists monitoring Earth's climate. IASI retrieval capability and algorithms have been mostly driven by Numerical Weather Prediction centers, whose limited resources for data transmission and computing are hampering the full exploitation of IASI information content. The quest for real-time or near-real-time processing has affected the precision of the estimation of minor and trace gases, which are normally retrieved on a very coarse spatial grid. The paper presents the very first retrieval of the complete suite of IASI target parameters by exploiting all its 8461 channels. The analysis is exemplified for the sea surface, and the target parameters include sea surface temperature, temperature profile, water vapour and HDO profiles, ozone profile, and total column amounts of CO, CO2, CH4, N2O, SO2, HNO3, NH3, OCS and CF4. Concerning CO2, CH4 and N2O, it will be shown that their column amounts can be obtained for each single IASI IFOV (Instantaneous Field of View) with a precision better than 1-2%, which opens the possibility to analyze, e.g., the formation of regional patterns of greenhouse gases. To assess the quality of the retrieval, a case study has been set up which considers two years of IASI soundings over the Mauna Loa validation station in Hawaii.
QED contributions to electron g-2
NASA Astrophysics Data System (ADS)
Laporta, Stefano
2018-05-01
In this paper I briefly describe the results of the numerical evaluation of the mass-independent 4-loop contribution to the electron g-2 in QED with 1100 digits of precision. In particular I also show the semi-analytical fit to the numerical value, which contains harmonic polylogarithms of e^(iπ/3), e^(2iπ/3) and e^(iπ/2), one-dimensional integrals of products of complete elliptic integrals, and six finite parts of master integrals, evaluated up to 4800 digits. I also give some information about the methods and the program used.
[Spectral emissivity of thin films].
Zhong, D
2001-02-01
In this paper, the contribution of multiple reflections within the film to the spectral emissivity of thin films of low absorption is discussed. The expression for the emissivity of thin films derived here is related to the film thickness d and the optical constants n(λ) and k(λ). It is shown that in the special case d → ∞ the emissivity of a thin film reduces to that of the bulk material. Realistic numerical results, and more precise general numerical results, for the dependence of the emissivity on d, n(λ) and k(λ) are given.
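The standard incoherent-slab result behind this discussion can be written down directly. A short sketch (free-standing film at normal incidence; the constants are illustrative, not fitted to any material):

```python
import numpy as np

def film_emissivity(d, n, k, lam):
    """Normal-incidence spectral emissivity of a free-standing absorbing
    film, including multiple internal reflections (incoherent limit)."""
    R = ((n - 1) ** 2 + k ** 2) / ((n + 1) ** 2 + k ** 2)  # surface reflectance
    tau = np.exp(-4 * np.pi * k * d / lam)                 # internal transmittance
    return (1 - R) * (1 - tau) / (1 - R * tau)

lam = 10e-6                     # wavelength: 10 um
n, k = 2.0, 0.05                # weakly absorbing film (illustrative)
for d in (0.1e-6, 1e-6, 10e-6, 1e-2):
    print(f"d = {d:.0e} m  ->  emissivity {film_emissivity(d, n, k, lam):.4f}")
# As d -> infinity, tau -> 0 and the emissivity approaches the bulk value 1 - R.
```

The geometric series over internal bounces is what produces the factor 1/(1 − Rτ); dropping it recovers the single-pass approximation, which underestimates the emissivity of weakly absorbing films.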
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Jian; Chen, Mingjun
Rapid growth and ultra-precision machining of large-size KDP (KH2PO4) crystals with high laser damage resistance are tough challenges in the development of large laser systems. It is of high interest and practical significance to have theoretical models for scientists and manufacturers to determine the laser-induced damage threshold (LIDT) of actually prepared KDP optics. Here, we numerically and experimentally investigate the laser-induced damage on KDP crystals in the ultra-short pulse laser regime. On the basis of the rate equation for free electron generation, a model dedicated to predicting the LIDT is developed by considering the synergistic effect of photoionization, impact ionization and decay of electrons. Laser damage tests are performed to measure the single-pulse LIDT with several testing protocols. The testing results combined with previously reported experimental data agree well with those calculated by the model. By taking the light intensification into consideration, the model is successfully applied to quantitatively evaluate the effect of surface flaws inevitably introduced in the preparation processes on the laser damage resistance of KDP crystals. This work can not only contribute to further understanding of the laser damage mechanisms of optical materials, but also provide available models for evaluating the laser damage resistance of exquisitely prepared optical components used in high power laser systems.
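A generic single-rate-equation model of this kind is easy to integrate numerically. The sketch below uses placeholder coefficients (not fitted KDP values) to show how a damage threshold emerges from the rate equation via bisection on the peak intensity:

```python
import numpy as np

# Generic free-electron rate equation for ultra-short-pulse damage,
#   dn/dt = sigma_m * I^m + eta * n * I - n / tau_decay,
# with multiphoton seeding ~ I^m. All coefficients are illustrative.
m, sigma_m = 3, 1e8            # 3-photon process, rate coefficient
eta, tau_decay = 1.0, 50e-15   # impact-ionization coeff., electron decay time
n_crit = 1.0                   # damage when n reaches this (normalized)

def n_final(I0, t_pulse=100e-15, steps=2000):
    t = np.linspace(-2 * t_pulse, 2 * t_pulse, steps)
    dt = 4 * t_pulse / steps
    I = I0 * np.exp(-(t / t_pulse) ** 2)      # Gaussian pulse envelope
    n = 0.0
    for Ik in I:                              # explicit Euler integration
        n += dt * (sigma_m * Ik ** m + eta * n * Ik - n / tau_decay)
    return n

# Bisect on peak intensity for the threshold condition n(I0) = n_crit.
lo, hi = 1e-3, 1e3
for _ in range(60):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if n_final(mid) < n_crit else (lo, mid)
print(f"threshold peak intensity (arb. units): {np.sqrt(lo * hi):.3g}")
```

Because n_final grows monotonically with peak intensity, the threshold is well defined; in the paper's application, local light intensification at surface flaws effectively shifts I0 upward and lowers the apparent LIDT.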
NASA Astrophysics Data System (ADS)
Tang, H.; Sun, W.
2016-12-01
The theoretical computation of dislocation-induced deformation in a given earth model is necessary to explain observations of the co- and post-seismic deformation of earthquakes. For this purpose, computation theories based on layered or pure half-space models [Okada, 1985; Okubo, 1992; Wang et al., 2006] and on a spherically symmetric earth [Piersanti et al., 1995; Pollitz, 1997; Sabadini & Vermeersen, 1997; Wang, 1999] have been proposed. It is indicated that the compressibility, curvature, and continuous variation of the radial structure of Earth should be simultaneously taken into account for modern high-precision displacement-based observations like GPS. Therefore, Tanaka et al. [2006; 2007] computed global displacement and gravity variation by combining the reciprocity theorem (RPT) [Okubo, 1993] and numerical inverse Laplace integration (NIL) instead of the normal mode method [Peltier, 1974]. Without using RPT, we follow the straightforward numerical integration of co-seismic deformation given by Sun et al. [1996] to present a straightforward numerical inverse Laplace integration method (SNIL). This method is used to compute the co- and post-seismic displacement of point dislocations buried in a spherically symmetric, self-gravitating, viscoelastic and multilayered earth model, and is easily extended to geoid and gravity applications. Compared with pre-existing methods, this method is more straightforward and time-saving, mainly because we sum the associated Legendre polynomials and dislocation Love numbers before using the Riemann-Mellin formula to implement the SNIL.
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
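The round-off accumulation in the Fourier method is easy to reproduce: advancing by many small spectral phase shifts should equal one large shift, so any difference is pure round-off. A short NumPy experiment (unmodified FFT stepping, so the growth is roughly linear in the number of steps):

```python
import numpy as np

# Advect u(x) with speed c by repeatedly applying the exact phase shift in
# Fourier space; after k steps of size h the result should equal a single
# shift by k*h, so the residual is accumulated round-off error.
N, c, h = 256, 1.0, 1e-3
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers
u0 = np.sin(x) + 0.5 * np.sin(3 * x)

u_hat = np.fft.fft(u0)
for steps in (10, 100, 1000, 10000):
    v = u_hat.copy()
    for _ in range(steps):
        v = v * np.exp(-1j * k * c * h)              # one spectral time step
    exact = u_hat * np.exp(-1j * k * c * h * steps)  # single equivalent shift
    err = np.max(np.abs(np.fft.ifft(v - exact)))
    print(f"{steps:6d} steps: max round-off error {err:.2e}")
```

The paper's modified Cooley-Tukey scheme reorganizes the arithmetic so that this accumulation grows only like the square root of the number of steps.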
Advances in laser ablation MC-ICPMS isotopic analysis of rock materials
NASA Astrophysics Data System (ADS)
Young, E. D.
2007-12-01
Laser ablation multiple-collector inductively coupled plasma-source mass spectrometry (LA-MC-ICPMS) is a rapid method for obtaining high-precision isotope ratio measurements in geological samples. The method has been used with success for measuring isotope ratios of numerous elements, including Pb, Hf, Mg, Si, and Fe in terrestrial and extraterrestrial samples. It fills the gap between the highest precision obtainable with acid digestion together with MC-ICPMS and thermal ionization mass spectrometry (TIMS) and the maximum spatial resolution afforded by secondary ion mass spectrometry (SIMS). Matrix effects have been shown to be negligible for Pb isotopic analysis by LA-MC-ICPMS (Simon et al., 2007). Glass standards NBS 610, 612, and 614 have Pb/matrix ratios spanning two orders of magnitude. Our sample-standard bracketing laser ablation technique gives accurate and precise 208Pb/206Pb and 207Pb/206Pb for these glasses. The accuracy is superior to that obtained when using Tl to correct for mass fractionation. Accuracy and precision (± 0.2 ‰) for Pb in feldspars is comparable to that for double-spike TIMS. Data like these have been used to distinguish distinct sources of magmas in the Long Valley silicic magma system. LA-MC-ICPMS analyses of Mg isotope ratios in calcium-aluminum-rich inclusions (CAIs) from carbonaceous chondrite meteorites have revealed a wealth of new information about the history of these objects. A byproduct of this work has been recognition of the importance of different mass fractionation laws among three isotopes of a given element. Kinetic and equilibrium processes define distinct fractionation laws. Reservoir effects can further modify these laws. The result is that the linear coefficient β that relates the logarithms of the ratios n2/n1 and n3/n1 (ni refers to the number of atoms of isotope i) of isotopes with masses m3 > m2 > m1 is not unique. Rather, it is process dependent. In the case of Mg, this coefficient ranges from 0.521 for single-step equilibrium processes to 0.510 or even lower for kinetic processes. Rayleigh fractionation involving a kinetic process with a single-step β of 0.510 produces an effective β of 0.512. Such differences in fractionation laws can be crucial for determining excesses or deficits in isotopes relative to mass fractionation. Contrary to some assertions, Si isotope ratios can be measured with high accuracy and precision using 193 nm excimer lasers with nanosecond pulse widths (Shahar and Young, 2007). Silicon isotope ratios in CAIs measured by 193 nm LA-MC-ICPMS have been combined with Mg isotope ratios to constrain the astrophysical environments in which these oldest solar system materials formed. Accuracy of the measurements was determined using gravimetric standards of various matrix compositions. The results establish that matrix effects for Si are below detection at the ± 0.2 ‰ precision of the laser ablation technique. High mass resolving power (m/Δ m ~ 9000) is necessary to obtain accurate Si isotope ratios by laser ablation. High-precision LA-MC-ICPMS measurements of 176Hf/177Hf in zircons can be obtained by normalizing to 179Hf/177Hf assuming an exponential fractionation law and no mass-dependent Hf, Lu, or Yb stable isotope fractionation. With corrections for interfering 176Lu and 176Yb precision for this method can be on the order of 0.3 epsilon (0.03 ‰). The approach has been used to infer the existence of continental crust on Earth 4.4 billion years before present (Harrison et al., 2005).
NASA Astrophysics Data System (ADS)
Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi
2015-02-01
Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate the angular rate from blurred star images by employing a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints on small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimate with a precision better than 1 × 10^-6 rad/s. The method can be applied to all attitude sensors that use optical systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
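The basic geometry of the estimate is simple: a streak of L pixels over an exposure corresponds to a body rate of roughly L times the pixel's angular size divided by the exposure time. A back-of-the-envelope sketch with invented instrument numbers:

```python
# If a star streaks across L pixels during exposure t_exp, the body rate
# about the relevant axis is roughly omega = L * ifov / t_exp, where ifov
# is the angular size of one pixel. All numbers below are illustrative.
ifov = 5e-6      # rad/pixel for a hypothetical mission telescope
t_exp = 1.0      # exposure time (s)

def rate_from_blur(blur_len_px):
    return blur_len_px * ifov / t_exp

for L in (0.2, 1.0, 5.0):
    print(f"blur {L:>4} px -> {rate_from_blur(L):.1e} rad/s")
# Resolving 1e-6 rad/s at these numbers requires measuring the blur to a
# fraction of a pixel, hence sub-pixel fitting of the streak profile.
```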
Optimization of deformation monitoring networks using finite element strain analysis
NASA Astrophysics Data System (ADS)
Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.
2018-04-01
An optimal design of a geodetic network can fulfill the requested precision and reliability of the network and decrease the cost of its execution by removing unnecessary observations. Optimal design matters especially for deformation monitoring networks, because these networks are observed repeatedly. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution in which the precision criterion is defined based on the precision of the deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. From the obtained precision of the deformation parameters in each element, the precision criterion for displacement detection at each network point is then determined. The developed criterion is used to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, one of the most active faults in southern Sweden. The numerical results show that 17 of the 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
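To make the finite-element step concrete, here is a minimal 2D sketch (the paper uses three-dimensional elements; the point coordinates and toy displacement field below are illustrative). Each Delaunay triangle supports an affine displacement field, whose symmetric gradient is the element's infinitesimal strain tensor:

```python
import numpy as np
from scipy.spatial import Delaunay

def element_strain(xy: np.ndarray, u: np.ndarray) -> np.ndarray:
    """Infinitesimal strain tensor of one linear (constant-strain) triangle.

    xy : (3, 2) node coordinates; u : (3, 2) node displacements.
    Fits the affine field u(x) = G x + c and returns sym(G).
    """
    A = np.hstack([xy, np.ones((3, 1))])   # rows: [x_i, y_i, 1]
    coef = np.linalg.solve(A, u)           # solves A @ coef = u
    G = coef[:2].T                         # displacement gradient du_i/dx_j
    return 0.5 * (G + G.T)

# Hypothetical monitoring points and epoch-to-epoch displacements (meters)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
disp = 1e-3 * pts                          # toy field: uniform 1000 microstrain

tri = Delaunay(pts)
for simplex in tri.simplices:
    eps = element_strain(pts[simplex], disp[simplex])
    print(simplex, eps.round(6).ravel())
```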
Research on the tool holder mode in high speed machining
NASA Astrophysics Data System (ADS)
Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao
2018-03-01
High-speed machining technology can improve processing efficiency and precision while also reducing processing cost; the technology is therefore highly regarded in industry. With the extensive application of high-speed machining technology, high-speed tool systems place ever higher requirements on the tool chuck. At present, several new kinds of chucks are used in high-speed precision machining, including heat-shrink tool holders, high-precision spring collet chucks, hydraulic tool holders, and three-rib deformation chucks. Among them, the heat-shrink tool holder has the advantages of high precision, high clamping force, high bending rigidity, and good dynamic balance, and is therefore widely used. It is thus of great significance to study the new requirements placed on the machining tool system. In order to meet the requirements of high-speed precision machining, this paper reviews the common tool holder technologies for high-precision machining and proposes how to correctly select a tool clamping system in practice. The characteristics of, and remaining problems with, these tool clamping systems are analyzed.
Moeller, Korbinian; Martignon, Laura; Wessolowski, Silvia; Engel, Joachim; Nuerk, Hans-Christoph
2011-01-01
Children typically learn basic numerical and arithmetic principles using finger-based representations. However, whether reliance on finger-based representations is beneficial or detrimental is the subject of an ongoing debate between researchers in neurocognition and mathematics education. From the neurocognitive perspective, finger counting provides multisensory input that conveys both cardinal and ordinal aspects of numbers. Recent data indicate that children with good finger-based numerical representations show better arithmetic skills and that training finger gnosis, or “finger sense,” enhances mathematical skills. Neurocognitive researchers therefore conclude that elaborate finger-based numerical representations are beneficial for later numerical development. However, research in mathematics education recommends fostering mentally based numerical representations so as to induce children to abandon finger counting. More precisely, mathematics education recommends first using finger counting, then concrete structured representations, and finally mental representations of numbers to perform numerical operations. Taken together, these results reveal an important debate between neurocognitive and mathematics education research concerning the benefits and detriments of finger-based strategies for numerical development. In the present review, the rationale of both lines of evidence will be discussed. PMID:22144969
Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song
2017-01-01
Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191
NASA Astrophysics Data System (ADS)
Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore
2009-03-01
We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations.
Program summary
Program title: BOKASUN
Catalogue identifier: AECG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9404
No. of bytes in distributed program, including test data, etc.: 104 123
Distribution format: tar.gz
Programming language: FORTRAN77
Computer: Any computer with a Fortran compiler accepting the FORTRAN77 standard. Tested on various PCs with Linux.
Operating system: Linux
RAM: 120 kbytes
Classification: 4.4
Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary values of the external momentum.
Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations.
Running time: To obtain 4 Master Integrals on a PC with a 2 GHz processor it takes 3 μs for the series expansion with pre-calculated coefficients, 80 μs for the series expansion without pre-calculated coefficients, and from a few seconds up to a few minutes for the Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).
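BOKASUN itself is written in FORTRAN77. Purely as an illustration of the Runge-Kutta half of the solution method, the following sketch propagates a generic linear system y'(t) = A(t) y(t) with classical fourth-order Runge-Kutta, starting from an initial value that would, in the program's strategy, come from the accelerated series expansion (this is not the program's actual integrator):

```python
import numpy as np

def rk4_linear(A, y0, t0, t1, n_steps):
    """Integrate y'(t) = A(t) @ y(t) from t0 to t1 with classical RK4.

    A : callable returning the coefficient matrix at time t.
    y0 : initial value, e.g. obtained elsewhere from a series expansion.
    """
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = A(t) @ y
        k2 = A(t + h / 2) @ (y + h / 2 * k1)
        k3 = A(t + h / 2) @ (y + h / 2 * k2)
        k4 = A(t + h) @ (y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy example: y' = -y, exact solution exp(-t)
A = lambda t: np.array([[-1.0]])
print(rk4_linear(A, [1.0], 0.0, 1.0, 100))  # ~[0.36788]
```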
Wallisch, Pascal; Ostojic, Srdjan
2016-01-01
Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. SIGNIFICANCE STATEMENT Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions. PMID:27807166
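To give a flavor of such simulations, here is a generic pair-based STDP rule driven by irregular (Poisson) spike trains; this is not the specific plasticity model analyzed in the paper, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate_hz, duration_s):
    """Irregular (Poisson) spike times, mimicking in vivo-like firing."""
    n = rng.poisson(rate_hz * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, n))

def stdp_weight_change(pre, post, a_plus=0.01, a_minus=0.012, tau=0.020):
    """Total weight change under an all-to-all pair-based STDP rule.

    Each pre/post pair contributes +a_plus * exp(-dt/tau) if the post
    spike follows the pre spike (dt > 0), else -a_minus * exp(dt/tau).
    """
    dw = 0.0
    for t_pre in pre:
        dt = post - t_pre                        # >0: pre-before-post
        dw += a_plus * np.exp(-dt[dt > 0] / tau).sum()
        dw -= a_minus * np.exp(dt[dt <= 0] / tau).sum()
    return dw

pre = poisson_train(10.0, 10.0)    # 10 Hz presynaptic train, 10 s
post = poisson_train(10.0, 10.0)   # independent 10 Hz postsynaptic train
print(f"net weight change: {stdp_weight_change(pre, post):+.4f}")
```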
On the reliability of computed chaotic solutions of non-linear differential equations
NASA Astrophysics Data System (ADS)
Liao, Shijun
2009-08-01
A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtful when t > Tc. This provides a strategy for extracting the reliable part of a given computed result. In this way, computational phenomena such as computational chaos (CC), computational periodicity (CP), and computational prediction uncertainty, which are mainly based on long-term properties of computed time series, can be completely avoided. Using this concept, the famous conclusion 'accurate long-term prediction of chaos is impossible' should be replaced by the more precise conclusion that 'accurate prediction of chaos beyond the critical predictable time Tc is impossible'. The concept thus also provides a timescale for deciding whether or not a particular time is long enough for a given non-linear dynamic system. Besides, the influence of data inaccuracy and of various numerical schemes on the critical predictable time is investigated in detail, using symbolic computation software as a tool. A reliable chaotic solution of the Lorenz equation on a rather large interval 0 ≤ t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and of the computed data at each time step, which is mathematically necessary to obtain such a reliable chaotic solution over such a long time, is so high that it is physically unattainable due to the Heisenberg uncertainty principle of quantum physics. This gives rise to a so-called 'precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even macroscopic phenomena might be essentially stochastic and thus described more economically by probability.
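A minimal sketch of the idea behind Tc (not Liao's 'clean numerical simulation', which relies on many-hundred-digit arithmetic): integrate the Lorenz system at two different precisions and take the first time at which the trajectories disagree as an estimate of the critical predictable time of the lower-precision run.

```python
import numpy as np

def lorenz_rk4(y0, dt, n, dtype=np.float64):
    """Integrate the Lorenz system (sigma=10, rho=28, beta=8/3) with RK4."""
    sigma, rho, beta = dtype(10), dtype(28), dtype(8) / dtype(3)

    def f(y):
        x, yv, z = y
        return np.array([sigma * (yv - x),
                         x * (rho - z) - yv,
                         x * yv - beta * z], dtype=dtype)

    y = np.array(y0, dtype=dtype)
    dt = dtype(dt)
    traj = np.empty((n + 1, 3), dtype=dtype)
    traj[0] = y
    for i in range(n):
        k1 = f(y)
        k2 = f(y + dt / 2 * k1)
        k3 = f(y + dt / 2 * k2)
        k4 = f(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = y
    return traj

# Compare single- and double-precision runs; the solution is treated as
# reliable only while the two trajectories still agree.
dt, n = 0.001, 50_000                        # 50 Lorenz time units
lo = lorenz_rk4([1.0, 1.0, 1.0], dt, n, np.float32)
hi = lorenz_rk4([1.0, 1.0, 1.0], dt, n, np.float64)
diverged = np.abs(lo - hi).max(axis=1) > 1.0
t_c = dt * (np.argmax(diverged) if diverged.any() else n)
print(f"estimated critical predictable time Tc ~ {t_c:.1f}")
```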
UAV remote sensing for precision agriculture
NASA Astrophysics Data System (ADS)
Vigneau, Nathalie; Chéron, Corentin; Mainfroy, Florent; Faroux, Romain
2014-05-01
Airinov offers farmers, scientists, and experimenters (plant breeders, etc.) its technical skills in UAVs, cartography, and agronomic remote sensing. The UAV is a 2-m-wingspan flying wing. It can carry either an RGB camera or a multispectral sensor that records reflectance in 4 spectral bands. The spectral characteristics of the sensor are modular: each spectral band lies between 400 and 850 nm, with a FWHM (Full Width at Half Maximum) between 10 and 40 nm. The spatial resolution varies with sensor, flying height, and user needs, from 15 cm/px for the multispectral sensor at 150 m to 1.5 cm/px for the RGB camera at 50 m. The flight is fully automatic thanks to the on-board autopilot, IMU (Inertial Measurement Unit), and GPS. Data processing (devignetting, mosaicking, and conversion to reflectance) yields agronomic variables such as LAI (Leaf Area Index) or chlorophyll content for barley, wheat, rape, and maize, as well as vegetation indices such as NDVI (Normalized Difference Vegetation Index). Using these data, Airinov can produce advice for farmers, such as nitrogen recommendations for rape. For scientists, Airinov offers trial plot monitoring by vectorizing micro-plots and extracting numerical data micro-plot by micro-plot. This can yield kinetic curves of LAI or NDVI, for example to compare cover establishment among different genotypes. Airinov's system is a new way to monitor plots, delivering abundant data (biophysical or biochemical parameters) at high rate, high spatial resolution, and high precision.
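The NDVI mentioned above is simple to compute from co-registered reflectance mosaics; a minimal sketch (the band arrays and values are hypothetical):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed pixel-wise.

    eps guards against division by zero over no-data pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 reflectance mosaics (values in 0-1)
nir = np.array([[0.60, 0.55], [0.30, 0.58]])
red = np.array([[0.08, 0.10], [0.25, 0.07]])
print(ndvi(nir, red).round(2))  # vegetated pixels ~0.7-0.8, bare soil ~0.1
```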
Epigenetic regulation of gene expression in cancer: techniques, resources and analysis
Kagohara, Luciane T; Stein-O’Brien, Genevieve L; Kelley, Dylan; Flam, Emily; Wick, Heather C; Danilova, Ludmila V; Easwaran, Hariharan; Favorov, Alexander V; Qian, Jiang; Gaykalova, Daria A; Fertig, Elana J
2018-01-01
Cancer is a complex disease, driven by aberrant activity in numerous signaling pathways in even individual malignant cells. Epigenetic changes are critical mediators of these functional changes that drive and maintain the malignant phenotype. Changes in DNA methylation, histone acetylation and methylation, noncoding RNAs, and posttranslational modifications are all epigenetic drivers of cancer, independent of changes in the DNA sequence. These epigenetic alterations were once thought to be crucial only for maintenance of the malignant phenotype. Now, epigenetic alterations are also recognized as critical for disrupting essential pathways that protect cells from uncontrolled growth, prolonged survival, and establishment at sites distant from the original tissue. In this review, we focus on DNA methylation and chromatin structure in cancer. The precise functional role of these alterations is an area of active research using emerging high-throughput approaches and bioinformatics analysis tools. Therefore, this review also describes these high-throughput measurement technologies, public-domain databases for high-throughput epigenetic data in tumors and model systems, and bioinformatics algorithms for their analysis. Advances in bioinformatics methods that integrate these epigenetic data with genomics data are essential for inferring the function of specific epigenetic alterations in cancer; such integrative algorithms are also a focus of this review. Future studies using these emerging technologies will elucidate how alterations in the cancer epigenome cooperate with genetic aberrations during tumor initiation and progression. This deeper understanding is essential to future studies of epigenetic biomarkers and of precision medicine using emerging epigenetic therapies. PMID:28968850
Avital, Itzhak; Langan, Russell C.; Summers, Thomas A.; Steele, Scott R.; Waldman, Scott A.; Backman, Vadim; Yee, Judy; Nissan, Aviram; Young, Patrick; Womeldorph, Craig; Mancusco, Paul; Mueller, Renee; Noto, Khristian; Grundfest, Warren; Bilchik, Anton J.; Protic, Mladjan; Daumer, Martin; Eberhardt, John; Man, Yan Gao; Brücher, Björn LDM; Stojadinovic, Alexander
2013-01-01
Colorectal cancer (CRC) is the third most common cause of cancer-related death in the United States (U.S.), with an estimated 143,460 new cases and 51,690 deaths for the year 2012. Numerous organizations have published guidelines for CRC screening; however, these numerical estimates of incidence and disease-specific mortality have remained essentially unchanged from prior years. Technological, genetic-profiling, molecular, and surgical advances in our modern era should allow us to improve risk stratification of patients with CRC and to identify those who may benefit from preventive measures, early aggressive treatment, alternative treatment strategies, and/or frequent surveillance for the early detection of disease recurrence. To better negotiate future economic constraints and, ultimately, to enhance patient outcomes, we propose to apply the principles of personalized and precise cancer care to risk-stratify patients for CRC screening (Precision Risk Stratification-Based Screening, PRSBS). We believe that genetic, molecular, ethnic, and socioeconomic disparities impact oncological outcomes in general, and those related to CRC in particular. This document highlights evidence-based screening recommendations and risk stratification methods in response to our CRC working group private-public consensus meeting held in March 2012. Our aim was to address how CRC risk stratification-based screening could be improved, and to provide a vision for the future for achieving superior survival rates for patients diagnosed with CRC. PMID:23459409