Sample records for high precision modeling

  1. High precision NC lathe feeding system rigid-flexible coupling model reduction technology

    NASA Astrophysics Data System (ADS)

    Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai

    2017-08-01

This paper proposes a dynamic substructuring method of model order reduction to achieve an effective reduction of the rigid-flexible coupling model of a high precision NC lathe feed system: ADAMS is used to establish the rigid-flexible coupling simulation model of the high precision NC lathe, and vibration simulation with the FD 3D damper then proves very effective for reducing the multi-degree-of-freedom model of the feed system's bolted connections. The resulting vibration simulation is both more accurate and faster.
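    The dynamic substructuring named in this record belongs to a family of model order reduction techniques; below is a minimal sketch of the closely related Guyan (static) condensation on toy matrices, an illustrative textbook construction rather than the authors' ADAMS workflow.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Guyan (static) condensation: a classic model-reduction step related to
    # dynamic substructuring (illustrative only, not the record's method).
    def guyan_reduce(K, M, master):
        n = K.shape[0]
        slave = [i for i in range(n) if i not in master]
        order = master + slave
        Kp, Mp = K[np.ix_(order, order)], M[np.ix_(order, order)]
        nm = len(master)
        # Slave DOFs follow the masters statically: u_s = -Kss^-1 Ksm u_m.
        T = np.vstack([np.eye(nm),
                       -np.linalg.solve(Kp[nm:, nm:], Kp[nm:, :nm])])
        return T.T @ Kp @ T, T.T @ Mp @ T

    # 3-DOF spring chain; keep DOFs 0 and 2 as masters.
    K = np.array([[2.0, -1, 0], [-1, 2, -1], [0, -1, 1]])
    M = np.eye(3)
    Kr, Mr = guyan_reduce(K, M, [0, 2])
    print("full model lowest eigenvalue   :", eigh(K, M, eigvals_only=True)[0])
    print("reduced model lowest eigenvalue:", eigh(Kr, Mr, eigvals_only=True)[0])
    ```

    The reduced two-DOF model reproduces the lowest eigenvalue of the full chain to within a few percent, which is the sense in which such reductions preserve low-frequency vibration behavior.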

  2. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
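    As a rough illustration of the experiment's skeleton, the sketch below integrates the standard two-scale Lorenz '96 system (the record's three-tier extension is analogous) while storing the small-scale tier in half precision; parameter values are conventional textbook choices, not the authors'.

    ```python
    import numpy as np

    # Two-scale Lorenz '96 with the small-scale tier stored in reduced
    # precision. K large-scale and K*J small-scale variables.
    K, J, F, h, b, c = 8, 4, 20.0, 1.0, 10.0, 10.0

    def tendencies(X, Y):
        dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
              - (h * c / b) * Y.reshape(K, J).sum(axis=1))
        dY = (-c * b * np.roll(Y, -1) * (np.roll(Y, -2) - np.roll(Y, 1))
              - c * Y + (h * c / b) * np.repeat(X, J))
        return dX, dY

    def integrate(y_dtype, steps=2000, dt=0.001):
        rng = np.random.default_rng(0)
        X = F + rng.standard_normal(K)
        Y = (0.1 * rng.standard_normal(K * J)).astype(y_dtype)
        for _ in range(steps):
            dX, dY = tendencies(X, Y.astype(np.float64))
            X = X + dt * dX
            Y = (Y.astype(np.float64) + dt * dY).astype(y_dtype)  # storage rounding
        return X

    x64, x16 = integrate(np.float64), integrate(np.float16)
    print("large-scale RMS difference:", np.sqrt(np.mean((x64 - x16) ** 2)))
    ```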

  3. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

The well-established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval data type are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
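    The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned here are error-free transformations; a minimal sketch of the classic TwoSum building block and a compensated sum built from it:

    ```python
    # Knuth's TwoSum: returns (s, e) with s = fl(a+b) and a + b = s + e exactly.
    def two_sum(a, b):
        s = a + b
        bb = s - a
        e = (a - (s - bb)) + (b - bb)
        return s, e

    # The exact error term lets a sum be accumulated well beyond working
    # precision (a compensated, Kahan-style summation):
    def comp_sum(xs):
        s = err = 0.0
        for x in xs:
            s, e = two_sum(s, x)
            err += e
        return s + err

    vals = [1e16, 1.0, -1e16, 1.0]
    print(sum(vals), comp_sum(vals))  # naive: 1.0, compensated: 2.0 (exact)
    ```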

  4. A Monte Carlo Simulation Comparing the Statistical Precision of Two High-Stakes Teacher Evaluation Methods: A Value-Added Model and a Composite Measure

    ERIC Educational Resources Information Center

    Spencer, Bryden

    2016-01-01

    Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…

  5. Development of the One Centimeter Accuracy Geoid Model of Latvia for GNSS Measurements

    NASA Astrophysics Data System (ADS)

    Balodis, J.; Silabriedis, G.; Haritonova, D.; Kaļinka, M.; Janpaule, I.; Morozova, K.; Jumāre, I.; Mitrofanovs, I.; Zvirgzds, J.; Kaminskis, J.; Liepiņš, I.

    2015-11-01

There is an urgent necessity for a highly accurate and reliable geoid model to enable prompt determination of normal height with the use of GNSS coordinate determination, due to the high precision requirements in geodesy, building and high precision road construction. Additionally, the Latvian height system is in transition from BAS-77 (Baltic Height System) to the EVRS2007 system. The accuracy of the geoid model must approach ∼1 cm in view of the Baltic Rail and other large projects. The use of all available and verified data sources is planned, including an enlarged set of GNSS/levelling data, gravimetric measurement data and, additionally, vertical deflection measurements over the territory of Latvia. The work is proceeding stepwise; here, the issue of GNSS reference network stability is discussed. In order to achieve a ∼1 cm precision geoid, a homogeneous high precision GNSS network is required as a basis for ellipsoidal height determination at GNSS/levelling points. Both the LatPos and EUPOS®-Riga networks are examined in this article.

  6. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    PubMed

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

Taking the lower reaches of the Tarim River in Xinjiang, Northwest China, as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that the precision of the estimation models increased with the spatial resolution of the remote sensing data. The estimation precision of the models based on the high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed how the estimation precision of desert vegetation coverage changes with the spatial resolution of the remote sensing data, and realized the quantitative conversion of parameters and scales among the high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, providing direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.

  7. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving the tracking errors around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer, and the polynomial coefficients are then obtained from these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS. PMID:27669261

  9. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer, and the polynomial coefficients are then obtained from these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS.
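    The core fitting step in these two records is an ordinary least-squares fit of a two-dimensional second-order polynomial to gridded values; a minimal sketch on a synthetic grid (coordinates and coefficients are invented, not the paper's):

    ```python
    import numpy as np

    # Fit g(lat, lon) = c0 + c1*lat + c2*lon + c3*lat*lon + c4*lat^2 + c5*lon^2
    # to gridded disturbance values, standing in for the DOV grids the record
    # computes from a spherical harmonic model.
    rng = np.random.default_rng(1)
    lat, lon = np.meshgrid(np.linspace(30, 31, 21), np.linspace(110, 111, 21))
    x, y = lat.ravel(), lon.ravel()
    truth = 2.0 + 0.5*x - 0.3*y + 0.02*x*y + 0.01*x**2 - 0.015*y**2
    z = truth + 1e-3 * rng.standard_normal(x.size)   # synthetic noisy samples

    A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    print("fit residual RMS:", np.sqrt(np.mean((A @ coef - z) ** 2)))
    ```

    In the record's scheme, only the six coefficients and the region bounds would need to be shipped to the navigation computer, which is the source of the storage and runtime savings over the full spherical harmonic model.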

  10. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

The US Navy's GEOSAT Follow-On (GFO) spacecraft's primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  11. Development and Validation of High Precision Thermal, Mechanical, and Optical Models for the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles

    2006-01-01

SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risks and costs, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.

  12. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
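    A toy version of the record's reconstruct-and-compare experiment, with simple Gaussian transfer functions standing in for the speckle transfer function models and a random field in place of the photosphere simulations; all widths are invented, and varying the model width probes the sensitivity the paper studies:

    ```python
    import numpy as np

    # Blur a "truth" image with one transfer function, deconvolve with a
    # slightly wrong model of it, and report the photometric error.
    n = 256
    u = np.fft.fftfreq(n)
    U, V = np.meshgrid(u, u)
    r2 = U**2 + V**2
    otf_true  = np.exp(-r2 / (2 * 0.30**2))   # "real" atmosphere+AO transfer
    otf_model = np.exp(-r2 / (2 * 0.31**2))   # imperfect model of it

    rng = np.random.default_rng(0)
    truth = 1.0 + 0.1 * rng.standard_normal((n, n))   # granulation-like field
    blurred = np.fft.ifft2(np.fft.fft2(truth) * otf_true).real
    recon = np.fft.ifft2(np.fft.fft2(blurred) / np.maximum(otf_model, 1e-3)).real

    print("photometric RMS error: %.2f%%" % (100 * (recon / truth - 1).std()))
    ```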

  13. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high-dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600, when evaluating nsys > 1024 model planetary systems each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
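    A sketch of the two ingredients discussed, in plain NumPy rather than CUDA: a Newton iteration for Kepler's equation, and the mean-anomaly failure mode of single precision for long time baselines (all orbital values are invented):

    ```python
    import numpy as np

    # Newton solver for Kepler's equation E - e*sin(E) = M.
    def kepler_E(M, e, tol=1e-12, itmax=50):
        E = M + e * np.sin(M)                     # reasonable starting guess
        for _ in range(itmax):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= dE
            if np.all(np.abs(dE) < tol):
                break
        return E

    # Mean anomaly M = 2*pi*frac(t/P) after ~1000 orbits: float32 loses
    # several digits in t/P, which is the failure the record describes.
    t, P = 3652.5, 4.3210987                      # days (invented)
    M64 = 2 * np.pi * np.fmod(t / np.float64(P), 1.0)
    M32 = 2 * np.pi * np.fmod(np.float32(t) / np.float32(P), np.float32(1.0))
    print("mean anomaly error (rad):", abs(float(M32) - M64))
    print("E (double precision):", kepler_E(M64, 0.3))
    ```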

  14. Improved Slip Casting Of Ceramic Models

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.; Vasquez, Peter; Hicks, Lana P.

    1994-01-01

Improved technique of investment slip casting developed for making precise ceramic wind-tunnel models. Needed in wind-tunnel experiments to verify predictions of aerothermodynamical computer codes. Ceramic materials used because of their low heat conductivities and ability to survive high temperatures. Present improved slip-casting technique enables casting of highly detailed models from aqueous or nonaqueous solutions. Wet shell molds peeled off models to ensure precise and undamaged details. Used at NASA Langley Research Center to form superconducting ceramic components from nonaqueous slip solutions. Technique has many more applications when ceramic materials are developed further for such high-strength/temperature components as engine parts.

  15. High Precision Edge Detection Algorithm for Mechanical Parts

    NASA Astrophysics Data System (ADS)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

High precision and high efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is therefore proposed. For this purpose, the step-edge normal section line Gaussian integral model of the backlight image is constructed, combining the point spread function and the single step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise is fitted in accordance with the Gaussian integral model. A precise subpixel edge location is thereby determined by searching for the mean point. Finally, a gear tooth was measured with the M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center, indicating that the method is sufficiently reliable for high precision measurement.
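    The idea of a Gaussian-integral step edge can be sketched compactly: a step blurred by a Gaussian point spread function has an erf-shaped profile, and fitting it gives the subpixel edge as the fitted mean. Synthetic data below; this is not the authors' full normal-section interpolation pipeline:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    # Gray profile along the edge normal: integral of a Gaussian (erf step).
    def edge_model(x, lo, hi, mu, sigma):
        return lo + (hi - lo) * 0.5 * (1 + erf((x - mu) / (np.sqrt(2) * sigma)))

    x = np.arange(20.0)                               # pixel positions
    rng = np.random.default_rng(3)
    gray = edge_model(x, 30, 200, 9.37, 1.2) + rng.normal(0, 2.0, x.size)

    popt, _ = curve_fit(edge_model, x, gray, p0=(gray.min(), gray.max(), 10, 1))
    print("subpixel edge at x = %.3f (true 9.370)" % popt[2])
    ```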

  16. On the use of programmable hardware and reduced numerical precision in earth-system modeling.

    PubMed

    Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N

    2015-09-01

    Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
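    Reduced numerical precision of the kind studied here can also be emulated in software by rounding the significand to a chosen number of bits, in the spirit of precision-emulation tools such as the rpe library; a minimal sketch (the rounding scheme is an assumption, shown for illustration):

    ```python
    import numpy as np

    # Keep roughly `bits` significand bits by rounding the mantissa.
    def reduce_precision(x, bits):
        m, e = np.frexp(np.asarray(x, dtype=np.float64))
        return np.ldexp(np.round(m * 2.0**bits) / 2.0**bits, e)

    for bits in (52, 23, 10, 5):
        print(bits, reduce_precision(np.pi, bits))
    ```

    Rerunning a model with progressively fewer bits and watching when results change significantly is exactly the minimal-precision search the record describes.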

  17. Capacity and precision in an animal model of visual short-term memory.

    PubMed

    Lara, Antonio H; Wallis, Jonathan D

    2012-03-14

Temporary storage of information in visual short-term memory (VSTM) is a key component of many complex cognitive abilities. However, it is highly limited in capacity. Understanding the neurophysiological nature of this capacity limit will require a valid animal model of VSTM. We used a multiple-item color change detection task to measure macaque monkeys' VSTM capacity. Subjects' performance deteriorated and reaction times increased as a function of the number of items in memory. Additionally, we measured the precision of the memory representations by varying the distance between sample and test colors. In trials with similar sample and test colors, subjects made more errors compared to trials with highly discriminable colors. We modeled the error distribution as a Gaussian function and used this to estimate the precision of VSTM representations. We found that as the number of items in memory increases the precision of the representations decreases dramatically. Additionally, we found that focusing attention on one of the objects increases the precision with which that object is stored and degrades the precision of the remaining objects. These results are in line with recent findings in human psychophysics and provide a solid foundation for understanding the neurophysiological nature of the capacity limit of VSTM.
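    The Gaussian precision estimate described here is simple to sketch: fit (or, in the minimal case, just take the standard deviation of) the response-error distribution per memory load. Synthetic numbers below, chosen only to mimic the qualitative finding that precision falls with load:

    ```python
    import numpy as np

    # "Precision" as 1/SD of a Gaussian fitted to response errors.
    rng = np.random.default_rng(0)
    for set_size, true_sd in [(1, 10), (2, 14), (4, 22)]:   # invented SDs, deg
        errors = rng.normal(0, true_sd, 500)                # response - target
        sd = errors.std(ddof=1)
        print(f"set size {set_size}: SD = {sd:5.1f} deg, precision = {1/sd:.3f}")
    ```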

  18. High precision locating control system based on VCM for Talbot lithography

    NASA Astrophysics Data System (ADS)

    Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song

    2016-10-01

Aiming at the high precision and efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we build a mathematical model of the VCM and analyze its motion characteristics. A double closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop is implemented in the driver to achieve rapid tracking of the system current. The position loop is implemented on a digital signal processor (DSP), with position feedback provided by high precision linear scales. Feedforward control and position-feedback proportional-integral-derivative (PID) control are applied to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrate that the performance of the Z-direction gantry is markedly improved, with high precision, quick response, strong real-time behavior, and easy extension to higher precision.
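    A minimal sketch of the position-loop idea (PID feedback plus feedforward) on a double-integrator stand-in for the VCM-driven stage; all gains and plant values are illustrative, not the authors':

    ```python
    import numpy as np

    dt, m = 1e-4, 0.05                        # time step (s), moving mass (kg)
    kp, ki, kd = 800.0, 4e4, 8.0              # PID gains (invented)
    w = 2 * np.pi * 5                         # 5 Hz reference trajectory
    x = v = integ = prev_err = 0.0
    errs = []
    for k in range(20000):                    # 2 s of simulated motion
        t = k * dt
        ref = 1e-3 * np.sin(w * t)            # 1 mm amplitude
        ff = m * (-1e-3 * w**2 * np.sin(w * t))   # acceleration feedforward
        err = ref - x
        integ += err * dt
        f = kp * err + ki * integ + kd * (err - prev_err) / dt + ff
        prev_err = err
        v += (f / m) * dt                     # double-integrator plant
        x += v * dt
        errs.append(abs(err))
    print("max |error| over last cycle: %.2e m" % max(errs[-2000:]))
    ```

    The feedforward term supplies the force the trajectory itself demands, so the feedback loop only has to cancel disturbances and model error, which is where the dynamic-lag compensation mentioned in the record comes from.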

  19. High-Precision Measurement of the Ne19 Half-Life and Implications for Right-Handed Weak Currents

    NASA Astrophysics Data System (ADS)

    Triambak, S.; Finlay, P.; Sumithrarachchi, C. S.; Hackman, G.; Ball, G. C.; Garrett, P. E.; Svensson, C. E.; Cross, D. S.; Garnsworthy, A. B.; Kshetri, R.; Orce, J. N.; Pearson, M. R.; Tardiff, E. R.; Al-Falou, H.; Austin, R. A. E.; Churchman, R.; Djongolov, M. K.; D'Entremont, R.; Kierans, C.; Milovanovic, L.; O'Hagan, S.; Reeve, S.; Sjue, S. K. L.; Williams, S. J.

    2012-07-01

We report a precise determination of the Ne19 half-life to be T1/2=17.262±0.007 s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.

  20. High-precision measurement of the 19Ne half-life and implications for right-handed weak currents.

    PubMed

    Triambak, S; Finlay, P; Sumithrarachchi, C S; Hackman, G; Ball, G C; Garrett, P E; Svensson, C E; Cross, D S; Garnsworthy, A B; Kshetri, R; Orce, J N; Pearson, M R; Tardiff, E R; Al-Falou, H; Austin, R A E; Churchman, R; Djongolov, M K; D'Entremont, R; Kierans, C; Milovanovic, L; O'Hagan, S; Reeve, S; Sjue, S K L; Williams, S J

    2012-07-27

    We report a precise determination of the (19)Ne half-life to be T(1/2)=17.262±0.007 s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.
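    The generic analysis step behind precision half-life measurements like the two records above is an exponential-decay fit to binned counts; a sketch on simulated data (the rate, background, and binning are invented, not the 19Ne data set):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    T12_true, bg = 17.262, 5.0                       # s, counts/bin (invented)
    t = np.arange(0, 180, 1.0)                       # 1 s bins
    rng = np.random.default_rng(7)
    counts = rng.poisson(1e4 * np.exp(-np.log(2) * t / T12_true) + bg)

    # Exponential decay plus flat background; weights from Poisson errors.
    def model(t, A, T12, b):
        return A * np.exp(-np.log(2) * t / T12) + b

    popt, pcov = curve_fit(model, t, counts, p0=(9e3, 15, 1),
                           sigma=np.sqrt(np.maximum(counts, 1)))
    print("T1/2 = %.3f +/- %.3f s" % (popt[1], np.sqrt(pcov[1, 1])))
    ```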

  1. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.

    PubMed

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-08-14

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.

  2. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms

    PubMed Central

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-01-01

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms. PMID:26287203
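    The principle behind such accelerometer configurations can be shown in one dimension: two linear accelerometers mounted at +r and -r from the rotation axis sense a_c ± α·r, so differencing cancels the common translational term and recovers the angular acceleration. An idealized illustration with invented values (the paper's geometry and noise analysis are more elaborate):

    ```python
    import numpy as np

    r, alpha_true = 0.25, 0.8             # lever arm (m), true alpha (rad/s^2)
    common = 0.05                         # shared linear acceleration (m/s^2)
    rng = np.random.default_rng(2)
    n = 10000
    a1 = common + alpha_true * r + rng.normal(0, 1e-3, n)   # sensor noise
    a2 = common - alpha_true * r + rng.normal(0, 1e-3, n)

    alpha = (a1 - a2) / (2 * r)           # difference cancels 'common'
    print("alpha = %.4f rad/s^2, rms noise = %.2e" % (alpha.mean(), alpha.std()))
    ```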

  3. Study of nanometer-level precise phase-shift system used in electronic speckle shearography and phase-shift pattern interferometry

    NASA Astrophysics Data System (ADS)

    Jing, Chao; Liu, Zhongling; Zhou, Ge; Zhang, Yimo

    2011-11-01

A nanometer-level precise phase-shift system is designed to realize phase-shift interferometry in electronic speckle shearography pattern interferometry. A PZT is used as the driving component of the phase-shift system, and a flexure-hinge translation component is developed to realize friction-free, clearance-free micro displacement. A closed-loop control system is designed for high-precision micro displacement, in which an embedded digital control system executes the control algorithm and a capacitive sensor serves as the feedback element, measuring the micro displacement in real time. The dynamic model and control model of the nanometer-level precise phase-shift system are analyzed, and high-precision micro displacement is realized with a digital PID control algorithm on this basis. Experiments prove that the location precision of the precise phase-shift system is less than 2 nm for step displacement signals and less than 5 nm for continuous displacement signals, which satisfies the requirements of electronic speckle shearography and phase-shift pattern interferometry. The fringe images of four-step phase-shift interferometry and the final phase-distribution image correlated with the distortion of the objects are presented in this paper to prove the validity of the nanometer-level precise phase-shift system.

  4. High-precision measurements of cementless acetabular components using model-based RSA: an experimental study.

    PubMed

    Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld

    2007-08-01

    In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.

  5. Parametric geometric model and hydrodynamic shape optimization of a flying-wing structure underwater glider

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao

    2017-12-01

Combining high precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of the high precision analysis; by these means, the contradiction between precision and efficiency is solved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiment (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure for constructing the surrogate model is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing-structure underwater glider increases by 9.1%.
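    A compact sketch of the surrogate-plus-PSO loop the record describes: sample an expensive objective (a test function stands in for the CFD analysis), fit a Gaussian-kernel RBF surrogate, and minimize the surrogate with a minimal particle swarm. Requires scipy >= 1.7 for RBFInterpolator; all constants are invented:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    f = lambda X: ((X - 0.3) ** 2).sum(axis=1)     # stand-in "expensive" objective
    X = rng.uniform(-1, 1, (40, 2))                # DOE samples
    sur = RBFInterpolator(X, f(X), kernel="gaussian", epsilon=1.0)

    # Minimal particle swarm on the cheap surrogate.
    P = rng.uniform(-1, 1, (30, 2))
    Vel = np.zeros_like(P)
    pb, pbv = P.copy(), sur(P)                     # personal bests
    for _ in range(100):
        gb = pb[pbv.argmin()]                      # global best
        Vel = (0.7 * Vel + 1.5 * rng.random(P.shape) * (pb - P)
                         + 1.5 * rng.random(P.shape) * (gb - P))
        P = np.clip(P + Vel, -1, 1)
        val = sur(P)
        better = val < pbv
        pb[better], pbv[better] = P[better], val[better]
    print("surrogate optimum near:", pb[pbv.argmin()])   # expect ~(0.3, 0.3)
    ```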

  6. Examining Exposure Assessment in Shift Work Research: A Study on Depression Among Nurses.

    PubMed

    Hall, Amy L; Franche, Renée-Louise; Koehoorn, Mieke

    2018-02-13

    Coarse exposure assessment and assignment is a common issue facing epidemiological studies of shift work. Such measures ignore a number of exposure characteristics that may impact on health, increasing the likelihood of biased effect estimates and masked exposure-response relationships. To demonstrate the impacts of exposure assessment precision in shift work research, this study investigated relationships between work schedule and depression in a large survey of Canadian nurses. The Canadian 2005 National Survey of the Work and Health of Nurses provided the analytic sample (n = 11450). Relationships between work schedule and depression were assessed using logistic regression models with high, moderate, and low-precision exposure groupings. The high-precision grouping described shift timing and rotation frequency, the moderate-precision grouping described shift timing, and the low-precision grouping described the presence/absence of shift work. Final model estimates were adjusted for the potential confounding effects of demographic and work variables, and bootstrap weights were used to generate sampling variances that accounted for the survey sample design. The high-precision exposure grouping model showed the strongest relationships between work schedule and depression, with increased odds ratios [ORs] for rapidly rotating (OR = 1.51, 95% confidence interval [CI] = 0.91-2.51) and undefined rotating (OR = 1.67, 95% CI = 0.92-3.02) shift workers, and a decreased OR for depression in slow rotating (OR = 0.79, 95% CI = 0.57-1.08) shift workers. For the low- and moderate-precision exposure grouping models, weak relationships were observed for all work schedule categories (OR range 0.95 to 0.99). Findings from this study support the need to consider and collect the data required for precise and conceptually driven exposure assessment and assignment in future studies of shift work and health. Further research into the effects of shift rotation frequency on depression is also recommended. © The Author(s) 2018. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
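    The coarse-versus-fine exposure contrast drawn in this record amounts to fitting the same logistic regression with different groupings; a sketch on simulated data, with effect sizes chosen only to echo the reported odds ratios (all variable names and numbers are illustrative):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    rot = rng.choice(["day", "slow_rot", "fast_rot"], n, p=[0.6, 0.2, 0.2])
    logit = (-2.0 + np.where(rot == "fast_rot", 0.4, 0.0)
                  + np.where(rot == "slow_rot", -0.2, 0.0))
    dep = rng.random(n) < 1 / (1 + np.exp(-logit))
    df = pd.DataFrame({"dep": dep.astype(int), "rot": rot,
                       "any_shift": (rot != "day").astype(int)})

    coarse = smf.logit("dep ~ any_shift", df).fit(disp=0)       # low precision
    fine = smf.logit("dep ~ C(rot, Treatment('day'))", df).fit(disp=0)
    print(np.exp(coarse.params), np.exp(fine.params), sep="\n")  # odds ratios
    ```

    The coarse model's single odds ratio averages over opposing effects that the finer grouping separates, which is the masking the study warns about.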

  7. Refining FIA plot locations using LiDAR point clouds

    Treesearch

    Charlie Schrader-Patton; Greg C. Liknes; Demetrios Gatziolis; Brian M. Wing; Mark D. Nelson; Patrick D. Miles; Josh Bixby; Daniel G. Wendt; Dennis Kepler; Abbey Schaaf

    2015-01-01

    Forest Inventory and Analysis (FIA) plot location coordinate precision is often insufficient for use with high resolution remotely sensed data, thereby limiting the use of these plots for geospatial applications and reducing the validity of models that assume the locations are precise. A practical and efficient method is needed to improve coordinate precision. To...

  8. Evaluation of the precision agricultural landscape modeling system (PALMS) in the semiarid Texas southern high plains

    USDA-ARS?s Scientific Manuscript database

    Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...

  9. Evaluation of the Precision Agricultural Landscape Modeling System (PALMS) in the Semiarid Texas Southern High Plains

    USDA-ARS?s Scientific Manuscript database

    Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...

  10. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

In order to take full advantage of SAR images, it is necessary to obtain high precision geolocation for them. During the geometric correction of images, precise image geolocation is important for ensuring the accuracy of the correction and extracting effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high precision geolocation of each pixel in a digital SAR image. This method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan for solving the Range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and some recommendations are given for improving image location accuracy in future spaceborne SARs.
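    The Range-Doppler model underlying AGM-type methods intersects three surfaces: the range sphere, the Doppler cone (a plane in the zero-Doppler case), and the Earth ellipsoid. A sketch with an invented satellite state and slant range, and residuals normalized so the solver sees balanced scales:

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    a, b = 6378137.0, 6356752.3            # WGS84 semi-axes (m)
    S = np.array([0.0, 0.0, a + 780e3])    # satellite position (toy geometry)
    V = np.array([0.0, 7500.0, 0.0])       # satellite velocity (m/s)
    R0 = 900e3                             # measured slant range (m)

    def rd_equations(T):
        d = T - S
        return [(d @ d - R0**2) / R0**2,            # range sphere
                (V @ d) / (7500.0 * R0),            # zero-Doppler condition
                (T[0]**2 + T[1]**2) / a**2 + T[2]**2 / b**2 - 1.0]  # ellipsoid

    T = fsolve(rd_equations, x0=np.array([4e5, 0.0, 0.98 * a]))
    print("target ECEF position (m):", T.round(1))
    ```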

  11. Capacity and precision in an animal model of visual short-term memory

    PubMed Central

    Lara, Antonio H.; Wallis, Jonathan D.

    2013-01-01

Temporary storage of information in visual short-term memory (VSTM) is a key component of many complex cognitive abilities. However, it is highly limited in capacity. Understanding the neurophysiological nature of this capacity limit will require a valid animal model of VSTM. We used a multiple-item color change detection task to measure macaque monkeys’ VSTM capacity. Subjects’ performance deteriorated and reaction times increased as a function of the number of items in memory. Additionally, we measured the precision of the memory representations by varying the distance between sample and test colors. In trials with similar sample and test colors, subjects made more errors compared to trials with highly discriminable colors. We modeled the error distribution as a Gaussian function and used this to estimate the precision of VSTM representations. We found that as the number of items in memory increases the precision of the representations decreases dramatically. Additionally, we found that focusing attention on one of the objects increases the precision with which that object is stored and degrades the precision of the remaining objects. These results are in line with recent findings in human psychophysics and provide a solid foundation for understanding the neurophysiological nature of the capacity limit of VSTM. PMID:22419756

  12. Prediction of beef carcass and meat traits from rearing factors in young bulls and cull cows.

    PubMed

    Soulat, J; Picard, B; Léger, S; Monteils, V

    2016-04-01

The aim of this study was to predict the beef carcass and LM (thoracis part) characteristics and the sensory properties of the LM from rearing factors applied during the fattening period. Individual data from 995 animals (688 young bulls and 307 cull cows) in 15 experiments were used to establish prediction models. The data concerned rearing factors (13 variables), carcass characteristics (5 variables), LM characteristics (2 variables), and LM sensory properties (3 variables). In this study, 8 prediction models were established: dressing percentage and the proportions of fat tissue and muscle in the carcass to characterize the beef carcass; cross-sectional area of fibers (mean fiber area) and isocitrate dehydrogenase activity to characterize the LM; and, finally, overall tenderness, juiciness, and flavor intensity scores to characterize the LM sensory properties. A random effect was considered in each model: the breed for the prediction models for the carcass and LM characteristics and the trained taste panel for the prediction of the meat sensory properties. To evaluate the quality of the prediction models, 3 criteria were measured: robustness, accuracy, and precision. A model was considered robust when the root mean square errors of prediction of the calibration and validation sub-data sets were close to one another. Except for the mean fiber area model, the models obtained were robust. A prediction model was considered highly accurate when the mean prediction error (MPE) was ≤0.10 and highly precise when the R² was closest to 1. The prediction of the carcass characteristics from the rearing factors had high precision (R² > 0.70) and high accuracy (MPE < 0.10), except for the fat percentage model (R² = 0.67, MPE = 0.16). However, the predictions of the LM characteristics and LM sensory properties from the rearing factors were not sufficiently precise (R² < 0.50) or accurate (MPE > 0.10). Only the beef flavor intensity score could be satisfactorily predicted from the rearing factors, with high precision (R² = 0.72) and accuracy (MPE = 0.10). All the prediction models displayed different effects of the rearing factors according to animal category (young bulls or cull cows). Consequently, these prediction models show that rearing factors must be adapted during the fattening period according to animal category to optimize carcass traits.
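    The two model-quality criteria used here can be computed in a few lines; the definitions below are common conventions assumed for illustration (R² as the coefficient of determination, MPE as root mean square error of prediction relative to the observed mean):

    ```python
    import numpy as np

    obs = np.array([55.0, 60.2, 58.1, 62.4, 57.3])   # invented observations
    pred = np.array([54.1, 61.0, 57.5, 60.9, 58.8])  # invented predictions

    ss_res = ((obs - pred) ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot                          # precision criterion
    mpe = np.sqrt(((obs - pred) ** 2).mean()) / obs.mean()   # accuracy criterion
    print(f"R^2 = {r2:.2f}, MPE = {mpe:.3f}")
    ```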

  13. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE), and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in observed AMLE precision to be significantly worse than the model calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of at least 15-days or shorter, when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high flow sampling coverage resulting in unrepresentative models. Increasing sampling frequency and/or targeted high flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models, than increasing calibration period length.
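    A sketch of the regression-based load estimation idea on synthetic data: a log-linear rating curve with season terms is calibrated from periodic samples, and Duan's smearing factor corrects the retransformation bias. This is a simpler stand-in for the record's AMLE and composite methods, with all data invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(3650)
    Q = np.exp(1.0 + 0.5 * np.sin(2 * np.pi * days / 365)
               + 0.3 * rng.standard_normal(days.size))          # discharge
    lnC = 0.8 + 0.6 * np.log(Q) + 0.2 * np.cos(2 * np.pi * days / 365)
    C = np.exp(lnC + 0.3 * rng.standard_normal(days.size))      # concentration

    samp = days[::15]                                 # 15-day sampling interval
    X = np.column_stack([np.ones(samp.size), np.log(Q[samp]),
                         np.sin(2 * np.pi * samp / 365),
                         np.cos(2 * np.pi * samp / 365)])
    beta, *_ = np.linalg.lstsq(X, np.log(C[samp]), rcond=None)
    smear = np.exp(np.log(C[samp]) - X @ beta).mean() # Duan (1983) correction

    Xall = np.column_stack([np.ones(days.size), np.log(Q),
                            np.sin(2 * np.pi * days / 365),
                            np.cos(2 * np.pi * days / 365)])
    load_est = (smear * np.exp(Xall @ beta) * Q).sum()
    print("estimated / true load: %.3f" % (load_est / (C * Q).sum()))
    ```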

  14. Composite adaptive control of belt polishing force for aero-engine blade

    NASA Astrophysics Data System (ADS)

Zhao, Pengbing; Shi, Yaoyao

    2013-09-01

The existing methods for blade polishing mainly focus on robot polishing and manual grinding. Due to the difficulty of high-precision control of the polishing force, the blade surface precision achieved by robot polishing is very low; in particular, the quality of the inlet and exhaust edges cannot satisfy the processing requirements. Manual grinding has low efficiency, high labor intensity and unstable processing quality; moreover, the polished surface is vulnerable to burn, and its precision and integrity are difficult to ensure. In order to further improve the profile accuracy and surface quality, a pneumatic flexible polishing force-exerting mechanism is designed and a dual-mode switching composite adaptive control (DSCAC) strategy is proposed, which combines Bang-Bang control and model reference adaptive control based on a fuzzy neural network (MRACFNN). Through the mode decision-making mechanism, Bang-Bang control is used to track the command signal quickly when the actual polishing force is far from the target value, and MRACFNN is used in smaller error ranges to improve the system's robustness and control precision. Based on the mathematical model of the force-exerting mechanism, a simulation analysis of DSCAC is carried out. Simulation results show that the output polishing force tracks the given signal well. Finally, blade polishing experiments are carried out on the designed polishing equipment. Experimental results show that DSCAC effectively mitigates the influence of gas compressibility, the valve dead-time effect, valve nonlinear flow, cylinder friction, measurement noise and other interference on the control precision of the polishing force, and has higher control precision, stronger robustness and stronger anti-interference ability than MRACFNN alone. The proposed approach achieves high-precision control of the polishing force, effectively improves blade machining precision and surface consistency, and significantly reduces surface roughness.
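    The dual-mode switching idea is easy to sketch: a bang-bang force command far from the setpoint for speed, switching to a feedback law near it for accuracy. The paper's inner mode is a fuzzy-neural adaptive law; a plain PI controller is substituted here, and the first-order force plant and all gains are invented:

    ```python
    import numpy as np

    dt, tau = 1e-3, 0.05                   # step (s), force time constant (s)
    setpoint, band, u_max = 20.0, 2.0, 60.0   # N
    kp, ki = 8.0, 40.0
    F = integ = 0.0
    for _ in range(2000):                  # 2 s of simulated polishing
        err = setpoint - F
        if abs(err) > band:                # far away: bang-bang for speed
            u, integ = np.sign(err) * u_max, 0.0   # reset integrator (anti-windup)
        else:                              # close: PI for precision
            integ += err * dt
            u = kp * err + ki * integ
        F += dt * (u - F) / tau            # first-order force response
    print("final force error: %.4f N" % (setpoint - F))
    ```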

  15. A fiducial skull marker for precise MRI-based stereotaxic surgery in large animal models.

    PubMed

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura; Zaer, Hamed; Orlowski, Dariusz; Fitting, Lise Moberg; Ziedler, Dora; Geneser, Michael; Sangill, Ryan; Alstrup, Aage Kristian Olsen; Bjarkam, Carsten Reidies; Sørensen, Jens Christian Hedemann

    2017-06-15

Stereotaxic neurosurgery in large animals is widely used in sophisticated models where precision is becoming more crucial as the desired anatomical target regions become smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. We present a convenient method for making an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulfate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls, and T1-weighted MRI was performed, allowing identification of the inserted skull marker. Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in stereotaxic space. The use of skull-bone-based fiducial markers gives high precision for both targeting and evaluation of stereotaxic systems. There are no metal artifacts, and the fiducial is easily removed after surgery. The fiducial marker can be used as a very precise reference point, either for direct targeting or for evaluation of other stereotaxic systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    PubMed

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other fields. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction using the Barista software. It was shown that extracting three-dimensional building information from high-resolution satellite imagery with Barista has the advantages of low demands on operator expertise, broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved, provided the digital elevation model (DEM) and sensor orientation model were of sufficiently high precision and the off-nadir view angle was favorable.

  17. Effects of Boron and Graphite Uncertainty in Fuel for TREAT Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughn, Kyle; Mausolff, Zander; Gonzalez, Esteban

Advanced modeling techniques and current computational capacity make full core TREAT simulations possible, the goal of such simulations being to understand the pre-test core and minimize the number of required calibrations. But in order to simulate TREAT with a high degree of precision, the reactor materials and geometry must also be modeled with a high degree of precision. This paper examines how uncertainty in the reported values of boron and graphite affects simulations of TREAT.

  18. Detector Outline Document for the Fourth Concept Detector ("4th") at the International Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbareschi, Daniele; et al.

We describe a general purpose detector ("Fourth Concept") at the International Linear Collider (ILC) that can measure with high precision all the fundamental fermions and bosons of the standard model, and thereby access all known physics processes. The 4th concept consists of four basic subsystems: a pixel vertex detector for high precision vertex definitions, impact parameter tagging and near-beam occupancy reduction; a Time Projection Chamber for robust pattern recognition, augmented with three high-precision pad rows for precision momentum measurement; a high precision multiple-readout fiber calorimeter, complemented with an EM dual-readout crystal calorimeter, for the energy measurement of hadrons, jets, electrons, photons, missing momentum, and the tagging of muons; and an iron-free dual-solenoid muon system for the inverse direction bending of muons in a gas volume to achieve high acceptance and good muon momentum resolution. The pixel vertex chamber, TPC and calorimeter are inside the solenoidal magnetic field. All four subsystems separately achieve the important scientific goal of being 2-to-10 times better than the already excellent LEP detectors, ALEPH, DELPHI, L3 and OPAL. All four basic subsystems contribute to the identification of standard model partons, some in unique ways, such that consequent physics studies are cogent. As an integrated detector concept, we achieve comprehensive physics capabilities that put all conceivable physics at the ILC within reach.

  19. Mathematical model of bone drilling for virtual surgery system

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

Bone drilling is an essential part of surgeries in ENT and dentistry, and proper training of drilling machine handling skills is impossible without proper modelling of the drilling process. The use of high precision methods such as FEM is limited by the 1000 Hz update rate required for haptic feedback. This study presents a mathematical model of the drilling process that accounts for the material properties, the geometry and the rotation rate of the burr in computing the removed material volume. The simplicity of the model allows it to be integrated into the high-frequency haptic thread. The precision of the model is sufficient for a virtual surgery system aimed at training basic surgical skills.
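    A minimal sketch of one haptic-rate removal step under common simplifying assumptions (bone as a boolean voxel grid, burr as a sphere, removed volume as the intersected voxel count); the paper's calibrated force/rotation-rate model is not reproduced here:

    ```python
    import numpy as np
    import time

    vox = 0.1e-3                                    # voxel edge (m)
    bone = np.ones((64, 64, 64), dtype=bool)        # occupancy grid
    grid = np.indices(bone.shape) * vox             # voxel coordinates

    def drill_step(center, r_burr):
        """Remove voxels inside the burr sphere; return removed volume (m^3)."""
        d2 = sum((grid[i] - center[i]) ** 2 for i in range(3))
        hit = (d2 < r_burr ** 2) & bone
        bone[hit] = False
        return hit.sum() * vox ** 3                 # would feed the force model

    t0 = time.perf_counter()
    dv = drill_step((3.2e-3, 3.2e-3, 3.2e-3), 1.0e-3)
    print("removed %.2e m^3 in %.2f ms" % (dv, 1e3 * (time.perf_counter() - t0)))
    ```

    A real 1 kHz haptic loop would restrict the test to a local neighborhood of the burr rather than scanning the whole grid, which is where the model's simplicity pays off.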

  20. High-precision multiband spectroscopy of ultracold fermions in a nonseparable optical lattice

    NASA Astrophysics Data System (ADS)

    Fläschner, Nick; Tarnowski, Matthias; Rem, Benno S.; Vogel, Dominik; Sengstock, Klaus; Weitenberg, Christof

    2018-05-01

    Spectroscopic tools are fundamental for the understanding of complex quantum systems. Here, we demonstrate high-precision multiband spectroscopy in a graphenelike lattice using ultracold fermionic atoms. From the measured band structure, we characterize the underlying lattice potential with a relative error of 1.2 × 10^-3. Such a precise characterization of complex lattice potentials is an important step towards precision measurements of quantum many-body systems. Furthermore, we explain the excitation strengths into different bands with a model and experimentally study their dependence on the symmetry of the perturbation operator. This insight suggests that excitation strengths are a suitable observable for probing interaction effects on the eigenstates.

  1. Personalized In Vitro and In Vivo Cancer Models to Guide Precision Medicine | Office of Cancer Genomics

    Cancer.gov

    Precision medicine is an approach that takes into account the influence of individuals' genes, environment, and lifestyle exposures to tailor interventions. Here, we describe the development of a robust precision cancer care platform that integrates whole-exome sequencing with a living biobank that enables high-throughput drug screens on patient-derived tumor organoids. To date, 56 tumor-derived organoid cultures and 19 patient-derived xenograft (PDX) models have been established from the 769 patients enrolled in an Institutional Review Board-approved clinical trial.

  2. High-Precision Half-Life and Branching Ratio Measurements for the Superallowed β+ Emitter 26Alm

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Svensson, C. E.; Demand, G. A.; Garrett, P. E.; Green, K. L.; Leach, K. G.; Phillips, A. A.; Rand, E. T.; Ball, G.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Sumithrarachchi, C. S.; Williams, S. J.; Triambak, S.

    2013-03-01

    High-precision half-life and branching-ratio measurements for the superallowed β+ emitter 26Alm were performed at the TRIUMF-ISAC radioactive ion beam facility. An upper limit of ≤ 15 ppm at 90% C.L. was determined for the sum of all possible non-analogue β+/EC decay branches of 26Alm, yielding a superallowed branching ratio of 100.0000 (+0, -0.0015)%. A value of T1/2 = 6.34654(76) s was determined for the 26Alm half-life, which is consistent with, but 2.5 times more precise than, the previous world average. Combining these results with world-average measurements yields an ft value of 3037.58(60) s, the most precisely determined for any superallowed β emitter to date. This high-precision ft value for 26Alm provides a new benchmark to refine theoretical models of isospin-symmetry-breaking effects in superallowed β decays.
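
    For context, the measured quantities combine as in the standard superallowed formalism (these relations are from the general literature, not stated in this abstract). The partial half-life is

      t = \frac{T_{1/2}}{\mathrm{BR}} \left( 1 + P_{\mathrm{EC}} \right)

    with P_EC the electron-capture fraction, and the corrected ft value is

      \mathcal{F}t = ft \, (1 + \delta_R') (1 + \delta_{NS} - \delta_C)

    where \delta_R', \delta_{NS} and \delta_C are the transition-dependent radiative and isospin-symmetry-breaking corrections whose theoretical models the measurement helps constrain.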

  3. Frozen lattice and absorptive model for high angle annular dark field scanning transmission electron microscopy: A comparison study in terms of integrated intensity and atomic column position measurement.

    PubMed

    Alania, M; Lobato, I; Van Aert, S

    2018-01-01

    In this paper, both the frozen lattice (FL) and the absorptive potential (AP) approximation models are compared in terms of the integrated intensity and the precision with which atomic columns can be located from an image acquired using high angle annular dark field (HAADF) scanning transmission electron microscopy (STEM). The comparison is made for atoms of Cu, Ag, and Au. The integrated intensity is computed for both an isolated atomic column and an atomic column inside an FCC structure. The precision has been computed using the so-called Cramér-Rao Lower Bound (CRLB), which provides a theoretical lower bound on the variance with which parameters can be estimated. It is shown that the AP model yields accurate integrated intensities only for small detector ranges at relatively low angles and for small thicknesses. In terms of the attainable precision, both methods show similar results, indicating picometer-range precision under realistic experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
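
    As a reminder of the bound being used (standard estimation theory, not specific to this paper): for a column-position parameter \theta estimated from independent Poisson-distributed pixel counts with expectations \lambda_k(\theta), the CRLB reads

      \mathrm{var}(\hat{\theta}) \;\ge\; \left[ \sum_k \frac{1}{\lambda_k(\theta)} \left( \frac{\partial \lambda_k(\theta)}{\partial \theta} \right)^{2} \right]^{-1}

    so the attainable precision is set by how strongly the expected image responds to a shift of the atomic column.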

  4. Double the dates and go for Bayes - Impacts of model choice, dating density and quality on chronologies

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.

    2018-05-01

    Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.
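
    The effect of dating density on the precision of a classical age-depth model can be illustrated with a short Monte Carlo experiment. The Python sketch below (a toy setup with a constant accumulation rate and Gaussian dating errors, not the authors' simulation design) interpolates linearly through noisy dates and measures the spread of the inferred age at a test depth:

      import numpy as np

      rng = np.random.default_rng(0)

      def chronology_spread(n_dates, date_sigma=100.0, core_depth=500.0,
                            n_trials=2000):
          """1-sigma error (yr) of a linear-interpolation age-depth model
          at mid-core, as a function of the number of dates."""
          rate = 10.0                      # true accumulation, yr/cm
          test_depth = core_depth / 2.0
          errs = np.empty(n_trials)
          for i in range(n_trials):
              depths = np.sort(rng.uniform(0.0, core_depth, n_dates))
              ages = depths * rate + rng.normal(0.0, date_sigma, n_dates)
              errs[i] = np.interp(test_depth, depths, ages) - test_depth * rate
          return errs.std()

      for n in (2, 4, 8, 16, 32):
          print(f"{n:2d} dates -> {chronology_spread(n):6.1f} yr")

    A Bayesian model with a chronological-ordering constraint would additionally down-weight age reversals, which is one reason its precision keeps improving with dating density in the paper's results.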

  5. High-precision buffer circuit for suppression of regenerative oscillation

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Hare, David A.; Tcheng, Ping

    1995-01-01

    Precision analog signal conditioning electronics have been developed for wind tunnel model attitude inertial sensors. This application requires low-noise, stable, microvolt-level DC performance and a high-precision buffered output. Capacitive loading of the operational amplifier output stages due to the wind tunnel analog signal distribution facilities caused regenerative oscillation and consequent rectification bias errors. Oscillation suppression techniques commonly used in audio applications were inadequate to maintain the performance requirements for the measurement of attitude for wind tunnel models. Feedback control theory is applied to develop a suppression technique based on a known compensation (snubber) circuit, which provides superior oscillation suppression with high output isolation and preserves the low-noise low-offset performance of the signal conditioning electronics. A practical design technique is developed to select the parameters for the compensation circuit to suppress regenerative oscillation occurring when typical shielded cable loads are driven.

  6. Development and simulation of microfluidic Wheatstone bridge for high-precision sensor

    NASA Astrophysics Data System (ADS)

    Shipulya, N. D.; Konakov, S. A.; Krzhizhanovskaya, V. V.

    2016-08-01

    In this work we present the results of analytical modeling and 3D computer simulation of microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems.
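
    The electrical analogy behind such a device (pressure as voltage, volumetric flow as current, lubrication-theory channel resistance as electrical resistance) can be made concrete with a short calculation. The following Python sketch, with illustrative channel dimensions rather than the authors' design, solves the bridge network and shows that the bridge flow vanishes at balance (R1/R2 = R3/R4):

      import numpy as np

      def r_hyd(mu, L, w, h):
          """Hydraulic resistance of a wide rectangular microchannel
          (w >> h): R = 12*mu*L / (w*h^3), lubrication approximation."""
          return 12.0 * mu * L / (w * h**3)

      mu = 1e-3                                # water, Pa*s
      R1 = r_hyd(mu, 5.0e-3, 200e-6, 50e-6)    # arm: inlet -> node a
      R2 = r_hyd(mu, 5.0e-3, 200e-6, 50e-6)    # arm: node a -> outlet
      R3 = r_hyd(mu, 5.0e-3, 200e-6, 50e-6)    # arm: inlet -> node b
      R4 = r_hyd(mu, 5.5e-3, 200e-6, 50e-6)    # slightly longer arm
      R5 = r_hyd(mu, 2.0e-3, 200e-6, 50e-6)    # bridge channel a <-> b

      p_in = 1.0e4                             # 10 kPa drive, outlet at 0
      # Flow balance at the two internal nodes (Kirchhoff analogue):
      A = np.array([[1/R1 + 1/R2 + 1/R5, -1/R5],
                    [-1/R5, 1/R3 + 1/R4 + 1/R5]])
      b = np.array([p_in / R1, p_in / R3])
      p_a, p_b = np.linalg.solve(A, b)
      q_bridge = (p_a - p_b) / R5
      print(f"bridge flow: {q_bridge:.3e} m^3/s")

    Balancing by geometry, as proposed in the paper, amounts to etching one arm until R1/R2 = R3/R4, at which point the bridge flow goes to zero without any valves.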

  7. Phasemeter core for intersatellite laser heterodyne interferometry: modelling, simulations and experiments

    NASA Astrophysics Data System (ADS)

    Gerberding, Oliver; Sheard, Benjamin; Bykov, Iouri; Kullmann, Joachim; Esteban Delgado, Juan Jose; Danzmann, Karsten; Heinzel, Gerhard

    2013-12-01

    Intersatellite laser interferometry is a central component of future space-borne gravity instruments like Laser Interferometer Space Antenna (LISA), evolved LISA, NGO and future geodesy missions. The inherently small laser wavelength allows us to measure distance variations with extremely high precision by interfering a reference beam with a measurement beam. The readout of such interferometers is often based on tracking phasemeters, which are able to measure the phase of an incoming beatnote with high precision over a wide range of frequencies. The implementation of such phasemeters is based on all digital phase-locked loops (ADPLL), hosted in FPGAs. Here, we present a precise model of an ADPLL that allows us to design such a readout algorithm and we support our analysis by numerical performance measurements and experiments with analogue signals.
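
    The core of such a tracking phasemeter can be sketched in a few lines: an NCO mixes the incoming beatnote to baseband, a low-pass filter extracts the phase error, and a PI controller steers the NCO frequency and phase. The Python below is a schematic, software-only illustration; all gains, rates and names are assumptions, not the authors' FPGA design.

      import numpy as np

      fs = 80e6                          # sampling rate, Hz (illustrative)
      f_in = 5.123e6                     # beatnote frequency to track, Hz
      n = 200_000
      t = np.arange(n) / fs
      x = np.sin(2*np.pi*f_in*t + 0.3)   # incoming beatnote

      kp, ki = 0.05, 1e-4                # PI loop gains (hand tuned)
      alpha = 0.01                       # one-pole low-pass coefficient
      lp, phase = 0.0, 0.0               # filter state, NCO phase [cycles]
      fw = 5.12e6 / fs                   # NCO frequency word, initial guess
      for k in range(n):
          err = x[k] * np.cos(2*np.pi*phase)   # mix with NCO quadrature
          lp += alpha * (err - lp)             # low-pass -> ~sin(dphi)/2
          fw += ki * lp                        # integral branch: frequency
          phase = (phase + fw + kp*lp) % 1.0   # proportional branch: phase
      print(f"tracked: {fw*fs:.1f} Hz (target {f_in:.1f} Hz)")

    The phase readout of the real instrument combines the NCO phase accumulator with the residual error, which is what gives a wide tracking range together with high phase precision.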

  8. Rapid evolution of mimicry following local model extinction.

    PubMed

    Akcali, Christopher K; Pfennig, David W

    2014-06-01

    Batesian mimicry evolves when individuals of a palatable species gain the selective advantage of reduced predation because they resemble a toxic species that predators avoid. Here, we evaluated whether, and in which direction, Batesian mimicry has evolved in a natural population of mimics following extirpation of their model. We specifically asked whether the precision of coral snake mimicry has evolved among kingsnakes from a region where coral snakes recently (1960) went locally extinct. We found that these kingsnakes have evolved more precise mimicry; by contrast, no such change occurred in a sympatric non-mimetic species or in conspecifics from a region where coral snakes remain abundant. Presumably, more precise mimicry has continued to evolve after model extirpation, because relatively few predator generations have passed, and the fitness costs incurred by predators that mistook a deadly coral snake for a kingsnake were historically much greater than those incurred by predators that mistook a kingsnake for a coral snake. Indeed, these results are consistent with prior theoretical and empirical studies, which revealed that only the most precise mimics are favoured as their model becomes increasingly rare. Thus, highly noxious models can generate an 'evolutionary momentum' that drives the further evolution of more precise mimicry, even after models go extinct. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  9. Rapid evolution of mimicry following local model extinction

    PubMed Central

    Akcali, Christopher K.; Pfennig, David W.

    2014-01-01

    Batesian mimicry evolves when individuals of a palatable species gain the selective advantage of reduced predation because they resemble a toxic species that predators avoid. Here, we evaluated whether—and in which direction—Batesian mimicry has evolved in a natural population of mimics following extirpation of their model. We specifically asked whether the precision of coral snake mimicry has evolved among kingsnakes from a region where coral snakes recently (1960) went locally extinct. We found that these kingsnakes have evolved more precise mimicry; by contrast, no such change occurred in a sympatric non-mimetic species or in conspecifics from a region where coral snakes remain abundant. Presumably, more precise mimicry has continued to evolve after model extirpation, because relatively few predator generations have passed, and the fitness costs incurred by predators that mistook a deadly coral snake for a kingsnake were historically much greater than those incurred by predators that mistook a kingsnake for a coral snake. Indeed, these results are consistent with prior theoretical and empirical studies, which revealed that only the most precise mimics are favoured as their model becomes increasingly rare. Thus, highly noxious models can generate an ‘evolutionary momentum’ that drives the further evolution of more precise mimicry—even after models go extinct. PMID:24919704

  10. Single-anchor support and supercritical CO2 drying enable high-precision microfabrication of three-dimensional structures.

    PubMed

    Maruo, Shoji; Hasegawa, Takuya; Yoshimura, Naoki

    2009-11-09

    In high-precision two-photon microfabrication of three-dimensional (3-D) polymeric microstructures, supercritical CO2 drying was employed to reduce surface tension, which tends to cause the collapse of micro/nano structures. Use of supercritical drying allowed high-aspect-ratio microstructures, such as micropillars and cantilevers, to be fabricated. We also propose a single-anchor supporting method to eliminate non-uniform shrinkage of polymeric structures otherwise caused by attachment to the substrate. Use of this method permitted frame models such as lattices to be produced without harmful distortion. The combination of supercritical CO2 drying and the single-anchor supporting method offers reliable high-precision microfabrication of sophisticated, fragile 3-D micro/nano structures.

  11. Micro-optical fabrication by ultraprecision diamond machining and precision molding

    NASA Astrophysics Data System (ADS)

    Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.

    2017-06-01

    Ultraprecision diamond machining combined with high-volume molding is becoming a viable process in the optical industry for manufacturing affordable, high-precision, high-performance microoptical components. In this process, high-precision microoptical molds are first fabricated using ultraprecision single-point diamond machining, followed by high-volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post-machining polishing. For consumers, compression molding or injection molding provides efficient and high-quality optics at extremely low cost. In this paper, ultraprecision machine design and machining processes such as slow tool and fast tool servo are first described; then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.

  12. Biofilm development of an opportunistic model bacterium analysed at high spatiotemporal resolution in the framework of a precise flow cell

    PubMed Central

    Lim, Chun Ping; Mai, Phuong Nguyen Quoc; Roizman Sade, Dan; Lam, Yee Cheong; Cohen, Yehuda

    2016-01-01

    Bacterial life is governed by the physics of the microscale, which is dominated by fast diffusion and flow at low Reynolds numbers. Microbial biofilms are structurally and functionally heterogeneous, and their development is suggested to be interactively related to their microenvironments. In this study, we were guided by the challenging requirements of precise tools and engineered procedures to achieve reproducible experiments at high spatial and temporal resolutions. Here, we developed a robust, precise engineering approach allowing for the quantification of real-time, high-content imaging of biofilm behaviour under well-controlled flow conditions. Through the merging of engineering and microbial ecology, we present a rigorous methodology to quantify biofilm development at resolutions of a single micrometre and a single minute, using a newly developed flow cell. We designed and fabricated a high-precision flow cell to create defined and reproducible flow conditions. We applied high-content confocal laser scanning microscopy and developed image quantification using a model biofilm of a defined opportunistic strain, Pseudomonas putida OUS82. We observed complex patterns in the early events of biofilm formation, which were followed by total dispersal. These patterns were closely related to the flow conditions. These biofilm behavioural phenomena were found to be highly reproducible, despite the heterogeneous nature of biofilm. PMID:28721252

  13. High precision tracking of a piezoelectric nano-manipulator with parameterized hysteresis compensation

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Zhang, Yangming

    2018-06-01

    High-performance scanning of nano-manipulators is widely deployed in precision engineering applications such as SPM (scanning probe microscopy), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high-precision tracking of a piezoelectric nano-manipulator subject to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed, and the corresponding adaptive inverse-model-based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics can be eliminated by online estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
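
    The static core of a Prandtl-Ishlinskii model is a weighted superposition of play (backlash) operators; the rate-dependent, adaptive variant used in the paper builds on this basic form. A minimal Python sketch, with thresholds and weights that are illustrative rather than identified from a real actuator:

      import numpy as np

      def pi_hysteresis(x, radii, weights):
          """Static Prandtl-Ishlinskii operator: output is a weighted sum
          of play operators y_r = max(x - r, min(x + r, y_r_prev))."""
          y = np.zeros_like(radii, dtype=float)    # play-operator states
          out = np.empty(len(x))
          for k, xk in enumerate(x):
              y = np.maximum(xk - radii, np.minimum(xk + radii, y))
              out[k] = weights @ y
          return out

      radii = np.linspace(0.0, 0.8, 8)             # play radii
      w = np.exp(-2.0 * radii); w /= w.sum()       # decaying weights
      u = np.sin(np.linspace(0, 4*np.pi, 1000))    # input voltage sweep
      d = pi_hysteresis(u, radii, w)               # hysteretic response

    A useful property of this operator class is that its inverse is again of Prandtl-Ishlinskii type, which is what makes the feedforward inverse compensation described in the abstract tractable online.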

  14. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted. Excellent calibration results were achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  15. Precise measurement of the 19Ne half-life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Triambak, S.; TRIUMF, 4004 Wesbrook Mall, Vancouver, BC V6T 2A3

    2011-11-30

    We describe a high-precision measurement of the half-life of the T = 1/2 nucleus 19Ne, performed at TRIUMF, Canada's National Laboratory for Nuclear and Particle Physics, Vancouver, Canada. Some implications of this measurement for tests of the Standard Model are discussed.

  16. Optimal structure of metaplasticity for adaptive learning

    PubMed Central

    2017-01-01

    Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using mean-field analysis and Monte Carlo simulations, we identified 'superior' metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can change their efficacy. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally for a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning, since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that the ubiquitous unreliability of synaptic changes is evidence of metaplasticity, which can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus for adaptive learning. PMID:28658247

  17. Precise Near IR Radial Velocity First Light Observations With iSHELL

    NASA Astrophysics Data System (ADS)

    Cale, Bryson L.; Plavchan, Peter; Gagné, Jonathan; Gao, Peter; Nishimoto, America; Tanner, Angelle; Walp, Bernie; Brinkworth, Carolyn; Johnson, John Asher; Vasisht, Gautam

    2018-01-01

    We present our current progress on obtaining precise radial velocities with the new iSHELL spectrograph at NASA's Infrared Telescope Facility. To obtain precise RVs, we use a methane isotopologue absorption gas cell in the calibration unit. Over the past year, we have collected 3-12 epochs each of 17 bright G, K, and M dwarfs at high SNR. By focusing on late-type stars, we obtain relatively higher SNR in the near infrared. We have successfully updated both our spectral and RV extraction pipelines, with a few exceptions. Inherent to the iSHELL data is a wavelength-dependent fringing component, which must be incorporated into our model to obtain adequate RV precision. With iSHELL's predecessor, CSHELL, we obtained a precision of 3 m/s on the bright M giant SV Peg. With further progress on our fringing and telluric models, we hope to obtain a precision of <3 m/s with iSHELL, sufficient to detect terrestrial planets in the habitable zone of nearby M dwarfs.

  18. Estimating thumb–index finger precision grip and manipulation potential in extant and fossil primates

    PubMed Central

    Feix, Thomas; Kivell, Tracy L.; Pouydebat, Emmanuelle; Dollar, Aaron M.

    2015-01-01

    Primates, and particularly humans, are characterized by superior manual dexterity compared with other mammals. However, drawing the biomechanical link between hand morphology/behaviour and functional capabilities in non-human primates and fossil taxa has been challenging. We present a kinematic model of thumb–index precision grip and manipulative movement based on bony hand morphology in a broad sample of extant primates and fossil hominins. The model reveals that both joint mobility and digit proportions (scaled to hand size) are critical for determining precision grip and manipulation potential, but that having either a long thumb or great joint mobility alone does not necessarily yield high precision manipulation. The results suggest even the oldest available fossil hominins may have shared comparable precision grip manipulation with modern humans. In particular, the predicted human-like precision manipulation of Australopithecus afarensis, approximately one million years before the first stone tools, supports controversial archaeological evidence of tool-use in this taxon. PMID:25878134

  19. Estimating thumb-index finger precision grip and manipulation potential in extant and fossil primates.

    PubMed

    Feix, Thomas; Kivell, Tracy L; Pouydebat, Emmanuelle; Dollar, Aaron M

    2015-05-06

    Primates, and particularly humans, are characterized by superior manual dexterity compared with other mammals. However, drawing the biomechanical link between hand morphology/behaviour and functional capabilities in non-human primates and fossil taxa has been challenging. We present a kinematic model of thumb-index precision grip and manipulative movement based on bony hand morphology in a broad sample of extant primates and fossil hominins. The model reveals that both joint mobility and digit proportions (scaled to hand size) are critical for determining precision grip and manipulation potential, but that having either a long thumb or great joint mobility alone does not necessarily yield high precision manipulation. The results suggest even the oldest available fossil hominins may have shared comparable precision grip manipulation with modern humans. In particular, the predicted human-like precision manipulation of Australopithecus afarensis, approximately one million years before the first stone tools, supports controversial archaeological evidence of tool-use in this taxon. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  20. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR with an advanced indoor mobile measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color features, extracted by fusion with CCD images. They thus carry both geometric and spectral information, which can be used for constructing the objects' surfaces and restoring the color and texture of the geometric model. Based on the Autodesk CAD platform with the PointSence plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud; then different types of indoor point cloud data were processed, including data format conversion, outline extraction, and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of all indoor elements, and that the methods proposed in this paper can realize this reconstruction efficiently. Moreover, the modeling precision could be kept within 5 cm, which proved to be a satisfactory result.

  1. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    PubMed

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1mm was observed. Almost no deformations (<0.2mm) were observed post-autoclave sterilization of the 3D-printed surgical guides. In the three surgical cases, the average precision of fibula free flap modelling was between 0.1mm and 0.4mm, and the average precision of the complete reconstructed mandible was less than 1mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg

    NASA Astrophysics Data System (ADS)

    Dunlop, Michelle

    2017-09-01

    High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
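
    The averaging procedure alluded to here is the standard inverse-variance weighted mean with a scale factor (the usual PDG-style prescription, stated for context rather than quoted from the abstract):

      \bar{t} = \frac{\sum_i t_i / \sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad
      \sigma_{\bar{t}} = S \left( \sum_i 1/\sigma_i^2 \right)^{-1/2}, \qquad
      S = \sqrt{\chi^2 / (N - 1)}

    so two mutually inconsistent measurements with χ²/(N−1) ≈ 4 inflate the uncertainty of their average by S ≈ 2, which is exactly the situation the new, more precise measurement resolves.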

  3. Thermal-mechanical behavior of high precision composite mirrors

    NASA Technical Reports Server (NTRS)

    Kuo, C. P.; Lou, M. C.; Rapp, D.

    1993-01-01

    Composite mirror panels were designed, constructed, analyzed, and tested in the framework of a NASA precision segmented reflector task. The deformations of the reflector surface during exposure to space environments were predicted using a finite element model. The composite mirror panels have graphite-epoxy or graphite-cyanate facesheets, separated by an aluminum or composite honeycomb core. It is pointed out that in order to carry out detailed modeling of composite mirrors with high accuracy, it is necessary to know the temperature-dependent properties of the materials involved and the type and magnitude of manufacturing errors and material nonuniformities. The structural modeling and analysis efforts addressed the impact of key design and materials parameters on the performance of the mirrors.

  4. Cognitive and Neural Bases of Skilled Performance.

    DTIC Science & Technology

    1987-10-04

    advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic ... and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing ... recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive ...

  5. High precision U-Pb geochronology and implications for the tectonic evolution of the Superior Province

    NASA Technical Reports Server (NTRS)

    Davis, D. W.; Corfu, F.; Krogh, T. E.

    1986-01-01

    The underlying mechanisms of Archean tectonics and the degree to which modern plate tectonic models are applicable early in Earth's history continue to be a subject of considerable debate. A precise knowledge of the timing of geological events is of the utmost importance in studying this problem. The high precision U-Pb method has been applied in recent years to rock units in many areas of the Superior Province. Most of these data have precisions of about ±2-3 Ma. The resulting detailed chronologies of local igneous development and the regional age relationships furnish tight constraints on any Archean tectonic model. Superior Province terrains can be classified into three types: (1) low-grade areas dominated by meta-volcanic rocks (greenstone belts); (2) high-grade, largely metaplutonic areas with abundant orthogneiss and foliated to massive I-type granitoid bodies; and (3) high-grade areas with abundant metasediments, paragneiss and S-type plutons. Most of the U-Pb age determinations have been done on type 1 terrains, with very few in type 3 terrains. A compilation of over 120 ages indicates that the major part of igneous activity took place in the period 2760-2670 Ma, known as the Kenoran event. This event was ubiquitous throughout the Superior Province.

  6. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.

    PubMed

    Ender, Andreas; Mehl, Albert

    2013-02-01

    A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
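
    The operational definitions used above translate directly into a short computation. A Python sketch, assuming the models are given as arrays of corresponding surface points after best-fit superimposition (the function names are ours, for illustration):

      import numpy as np
      from itertools import combinations

      def mean_deviation(a, b):
          """Mean point-to-point distance between two superimposed models
          given as (N, 3) arrays of corresponding surface points."""
          return np.mean(np.linalg.norm(a - b, axis=1))

      def trueness(models, reference):
          """Average deviation of each scanned model from the reference."""
          return np.mean([mean_deviation(m, reference) for m in models])

      def precision(models):
          """Average pairwise deviation within the group of scanned models."""
          pairs = list(combinations(models, 2))
          return np.mean([mean_deviation(a, b) for a, b in pairs])

    Trueness thus needs the reference scan, while precision can be computed from the test impressions alone, which is why the two can diverge in the way the study reports.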

  7. Head-target tracking control of well drilling

    NASA Astrophysics Data System (ADS)

    Agzamov, Z. V.

    2018-05-01

    A method of directional drilling trajectory control for oil and gas wells using predictive models is considered in the paper. The developed method does not rely on optimization and therefore does not require high-performance computing. Nevertheless, it allows the well-plan to be followed with high precision while taking process input saturation into account. The controller output is calculated both from the present target reference point of the well-plan and from a prediction of the well trajectory using the analytical model. This method allows a well-plan to be followed not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed high precision and operational performance under a wide range of random disturbances.

  8. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines state-of-the-art methodologies presented in the literature for achieving precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of the combined uncertainty of the data using a few common mass bias correction models are outlined.
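
    One of the common mass bias correction models reviewed in such work is the exponential law (stated here for context; the symbols follow the usual convention, with m the isotope masses):

      R_{\mathrm{corr}} = R_{\mathrm{meas}} \left( \frac{m_1}{m_2} \right)^{f}, \qquad
      f = \frac{\ln\!\left( R_{\mathrm{ref}}^{\mathrm{true}} / R_{\mathrm{ref}}^{\mathrm{meas}} \right)}{\ln\!\left( m_3 / m_4 \right)}

    where the fractionation exponent f is determined from a reference isotope ratio (of the same or an admixed element) and then applied to the ratio of interest.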

  9. GNSS orbit determination by precise modeling of non-gravitational forces acting on satellite's body

    NASA Astrophysics Data System (ADS)

    Wielgosz, Agata; Kalarus, Maciej; Liwosz, Tomasz

    2016-04-01

    Satellites orbiting the Earth are affected by gravitational forces and non-gravitational perturbations (NGP). The perturbations caused by gravitational forces (central-body gravity, including a high-precision geopotential field and its secular and tidal variations, third-body attraction by the Sun and Moon, and relativistic effects) are well modeled; the perturbations caused by non-gravitational forces are the most limiting factor in Precise Orbit Determination (POD). In this work we focus on very precise non-gravitational force modeling for medium Earth orbit satellites, applying models of solar radiation pressure that include changes in solar irradiance and Earth/Moon shadow transitions, Earth albedo, and thermal radiation. To compute the influence of these forces on the spacecraft, an analytical box-wing satellite model was applied. Smaller effects such as antenna thrust and spacecraft thermal radiation were also included. In the orbit determination process we compared orbits computed with analytical NGP modeling against the standard procedure, in which the CODE empirical model is fitted to recover the NGP. We considered satellites from several systems, on different orbits, and for different periods: when the satellite remains in full sunlight and when it transits the umbra and penumbra regions.

  10. Subpixel edge estimation with lens aberrations compensation based on the iterative image approximation for high-precision thermal expansion measurements of solids

    NASA Astrophysics Data System (ADS)

    Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.

    2017-06-01

    A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy, continued until the simulated image matches the acquired one. A numerical image model is presented consisting of three parts: an edge model, a model of the object and background brightness distributions, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for evaluating the merit function, along with a sufficiently accurate gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method with a digital micromirror device, used to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range 1000°C to 2400°C are presented.
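
    A one-dimensional analogue of this match-the-simulation idea fits a blurred step-edge model to a measured intensity profile and reads off the edge position with subpixel resolution. A Python sketch (the erf edge profile and the parameterization are our simplification of the paper's 2D forward model):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import erf

      def edge_model(p, x):
          """Step edge at subpixel position e between levels b and a,
          blurred by a Gaussian PSF of width s."""
          b, a, e, s = p
          return b + 0.5*(a - b)*(1.0 + erf((x - e)/(np.sqrt(2.0)*s)))

      def fit_edge(profile):
          """Iteratively match the simulated profile to the acquired one
          (L2 merit, conjugate-gradient optimizer, as in the paper)."""
          x = np.arange(len(profile), dtype=float)
          merit = lambda p: np.sum((edge_model(p, x) - profile)**2)
          p0 = [profile.min(), profile.max(), 0.5*len(profile), 1.5]
          return minimize(merit, p0, method="CG").x[2]   # edge position

      x = np.arange(21, dtype=float)
      clean = edge_model([10.0, 200.0, 10.37, 1.2], x)   # true edge: 10.37 px
      noisy = clean + np.random.default_rng(1).normal(0.0, 1.0, x.size)
      print(f"estimated edge position: {fit_edge(noisy):.3f} px")

    The full method additionally folds the lens aberration model into the simulated image, so the recovered edge is corrected for the optics rather than just interpolated.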

  11. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement is affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95%CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision is less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short-stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016.

  12. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

    Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the use of specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method uses surface models, and the second uses elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as a reanalysis of patient RSA radiographs. The data from the phantom experiment indicate that the accuracy and precision of the EGS model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.

  13. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques rests on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments, DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting an SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information or macro models, yields accuracy comparable to the dynamical model employing precise non-conservative force modeling for most of the monitored station parameters. However, the dynamical orbit model reduces the RMS values of the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that adjusting the cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data, however, it was not possible to confirm the previously reported high annual variation in the estimated geocenter z-translation series or its mitigation by fixing the SRP parameters to pre-defined values.

  14. Superior Intraparietal Sulcus Controls the Variability of Visual Working Memory Precision.

    PubMed

    Galeano Weber, Elena M; Peters, Benjamin; Hahn, Tim; Bledowski, Christoph; Fiebach, Christian J

    2016-05-18

    Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM was shown to reduce the precision of WM and to increase the variability of WM precision over time. Although WM precision was recently associated with neural codes particularly in early sensory cortex, we have so far no understanding of the neural bases underlying the variability of WM precision, and how WM precision is preserved under high load. To fill this gap, we combined human fMRI with computational modeling of behavioral performance in a delayed color-estimation WM task. Behavioral results replicate a reduction of WM precision and an increase of precision variability under high loads (5 > 3 > 1 colors). Load-dependent BOLD signals in primary visual cortex (V1) and superior intraparietal sulcus (IPS), measured during the WM task at 2-4 s after sample onset, were modulated by individual differences in load-related changes in the variability of WM precision. Although stronger load-related BOLD increase in superior IPS was related to lower increases in precision variability, thus stabilizing WM performance, the reverse was observed for V1. Finally, the detrimental effect of load on behavioral precision and precision variability was accompanied by a load-related decline in the accuracy of decoding the memory stimuli (colors) from left superior IPS. We suggest that the superior IPS may contribute to stabilizing visual WM performance by reducing the variability of memory precision in the face of higher load. This study investigates the neural bases of capacity limitations in visual working memory by combining fMRI with cognitive modeling of behavioral performance, in human participants. It provides evidence that the superior intraparietal sulcus (IPS) is a critical brain region that influences the variability of visual working memory precision between and within individuals (Fougnie et al., 2012; van den Berg et al., 2012) under increased memory load, possibly in cooperation with perceptual systems of the occipital cortex. These findings substantially extend our understanding of the nature of capacity limitations in visual working memory and their neural bases. Our work underlines the importance of integrating cognitive modeling with univariate and multivariate methods in fMRI research, thus improving our knowledge of brain-behavior relationships. Copyright © 2016 the authors.

  15. A kinematic/kinetic hybrid airplane simulator model : draft.

    DOT National Transportation Integrated Search

    2008-01-01

    A kinematics-based flight model for normal flight regimes currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...

  16. A kinematic/kinetic hybrid airplane simulator model.

    DOT National Transportation Integrated Search

    2008-01-01

    A kinematics-based flight model for normal flight regimes currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...

  17. Singlet-catalyzed electroweak phase transitions and precision Higgs boson studies

    NASA Astrophysics Data System (ADS)

    Profumo, Stefano; Ramsey-Musolf, Michael J.; Wainwright, Carroll L.; Winslow, Peter

    2015-02-01

    We update the phenomenology of gauge-singlet extensions of the Standard Model scalar sector and their implications for the electroweak phase transition. Considering the introduction of one real scalar singlet to the scalar potential, we analyze present constraints on the potential parameters from Higgs coupling measurements at the Large Hadron Collider (LHC) and electroweak precision observables for the kinematic regime in which no new scalar decay modes arise. We then show how future precision measurements of Higgs boson signal strengths and the Higgs self-coupling could probe the scalar potential parameter space associated with a strong first-order electroweak phase transition. We illustrate using benchmark precision for several future collider options, including the high-luminosity LHC, the International Linear Collider, the Triple-Large Electron-Positron collider, the Circular Electron-Positron Collider, and a 100 TeV proton-proton collider, such as the Very High Energy LHC or the Super Proton-Proton Collider. For the regions of parameter space leading to a strong first-order electroweak phase transition, we find that there exists considerable potential for observable deviations from purely Standard Model Higgs properties at these prospective future colliders.

  18. The Outer Solar System Origins Survey full data release orbit catalog and characterization.

    NASA Astrophysics Data System (ADS)

    Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Gwyn, Stephen; Alexandersen, Mike; Chen, Ying-Tung; Volk, Kathryn; OSSOS Collaboration.

    2017-10-01

    The Outer Solar System Origins Survey (OSSOS) completed its main data acquisition in February 2017. Here we report the release of our full orbit sample, which includes 836 TNOs with high-precision orbit determination and classification. We combine the OSSOS orbit sample with the previously released Canada-France Ecliptic Plane Survey (CFEPS) and a precursor survey to OSSOS by Alexandersen et al. to provide a sample of over 1100 TNOs with high-precision classified orbits and precisely determined discovery and tracking circumstances (characterization). We are releasing the full sample and characterization to the world community, along with software for conducting 'Survey Simulations', so that this sample of orbits can be used to test models of the formation of our outer solar system against the observed sample. Here I will present the characteristics of the data set and present a parametric model for the structure of the classical Kuiper belt.

  19. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
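
    The single-tier Lorenz '96 system used in such studies is compact enough to state and test directly. The Python sketch below emulates reduced-precision hardware by casting the tendency computation to float16; note that, unlike the paper's scale-selective setup, this toy applies low precision to the whole field rather than only to the small scales.

      import numpy as np

      def l96_tendency(x, forcing=8.0):
          """Lorenz '96: dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F."""
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def rk4_step(x, dt=0.01, dtype=np.float64):
          """One RK4 step; tendencies evaluated in the given precision to
          emulate inexact hardware (np.float16 ~ half precision)."""
          f = lambda v: l96_tendency(v.astype(dtype)).astype(np.float64)
          k1 = f(x); k2 = f(x + 0.5*dt*k1)
          k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
          return x + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

      x0 = 8.0 + 0.01*np.random.default_rng(2).standard_normal(40)
      x64, x16 = x0.copy(), x0.copy()
      for _ in range(1000):                  # 10 model time units
          x64 = rk4_step(x64)
          x16 = rk4_step(x16, dtype=np.float16)
      print("RMS divergence:", np.sqrt(np.mean((x64 - x16)**2)))

    In a chaotic system the two trajectories inevitably diverge; the relevant question, as in the paper, is whether the low-precision run still reproduces the correct climate (statistics) of the large scales.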

  20. Assessment of the tidal current energy potential of the Nalón river estuary using a high precision flow model

    NASA Astrophysics Data System (ADS)

    Badano, Nicolás; Valdés, Rodolfo Espina; Álvarez, Eduardo Álvarez

    2018-05-01

    Obtaining energy from tidal currents at onshore locations is of great interest because of the proximity to points of consumption, and it opens the door to new installations based on hydrokinetic microturbines even in zones of moderate current speed. In this context, the accuracy of energy predictions based on hydrodynamic models is of paramount importance. This research presents a high precision methodology, based on a multidimensional hydrodynamic model, for studying the energy potential of estuaries; it is also able to estimate the flow variations caused by microturbine installations. The paper also shows the results obtained from applying the methodology to a study of the Nalón river mouth (Asturias, Spain).
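
    The energy estimates in such studies rest on the standard hydrokinetic relation P = ½ρAv³ scaled by a power coefficient; a worked example with assumed values (illustrative numbers, not results from the Nalón study):

      rho = 1025.0   # sea water density (kg/m^3)
      A = 3.0        # microturbine swept area (m^2), assumed
      v = 1.2        # depth-averaged current speed (m/s), assumed
      Cp = 0.35      # turbine power coefficient, assumed

      P = 0.5 * rho * A * v**3 * Cp   # extracted power (W)
      print(f"extracted power: {P:.0f} W")   # about 930 W for these values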

  1. Ion microprobe measurement of strontium isotopes in calcium carbonate with application to salmon otoliths

    USGS Publications Warehouse

    Weber, P.K.; Bacon, C.R.; Hutcheon, I.D.; Ingram, B.L.; Wooden, J.L.

    2005-01-01

    The ion microprobe has the capability to generate high resolution, high precision isotopic measurements, but analysis of the isotopic composition of strontium, as measured by the 87Sr/86Sr ratio, has been hindered by isobaric interferences. Here we report the first high precision measurements of 87Sr/86Sr by ion microprobe in calcium carbonate samples with moderate Sr concentrations. We use the high mass resolving power (7000 to 9000 M.R.P.) of the SHRIMP-RG ion microprobe in combination with its high transmission to reduce the number of interfering species while maintaining sufficiently high count rates for precise isotopic measurements. The isobaric interferences are characterized by peak modeling and repeated analyses of standards. We demonstrate that by sample-standard bracketing, 87Sr/86Sr ratios can be measured in inorganic and biogenic carbonates with Sr concentrations between 400 and 1500 ppm with ~2‰ external precision (2σ) for a single analysis, and subpermil external precision with repeated analyses. Explicit correction for isobaric interferences (peak-stripping) is found to be less accurate and precise than sample-standard bracketing. Spatial resolution is ~25 μm laterally and 2 μm deep for a single analysis, consuming on the order of 2 ng of material. The method is tested on otoliths from salmon to demonstrate its accuracy and utility. In these growth-banded aragonitic structures, one-week temporal resolution can be achieved. The analytical method should be applicable to other calcium carbonate samples with similar Sr concentrations. Copyright © 2005 Elsevier Ltd.
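
    A minimal sketch of the sample-standard bracketing correction described above: the measured ratio of a standard of known composition, interpolated to the time of the sample analysis, supplies the instrumental bias correction. All values are illustrative.

      import numpy as np

      TRUE_STD = 0.709175   # accepted 87Sr/86Sr of the bracketing standard (illustrative)

      # Standard analyses before and after the sample, with analysis times (minutes).
      t_std = np.array([0.0, 20.0])
      r_std = np.array([0.70950, 0.70958])

      t_sample, r_sample = 10.0, 0.70871   # sample measured between the standards

      # Interpolate the instrumental bias to the sample's analysis time.
      r_std_at_sample = np.interp(t_sample, t_std, r_std)
      corrected = r_sample * (TRUE_STD / r_std_at_sample)
      print(f"bracket-corrected 87Sr/86Sr: {corrected:.6f}")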

  2. Automated and model-based assembly of an anamorphic telescope

    NASA Astrophysics Data System (ADS)

    Holters, Martin; Dirks, Sebastian; Stollenwerk, Jochen; Loosen, Peter

    2018-02-01

    Since the first use of optical glasses there has been an increasing demand for optical systems that are highly customized for a wide field of applications. To meet the challenge of producing so many unique systems, the development of new techniques and approaches has risen in importance. However, the assembly of precision optical systems with lot sizes from one up to a few tens of systems is still dominated by manual labor. In contrast, highly adaptive and model-based approaches may offer a solution for manufacturing with a high degree of automation and high throughput while maintaining high precision. In this work a model-based automated assembly approach based on ray-tracing is presented. The process runs autonomously and accounts for a wide range of functionality. It first identifies the sequence for an optimized assembly, and then generates and matches intermediate figures of merit to predict the overall functionality of the optical system. The process also generates a digital twin of the optical system by mapping key performance indicators, such as the first and second moments of intensity, into the optical model. The approach is verified by the automatic assembly of an anamorphic telescope within an assembly cell. By continuously measuring and mapping the key performance indicators into the optical model, the quality of the digital twin is determined. Moreover, by measuring the optical quality and geometrical parameters of the telescope, the precision of the approach is determined. Finally, the productivity of the process is evaluated by monitoring the speed of its different steps.
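
    The first and second moments of intensity used here as key performance indicators are the beam centroid and width, computable directly from a camera frame; a minimal sketch with a synthetic spot:

      import numpy as np

      def intensity_moments(img):
          # First moment: centroid; second central moment: squared spot width.
          y, x = np.indices(img.shape)
          total = img.sum()
          cx, cy = (x * img).sum() / total, (y * img).sum() / total
          var_x = ((x - cx) ** 2 * img).sum() / total
          var_y = ((y - cy) ** 2 * img).sum() / total
          return (cx, cy), (var_x, var_y)

      # Synthetic elliptical Gaussian spot, as an anamorphic system would produce.
      y, x = np.indices((64, 64))
      img = np.exp(-((x - 30) ** 2 / (2 * 9.0) + (y - 34) ** 2 / (2 * 4.0)))
      print(intensity_moments(img))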

  3. Three-axis lever actuator with flexure hinges for an optical disk system

    NASA Astrophysics Data System (ADS)

    Han, Chang-Soo; Kim, Soo-Hyun

    2002-10-01

    A three-axis lever actuator with flexure hinges has been designed and fabricated. The actuator is driven by electromagnetic force from a coil-magnet system and can be used as a high precision actuator, especially as a pickup head actuator in optical disk systems. High precision and low sensitivity to external vibration are the major advantages of this lever actuator. An analytical model was derived and compared with the finite element method. The dynamic characteristics of the three-axis lever actuator were measured; the results are in very close agreement with those predicted by the model and the finite element analysis.

  4. QCD Precision Measurements and Structure Function Extraction at a High Statistics, High Energy Neutrino Scattering Experiment:. NuSOnG

    NASA Astrophysics Data System (ADS)

    Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.

    We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived parton distribution functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events that is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift, which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.

  5. Improving the precision of the keyword-matching pornographic text filtering method using a hybrid model.

    PubMed

    Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong

    2004-09-01

    With the flood of pornographic information on the Internet, keeping people away from such offensive information has become one of the most important research areas in network information security. Applications that can block or filter such information are in use. The approaches in these systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content based filtering will play an increasingly important role in filtering systems. Keyword matching is a content based method widely used in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that its precision is not satisfactory, though its recall is rather high. Based on these results, a new pornographic text filtering model based on reconfirmation is put forward. Experiments showed that the model is practical, loses less recall than the single keyword matching method, and achieves higher precision.
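
    A minimal sketch of keyword matching followed by a reconfirmation stage, in the spirit of the hybrid model described above; the keyword lists, matching rule, and threshold are invented for illustration only.

      KEYWORDS = {"keyword1", "keyword2"}        # placeholder offensive terms
      CONTEXT_TERMS = {"context1", "context2"}   # terms used to reconfirm a hit

      def keyword_match(text):
          return bool(set(text.lower().split()) & KEYWORDS)

      def reconfirm(text, threshold=1):
          # Second stage: require supporting context terms before blocking,
          # trading a little recall for substantially better precision.
          return sum(w in CONTEXT_TERMS for w in text.lower().split()) >= threshold

      def should_block(text):
          return keyword_match(text) and reconfirm(text)

      print(should_block("keyword1 appearing without context"))   # False
      print(should_block("keyword1 appearing near context1"))     # True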

  6. The Dharma Planet Survey (DPS), a Robotic, High Cadence and High Doppler Precision Survey of Habitable Rocky Planets around Nearby Stars

    NASA Astrophysics Data System (ADS)

    Ge, Jian; Ma, Bo; Muterspaugh, Matthew W.; Singer, Michael; Varosi, Frank; Powell, Scott; Williamson, Michael W.; Sithajan, Sirinrat; Grieves, Nolan; Zhao, Bo; Schofield, Sidney; Liu, Jian; Cassette, Anthony; Carlson, Kevin; Klanot, Khaya; Jeram, Sarik; Barnes, Rory

    2016-01-01

    The Dharma Planet Survey (DPS) monitors ~100 nearby very bright FGKM dwarfs (most of them brighter than V=8) during 2014-2018 using the TOU optical very high resolution spectrograph (R~100,000, 380-900nm), initially (2014-2015) at the 2m Automatic Spectroscopy Telescope at Fairborn Observatory and then (2016-2018) at the dedicated 50-inch Robotic Telescope on Mt. Lemmon, following that telescope's installation in the fall of 2015. Operated in high vacuum (<0.01 mTorr) with precisely controlled temperature (~1-2 mK), TOU has delivered ~1 m/s (RMS) instrument stability since the hardware upgrade in September 2015. DPS aims to reach better than 0.5 m/s Doppler measurement precision for bright survey targets once the instrument's small drift is carefully calibrated with Thorium-Argon and sine reference sources. With very high RV precision and high cadence (~100 observations per target randomly spread over 450 days), a large number of rocky planets, including possibly habitable ones, are expected to be detected. The survey also provides the largest single homogeneous high precision RV sample of nearby stars for studying low mass planet populations and constraining planet formation models. Early scientific results from the DPS pilot survey of 25 FGKM dwarfs will be presented.

  7. Sakurai Prize: The Future of Higgs Physics

    NASA Astrophysics Data System (ADS)

    Dawson, Sally

    2017-01-01

    The discovery of the Higgs boson relied critically on precision calculations. The quantum contributions from the Higgs boson to the W and top quark masses suggested long before the Higgs discovery that a Standard Model Higgs boson should have a mass in the 100-200 GeV range. The experimental extraction of Higgs properties requires normalization to the predicted Higgs production and decay rates, for which higher order corrections are also essential. As Higgs physics becomes a mature subject, more and more precise calculations will be required. If there is new physics at high scales, it will contribute to the predictions and precision Higgs physics will be a window to beyond the Standard Model physics.

  8. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). First, the ionospheric TEC and the receiver's Differential Code Biases (DCBs) are estimated from the undifferenced raw observations in real time; the ionospheric TEC model is then established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, high precision phase observations are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective: the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
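
    The TEC recovery behind such a method is the geometry-free combination of dual-frequency observables: for GPS L1/L2, STEC = (f1² f2² / (40.3 (f1² − f2²))) (P2 − P1), mapped to vertical under the single layer model. A minimal sketch (the shell height and pseudorange values are illustrative assumptions):

      import math

      F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies (Hz)

      def slant_tec(p1, p2):
          # Geometry-free code combination; returns TEC in TECU (1 TECU = 1e16 el/m^2).
          tec = (p2 - p1) * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))
          return tec / 1e16

      def vertical_tec(stec, elev_deg, R=6371e3, H=450e3):
          # Single layer model mapping function at the ionospheric pierce point.
          z = math.radians(90.0 - elev_deg)
          zp = math.asin(R / (R + H) * math.sin(z))
          return stec * math.cos(zp)

      # Pseudoranges differing by 5.5 m of dispersive (ionospheric) delay:
      stec = slant_tec(22000000.0, 22000005.5)
      print(f"STEC = {stec:.1f} TECU, VTEC = {vertical_tec(stec, 40.0):.1f} TECU")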

  9. A Comparative Study of the Applied Methods for Estimating Deflection of the Vertical in Terrestrial Geodetic Measurements

    PubMed Central

    Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien

    2016-01-01

    This paper compares three different methods capable of estimating the deflection of the vertical (DoV): the first is based on the joint use of high precision spirit leveling and Global Navigation Satellite Systems (GNSS), the second uses astro-geodetic measurements, and the third uses gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), which are excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with an operational supremacy of the astro-geodetic method, which is faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level that was assumed as a threshold. Finally, the geoid model based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544

  10. Developing ISM Dust Grain Models with Precision Elemental Abundances from IXO

    NASA Technical Reports Server (NTRS)

    Valencic, L. A.; Smith, R. K.; Juet, A.

    2009-01-01

    The exact nature of interstellar dust grains in the Galaxy remains mysterious, despite their ubiquity. Many viable models exist, based on available IR-UV data and assumed elemental abundances. However, the abundances, which are perhaps the most stringent constraint, are not well known: modelers must use proxies in the absence of direct measurements for the diffuse interstellar medium (ISM). Recent revisions of these proxy values have only added to confusion over which is the best representative of the diffuse ISM, and have highlighted the need for direct, high signal-to-noise measurements from the ISM itself. The International X-ray Observatory's superior facilities will enable high-precision elemental abundance measurements. We will show how these results will both measure the overall ISM abundances and challenge dust models, allowing us to construct a more realistic picture of the ISM.

  11. High-precision comparison of the antiproton-to-proton charge-to-mass ratio.

    PubMed

    Ulmer, S; Smorra, C; Mooser, A; Franke, K; Nagahama, H; Schneider, G; Higuchi, T; Van Gorp, S; Blaum, K; Matsuda, Y; Quint, W; Walz, J; Yamazaki, Y

    2015-08-13

    Invariance under the charge, parity, time-reversal (CPT) transformation is one of the fundamental symmetries of the standard model of particle physics. This CPT invariance implies that the fundamental properties of antiparticles and their matter conjugates are identical, apart from signs. There is a deep link between CPT invariance and Lorentz symmetry--that is, the laws of nature seem to be invariant under the symmetry transformation of spacetime--although it is model dependent. A number of high-precision CPT and Lorentz invariance tests--using a co-magnetometer, a torsion pendulum and a maser, among others--have been performed, but only a few direct high-precision CPT tests that compare the fundamental properties of matter and antimatter are available. Here we report high-precision cyclotron frequency comparisons of a single antiproton and a negatively charged hydrogen ion (H^-) carried out in a Penning trap system. From 13,000 frequency measurements we compare the charge-to-mass ratio for the antiproton, (q/m)_p̄, to that for the proton, (q/m)_p, and obtain (q/m)_p̄/(q/m)_p − 1 = 1(69) × 10^-12. The measurements were performed at cyclotron frequencies of 29.6 megahertz, so our result shows that the CPT theorem holds at the atto-electronvolt scale. Our precision of 69 parts per trillion exceeds the energy resolution of previous antiproton-to-proton mass comparisons, as well as the respective figure of merit of the standard model extension, by a factor of four. In addition, we give a limit on sidereal variations in the measured ratio of <720 parts per trillion. By following the arguments of ref. 11, our result can be interpreted as a stringent test of the weak equivalence principle of general relativity using baryonic antimatter, and it sets a new limit on the gravitational anomaly parameter of |α − 1| < 8.7 × 10^-7.

  12. WE-AB-202-09: Feasibility and Quantitative Analysis of 4DCT-Based High Precision Lung Elastography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasse, K; Neylon, J; Low, D

    2016-06-15

    Purpose: The purpose of this project is to derive high precision elastography measurements from 4DCT lung scans to facilitate the implementation of elastography in a radiotherapy context. Methods: 4DCT scans of the lungs were acquired, and breathing stages were subsequently registered to each other using an optical flow DIR algorithm. The displacement of each voxel gleaned from the registration was taken to be the ground-truth deformation. These vectors, along with the 4DCT source datasets, were used to generate a GPU-based biomechanical simulation that acted as a forward model to solve the inverse elasticity problem. The lung surface displacements were applied as boundary constraints for the model-guided lung tissue elastography, while the inner voxels were allowed to deform according to the linear elastic forces within the model. A biomechanically-based anisotropic convergence magnification technique was applied to the inner voxels in order to amplify the subtleties of the interior deformation. Solving the inverse elasticity problem was accomplished by modifying the tissue elasticity and iteratively deforming the biomechanical model. Convergence occurred when each voxel was within 0.5 mm of the ground-truth deformation and 1 kPa of the ground-truth elasticity distribution. To analyze the feasibility of the model-guided approach, we present the results for regions of low ventilation, specifically, the apex. Results: The maximum apical boundary expansion was observed to be between 2 and 6 mm. Simulating this expansion within an apical lung model, it was observed that 100% of voxels converged within 0.5 mm of the ground-truth deformation, while 91.8% converged within 1 kPa of the ground-truth elasticity distribution. A mean elasticity error of 0.6 kPa illustrates the high precision of our technique. Conclusion: By utilizing 4DCT lung data coupled with a biomechanical model, high precision lung elastography can be accurately performed, even in low ventilation regions of the lungs. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144087.
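
    A minimal sketch of the model-guided inverse-elasticity iteration described above: each voxel's elasticity is adjusted until the forward model's deformation matches the ground-truth deformation from deformable registration. The convergence tolerance follows the abstract; the one-line forward model is a stand-in for the GPU biomechanical simulation.

      import numpy as np

      def forward_model(elasticity, boundary_disp):
          # Stand-in for the biomechanical simulation: stiffer voxels deform less.
          return boundary_disp / elasticity

      def solve_inverse_elasticity(target_disp, boundary_disp, tol_mm=0.5, gain=0.5):
          elasticity = np.ones_like(target_disp)   # initial guess (kPa)
          for _ in range(200):
              disp = forward_model(elasticity, boundary_disp)
              err = disp - target_disp
              if np.all(np.abs(err) < tol_mm):     # converged: within 0.5 mm per voxel
                  break
              # Over-deforming voxels are too soft: raise their elasticity, and vice versa.
              elasticity *= 1.0 + gain * err / np.maximum(np.abs(target_disp), 1e-6)
          return elasticity

      target = np.array([2.0, 1.0, 3.5])   # ground-truth displacements (mm) from DIR
      print(solve_inverse_elasticity(target, boundary_disp=np.array([4.0, 3.0, 7.0])))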

  13. Analysis of precision in chemical oscillators: implications for circadian clocks

    NASA Astrophysics Data System (ADS)

    d'Eysmond, Thomas; De Simone, Alessandro; Naef, Felix

    2013-10-01

    Biochemical reaction networks often exhibit spontaneous self-sustained oscillations. An example is the circadian oscillator that lies at the heart of daily rhythms in behavior and physiology in most organisms including humans. While the period of these oscillators evolved so that it resonates with the 24 h daily environmental cycles, the precision of the oscillator (quantified via the Q factor) is another relevant property of these cell-autonomous oscillators. Since this quantity can be measured in individual cells, it is of interest to better understand how this property behaves across mathematical models of these oscillators. Current theoretical schemes for computing the Q factors show limitations for both high-dimensional models and in the vicinity of Hopf bifurcations. Here, we derive low-noise approximations that lead to numerically stable schemes also in high-dimensional models. In addition, we generalize normal form reductions that are appropriate near Hopf bifurcations. Applying our approximations to two models of circadian clocks, we show that while the low-noise regime is faithfully recapitulated, increasing the level of noise leads to species-dependent precision. We emphasize that subcomponents of the oscillator gradually decouple from the core oscillator as noise increases, which allows us to identify the subnetworks responsible for robust rhythms.

  14. Nighttime Aerosol Optical Depth Measurements Using a Ground-based Lunar Photometer

    NASA Technical Reports Server (NTRS)

    Berkoff, Tim; Omar, Ali; Haggard, Charles; Pippin, Margaret; Tasaddaq, Aasam; Stone, Tom; Rodriguez, Jon; Slutsker, Ilya; Eck, Tom; Holben, Brent

    2015-01-01

    In recent years it was proposed to combine AERONET network photometer capabilities with a high precision lunar model used for satellite calibration to retrieve columnar nighttime AODs. The USGS lunar model can continuously provide pre-atmosphere, high precision lunar irradiance determinations for multiple wavelengths at ground sensor locations. When combined with measured irradiances from a ground-based AERONET photometer, atmospheric column transmissions can be determined, yielding nighttime column aerosol optical depth (AOD) and Angstrom coefficients. Additional demonstrations have utilized this approach to further develop calibration methods and to obtain data in polar regions where extended periods of darkness occur. This new capability enables more complete studies of the diurnal behavior of aerosols, and provides feedback for models and satellite retrievals of the nighttime behavior of aerosols. It is anticipated that the nighttime capability of these sensors will be useful for comparisons with satellite lidars such as CALIOP and CATS, in addition to ground-based lidars in MPLNET, at night, when the signal-to-noise ratio is higher than in daytime and more precise AOD comparisons can be made.
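
    The retrieval reduces to the Beer-Lambert law: with the lunar model supplying the pre-atmosphere irradiance I0, the total optical depth is τ = −ln(I/I0)/m and the aerosol component follows after subtracting molecular terms. A minimal sketch (the airmass formula and all numbers are simplified, illustrative assumptions):

      import math

      def total_optical_depth(I_measured, I0_model, elev_deg):
          # Simple plane-parallel airmass; operational retrievals use refined models.
          m = 1.0 / math.sin(math.radians(elev_deg))
          return -math.log(I_measured / I0_model) / m

      def aod(I_measured, I0_model, elev_deg, tau_rayleigh, tau_gas=0.0):
          # Aerosol optical depth = total column optical depth minus molecular terms.
          return total_optical_depth(I_measured, I0_model, elev_deg) - tau_rayleigh - tau_gas

      # Illustrative 500 nm retrieval:
      print(aod(I_measured=0.75, I0_model=1.00, elev_deg=45.0, tau_rayleigh=0.14))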

  15. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  16. Analysis of key technologies in geomagnetic navigation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhao, Yan

    2008-10-01

    Because of the high cost and error accumulation of high precision Inertial Navigation Systems (INS) and the vulnerability of Global Navigation Satellite Systems (GNSS), geomagnetic navigation, a passive autonomous navigation method, is receiving renewed attention. The geomagnetic field is a natural spatial physical field, and is a function of position and time in near-earth space. Navigation based on the geomagnetic field is being researched for a wide range of commercial and military applications. This paper presents the main features and the state of the art of Geomagnetic Navigation Systems (GMNS). Geomagnetic field models and reference maps are described. Obtaining, modeling and updating accurate anomaly magnetic field information is an important step toward high precision geomagnetic navigation. In addition, the errors of geomagnetic measurement using strapdown magnetometers are analyzed; precise geomagnetic data are obtained by means of magnetometer calibration and vehicle magnetic field compensation. From the measurement data and a reference map or model of the geomagnetic field, the vehicle's position and attitude can be obtained using a matching algorithm or state-estimation method. Trends in geomagnetic navigation in the near future are discussed at the end of this paper.

  17. Personalized In Vitro and In Vivo Cancer Models to Guide Precision Medicine

    PubMed Central

    Pauli, Chantal; Hopkins, Benjamin D.; Prandi, Davide; Shaw, Reid; Fedrizzi, Tarcisio; Sboner, Andrea; Sailer, Verena; Augello, Michael; Puca, Loredana; Rosati, Rachele; McNary, Terra J.; Churakova, Yelena; Cheung, Cynthia; Triscott, Joanna; Pisapia, David; Rao, Rema; Mosquera, Juan Miguel; Robinson, Brian; Faltas, Bishoy M.; Emerling, Brooke E.; Gadi, Vijayakrishna K.; Bernard, Brady; Elemento, Olivier; Beltran, Himisha; Dimichelis, Francesca; Kemp, Christopher J.; Grandori, Carla; Cantley, Lewis C.; Rubin, Mark A.

    2017-01-01

    Precision medicine is an approach that takes into account the influence of an individual's genes, environment and lifestyle exposures to tailor interventions. Here, we describe the development of a robust precision cancer care platform that integrates whole exome sequencing (WES) with a living biobank enabling high throughput drug screens on patient-derived tumor organoids. To date, 56 tumor-derived organoid cultures and 19 patient-derived xenograft (PDX) models have been established from the 769 patients enrolled in an IRB-approved clinical trial. Because genomics alone was insufficient to identify therapeutic options for the majority of patients with advanced disease, we used high throughput drug screening to identify effective strategies. Analysis of tumor-derived cells from four cases, two uterine malignancies and two colon cancers, identified effective drugs and drug combinations that were subsequently validated using 3D cultures and PDX models. This platform thereby promotes the discovery of novel therapeutic approaches that can be assessed in clinical trials and provides personalized therapeutic options for individual patients where standard clinical options have been exhausted. PMID:28331002

  18. Multi-Scale Computational Models for Electrical Brain Stimulation

    PubMed Central

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have pursued computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model with multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and that construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  19. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

    Parametric computer studies can be used in a cost effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high precision reflector panels for LDR. The material properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. This iterative approach of computer design and model refinement, with performance testing and materials optimization, has shown good results for LDR panels.

  20. Measurement of latent cognitive abilities involved in concept identification learning.

    PubMed

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B

    2015-01-01

    We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.

  1. Simulation of Thermal Behavior in High-Precision Measurement Instruments

    NASA Astrophysics Data System (ADS)

    Weis, Hanna Sophie; Augustin, Silke

    2008-06-01

    In this paper, a way to modularize complex finite-element models is described. The modularization is demonstrated on the temperature fields that arise in high-precision measurement instruments, where temperature negatively impacts the achievable measurement uncertainty. To correct for this effect, the temperature must be known at every point, which cannot be achieved just by measuring temperatures at specific locations; a numerical treatment is therefore necessary. As the system of interest is very complex, modularization is unavoidable for obtaining good numerical results.

  2. High-Precision Monte Carlo Simulation of the Ising Models on the Penrose Lattice and the Dual Penrose Lattice

    NASA Astrophysics Data System (ADS)

    Komura, Yukihiro; Okabe, Yutaka

    2016-04-01

    We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to a total system size of N = 20,633,239, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use a generalized single-GPU implementation of the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we definitively confirm the duality relation between the critical temperatures on this dual pair of quasilattices to a high degree of accuracy: sinh(2J/Tc) sinh(2J/Tc*) = 1.00000 ± 0.00004.
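
    The quoted duality relation can be checked directly from the reported critical temperatures (with J = 1):

      import math

      Tc, Tc_dual = 2.39781, 2.14987   # reported critical temperatures (units of J)
      product = math.sinh(2.0 / Tc) * math.sinh(2.0 / Tc_dual)
      print(f"sinh(2J/Tc) * sinh(2J/Tc*) = {product:.5f}")   # ~1.00000, as quoted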

  3. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially in accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculations substantially affects the quality of the model simulations, provided the inexactness is restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.

  4. Modeling and Implementation of Multi-Position Non-Continuous Rotation Gyroscope North Finder.

    PubMed

    Luo, Jun; Wang, Zhiqian; Shen, Chengwu; Kuijper, Arjan; Wen, Zhuoman; Liu, Shaojin

    2016-09-20

    Even when the Global Positioning System (GPS) signal is blocked, a rate gyroscope (gyro) north finder can still provide the required azimuth reference information to a certain extent. In order to measure the azimuth between the observer and the north direction very accurately, we propose a multi-position non-continuous rotation gyro north finding scheme. Our generalized mathematical model analyzes the elements that affect azimuth measurement precision and can thus provide high precision azimuth reference information. Based on the gyro's principle of detecting the projection of the earth rotation rate on its sensitive axis, and on the proposed north finding scheme, we deduce an accurate mathematical model of the gyro outputs against azimuth in the presence of gyro and shaft misalignments. Combining the gyro output model with the theory of propagation of uncertainty, several approaches to optimizing north finding are provided, including reducing the gyro bias error, constraining the gyro random error, increasing the number of rotation points, improving the rotation angle measurement precision, and decreasing the gyro and shaft misalignment angles. Following these, a north finder setup is built and an azimuth uncertainty of 18" is obtained. This paper provides a systematic theory for analyzing the details of the gyro north finder scheme from simulation to implementation. The proposed theory can guide both applied researchers in academia and advanced practitioners in industry in designing high precision, robust north finders based on different types of rate gyroscopes.
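
    The measurement equation underlying any such scheme is ω(A) = Ω_e cos φ cos A + b: the gyro senses the horizontal projection of the earth rate plus its bias. A minimal four-position sketch in which differencing opposite positions cancels the constant bias (the latitude and bias values are illustrative):

      import math

      OMEGA_E = 7.2921159e-5            # earth rotation rate (rad/s)
      lat = math.radians(45.0)          # site latitude, assumed
      K = OMEGA_E * math.cos(lat)       # horizontal earth-rate component

      def simulate(A, bias=2e-7):
          # Gyro outputs at table positions A, A+90, A+180, A+270 degrees.
          return [K * math.cos(A + i * math.pi / 2) + bias for i in range(4)]

      def north_azimuth(w):
          # w0 - w180 = 2K cos A and w270 - w90 = 2K sin A, so the bias drops out.
          return math.atan2(w[3] - w[1], w[0] - w[2])

      w = simulate(math.radians(30.0))
      print(math.degrees(north_azimuth(w)))   # recovers ~30 degrees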

  5. High-Precision Simulation of the Gravity Field of Rapidly-Rotating Barotropes in Hydrostatic Equilibrium

    NASA Astrophysics Data System (ADS)

    Hubbard, W. B.

    2013-12-01

    The so-called theory of figures (TOF) uses potential theory to solve for the structure of highly distorted rotating liquid planets in hydrostatic equilibrium. TOF is noteworthy both for its antiquity (Maclaurin 1742) and its mathematical complexity. Planned high-precision gravity measurements near the surfaces of Jupiter and Saturn (possibly detecting signals ~ microgal) will place unprecedented requirements on TOF, not because one expects hydrostatic equilibrium to that level, but because nonhydrostatic components in the surface gravity, at expected levels ~ 1 milligal, must be referenced to precise hydrostatic-equilibrium models. The Maclaurin spheroid is both a useful test of numerical TOF codes (Hubbard 2012, ApJ Lett 756:L15), and an approach to an efficient TOF code for arbitrary barotropes of variable density (Hubbard 2013, ApJ 768:43). For the latter, one trades off vertical resolution by replacing a continuous barotropic pressure-density relation with a stairstep relation, corresponding to N concentric Maclaurin spheroids (CMS), each of constant density. The benefit of this trade-off is that two-dimensional integrals over the mass distributions at each interface are reduced to one-dimensional integrals, quickly and accurately evaluated by Gaussian quadrature. The shapes of the spheroids comprise N level surfaces within the planet and at its surface, are gravitationally coupled to each other, and are found by self-consistent iteration, relaxing to a final configuration to within the computer's precision limits. The angular and radial variation of external gravity (using the usual geophysical expansion in multipole moments) can be found to the limit of typical floating point precision (~ 1.e-14), much better than the expected noise/signal for either the Juno or Cassini gravity experiments. The stairstep barotrope can be adjusted to fit a prescribed continuous or discontinuous interior barotrope, and can be made to approximate it to any required precision by increasing N. One can insert a higher density of CMSs toward the surface of an interior model in order to more accurately model high-order gravitational moments. The magnitude of high-order moments predicted by TOF declines geometrically with order number, and falls below the magnitude of expected non-hydrostatic terms produced by interior dynamics at ~ order 10 and above. Juno's sensitivity is enough to detect tidal gravity signals from Galilean satellites. The CMS method can be generalized to predict tidal zonal and tesseral terms consistent with an interior model fitted to measured zonal harmonics. For this purpose, two-dimensional Gaussian quadrature is necessary at each CMS interface. However, once the model is relaxed to equilibrium, one need not refit the model to the average zonal harmonics because of the smallness of the tidal terms. I will describe how the CMS method has been validated through comparisons with standard TOF models for which fully or partially analytic solutions exist, as well as through consistency checks. At this stage in software development in preparation for Jupiter orbit, we are focused on increasing the speed of the code in order to more efficiently search the parameter space of acceptable Jupiter interior models, as well as to interface it with advanced hydrogen-helium equations of state.
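
    The computational payoff described above, interface integrals reduced to one-dimensional integrals evaluated rapidly by Gaussian quadrature, can be illustrated on a schematic integrand of the kind that arises for an axisymmetric level surface; the shape function here is an arbitrary stand-in, not a Jupiter model.

      import numpy as np

      def P2(mu):
          # Second Legendre polynomial, mu = cos(colatitude).
          return 0.5 * (3.0 * mu**2 - 1.0)

      def shape(mu, q=0.1):
          # Stand-in level-surface radius r(mu) of a rotationally flattened body.
          return 1.0 - q * P2(mu)

      def integrand(mu):
          # Schematic form of a 1-D interface integral in the CMS method.
          return P2(mu) * shape(mu)**5

      for n in (4, 8, 16):
          nodes, weights = np.polynomial.legendre.leggauss(n)
          print(f"n = {n:2d}: integral = {np.sum(weights * integrand(nodes)):.12f}")
      # Convergence to near machine precision with a handful of nodes is why the
      # quadrature-based approach is both fast and accurate.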

  6. Toward precision medicine in Alzheimer's disease.

    PubMed

    Reitz, Christiane

    2016-03-01

    In Western societies, Alzheimer's disease (AD) is the most common form of dementia and the sixth leading cause of death. In recent years, the concept of precision medicine, an approach for disease prevention and treatment that is personalized to an individual's specific pattern of genetic variability, environment and lifestyle factors, has emerged. While for some diseases, in particular select cancers and a few monogenetic disorders such as cystic fibrosis, significant advances in precision medicine have been made over the past years, for most other diseases precision medicine is only in its beginning. To advance the application of precision medicine to a wider spectrum of disorders, governments around the world are starting to launch Precision Medicine Initiatives, major efforts to generate the extensive scientific knowledge needed to integrate the model of precision medicine into every day clinical practice. In this article we summarize the state of precision medicine in AD, review major obstacles in its development, and discuss its benefits in this highly prevalent, clinically and pathologically complex disease.

  7. Operation Brain Trauma Therapy

    DTIC Science & Technology

    2016-12-01

    Candidate therapies may advance to either clinical trials in TBI, if shown to be highly effective across OBTT, or a precision medicine TBI phenotype (such as contusion) based clinical trial, if shown to be potently effective in one of the models in OBTT (i.e., a model that mimicked a specific clinical TBI phenotype). The most effective drug seen thus far in primary screening showed benefit that is highly model dependent, largely restricted to the CCI model.

  8. The MOLLER Experiment: ``An Ultra-precise Measurement of the Weak Charge of the Electron using Møller Scattering''

    NASA Astrophysics Data System (ADS)

    Beminiwattha, Rakitha; Moller Collaboration

    2017-09-01

    Parity Violating Electron Scattering (PVES) is an extremely successful precision frontier tool that has been used for testing the Standard Model (SM) and understanding nucleon structure. Several generations of highly successful PVES programs at SLAC, MIT-Bates, MAMI-Mainz, and Jefferson Lab have contributed to the understanding of nucleon structure and the testing of the SM. But missing phenomena like the matter-antimatter asymmetry, neutrino flavor oscillations, and dark matter and dark energy suggest that the SM is only a 'low energy' effective theory. The MOLLER experiment at Jefferson Lab will measure the weak charge of the electron, Q_W^e = 1 − 4 sin²θ_W, with a precision of 2.4% by measuring the parity violating asymmetry in electron-electron (Møller) scattering, and will be sensitive to subtle but measurable deviations from precisely calculable SM predictions. The MOLLER experiment will provide the best contact interaction search for leptons at low or high energy, making it a probe of physics beyond the Standard Model with sensitivity to mass scales of new PV physics up to 7.5 TeV. An overview of the experiment and recent pre-R&D progress will be reported.

  9. Far Infrared All-Sky Survey

    NASA Technical Reports Server (NTRS)

    Richards, Paul L.

    1998-01-01

    Precise measurements of the angular power spectrum of the Cosmic Microwave Background (CMB) anisotropy will revolutionize cosmology. These measurements will discriminate between competing cosmological models and, if the standard inflationary scenario is correct, will determine each of the fundamental cosmological parameters with high precision. The astrophysics community has recognized this potential: the orbital experiments MAP and PLANCK have been approved to measure CMB anisotropy. Balloon-borne experiments can realize much of this potential before these missions are launched. Additionally, properly designed balloon-borne experiments can complement MAP in frequency and angular resolution and can give the first realistic test of the instrumentation proposed for the high frequency instrument on PLANCK. The MAXIMA experiment is part of the MAXIMA/BOOMERANG collaboration, which is making balloon observations of the angular power spectrum of the Cosmic Microwave Background from l = 10 to l = 800. These experiments are designed to use the benefits of both North American and Antarctic long-duration ballooning to full advantage. We have developed several new technologies that together allow the power spectrum to be measured with an unprecedented combination of angular resolution, beam throw, sensitivity, sky coverage and control of systematic effects. These technologies are the basis for the high frequency instrument of the PLANCK mission. Our measurements will strongly discriminate between models of the origin and evolution of structure in the universe and, for many models, will determine the values of the basic cosmological parameters to high precision.

  10. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
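
    A minimal GM(1,1) sketch: accumulate the series (AGO), fit the grey parameters a and b by least squares, and forecast; the hybrid ARGM(1,1) idea would then model the residuals with ARIMA. The test series is invented.

      import numpy as np

      def gm11_forecast(x0, steps=1):
          # GM(1,1): dx1/dt + a*x1 = b on the accumulated series x1 (AGO).
          x1 = np.cumsum(x0)
          z = 0.5 * (x1[1:] + x1[:-1])               # mean sequence of x1
          B = np.column_stack([-z, np.ones_like(z)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.diff(x1_hat, prepend=0.0)      # inverse AGO
          return x0_hat[-steps:]

      print(gm11_forecast(np.array([10.0, 10.8, 11.7, 12.6, 13.7]), steps=2))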

  11. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency, high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to deliver a low-latency, high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from the two receivers. The asynchronous observation model (AOM) is developed from the undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The possible main error sources affecting positioning accuracy in this model, ephemeris error and atmospheric delay, are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, both of which grow linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a special case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports a mobile phone network data link for transferring the reference receiver's data. It avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. Static and kinematic experiment results show that the method achieves 20 Hz or even higher output rates in real time. The ARTK positioning accuracy is better and more robust than that of the combination of phase difference over time (PDOT) and SRTK at high rates. The ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy is achieved even when the DLTTD is 15 s.

  12. Reliable low precision simulations in land surface models

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
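
    The failure mode and the remedy described above can be reproduced in a few lines: a tiny increment added to a half-precision state rounds to zero once the state grows, while keeping only the accumulator at higher precision restores the correct result. This is a schematic of the splitting strategy, not the land surface model itself.

      import numpy as np

      increment = np.float16(1e-4)   # slowly varying process: tiny per-step tendency
      steps = 10000

      # Naive low precision: the increment is below half the spacing of float16
      # numbers near 1.0, so every addition rounds back to the old state.
      state = np.float16(1.0)
      for _ in range(steps):
          state = np.float16(state + increment)

      # Split model: the accumulator is kept in float32; all else stays low precision.
      acc = np.float32(1.0)
      for _ in range(steps):
          acc = np.float32(acc + np.float32(increment))

      print(state)   # 1.0: the slow process never moves
      print(acc)     # ~2.0: the expected drift is recovered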

  13. High-Precision Image Aided Inertial Navigation with Known Features: Observability Analysis and Performance Evaluation

    PubMed Central

    Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun

    2014-01-01

    A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSS) when satellite-based navigation is unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision integrated position (centimeter-level) and attitude (half-degree-level) solutions can be achieved in a global reference frame. PMID:25330046

  14. Modeling applications for precision agriculture in the California Central Valley

    NASA Astrophysics Data System (ADS)

    Marklein, A. R.; Riley, W. J.; Grant, R. F.; Mezbahuddin, S.; Mekonnen, Z. A.; Liu, Y.; Ying, S.

    2017-12-01

    Drought in California has increased the motivation to develop precision agriculture, which uses observations to make site-specific management decisions throughout the growing season. In agricultural systems that are prone to drought, these efforts often focus on irrigation efficiency. Recent improvements in soil sensor technology allow the monitoring of plant and soil status in real time, which can then inform models aimed at improving irrigation management. But even on farms with the resources to deploy soil sensors across the landscape, leveraging that sensor data to design an efficient irrigation scheme remains a challenge. We conduct a modeling experiment aimed at simulating precision agriculture to address several questions: (1) how, when, and where does irrigation lead to optimal yield? and (2) what are the impacts of different precision irrigation schemes on yields, soil organic carbon (SOC), and total water use? We use the ecosys model to simulate precision agriculture in a conventional tomato-corn rotation in the California Central Valley with varying soil water content thresholds for irrigation and varying soil water sensor depths. This model is ideal for our question because it includes explicit process-based functions for plant growth, plant water use, soil hydrology, and SOC, and has been tested extensively in agricultural ecosystems. A low irrigation threshold allows the soil to become drier before irrigating than a high threshold; accordingly, we found that high irrigation thresholds use more irrigation over the course of the season, produce higher yields, and have lower water use efficiency. The irrigation threshold did not affect SOC. Yields and water use were highest at sensor depths of 0.05 to 0.15 m, but water use efficiency was also lowest at these depths. We found SOC to be significantly affected by sensor depth, with the highest SOC at the shallowest sensor depths. These results will help regulate irrigation water use while maintaining yield in California, especially under uncertain precipitation regimes.
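
    A minimal sketch of the sensor-triggered irrigation rule being simulated, with an invented bucket model and soil parameters: each day the sensor reading is compared with the threshold, and irrigation fires when the soil is too dry.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_season(threshold, field_capacity=0.30, days=120, dose=0.05):
          # Toy bucket model of volumetric soil water content at the sensor depth.
          swc, total_irrigation = field_capacity, 0.0
          for _ in range(days):
              swc -= rng.uniform(0.005, 0.02)     # daily ET and drainage losses
              if swc < threshold:                 # sensor-triggered irrigation
                  swc = min(field_capacity, swc + dose)
                  total_irrigation += dose
          return total_irrigation

      for th in (0.18, 0.22, 0.26):
          print(f"threshold {th:.2f}: seasonal irrigation {simulate_season(th):.2f}")
      # Higher thresholds trigger more irrigation, as in the results above.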

  15. System identification of the JPL micro-precision interferometer truss - Test-analysis reconciliation

    NASA Technical Reports Server (NTRS)

    Red-Horse, J. R.; Marek, E. L.; Levine-West, M.

    1993-01-01

    The JPL Micro-Precision Interferometer (MPI) is a testbed for studying the use of control-structure interaction technology in the design of space-based interferometers. A layered control architecture will be employed to regulate the interferometer optical system to tolerances in the nanometer range. An important aspect of designing and implementing the control schemes for such a system is the need for high fidelity, test-verified analytical structural models. This paper focuses on one aspect of the effort to produce such a model for the MPI structure, test-analysis model reconciliation. Pretest analysis, modal testing, and model refinement results are summarized for a series of tests at both the component and full system levels.

  16. In-vitro evaluation of the accuracy of conventional and digital methods of obtaining full-arch dental impressions.

    PubMed

    Ender, Andreas; Mehl, Albert

    2015-01-01

    To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
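
    A minimal sketch of the deviation metric used above: signed nearest-neighbour distances between two surface point sets, summarized as the (10%-90%)/2 percentile value; the point sets and noise level are synthetic stand-ins for scanned arches.

      import numpy as np

      def signed_nn_deviation(test_pts, ref_pts, ref_normals):
          # For each test point, find its nearest reference point and sign the
          # deviation by which side of the local surface normal it falls on.
          d = test_pts[:, None, :] - ref_pts[None, :, :]
          idx = np.argmin(np.linalg.norm(d, axis=2), axis=1)
          vec = test_pts - ref_pts[idx]
          return np.sum(vec * ref_normals[idx], axis=1)

      def p10_90_half(devs):
          p10, p90 = np.percentile(devs, [10, 90])
          return (p90 - p10) / 2.0

      rng = np.random.default_rng(1)
      ref = rng.uniform(0.0, 50.0, (500, 3))            # reference surface (mm)
      normals = np.tile([0.0, 0.0, 1.0], (500, 1))      # flat-surface normals
      test = ref + rng.normal(0.0, 0.015, ref.shape)    # 15 um scan noise, assumed
      dev = signed_nn_deviation(test, ref, normals)
      print(f"(10%-90%)/2 deviation: {p10_90_half(dev) * 1000:.1f} um")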

  17. Mechanism and experimental research on ultra-precision grinding of ferrite

    NASA Astrophysics Data System (ADS)

    Ban, Xinxing; Zhao, Huiying; Dong, Longchao; Zhu, Xueliang; Zhang, Chupeng; Gu, Yawen

    2017-02-01

    Ultra-precision grinding of ferrite is conducted to investigate the removal mechanism. The effect of the accuracy of key machine tool components on ground surface quality is analyzed, and a surface generation model for ultra-precision grinding of ferrite is established. To reveal the surface formation mechanism of ferrite during ultra-precision grinding, the validity and accuracy of the proposed surface roughness calculation model are verified. An orthogonal experiment is designed for ferrite, a typical hard brittle material, using a high precision aerostatic turntable and aerostatic spindle. Based on the experimental results, the factors influencing the ultra-precision ground surface of ferrite, and their governing laws, are discussed through analysis of the surface roughness. The results show that the ground surface quality is optimal at a wheel speed of 20,000 r/min, a feed rate of 10 mm/min, a grinding depth of 0.005 mm, and a turntable rotary speed of 5 r/min, where the surface roughness Ra reaches 75 nm.

  18. Massive metrology using fast e-beam technology improves OPC model accuracy by >2x at faster turnaround time

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei

    2018-03-01

    Classical SEM metrology, CD-SEM, uses a low data rate and extensive frame averaging to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new capability further improves data collection speed to support the large volumes of metrology data needed for OPC model calibration of next-generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current flow from metrology data collection and data processing to model calibration, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large data volume and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate a large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage, with up to a 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turnaround time.

  19. Development of novel hybrid flexure-based microgrippers for precision micro-object manipulation.

    PubMed

    Mohd Zubir, Mohd Nashrul; Shirinzadeh, Bijan; Tian, Yanling

    2009-06-01

    This paper describes the process of developing a microgripper that is capable of high precision and fidelity manipulation of micro-objects. The design adopts the concept of flexure-based hinges on its joints to provide the rotational motion, thus eliminating the inherent nonlinearities associated with the application of conventional rigid hinges. A combination of two modeling techniques, namely, the pseudorigid body model and finite element analysis, was utilized to expedite the prototyping procedure, leading to the establishment of a high performance mechanism. A new hybrid compliant structure integrating cantilever beam and flexural hinge configurations within the microgripper mechanism mainframe has been developed. This concept provides a novel approach to harness the advantages of each individual configuration while mutually compensating for the limitations inherent in each. A wire electrodischarge machining technique was utilized to fabricate the gripper out of high grade aluminum alloy (Al 7075T6). Experimental studies were conducted on the model to obtain various correlations governing the gripper performance as well as for model verification. The experimental results demonstrate a high level of agreement with the computational results. A high amplification characteristic and a maximum achievable stroke of 100 μm are achieved.

  20. Development of novel hybrid flexure-based microgrippers for precision micro-object manipulation

    NASA Astrophysics Data System (ADS)

    Mohd Zubir, Mohd Nashrul; Shirinzadeh, Bijan; Tian, Yanling

    2009-06-01

    This paper describes the process of developing a microgripper that is capable of high precision and fidelity manipulation of micro-objects. The design adopts the concept of flexure-based hinges on its joints to provide the rotational motion, thus eliminating the inherent nonlinearities associated with the application of conventional rigid hinges. A combination of two modeling techniques, namely, the pseudorigid body model and finite element analysis, was utilized to expedite the prototyping procedure, leading to the establishment of a high performance mechanism. A new hybrid compliant structure integrating cantilever beam and flexural hinge configurations within the microgripper mechanism mainframe has been developed. This concept provides a novel approach to harness the advantages of each individual configuration while mutually compensating for the limitations inherent in each. A wire electrodischarge machining technique was utilized to fabricate the gripper out of high grade aluminum alloy (Al 7075T6). Experimental studies were conducted on the model to obtain various correlations governing the gripper performance as well as for model verification. The experimental results demonstrate a high level of agreement with the computational results. A high amplification characteristic and a maximum achievable stroke of 100 μm are achieved.

  1. Comparison of Einstein-Boltzmann solvers for testing general relativity

    NASA Astrophysics Data System (ADS)

    Bellini, E.; Barreira, A.; Frusciante, N.; Hu, B.; Peirone, S.; Raveri, M.; Zumalacárregui, M.; Avilez-Lopez, A.; Ballardini, M.; Battye, R. A.; Bolliet, B.; Calabrese, E.; Dirian, Y.; Ferreira, P. G.; Finelli, F.; Huang, Z.; Ivanov, M. M.; Lesgourgues, J.; Li, B.; Lima, N. A.; Pace, F.; Paoletti, D.; Sawicki, I.; Silvestri, A.; Skordis, C.; Umiltà, C.; Vernizzi, F.

    2018-01-01

    We compare Einstein-Boltzmann solvers that include modifications to general relativity and find that, for a wide range of models and parameters, they agree to a high level of precision. We look at three general purpose codes that primarily model general scalar-tensor theories, three codes that model Jordan-Brans-Dicke (JBD) gravity, a code that models f (R ) gravity, a code that models covariant Galileons, a code that models Hořava-Lifschitz gravity, and two codes that model nonlocal models of gravity. Comparing predictions of the angular power spectrum of the cosmic microwave background and the power spectrum of dark matter for a suite of different models, we find agreement at the subpercent level. This means that this suite of Einstein-Boltzmann solvers is now sufficiently accurate for precision constraints on cosmological and gravitational parameters.

  2. High-precision half-life and branching-ratio measurements for superallowed Fermi β+ emitters at TRIUMF - ISAC

    NASA Astrophysics Data System (ADS)

    Laffoley, A. T.; Dunlop, R.; Finlay, P.; Grinyer, G. F.; Andreoiu, C.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Blank, B.; Bouzomita, H.; Chagnon-Lessard, S.; Chester, A.; Cross, D. S.; Demand, G.; Diaz Varela, A.; Djongolov, M.; Ettenauer, S.; Garnsworthy, A. B.; Garrett, P. E.; Giovinazzo, J.; Glister, J.; Green, K. L.; Hackman, G.; Hadinia, B.; Jamieson, D. S.; Ketelhut, S.; Leach, K. G.; Leslie, J. R.; Pearson, C. J.; Phillips, A. A.; Rand, E. T.; Starosta, K.; Sumithrarachchi, C. S.; Svensson, C. E.; Tardiff, E. R.; Thomas, J. C.; Towner, I. S.; Triambak, S.; Unsworth, C.; Williams, S. J.; Wong, J.; Yates, S. W.; Zganjar, E. F.

    2014-03-01

    A program of high-precision half-life and branching-ratio measurements for superallowed Fermi β emitters is being carried out at TRIUMF's Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. Recent half-life measurements for the superallowed decays of 14O, 18Ne, and 26Alm, as well as branching-ratio measurements for 26Alm and 74Rb are reported. These results provide demanding tests of the Standard Model and the theoretical isospin symmetry breaking (ISB) corrections in superallowed Fermi β decays.

  3. Crop Yield Predictions - High Resolution Statistical Model for Intra-season Forecasts Applied to Corn in the US

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2017-12-01

    Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges to modeling yields accurately, especially when the lack of highly accurate measurements makes it difficult to create models that succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes of important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time and which factors affect the yield the most, as well as present our plans for 2018 model adjustments.

  4. Implementation of high precision optical and radiometric LRO tracking data in the orbit determination to supplement the baseline S-band tracking

    NASA Astrophysics Data System (ADS)

    Mao, D.; Torrence, M. H.; Mazarico, E.; Neumann, G. A.; Smith, D. E.; Zuber, M. T.

    2016-12-01

    LRO has been in a polar lunar orbit for 7 years since its launch in June 2009. Seven instruments are onboard LRO to perform global and detailed geophysical, geological, and geochemical mapping of the Moon, several of them with very high spatial resolution. To take full advantage of the high-resolution LRO datasets from these instruments, the spacecraft orbit must be reconstructed precisely. Baseline LRO tracking is provided by NASA's White Sands station in New Mexico and a commercial network, the Universal Space Network (USN), together giving up to 20 hours per day of almost continuous S-band radio frequency link to LRO. The USN stations produce S-band range data with a 0.4 m precision and Doppler data with a 0.8 mm/s precision. Using the S-band tracking data together with the high-resolution gravity field model from the GRAIL mission, definitive LRO orbit solutions are obtained with an accuracy of 10 m in total position and 0.5 m radially. Confirmed by the 0.50-m high-resolution NAC images from the LROC team, these orbits represent the LRO orbit "truth" well. In addition to the S-band data, one-way Laser Ranging (LR) to LRO provides a unique optical tracking dataset spanning over 5 years, from June 2009 to September 2014. Ten international satellite laser ranging stations contributed over 4000 hours of LR data with 0.05-0.10 m normal-point precision. Another set of high-precision LRO tracking data is provided by the Deep Space Network (DSN), which produces radiometric tracking data more precise than the USN S-band data. In the last two years of the LRO mission, the temporal coverage of the USN data has decreased significantly. We show that LR and DSN data can be a good supplement to the baseline tracking data for orbit reconstruction.

  5. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to directly represent the mapping from the star vector to the corresponding star-point coordinate. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
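
    A minimal sketch in Python of the idea of regularized neural-network calibration, with a toy pinhole-plus-distortion camera standing in for a real star sensor; the architecture and regularization strength here are illustrative, not the paper's:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        f = 50.0                                          # focal length, mm (assumed)
        v = rng.normal(size=(2000, 3))
        v[:, 2] = np.abs(v[:, 2]) + 1.0                   # keep stars in front of lens
        v /= np.linalg.norm(v, axis=1, keepdims=True)     # unit star vectors
        xy = f * v[:, :2] / v[:, 2:3]                     # ideal pinhole projection
        r2 = (xy ** 2).sum(axis=1, keepdims=True)
        xy_obs = xy * (1 + 1e-5 * r2)                     # toy radial distortion

        # alpha is the L2 penalty that plays the role of the regularization
        net = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                           max_iter=5000, random_state=0)
        net.fit(v, xy_obs)
        rms = np.sqrt(np.mean((net.predict(v) - xy_obs) ** 2))
        print(f"RMS calibration residual: {rms:.4f} mm")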

  6. High-precision predictions for the light CP-even Higgs boson mass of the minimal supersymmetric standard model.

    PubMed

    Hahn, T; Heinemeyer, S; Hollik, W; Rzehak, H; Weiglein, G

    2014-04-11

    For the interpretation of the signal discovered in the Higgs searches at the LHC it will be crucial in particular to discriminate between the minimal Higgs sector realized in the standard model (SM) and its most commonly studied extension, the minimal supersymmetric standard model (MSSM). The measured mass value, having already reached the level of a precision observable with an experimental accuracy of about 500 MeV, plays an important role in this context. In the MSSM the mass of the light CP-even Higgs boson, Mh, can directly be predicted from the other parameters of the model. The accuracy of this prediction should at least match the one of the experimental result. The relatively high mass value of about 126 GeV has led to many investigations where the scalar top quarks are in the multi-TeV range. We improve the prediction for Mh in the MSSM by combining the existing fixed-order result, comprising the full one-loop and leading and subleading two-loop corrections, with a resummation of the leading and subleading logarithmic contributions from the scalar top sector to all orders. In this way for the first time a high-precision prediction for the mass of the light CP-even Higgs boson in the MSSM is possible all the way up to the multi-TeV region of the relevant supersymmetric particles. The results are included in the code FEYNHIGGS.

  7. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    Precise meteoroid orbits from video observations are scarce, as most meteor stations use off-the-shelf CCD cameras; only a few meteoroid orbits with precise semi-major axes are available, obtained with the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time, and computational problems in finding the meteor position, are illustrated.

  8. Precise attitude control of the Stanford relativity satellite.

    NASA Technical Reports Server (NTRS)

    Bull, J. S.; Debra, D. B.

    1973-01-01

    A satellite being designed by the Stanford University to measure (with extremely high precision) the effect of General Relativity is described. Specifically, the satellite will measure two relativistic precessions predicted by the theory: the geodetic effect (6.9 arcsec/yr), due solely to motion about the earth, and the motional effect (0.05 arcsec/yr), due to rotation of the earth. The gyro design requirements, including the requirement for precise attitude control and a dynamic model for attitude control synthesis, are discussed. Closed loop simulation of the satellite's natural dynamics on an analog computer is described.

  9. Application of backpack Lidar to geological cross-section measurement

    NASA Astrophysics Data System (ADS)

    Lin, Jingyu; Wang, Ran; Xiao, Zhouxuan; Li, Lu; Yao, Weihua; Han, Wei; Zhao, Baolin

    2017-11-01

    In traditional geological cross-section measurement, the manual traverse method has recently been replaced by methods based on point-coordinate data. However, acquiring high-precision point-coordinate data quickly and economically remains the crux of the matter. Backpack Lidar is therefore presented here as a means of acquiring such point coordinates. Lidar, a booming and internationally active remote sensing technique, is a powerful tool for obtaining precise topographic information and high-precision 3-D coordinates and for building real 3-D models. Field practice and indoor data processing show that geological section maps can be generated simply, accurately, and automatically with the support of relevant software such as ArcGIS and LiDAR360.

  10. An improved grey model for the prediction of real-time GPS satellite clock bias

    NASA Astrophysics Data System (ADS)

    Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.

    2008-07-01

    In real-time GPS precise point positioning (PPP), real-time and reliable satellite clock bias (SCB) prediction is the key to implementation. The behavior of space-borne GPS atomic clocks is difficult to characterize because they are high-frequency devices that are sensitive and easily perturbed; this accords with grey model (GM) theory, i.e., the variation of SCB can be treated as a grey system. First, given the limitations of the quadratic polynomial (QP) and the traditional GM for predicting SCB, a modified GM(1,1) is put forward in this paper. Then, taking GPS SCB data as an example, we analyze clock bias prediction with different sample intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with the QP, and derive general rules relating SCB type and GM exponent. Finally, to test the reliability and validity of the proposed modified GM, taking the IGS clock bias ephemeris product as reference, we analyze the prediction precision of the modified GM. The results show that the modified GM is reliable and valid for predicting GPS SCB and can offer high-precision SCB prediction for real-time GPS PPP.
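
    A minimal sketch in Python of the standard GM(1,1) forecast that underlies the modified model (the paper's modification itself is not reproduced); the clock-bias values are hypothetical:

        import numpy as np

        def gm11_forecast(x0, steps):
            """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                              # accumulated series (AGO)
            z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            n = len(x0)
            k = np.arange(n + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # inverse AGO
            x0_hat[0] = x0[0]
            return x0_hat[n:]

        # Toy clock-bias series in nanoseconds (stand-in for real SCB data)
        bias = [10.0, 10.8, 11.7, 12.7, 13.8, 15.0]
        print(gm11_forecast(bias, steps=3))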

  11. High-precision robotic microcontact printing (R-μCP) utilizing a vision guided selectively compliant articulated robotic arm.

    PubMed

    McNulty, Jason D; Klann, Tyler; Sha, Jin; Salick, Max; Knight, Gavin T; Turng, Lih-Sheng; Ashton, Randolph S

    2014-06-07

    Increased realization of the spatial heterogeneity found within in vivo tissue microenvironments has prompted the desire to engineer similar complexities into in vitro culture substrates. Microcontact printing (μCP) is a versatile technique for engineering such complexities onto cell culture substrates because it permits microscale control of the relative positioning of molecules and cells over large surface areas. However, challenges associated with precisely aligning and superimposing multiple μCP steps severely limit the extent of substrate modification that can be achieved using this method. Thus, we investigated the feasibility of using a vision guided selectively compliant articulated robotic arm (SCARA) for μCP applications. SCARAs are routinely used to perform high precision, repetitive tasks in manufacturing, and even low-end models are capable of achieving microscale precision. Here, we present the customization of a SCARA to execute robotic μCP (R-μCP) onto gold-coated microscope coverslips. The system not only possesses the ability to align multiple polydimethylsiloxane (PDMS) stamps but also has the capability to do so even after the substrates have been removed, reacted to graft polymer brushes, and replaced back into the system. In addition, unbiased computerized analysis shows that the system performs such sequential patterning with <10 μm precision and accuracy, which is equivalent to the repeatability specifications of the employed SCARA model. R-μCP should facilitate the engineering of in vivo-like complexity onto culture substrates and their integration with microfluidic devices.

  12. Joint Tomographic Imaging of 3-D Density Structure Using Cosmic Ray Muons and High-Precision Gravity Data

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Guardincerri, E.; Roy, M.; Dichter, M.

    2015-12-01

    As part of the CO2 reservoir muon imaging project headed by the Pacific Northwest National Laboratory (PNNL) under the U.S. Department of Energy Subsurface Technology and Engineering Research, Development, and Demonstration (SubTER) initiative, Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) plan to leverage the recently decommissioned and easily accessible Tunnel Vault on LANL property to test the complementary modeling strengths of muon radiography and high-precision gravity surveys. This tunnel extends roughly 300 feet into the hillside, with a maximum depth below the surface of approximately 300 feet. We will deploy LANL's Mini Muon Tracker (MMT), a detector consisting of 576 drift tubes arranged in alternating parallel planes of orthogonally oriented tubes. This detector is capable of precise determination of trajectories for incoming muons with angular resolution of a few milliradians. We will deploy the MMT at several locations within the tunnel to obtain numerous crossing muon trajectories and permit a 3D tomographic image of the overburden to be built. In the same project, UNM will use a Scintrex digital gravimeter to collect high-precision gravity data from a dense grid on the hill slope above the tunnel as well as within the tunnel itself. This will provide both direct and differential gravity readings for density modeling of the overburden. By leveraging detailed geologic knowledge of the canyon and the lithology overlying the tunnel, as well as the structural elements, elevations, and blueprints of the tunnel itself, we will evaluate the muon and gravity data both independently and in a simultaneous, joint inversion to build a combined 3D density model of the overburden.

  13. The Prediction of Length-of-day Variations Based on Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.

    2015-01-01

    Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for predicting the LOD variations, such as the least-squares extrapolation model and the time-series analysis model, have not met the requirements of real-time and high-precision applications. In this paper, a new machine-learning algorithm, the Gaussian process (GP) model, is employed to forecast the LOD variations. Its prediction precision is analyzed and compared with that of the back propagation neural network (BPNN) and general regression neural network (GRNN) models, as well as with the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.
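
    A minimal sketch in Python of Gaussian-process prediction on an LOD-like series, with a synthetic annual signal standing in for real Earth-orientation data; the kernel choice is illustrative:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        t = np.arange(0, 400.0)[:, None]                      # time in days
        rng = np.random.default_rng(4)
        lod = 1.0 + 0.4 * np.sin(2 * np.pi * t.ravel() / 365.25) \
                  + 0.05 * rng.normal(size=t.shape[0])        # LOD-like series, ms

        kernel = 1.0 * RBF(length_scale=50.0) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel).fit(t[:350], lod[:350])

        mean, std = gp.predict(t[350:], return_std=True)      # forecast with 1-sigma band
        print("first 5 predictions (ms):", np.round(mean[:5], 3))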

  14. The CARMENES search for exoplanets around M dwarfs. High-resolution optical and near-infrared spectroscopy of 324 survey stars

    NASA Astrophysics Data System (ADS)

    Reiners, A.; Zechmeister, M.; Caballero, J. A.; Ribas, I.; Morales, J. C.; Jeffers, S. V.; Schöfer, P.; Tal-Or, L.; Quirrenbach, A.; Amado, P. J.; Kaminski, A.; Seifert, W.; Abril, M.; Aceituno, J.; Alonso-Floriano, F. J.; Ammler-von Eiff, M.; Antona, R.; Anglada-Escudé, G.; Anwand-Heerwart, H.; Arroyo-Torres, B.; Azzaro, M.; Baroch, D.; Barrado, D.; Bauer, F. F.; Becerril, S.; Béjar, V. J. S.; Benítez, D.; Berdinas˜, Z. M.; Bergond, G.; Blümcke, M.; Brinkmöller, M.; del Burgo, C.; Cano, J.; Cárdenas Vázquez, M. C.; Casal, E.; Cifuentes, C.; Claret, A.; Colomé, J.; Cortés-Contreras, M.; Czesla, S.; Díez-Alonso, E.; Dreizler, S.; Feiz, C.; Fernández, M.; Ferro, I. M.; Fuhrmeister, B.; Galadí-Enríquez, D.; Garcia-Piquer, A.; García Vargas, M. L.; Gesa, L.; Galera, V. Gómez; González Hernández, J. I.; González-Peinado, R.; Grözinger, U.; Grohnert, S.; Guàrdia, J.; Guenther, E. W.; Guijarro, A.; Guindos, E. de; Gutiérrez-Soto, J.; Hagen, H.-J.; Hatzes, A. P.; Hauschildt, P. H.; Hedrosa, R. P.; Helmling, J.; Henning, Th.; Hermelo, I.; Hernández Arabí, R.; Hernández Castaño, L.; Hernández Hernando, F.; Herrero, E.; Huber, A.; Huke, P.; Johnson, E. N.; Juan, E. de; Kim, M.; Klein, R.; Klüter, J.; Klutsch, A.; Kürster, M.; Lafarga, M.; Lamert, A.; Lampón, M.; Lara, L. M.; Laun, W.; Lemke, U.; Lenzen, R.; Launhardt, R.; López del Fresno, M.; López-González, J.; López-Puertas, M.; López Salas, J. F.; López-Santiago, J.; Luque, R.; Magán Madinabeitia, H.; Mall, U.; Mancini, L.; Mandel, H.; Marfil, E.; Marín Molina, J. A.; Maroto Fernández, D.; Martín, E. L.; Martín-Ruiz, S.; Marvin, C. J.; Mathar, R. J.; Mirabet, E.; Montes, D.; Moreno-Raya, M. E.; Moya, A.; Mundt, R.; Nagel, E.; Naranjo, V.; Nortmann, L.; Nowak, G.; Ofir, A.; Oreiro, R.; Pallé, E.; Panduro, J.; Pascual, J.; Passegger, V. M.; Pavlov, A.; Pedraz, S.; Pérez-Calpena, A.; Medialdea, D. Pérez; Perger, M.; Perryman, M. A. C.; Pluto, M.; Rabaza, O.; Ramón, A.; Rebolo, R.; Redondo, P.; Reffert, S.; Reinhart, S.; Rhode, P.; Rix, H.-W.; Rodler, F.; Rodríguez, E.; Rodríguez-López, C.; Rodríguez Trinidad, A.; Rohloff, R.-R.; Rosich, A.; Sadegi, S.; Sánchez-Blanco, E.; Sánchez Carrasco, M. A.; Sánchez-López, A.; Sanz-Forcada, J.; Sarkis, P.; Sarmiento, L. F.; Schäfer, S.; Schmitt, J. H. M. M.; Schiller, J.; Schweitzer, A.; Solano, E.; Stahl, O.; Strachan, J. B. P.; Stürmer, J.; Suárez, J. C.; Tabernero, H. M.; Tala, M.; Trifonov, T.; Tulloch, S. M.; Ulbrich, R. G.; Veredas, G.; Vico Linares, J. I.; Vilardell, F.; Wagner, K.; Winkler, J.; Wolthoff, V.; Xu, W.; Yan, F.; Zapatero Osorio, M. R.

    2018-04-01

    The CARMENES radial velocity (RV) survey is observing 324 M dwarfs to search for any orbiting planets. In this paper, we present the survey sample by publishing one CARMENES spectrum for each M dwarf. These spectra cover the wavelength range 520-1710 nm at a resolution of at least R > 80 000, and for each star we measure the RV, Hα emission, and projected rotation velocity. We present an atlas of high-resolution M-dwarf spectra and compare the spectra to atmospheric models. To quantify the RV precision that can be achieved in low-mass stars over the CARMENES wavelength range, we analyze our empirical information on the RV precision from more than 6500 observations. We compare our high-resolution M-dwarf spectra to atmospheric models, from which we determine the spectroscopic RV information content, Q, and the signal-to-noise ratio. We find that for all M-type dwarfs, the highest RV precision can be reached in the wavelength range 700-900 nm. Observations at longer wavelengths are equally precise only at the very latest spectral types (M8 and M9). We demonstrate that in this spectroscopic range, the large number of absorption features compensates for the intrinsic faintness of an M7 star. To reach an RV precision of 1 m s-1 in very low mass M dwarfs at longer wavelengths likely requires the use of a 10 m class telescope. For spectral types M6 and earlier, the combination of a red visual and a near-infrared spectrograph is ideal to search for low-mass planets and to distinguish between planets and stellar variability. At a 4 m class telescope, an instrument like CARMENES has the potential to push the RV precision well below the typical jitter level of 3-4 m s-1.
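
    The link between the information content Q, the photon count, and the achievable RV precision referenced above follows the standard photon-noise relation σ_RV = c / (Q √N_e) (Bouchy et al. 2001); a minimal sketch with illustrative numbers:

        # Photon-noise RV precision; Q and Ne are illustrative values.
        C = 2.998e8  # speed of light, m/s

        def rv_precision(q_factor, n_photons):
            """Photon-limited RV uncertainty in m/s for quality factor Q and Ne photons."""
            return C / (q_factor * n_photons ** 0.5)

        print(f"sigma_RV = {rv_precision(q_factor=2e4, n_photons=2.25e8):.2f} m/s")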

  15. Time-calibrated Milankovitch cycles for the late Permian.

    PubMed

    Wu, Huaichun; Zhang, Shihong; Hinnov, Linda A; Jiang, Ganqing; Feng, Qinglai; Li, Haiyan; Yang, Tianshui

    2013-01-01

    An important innovation in the geosciences is the astronomical time scale. The astronomical time scale is based on Milankovitch-forced stratigraphy that has been calibrated to astronomical models of paleoclimate forcing; it is defined for much of the Cenozoic-Mesozoic. For the Palaeozoic era, however, astronomical forcing has not been widely explored because of a lack of high-precision geochronology or astronomical modelling. Here we report Milankovitch cycles from late Permian (Lopingian) strata at Meishan and Shangsi, South China, time-calibrated by recent high-precision U-Pb dating. The evidence extends empirical knowledge of Earth's astronomical parameters to before 250 million years ago. Observed obliquity and precession terms support a 22-h length-of-day. The reconstructed astronomical time scale indicates a 7.793-million-year duration for the Lopingian epoch, in which strong 405-kyr cycles constrain the astronomical modelling. This is the first significant advance in defining the Palaeozoic astronomical time scale, anchored to absolute time, bridging the Palaeozoic-Mesozoic transition.

  16. 92 Years of the Ising Model: A High Resolution Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.

    2018-04-01

    Using extensive Monte Carlo simulations that employ Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86), with precision that improves upon previous Monte Carlo estimates.
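
    A minimal sketch in Python of a single Wolff cluster update, shown on a 2D lattice for brevity (the study itself uses the 3D simple cubic model):

        import numpy as np

        def wolff_step(spins, beta, rng):
            """Grow and flip one Wolff cluster; bonds added with p = 1 - exp(-2*beta)."""
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-2.0 * beta)
            i, j = rng.integers(L, size=2)
            seed_spin = spins[i, j]
            cluster = {(i, j)}
            stack = [(i, j)]
            while stack:
                x, y = stack.pop()
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = (x + dx) % L, (y + dy) % L      # periodic boundaries
                    if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                            and rng.random() < p_add:
                        cluster.add((nx, ny))
                        stack.append((nx, ny))
            for x, y in cluster:
                spins[x, y] *= -1
            return len(cluster)

        rng = np.random.default_rng(5)
        L, beta = 32, 0.4406868                    # beta near the 2D critical point
        spins = rng.choice([-1, 1], size=(L, L))
        for _ in range(200):
            wolff_step(spins, beta, rng)
        print("magnetization per spin:", abs(spins.sum()) / L**2)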

  17. Precision Timing of PSR J0437-4715: An Accurate Pulsar Distance, a High Pulsar Mass, and a Limit on the Variation of Newton's Gravitational Constant

    NASA Astrophysics Data System (ADS)

    Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.

    2008-05-01

    Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10⁻¹² yr⁻¹. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10⁻¹⁸ s⁻¹ at 95% confidence, and derive a pulsar mass, mpsr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
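
    The kinematic distance works because transverse motion produces an apparent orbital-period derivative, Ṗb/Pb = μ²D/c (the Shklovskii effect), which can be inverted for D. A minimal sketch with approximate published values for PSR J0437-4715 (illustrative inputs, not the paper's exact numbers):

        import numpy as np

        C = 2.998e8                                   # speed of light, m/s
        MAS_YR = np.pi / (180 * 3600e3) / 3.156e7     # mas/yr -> rad/s
        PC = 3.086e16                                 # metres per parsec

        Pb = 5.741 * 86400        # orbital period, s
        mu = 141.0 * MAS_YR       # total proper motion, rad/s (approximate)
        Pb_dot = 3.73e-12         # apparent orbital period derivative (approximate)

        D = C * Pb_dot / (Pb * mu ** 2)               # invert Pb_dot/Pb = mu^2 D / c
        print(f"kinematic distance ~ {D / PC:.0f} pc")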

  18. Real-Time Precise Point Positioning (RTPPP) with raw observations and its application in real-time regional ionospheric VTEC modeling

    NASA Astrophysics Data System (ADS)

    Liu, Teng; Zhang, Baocheng; Yuan, Yunbin; Li, Min

    2018-01-01

    Precise Point Positioning (PPP) is an absolute positioning technology mainly used in post-processing. With the continuously increasing demand for real-time high-precision applications in positioning, timing, retrieval of atmospheric parameters, etc., Real-Time PPP (RTPPP) and its applications have drawn more and more research attention in recent years. This study focuses on the models, algorithms, and ionospheric applications of RTPPP on the basis of raw observations, in which high-precision slant ionospheric delays are estimated among others in real time. For this purpose, a robust processing strategy for multi-station RTPPP with raw observations has been proposed and realized, in which real-time data streams and State-Space-Representative (SSR) satellite orbit and clock corrections are used. With the RTPPP-derived slant ionospheric delays from a regional network, a real-time regional ionospheric Vertical Total Electron Content (VTEC) modeling method is proposed based on Adjusted Spherical Harmonic Functions and a Moving-Window Filter. SSR satellite orbit and clock corrections from different IGS analysis centers are evaluated. Ten globally distributed real-time stations are used to evaluate the positioning performance of the proposed RTPPP algorithms in both static and kinematic modes. RMS values of positioning errors in static/kinematic mode are 5.2/15.5, 4.7/17.4 and 12.8/46.6 mm, for north, east and up components, respectively. Real-time slant ionospheric delays from RTPPP are compared with those from the traditional Carrier-to-Code Leveling (CCL) method, in terms of function model, formal precision, and between-receiver differences over a short baseline. Results show that slant ionospheric delays from RTPPP are more precise and have much better convergence performance than those from the CCL method in real-time processing. 30 real-time stations from the Asia-Pacific Reference Frame network are used to model the ionospheric VTECs over Australia in real time, with slant ionospheric delays from both RTPPP and CCL methods for comparison. RMS of the VTEC differences between the RTPPP/CCL method and CODE final products is 0.91/1.09 TECU, and RMS of the VTEC differences between the RTPPP and CCL methods is 0.67 TECU. Slant Total Electron Contents retrieved from different VTEC models are also validated with epoch-differenced Geometry-Free combinations of dual-frequency phase observations, and mean RMS values are 2.14, 2.33 and 2.07 TECU for the RTPPP method, the CCL method, and CODE final products, respectively. This shows the superiority of RTPPP-derived slant ionospheric delays in real-time ionospheric VTEC modeling.
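
    For context, converting a slant ionospheric delay into the vertical TEC that such a model ingests typically uses the standard single-layer mapping function; a minimal sketch in Python with an illustrative shell height:

        import numpy as np

        def slant_to_vertical_tec(stec, zenith_deg, R=6371.0, H=450.0):
            """Map slant TEC to vertical TEC through a thin shell at height H (km)."""
            z = np.radians(zenith_deg)
            zp = np.arcsin(R / (R + H) * np.sin(z))   # zenith angle at the pierce point
            return stec * np.cos(zp)

        print(slant_to_vertical_tec(stec=30.0, zenith_deg=60.0))  # TECU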

  19. Efficient exploration of cosmology dependence in the EFT of LSS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo, E-mail: matteoc@dark-cosmology.dk, E-mail: sfore@stanford.edu, E-mail: senatore@stanford.edu

    The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. The ideas and codes we present may easily be extended for other applications or higher-precision results.

  20. Efficient exploration of cosmology dependence in the EFT of LSS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo

    The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. Finally, the ideas and codes we present may easily be extended for other applications or higher-precision results.

  1. Efficient exploration of cosmology dependence in the EFT of LSS

    DOE PAGES

    Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo

    2017-04-18

    The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. Finally, the ideas and codes we present may easily be extended for other applications or higher-precision results.

  2. Development of 3-axis precise positioning seismic physical modeling system in the simulation of marine seismic exploration

    NASA Astrophysics Data System (ADS)

    Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.

    2017-12-01

    Seismic physical modeling is a laboratory-scale experiment that deals with the actual, physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled; for this reason, even a small error may translate into a large error in the actual field, so the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precise 3-axis position control. For automatic and precise position control of an ultrasonic transducer (source and receiver) along the three axes (x, y, and z), a motor was mounted on each axis. The motors control the positions automatically with a precision of 2″ for the x and y axes and 0.05 mm for the z axis. Because positions along all three axes can be controlled automatically and precisely, simulations can be carried out using the latest exploration techniques, such as OBS and broadband seismic. For signal generation, a waveform generator that can produce a maximum of two sources was used, and for data acquisition, which receives and stores the reflected signals, an A/D converter that can receive a maximum of four signals was used. As multiple sources and receivers can be used at the same time, the system supports diverse exploration methods, such as single-channel, multichannel, and 3-D exploration. A computer control program based on LabVIEW was created to control the position of the transducer, set the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high. To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single-channel and multichannel explorations were carried out in the simulated marine environment, and the accuracy of the modeling system was verified by comparative analysis of the acquired exploration data and numerical modeling data.

  3. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    PubMed Central

    Liu, Yanbin; Xiao, Dibo; Lu, Yuping

    2014-01-01

    With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms of FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure, and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed to satisfy the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions on their engineering applications are proposed to help future research. PMID:24859025

  4. Comparative study on a solving model and algorithm for a flush air data sensing system.

    PubMed

    Liu, Yanbin; Xiao, Dibo; Lu, Yuping

    2014-05-23

    With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms of FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure, and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed to satisfy the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions on their engineering applications are proposed to help future research.

  5. The Reference Elevation Model of Antarctica (REMA): A High Resolution, Time-Stamped Digital Elevation Model for the Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    Howat, I.; Noh, M. J.; Porter, C. C.; Smith, B. E.; Morin, P. J.

    2017-12-01

    We are creating the Reference Elevation Model of Antarctica (REMA), a continuous, high resolution (2-8 m), high precision (accuracy better than 1 m) reference surface for a wide range of glaciological and geodetic applications. REMA will be constructed from stereo-photogrammetric Digital Surface Models (DSM) extracted from pairs of submeter resolution DigitalGlobe satellite imagery and vertically registered to precise elevations from near-coincident airborne LiDAR, ground-based GPS surveys, and CryoSat-2 radar altimetry. Both a seamless mosaic and individual, time-stamped DSM strips, collected primarily between 2012 and 2016, will be distributed to enable change measurement. These data will be used for mapping bed topography from ice thickness, measuring ice thickness changes, constraining ice flow and geodynamic models, mapping glacial geomorphology, terrain corrections and filtering of remote sensing observations, and many other science tasks. It will also be critical for mapping ice traverse routes, landing sites, and other field logistics planning. REMA will also provide a critical elevation benchmark for future satellite altimetry missions including ICESat-2. Here we report on REMA production progress, initial accuracy assessment, and data availability.
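
    At its simplest, vertical registration of a DSM strip to reference altimetry amounts to removing the elevation bias estimated at control points. A minimal sketch in Python, assuming co-located DSM and reference heights are already extracted (a stand-in for the full registration, which also handles tilt and outlier rejection):

        import numpy as np

        def vertical_offset(dsm_heights, ref_heights):
            """Median height difference at control points; the median is robust
            to outliers from clouds or surface change."""
            return np.median(np.asarray(ref_heights) - np.asarray(dsm_heights))

        # Toy control points: DSM biased 1.8 m low relative to altimetry
        rng = np.random.default_rng(6)
        ref = rng.uniform(100, 200, size=500)
        dsm = ref - 1.8 + rng.normal(0, 0.3, size=500)
        print(f"vertical offset to apply: {vertical_offset(dsm, ref):+.2f} m")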

  6. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that introduces the previously unconsidered geomagnetic daily variation field. The paper proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experiment results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its ability to remove the dependence on a high-precision measurement instrument. PMID:28445508
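
    A minimal sketch in Python of an EKF-style measurement update used for parameter estimation; the two-parameter scale-and-bias measurement model below is an illustrative stand-in, not the paper's actual error model:

        import numpy as np

        def ekf_update(x, P, z, h, H, R):
            """One EKF measurement update for a static parameter vector x."""
            y = z - h(x)                          # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
            x = x + K @ y
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # Toy model: measurement = k * field + b, with x = [k, b] to estimate
        k_true, b_true = 1.02, 120.0              # scale error and bias (nT)
        x, P, R = np.array([1.0, 0.0]), np.eye(2) * 1e4, np.array([[25.0]])
        rng = np.random.default_rng(7)
        for F in rng.uniform(30000.0, 60000.0, size=300):   # varying field, nT
            h = lambda x, F=F: np.array([x[0] * F + x[1]])
            H = np.array([[F, 1.0]])              # Jacobian of the linear model
            z = np.array([k_true * F + b_true + rng.normal(0, 5.0)])
            x, P = ekf_update(x, P, z, h, H, R)
        print("estimated scale and bias:", np.round(x, 3))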

  7. DFB laser array driver circuit controlled by adjustable signal

    NASA Astrophysics Data System (ADS)

    Du, Weikang; Du, Yinchao; Guo, Yu; Li, Wei; Wang, Hao

    2018-01-01

    In order to achieve intelligent control of a DFB laser array, this paper presents the design of an intelligent, high-precision, numerically controlled drive circuit. The system takes an MCU and an FPGA as the main control chips and is compact and highly efficient, with surge-free switching protection. The output of the DFB laser array can be determined by an external adjustable signal. The system transforms the analog control model into a digital control model, which improves the performance of the driver, and it can monitor the temperature and current of the DFB laser array in real time. The output precision of the current can reach ±0.1 mA, which ensures the stable and reliable operation of the DFB laser array. Such a driver can benefit the flexible usage of the DFB laser array.

  8. A Flexure-Based Mechanism for Precision Adjustment of National Ignition Facility Target Shrouds in Three Rotational Degrees of Freedom

    DOE PAGES

    Boehm, K. -J.; Gibson, C. R.; Hollaway, J. R.; ...

    2016-09-01

    This study presents the design of a flexure-based mount allowing adjustment in three rotational degrees of freedom (DOFs) through high-precision set-screw actuators. The requirements of the application called for small but controlled angular adjustments for mounting a cantilevered beam. The proposed design is based on an array of parallel beams to provide sufficiently high stiffness in the translational directions while allowing angular adjustment through the actuators. A simplified physical model in combination with standard beam theory was applied to estimate the deflection profile and maximum stresses in the beams. A finite element model was built to calculate the stresses and beam profiles for scenarios in which the flexure is simultaneously actuated in more than one DOF.
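
    The standard beam-theory relations behind such a simplified model are compact enough to sketch: for n fixed-guided parallel beams of width w, thickness t, and length L, the translational stiffness is k = 12nEI/L³ and the peak bending stress at tip deflection δ is σ = 3Etδ/L². The dimensions below are illustrative, not the actual design values:

        E = 200e9                      # Young's modulus, Pa (assume steel)
        w, t, L = 10e-3, 1e-3, 40e-3   # beam width, thickness, length, m
        n = 4                          # number of parallel beams

        I = w * t ** 3 / 12                     # second moment of area
        k = n * 12 * E * I / L ** 3             # translational stiffness, N/m
        delta = 0.2e-3                          # imposed deflection, m
        sigma = 3 * E * t * delta / L ** 2      # peak bending stress, Pa
        print(f"stiffness {k / 1e3:.1f} kN/m, peak stress {sigma / 1e6:.0f} MPa")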

  9. EMRinger: side chain–directed model and map validation for 3D cryo-electron microscopy

    DOE PAGES

    Barad, Benjamin A.; Echols, Nathaniel; Wang, Ray Yu-Ruei; ...

    2015-08-17

    Advances in high-resolution cryo-electron microscopy (cryo-EM) require the development of validation metrics to independently assess map quality and model geometry. We report EMRinger, a tool that assesses the precise fitting of an atomic model into the map during refinement and shows how radiation damage alters scattering from negatively charged amino acids. EMRinger (https://github.com/fraser-lab/EMRinger) will be useful for monitoring progress in resolving and modeling high-resolution features in cryo-EM.

  10. High precision measurements on fission-fragment de-excitation

    NASA Astrophysics Data System (ADS)

    Oberstedt, Stephan; Gatera, Angélique; Geerts, Wouter; Göök, Alf; Hambsch, Franz-Josef; Vidali, Marzio; Oberstedt, Andreas

    2017-11-01

    In recent years nuclear fission has gained renewed interest both from the nuclear energy community and in basic science. The former, represented by the OECD Nuclear Energy Agency, has expressed the need for more accurate fission cross-section and fragment yield data for safety assessments of Generation IV reactor systems. In basic science, modelling has made much progress in describing the de-excitation mechanism of neutron-rich isotopes, e.g. those produced in nuclear fission. Benchmarking the different models requires precise experimental data on prompt fission neutron and γ-ray emission, e.g. multiplicity, average energy per particle, and total dissipated energy per fission, preferably as a function of fission-fragment mass and total kinetic energy. A collaboration of scientists from JRC Geel (formerly known as JRC IRMM) and other institutes took the lead in establishing a dedicated measurement programme on prompt fission neutron and γ-ray characteristics, which has triggered even more measurement activities around the world. This contribution presents the new advanced instrumentation and methodology we use to generate high-precision spectral data and gives a flavour of future data needs and opportunities.

  11. Spectroscopic Factors from the Single Neutron Pickup ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Towner, I. S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.

    2010-11-01

    A great deal of attention has recently been paid towards high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision of the individual Ft values is now dominated by the ~1% theoretical corrections. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for the identification of the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented and implications for calculations of ISB corrections in the superallowed β decay of ^62Ga will be discussed.

  12. A Precise Physical Orbit for the M-dwarf Binary Gliese 268

    NASA Astrophysics Data System (ADS)

    Barry, R. K.; Demory, B.-O.; Ségransan, D.; Forveille, T.; Danchi, W. C.; Di Folco, E.; Queloz, D.; Spooner, H. R.; Torres, G.; Traub, W. A.; Delfosse, X.; Mayor, M.; Perrier, C.; Udry, S.

    2012-11-01

    We report high-precision interferometric and radial velocity (RV) observations of the M-dwarf binary Gl 268. Combining measurements conducted using the IOTA interferometer and the ELODIE and Harvard Center for Astrophysics RV instruments leads to a mass of 0.22596 ± 0.00084 M⊙ for component A and 0.19230 ± 0.00071 M⊙ for component B. The system parallax as determined by these observations is 0.1560 ± 0.0030 arcsec, a measurement with 1.9% uncertainty, in excellent agreement with Hipparcos (0.1572 ± 0.0033). The absolute H-band magnitudes of the component stars are not well constrained by these measurements; however, we can place an approximate upper limit of 7.95 and 8.1 for Gl 268A and B, respectively. We test these physical parameters against the predictions of theoretical models that combine stellar evolution with high fidelity, non-gray atmospheric models. Measured and predicted values are compatible within 2σ. These results are among the most precise masses measured for visual binaries and compete with the best adaptive optics and eclipsing binary results.

  13. Measuring atmospheric density using GPS-LEO tracking data

    NASA Astrophysics Data System (ADS)

    Kuang, D.; Desai, S.; Sibthorpe, A.; Pi, X.

    2014-01-01

    We present a method to estimate the total neutral atmospheric density from precise orbit determination of Low Earth Orbit (LEO) satellites. We derive the total atmospheric density by determining the drag force acting on the LEOs through centimeter-level reduced-dynamic precise orbit determination (POD) using onboard Global Positioning System (GPS) tracking data. The precision of the estimated drag accelerations is assessed using various metrics, including differences between estimated along-track accelerations from consecutive 30-h POD solutions which overlap by 6 h, comparison of the resulting accelerations with accelerometer measurements, and comparison against an existing atmospheric density model, DTM-2000. We apply the method to GPS tracking data from the CHAMP, GRACE, SAC-C, Jason-2, TerraSAR-X and COSMIC satellites, spanning 12 years (2001-2012) and covering orbital heights from 400 km to 1300 km. Errors in the estimates, including those introduced by deficiencies in other modeled forces (such as solar radiation pressure and Earth radiation pressure), are evaluated, and the signal and noise levels for each satellite are analyzed. The estimated density data from CHAMP, GRACE, SAC-C and TerraSAR-X are identified as having high signal and low noise levels. These data all have high correlations with a nominal atmospheric density model and show common features in relative residuals with respect to the nominal model in related parameter space. On the contrary, the estimated density data from COSMIC and Jason-2 show errors larger than the actual signal at corresponding altitudes and thus have little practical value for this study. The results demonstrate that this method is applicable to data from a variety of missions and can provide useful total neutral density measurements for atmospheric study up to altitudes as high as 715 km, with precision and resolution between those derived from traditional special orbital perturbation analysis and those obtained from onboard accelerometers.
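
    Recovering density from an estimated drag acceleration is an inversion of the drag law a = ½ρC_d(A/m)v²; a minimal sketch in Python with illustrative spacecraft parameters (not the missions' actual values):

        def density_from_drag(a_drag, mass, cd, area, v_rel):
            """Invert a = 0.5 * rho * Cd * (A/m) * v^2 for the density rho."""
            return 2.0 * mass * a_drag / (cd * area * v_rel ** 2)

        rho = density_from_drag(a_drag=2e-7,       # m/s^2, estimated along-track drag
                                mass=500.0,        # kg
                                cd=2.3, area=1.0,  # drag coefficient, cross-section m^2
                                v_rel=7600.0)      # m/s relative to the atmosphere
        print(f"density = {rho:.3e} kg/m^3")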

  14. On the recovery of gravity anomalies from high precision altimeter data

    NASA Technical Reports Server (NTRS)

    Lelgemann, D.

    1976-01-01

A model for the recovery of gravity anomalies from high precision altimeter data is derived which consists of small correction terms to the inverse Stokes' formula. The influence of unknown sea surface topography in the case of meandering currents such as the Gulf Stream is discussed. A formula is derived to estimate the accuracy of the gravity anomalies from the known accuracy of the altimeter data. It is shown that, when lower-order harmonic coefficients are known, the range of integration in the inverse Stokes formula can be reduced substantially.
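    For orientation, the uncorrected core of such a model is the inverse Stokes formula relating geoid undulations N to gravity anomalies Δg (a standard textbook form with mean Earth radius R, normal gravity γ, and spherical distance ψ; the paper's contribution is the small correction terms, which are not reproduced here):

    ```latex
    \Delta g_P = -\frac{\gamma}{R}\,N_P
      - \frac{\gamma}{16\pi R}\iint_{\sigma}\frac{N - N_P}{\sin^{3}(\psi/2)}\,d\sigma
    ```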

  15. A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.

    PubMed

    Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian

    2018-01-19

This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. Air leakage caused by the high pressure in the headspace vial during the sampling process has a great impact on measurement precision in conventional headspace analysis (i.e., the single sealing technique). The results (using ethanol solution as the model sample) show that the present technique effectively minimizes this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that of earlier HS-GC work that used the conventional single sealing technique. The present double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or use of all features, with the better correlation between the latter two showing no general trend, but differing for different models.
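    The variance decomposition invoked here is the elementary identity for the deviation between the estimated error ε̂ and the true error ε, with ρ their correlation:

    ```latex
    \operatorname{Var}(\hat{\varepsilon}-\varepsilon)
      = \operatorname{Var}(\hat{\varepsilon}) + \operatorname{Var}(\varepsilon)
      - 2\rho\,\sqrt{\operatorname{Var}(\hat{\varepsilon})\,\operatorname{Var}(\varepsilon)}
    ```

    so even a low-variance error estimator becomes imprecise once ρ collapses, which is the decorrelation effect the paper documents.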

  17. Validating spatiotemporal predictions of an important pest of small grains.

    PubMed

    Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J

    2015-01-01

    Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.

  18. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    PubMed

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01), with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
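    The bias and precision figures quoted here are standard relative-error summaries. A minimal sketch of how such metrics are computed (hypothetical AUC values for illustration only; the study's estimates came from Bayesian fits of the NONMEM model):

    ```python
    import numpy as np

    def bias_precision(auc_full, auc_sparse):
        """Mean prediction error (bias, %) and RMSE (precision, %) of
        sparse-sampling AUC estimates relative to full-profile estimates."""
        rel_err = (auc_sparse - auc_full) / auc_full * 100.0
        return rel_err.mean(), np.sqrt((rel_err**2).mean())

    # Hypothetical per-patient AUCs (mg*h/L), for illustration only:
    full = np.array([8.1, 6.4, 9.9, 7.2, 8.8])
    sparse = np.array([7.9, 6.3, 9.6, 7.4, 8.5])
    bias, prec = bias_precision(full, sparse)
    print(f"bias = {bias:+.1f}%, precision (RMSE) = {prec:.1f}%")
    ```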

  19. An absolute calibration system for millimeter-accuracy APOLLO measurements

    NASA Astrophysics Data System (ADS)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

Lunar laser ranging provides a number of leading experimental tests of gravitation, important in our quest to unify general relativity and the standard model of physics. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has for years achieved median range precision at the ∼2 mm level. Yet residuals in model-measurement comparisons are an order of magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or whether the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short (< 10 ps) pulses that are locked to a cesium clock. In essence, the ACS delivers photons to the APOLLO detector at exquisitely well-defined time intervals as a 'truth' input against which APOLLO's timing performance may be judged and corrected. Preliminary analysis indicates no inaccuracies in APOLLO data beyond the ∼3 mm level, suggesting that historical APOLLO data are of high quality and motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.
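    For scale, the timing requirement implied by millimeter ranging follows from the two-way light path:

    ```latex
    \Delta t = \frac{2\,\Delta d}{c}
             \approx \frac{2 \times (2\ \mathrm{mm})}{3\times 10^{8}\ \mathrm{m\,s^{-1}}}
             \approx 13\ \mathrm{ps}
    ```

    so a 2 mm median range precision corresponds to photon timing at roughly the 13 ps level, which is the regime the clock-locked 80 MHz pulse train is designed to check.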

  20. Stereotactic Body Radiation Therapy Delivery in a Genetically Engineered Mouse Model of Lung Cancer.

    PubMed

    Du, Shisuo; Lockamy, Virginia; Zhou, Lin; Xue, Christine; LeBlanc, Justin; Glenn, Shonna; Shukla, Gaurav; Yu, Yan; Dicker, Adam P; Leeper, Dennis B; Lu, You; Lu, Bo

    2016-11-01

    To implement clinical stereotactic body radiation therapy (SBRT) using a small animal radiation research platform (SARRP) in a genetically engineered mouse model of lung cancer. A murine model of multinodular Kras-driven spontaneous lung tumors was used for this study. High-resolution cone beam computed tomography (CBCT) imaging was used to identify and target peripheral tumor nodules, whereas off-target lung nodules in the contralateral lung were used as a nonirradiated control. CBCT imaging helps localize tumors, facilitate high-precision irradiation, and monitor tumor growth. SBRT planning, prescription dose, and dose limits to normal tissue followed the guidelines set by RTOG protocols. Pathologic changes in the irradiated tumors were investigated using immunohistochemistry. The image guided radiation delivery using the SARRP system effectively localized and treated lung cancer with precision in a genetically engineered mouse model of lung cancer. Immunohistochemical data confirmed the precise delivery of SBRT to the targeted lung nodules. The 60 Gy delivered in 3 weekly fractions markedly reduced the proliferation index, Ki-67, and increased apoptosis per staining for cleaved caspase-3 in irradiated lung nodules. It is feasible to use the SARRP platform to perform dosimetric planning and delivery of SBRT in mice with lung cancer. This allows for preclinical studies that provide a rationale for clinical trials involving SBRT, especially when combined with immunotherapeutics. Copyright © 2016. Published by Elsevier Inc.

  1. Efficient Generation of Gene-Modified Pigs Harboring Precise Orthologous Human Mutation via CRISPR/Cas9-Induced Homology-Directed Repair in Zygotes.

    PubMed

    Zhou, Xiaoyang; Wang, Lulu; Du, Yinan; Xie, Fei; Li, Liang; Liu, Yu; Liu, Chuanhong; Wang, Shiqiang; Zhang, Shibing; Huang, Xingxu; Wang, Yong; Wei, Hong

    2016-01-01

Precise genetic mutation of model animals is highly valuable for functional investigation of human mutations. Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated 9 (Cas9)-induced homology-directed repair (HDR) is usually used for precise genetic mutation, but is limited by its relatively low efficiency compared with that of non-homologous end joining (NHEJ). Although inhibition of NHEJ has been shown to enhance HDR-derived mutation, in this work, without inhibition of NHEJ, we first generated gene-modified pigs harboring the precise orthologous human mutation (Sox10 c.A325>T) via CRISPR/Cas9-induced HDR in zygotes, using single-strand oligo DNA (ssODN) as template, with an efficiency as high as 80%, indicating that pig zygotes exhibit high activity of HDR relative to NHEJ and are highly amenable to genetic mutation via CRISPR/Cas9-induced HDR. In addition, we found that a higher concentration of ssODN markedly reduced HDR-derived mutation in pig zygotes, suggesting that optimal HDR-derived mutation in zygotes reflects a balance between excessive accessibility of HDR templates and the activity of HDR relative to NHEJ, which appeared to be negatively correlated with ssODN concentration. Furthermore, the HDR-derived mutations, as well as those from NHEJ, were extensively present in various tissues, including the gonads, of founder pigs, with no off-target effects detected, suggesting that CRISPR/Cas9-induced HDR in zygotes is a reliable approach for precise genetic mutation in pigs. © 2015 WILEY PERIODICALS, INC.

  2. A High-Level Language for Modeling Algorithms and Their Properties

    NASA Astrophysics Data System (ADS)

    Akhtar, Sabina; Merz, Stephan; Quinson, Martin

Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the TLC model checker.

  3. Effects of Prepolymerized Particle Size and Polymerization Kinetics on Volumetric Shrinkage of Dental Modeling Resins

    PubMed Central

    Ha, Jung-Yun; Chun, Ju-Na; Son, Jun Sik; Kim, Kyo-Han

    2014-01-01

Dental modeling resins have been developed for use in areas where highly precise resin structures are needed. The manufacturers claim that these polymethyl methacrylate/methyl methacrylate (PMMA/MMA) resins show little or no shrinkage after polymerization. This study examined the polymerization shrinkage of five dental modeling resins as well as one temporary PMMA/MMA resin (control). The morphology and the particle size of the prepolymerized PMMA powders were investigated by scanning electron microscopy and laser diffraction particle size analysis, respectively. Linear polymerization shrinkage strains of the resins were monitored for 20 minutes using a custom-made linometer, and the final values (at 20 minutes) were converted into volumetric shrinkages. The final volumetric shrinkage values for the modeling resins were statistically similar (P > 0.05) or significantly larger (P < 0.05) than that of the control resin and were related to the polymerization kinetics (P < 0.05) rather than the PMMA bead size (P = 0.335). Therefore, optimal control of the polymerization kinetics appears to be more important for producing high-precision resin structures than the choice of dental modeling resin. PMID:24779020
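    For reference, the linear-to-volumetric conversion used in such shrinkage studies follows from isotropic contraction (a standard relation; the abstract does not state the authors' exact formula), with ε_L the measured linear shrinkage strain:

    ```latex
    \frac{\Delta V}{V} = 1 - (1 - \varepsilon_L)^{3} \approx 3\,\varepsilon_L
    \qquad (\varepsilon_L \ll 1)
    ```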

  4. Gaussian signal relaxation around spin echoes: Implications for precise reversible transverse relaxation quantification of pulmonary tissue at 1.5 and 3 Tesla.

    PubMed

    Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R

    2017-05-01

To characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic exponential (Gaussian) model over the commonly used linear exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit, using the reduced χ², and the precision of the fitted relaxation time, by means of its confidence interval, were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation, with lower reduced χ² by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration, with increased precision of reversible transverse relaxation time measurements, also by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
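    Schematically, the two competing models differ only in the power of the decay argument (generic textbook forms for the signal magnitude at offset t from the spin echo; normalization and sign conventions simplified relative to the paper):

    ```latex
    S_{\text{lin}}(t) = S_0\, e^{-|t|/T_2'}
    \qquad\text{vs.}\qquad
    S_{\text{Gauss}}(t) = S_0\, e^{-\left(t/T_{2,G}'\right)^{2}}
    ```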

  5. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, derived from Taylor expansion, can provide approximate results with high computational efficiency and have proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods for solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method, along with the M-slope approach and the first order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second order extrapolation approach for the multiphase-field model.
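    The extrapolation in question is a truncated Taylor expansion of the driving force about a reference composition c0, so the thermodynamic database need only be queried at c0 (schematic single-composition form; the multicomponent case expands in all solute concentrations):

    ```latex
    \Delta G(c) \approx \Delta G(c_0)
      + \left.\frac{\partial \Delta G}{\partial c}\right|_{c_0}(c-c_0)
      + \frac{1}{2}\left.\frac{\partial^{2} \Delta G}{\partial c^{2}}\right|_{c_0}(c-c_0)^{2}
    ```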

  6. High-Precision Plutonium Isotopic Compositions Measured on Los Alamos National Laboratory’s General’s Tanks Samples: Bearing on Model Ages, Reactor Modelling, and Sources of Material. Further Discussion of Chronometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Khalil J.; Rim, Jung Ho; Porterfield, Donivan R.

    2015-06-29

In this study, we re-analyzed late-1940's, Manhattan Project era plutonium-rich sludge samples recovered from the "General's Tanks" located within the nation's oldest plutonium processing facility, Technical Area 21. These samples were initially characterized by lower-accuracy, lower-precision mass spectrometric techniques. We report here information that was previously not discernible: the two tanks contain isotopically distinct Pu, not only in the major isotopes (i.e., 240Pu, 239Pu) but also in the trace isotopes (238Pu, 241Pu, 242Pu). Revised isotopics slightly changed the calculated 241Am-241Pu model ages and interpretations.
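    For readers unfamiliar with the 241Am-241Pu chronometer: with decay constants λ_Pu > λ_Am and R the measured 241Am/241Pu atom ratio, the model age under the standard assumption of zero initial 241Am follows from two-member decay (a textbook result, not a formula quoted from the report):

    ```latex
    t = \frac{1}{\lambda_{\mathrm{Pu}} - \lambda_{\mathrm{Am}}}
        \ln\!\left(1 + \frac{\lambda_{\mathrm{Pu}} - \lambda_{\mathrm{Am}}}{\lambda_{\mathrm{Pu}}}\, R\right)
    ```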

  7. Omics-Based Strategies in Precision Medicine: Toward a Paradigm Shift in Inborn Errors of Metabolism Investigations

    PubMed Central

    Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya

    2016-01-01

    The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era. PMID:27649151

  8. Omics-Based Strategies in Precision Medicine: Toward a Paradigm Shift in Inborn Errors of Metabolism Investigations.

    PubMed

    Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya

    2016-09-14

    The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era.

  9. Challenging the standard model by high-precision comparisons of the fundamental properties of protons and antiprotons

    NASA Astrophysics Data System (ADS)

    Ulmer, S.; Mooser, A.; Nagahama, H.; Sellner, S.; Smorra, C.

    2018-03-01

The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge-parity-time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'.

  10. Challenging the standard model by high-precision comparisons of the fundamental properties of protons and antiprotons.

    PubMed

    Ulmer, S; Mooser, A; Nagahama, H; Sellner, S; Smorra, C

    2018-03-28

The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge-parity-time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'. © 2018 The Authors.

  11. Challenging the standard model by high-precision comparisons of the fundamental properties of protons and antiprotons

    PubMed Central

Ulmer, S.; Mooser, A.; Nagahama, H.; Sellner, S.; Smorra, C.

    2018-01-01

    The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge–parity–time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue ‘Antiproton physics in the ELENA era’. PMID:29459414

  12. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for distortion correction come from stereo images of a calibration sample. The geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting methods are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed, a method of disparity distortion correction is proposed, and polynomial fitting is applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of an initial vision model and a residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points; the residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates, whereas the pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
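    The distortion-correction steps above are, at heart, polynomial regressions between observed and reference coordinates. A minimal 1-D sketch of that idea (synthetic distortion, numpy only; the paper fits 2-D calibration grids):

    ```python
    import numpy as np

    # Reference (undistorted) grid positions and their distorted observations.
    x_ref = np.linspace(-1.0, 1.0, 21)
    x_obs = x_ref + 0.05 * x_ref**3 + 0.01 * x_ref**2   # synthetic distortion

    # Fit a cubic mapping observed -> reference coordinates, then apply it.
    coeffs = np.polyfit(x_obs, x_ref, deg=3)
    x_corr = np.polyval(coeffs, x_obs)

    print("max residual before:", np.abs(x_obs - x_ref).max())
    print("max residual after: ", np.abs(x_corr - x_ref).max())
    ```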

  13. Mixed Single/Double Precision in OpenIFS: A Detailed Study of Energy Savings, Scaling Effects, Architectural Effects, and Compilation Effects

    NASA Astrophysics Data System (ADS)

    Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy

    2017-04-01

It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model. The mixed precision variation of OpenIFS is analogous to the IFS variation used in Vana et al. We (1) present results for energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.

  14. A Study of Particle Beam Spin Dynamics for High Precision Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiedler, Andrew J.

In the search for physics beyond the Standard Model, high precision experiments to measure fundamental properties of particles are an important frontier. One group of such measurements involves magnetic dipole moment (MDM) values as well as searching for an electric dipole moment (EDM), both of which could provide insights about how particles interact with their environment at the quantum level and whether there are undiscovered new particles. For these types of high precision experiments, minimizing statistical uncertainties in the measurements plays a critical role. This work leverages computer simulations to quantify the effects of statistical uncertainty for experiments investigating spin dynamics. In it, analysis of beam properties and lattice design effects on the polarization of the beam is performed. As a case study, the beam lines that will provide polarized muon beams to the Fermilab Muon g-2 experiment are analyzed to determine the effects of correlations between the phase space variables and the overall polarization of the muon beam.

  15. Development of Models for High Precision Simulation of the Space Mission Microscope

    NASA Astrophysics Data System (ADS)

    Bremer, Stefanie; List, Meike; Selig, Hanns; Lämmerzahl, Claus

MICROSCOPE is a French space mission for testing the Weak Equivalence Principle (WEP). The mission goal is the determination of the Eötvös parameter with an accuracy of 10^-15. This will be achieved by means of two high-precision capacitive differential accelerometers built by the French institute ONERA. At the German institute ZARM, drop tower tests are carried out to verify the payload performance. Additionally, the mission data evaluation is prepared in close cooperation with the French partners CNES, ONERA and OCA. A comprehensive simulation of the real system, including the science signal and all error sources, is therefore being built for the development and testing of data reduction and data analysis algorithms to extract the WEP violation signal. Currently, the High Performance Satellite Dynamics Simulator (HPS), a cooperation project of ZARM and the DLR Institute of Space Systems, is being adapted to the MICROSCOPE mission for the simulation of test mass and satellite dynamics. Models of environmental disturbances such as solar radiation pressure are considered as well. Furthermore, detailed modeling of the on-board capacitive sensors is under way.

  16. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
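    In spirit, such an emulator rounds every intermediate result to a chosen number of significand bits while keeping the float64 exponent range. A rough Python analogue of that core rounding step (an illustrative reimplementation, not the rpe Fortran API):

    ```python
    import numpy as np

    def reduce_precision(x, sbits):
        """Round x to roughly `sbits` significand bits (emulated rounding)."""
        m, e = np.frexp(np.asarray(x, dtype=np.float64))  # x = m * 2**e, m in [0.5, 1)
        m = np.round(m * 2.0**sbits) / 2.0**sbits         # quantize the significand
        return np.ldexp(m, e)

    x = np.pi
    for b in (52, 23, 10):   # double-, single-, half-precision significand widths
        print(b, reduce_precision(x, b))
    ```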

  17. Spectral optimization and uncertainty quantification in combustion modeling

    NASA Astrophysics Data System (ADS)

    Sheen, David Allan

Reliable simulations of reacting flow systems require a well-characterized, detailed chemical model as a foundation. Accuracy of such a model can be assured, in principle, by a multi-parameter optimization against a set of experimental data. However, the inherent uncertainties in the rate evaluations and experimental data leave a model still characterized by some finite kinetic rate parameter space. Without a careful analysis of how this uncertainty space propagates into the model's predictions, those predictions can at best be trusted only qualitatively. In this work, the Method of Uncertainty Minimization using Polynomial Chaos Expansions is proposed to quantify these uncertainties. In this method, the uncertainty in the rate parameters of the as-compiled model is quantified. Then, the model is subjected to a rigorous multi-parameter optimization, as well as a consistency-screening process. Lastly, the uncertainty of the optimized model is calculated using an inverse spectral optimization technique, and then propagated into a range of simulation conditions. An as-compiled, detailed H2/CO/C1-C4 kinetic model is combined with a set of ethylene combustion data to serve as an example. The idea that the hydrocarbon oxidation model should be understood and developed in a hierarchical fashion has been a major driving force in kinetics research for decades. How this hierarchical strategy works at a quantitative level, however, has never been addressed. In this work, we use ethylene and propane combustion as examples and explore the question of hierarchical model development quantitatively. The Method of Uncertainty Minimization using Polynomial Chaos Expansions is utilized to quantify the amount of information that a particular combustion experiment, and thereby each data set, contributes to the model. This knowledge is applied to explore the relationships among the combustion chemistry of hydrogen/carbon monoxide, ethylene, and larger alkanes. Frequently, new data will become available, and it will be desirable to know the effect that inclusion of these data has on the optimized model. Two cases are considered here. In the first, a study of H2/CO mass burning rates has recently been published, wherein the experimentally obtained results could not be reconciled with any extant H2/CO oxidation model. It is shown that an optimized H2/CO model can be developed that reproduces the results of the new experimental measurements. In addition, the high precision of the new experiments provides a strong constraint on the reaction rate parameters of the chemistry model, manifested in a significant improvement in the precision of simulations. In the second case, species time histories were measured during n-heptane oxidation behind reflected shock waves. The highly precise nature of these measurements is expected to impose critical constraints on chemical kinetic models of hydrocarbon combustion. The results show that while an as-compiled, prior reaction model of n-alkane combustion can be accurate in its prediction of the detailed species profiles, the kinetic parameter uncertainty in the model remains too large to obtain a precise prediction of the data. Constraining the prior model against the species time histories within the measurement uncertainties led to notable improvements in the precision of model predictions against the species data as well as the global combustion properties considered.
Lastly, we show that while the capability of the multispecies measurement presents a step-change in our precise knowledge of the chemical processes in hydrocarbon combustion, accurate data on global combustion properties are still necessary to predict fuel combustion.
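    The expansion that gives the method its name represents a model response η as a spectral series in normalized uncertain rate parameters ξ (schematic form; the Ψ_k are orthogonal polynomials and the coefficients α_k are obtained from model evaluations):

    ```latex
    \eta(\boldsymbol{\xi}) \approx \sum_{k=0}^{K} \alpha_k\, \Psi_k(\boldsymbol{\xi}),
    \qquad
    \operatorname{Var}[\eta] \approx \sum_{k=1}^{K} \alpha_k^{2}\,\langle \Psi_k^{2} \rangle
    ```

    so the prediction uncertainty follows directly from the coefficients, which is what allows the optimization to propagate parameter uncertainty into simulation conditions.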

  18. High density scintillating glass proton imaging detector

    NASA Astrophysics Data System (ADS)

    Wilkinson, C. J.; Goranson, K.; Turney, A.; Xie, Q.; Tillman, I. J.; Thune, Z. L.; Dong, A.; Pritchett, D.; McInally, W.; Potter, A.; Wang, D.; Akgun, U.

    2017-03-01

In recent years, proton therapy has achieved remarkable precision in delivering doses to cancerous cells while avoiding healthy tissue. However, in order to utilize this high precision treatment, greater accuracy in patient positioning is needed. An accepted approximate uncertainty of ±3% exists in the current practice of proton therapy due to conversions between x-ray and proton stopping power. The use of protons in imaging would eliminate this source of error and lessen the radiation exposure of the patient. To this end, this study focuses on developing a novel proton-imaging detector built with high-density glass scintillator. The model described herein contains a compact homogeneous proton calorimeter composed of scintillating, high-density glass as the active medium. The unique geometry of this detector allows for the measurement of both the position and residual energy of protons, eliminating the need for a separate set of position trackers in the system. The average position and energy of a pencil beam of 10^6 protons are used to reconstruct the image, rather than individual proton data. Simplicity and efficiency were major objectives in this model in order to present an imaging technique that is compact, cost-effective, and precise, as well as practical for a clinical setting with pencil-beam scanning proton therapy equipment. In this work, the development of the novel high-density glass scintillator and the unique conceptual design of the imager are discussed; a proof-of-principle Monte Carlo simulation study is performed; preliminary two-dimensional images reconstructed from the Geant4 simulation are presented.

  19. High resolution extremity CT for biomechanics modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashby, A.E.; Brand, H.; Hollerbach, K.

    1995-09-23

With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.

  20. Fast-PPP assessment in European and equatorial region near the solar cycle maximum

    NASA Astrophysics Data System (ADS)

    Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume

    2014-05-01

Fast Precise Point Positioning (Fast-PPP) is a technique that provides rapid high-accuracy navigation with ambiguity-fixing capability, thanks to accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focussed on user-domain performance. Recently, a mature evolution of the technique consists of a dual-service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by Fast-PPP with the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving on the widely accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with the carrier-phase ambiguity fixing performed in the Fast-PPP CPF. Fast-PPP user-domain positioning benefits from such precise ionospheric modelling. The convergence time of dual-frequency classic PPP solutions is reduced from the best part of an hour to 5-10 minutes, not only in European mid-latitudes but also in the much more challenging equatorial region. The improvement in ionospheric modelling translates directly into the accuracy of single-frequency mass-market users, achieving 2-3 decimetres of error after any cold start. Since all Fast-PPP corrections are broadcast together with their confidence level (sigma), such high-accuracy navigation is protected with safety integrity bounds.
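    The TECU figures translate into user range error through the standard first-order ionospheric delay, I = 40.3 · TEC / f² (I in metres, TEC in electrons per m²). A quick check at the GPS L1 frequency (standard formula; the 8 TECU case illustrates the quoted worst-case GIM accuracy):

    ```python
    # First-order ionospheric group delay at frequency f (Hz):
    # I = 40.3 * TEC / f**2, with TEC in electrons/m^2 (1 TECU = 1e16 el/m^2).
    F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz

    def iono_delay_m(tecu, f_hz=F_L1):
        return 40.3 * (tecu * 1e16) / f_hz**2

    print(f"1 TECU -> {iono_delay_m(1):.3f} m at L1")   # ~0.16 m
    print(f"8 TECU -> {iono_delay_m(8):.2f} m at L1")   # worst-case GIM error
    ```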

  1. Underresolved absorption spectroscopy of OH radicals in flames using broadband UV LEDs

    NASA Astrophysics Data System (ADS)

    White, Logan; Gamba, Mirko

    2018-04-01

    A broadband absorption spectroscopy diagnostic based on underresolution of the spectral absorption lines is evaluated for the inference of species mole fraction and temperature in combustion systems from spectral fitting. The approach uses spectrally broadband UV light emitting diodes and leverages low resolution, small form factor spectrometers. Through this combination, the method can be used to develop high precision measurement sensors. The challenges of underresolved spectroscopy are explored and addressed using spectral derivative fitting, which is found to generate measurements with high precision and accuracy. The diagnostic is demonstrated with experimental measurements of gas temperature and OH mole fraction in atmospheric air/methane premixed laminar flat flames. Measurements exhibit high precision, good agreement with 1-D flame simulations, and high repeatability. A newly developed model of uncertainty in underresolved spectroscopy is applied to estimate two-dimensional confidence regions for the measurements. The results of the uncertainty analysis indicate that the errors in the outputs of the spectral fitting procedure are correlated. The implications of the correlation between uncertainties for measurement interpretation are discussed.

  2. High-Precision 40Ar/39Ar dating of the Deccan Traps

    NASA Astrophysics Data System (ADS)

    Sprain, C. J.; Renne, P. R.; Fendley, I.; Pande, K.; Self, S.; Vanderkluysen, L.; Richards, M. A.

    2017-12-01

Almost forty years ago it was first hypothesized that greenhouse gases emitted from the Deccan Traps (DT) played a role in the Cretaceous-Paleogene boundary (KPB) mass extinction (McLean 1979, 1980, 1985). At that time, this hypothesis was dismissed due to insufficient geochronology and new evidence that a bolide impact coincided with the KPB. Since then, evidence such as records of protracted extinction and climate change in the Late Cretaceous, in addition to new high-precision geochronology of the DT, has bolstered the Deccan hypothesis. Recently, many models have been produced to simulate how DT volcanism may have perturbed global ecosystems. However, modeled outcomes are largely dependent upon variables such as the amount and species of gas released and the tempo of eruptions, which are not well constrained (Self et al., 2014). To better constrain climatic models and better understand the role DT volcanism played in the KPB extinction, we developed a high-precision geochronologic framework defining the timing and tempo of DT eruptions within the Western Ghats using high-precision 40Ar/39Ar geochronology. Our new results show that the DT erupted relatively continuously starting 66.4 Ma and extending to at least 65.3 Ma with no hiatuses longer than 50 ka, invalidating the concept of three discrete eruption pulses in the Western Ghats (Chenet et al., 2007, 2009; Keller et al., 2008). Our new data further provide the first precise location of the KPB within the DT sequence and place this boundary at or near the Lonavala-Wai subgroup transition, roughly coincident with major changes in eruption frequency, flow-field volumes, and extent of crustal magma contamination. Taken together, these results suggest that a state shift occurred in the DT magmatic system around the time of the Chicxulub impact, consistent with the impact-triggering hypothesis of Richards et al. (2015). Our work further shows that over 80% of the estimated volume of the DT within the Western Ghats erupted in 600 ka; however, 70% of this volume erupted after the KPB, calling for a reassessment of the role DT volcanism played in the KPB mass extinction and subsequent recovery. It is important to note that current volume estimates are likely to change as we work to improve understanding of the distribution of chemical formations, both on and offshore.
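    For context, each 40Ar/39Ar date comes from the standard age equation, with λ the total 40K decay constant and J the neutron-fluence parameter calibrated against co-irradiated standards (the general method, not a result specific to this study):

    ```latex
    t = \frac{1}{\lambda}\,\ln\!\left(1 + J\,\frac{^{40}\mathrm{Ar}^{*}}{^{39}\mathrm{Ar}_{K}}\right)
    ```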

  3. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957

  4. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
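    The discrete-capacity account being tested can be made concrete as a two-component mixture: a cued item is recalled with concentrated (e.g., von Mises) error if it occupies one of K memory slots, and guessed uniformly otherwise. A minimal simulation sketch (illustrative parameter values, not the paper's fits; orientation-space periodicity ignored for brevity):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_vwm_errors(n_trials, set_size, capacity=3, kappa=10.0):
        """Response errors (radians) under a discrete-capacity mixture model."""
        in_memory = rng.random(n_trials) < min(capacity / set_size, 1.0)
        recalled = rng.vonmises(0.0, kappa, size=n_trials)   # noisy recall
        guessed = rng.uniform(-np.pi, np.pi, size=n_trials)  # pure guessing
        return np.where(in_memory, recalled, guessed)

    errors = simulate_vwm_errors(10_000, set_size=6)
    print("error SD (rad):", errors.std())
    ```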

  5. Spectroscopic Factors From the Single Neutron Pickup Reaction ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Wirth, H.-F.; Herten-Berger, R.

    2008-10-01

A great deal of attention has recently been paid towards high precision superallowed β-decay Ft values. With the availability of extremely high precision (<0.1%) experimental data, the precision on Ft is now limited by the ∼1% theoretical corrections [I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008)]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga) where the isospin-symmetry-breaking correction calculations become more difficult due to the truncated model space. Experimental data are needed to help constrain input parameters for these calculations, and thus experimental spectroscopic factors for these nuclei are important. Preliminary results from the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn will be presented, and the implications for calculations of isospin-symmetry breaking in the superallowed β+ decay of ^62Ga will be discussed.

  6. Tracking individual action potentials throughout mammalian axonal arbors.

    PubMed

    Radivojevic, Milos; Franke, Felix; Altermatt, Michael; Müller, Jan; Hierlemann, Andreas; Bakkum, Douglas J

    2017-10-09

Axons are neuronal processes specialized for conduction of action potentials (APs). The timing and temporal precision of APs when they reach each of the synapses are fundamentally important for information processing in the brain. Due to the small diameters of axons, direct recording of single AP transmission is challenging. Consequently, most knowledge about axonal conductance derives from modeling studies or indirect measurements. We demonstrate a method to noninvasively and directly record individual APs propagating along millimeter-length axonal arbors in cortical cultures with hundreds of microelectrodes at microsecond temporal resolution. We find that cortical axons conduct single APs with high temporal precision (~100 µs arrival-time jitter per mm length) and reliability: in more than 8,000,000 recorded APs, we did not observe any conduction or branch-point failures. Upon high-frequency stimulation at 100 Hz, successive APs became slower and their arrival-time precision decreased, by 20% and 12%, respectively, for the 100th AP.

  7. Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses

    PubMed Central

    Das, Jayajit

    2016-01-01

    Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. PMID:26958894

  8. Beyond precision surgery: Molecularly motivated precision care for gastric cancer.

    PubMed

    Choi, Y Y; Cheong, J-H

    2017-05-01

    Gastric cancer is one of the leading causes of cancer-related deaths worldwide. Despite the high disease prevalence, gastric cancer research has not gained much attention. Recently, genome-scale technology has made it possible to explore the characteristics of gastric cancer at the molecular level. Accordingly, gastric cancer can be classified into molecular subtypes that convey more detailed information of tumor than histopathological characteristics, and these subtypes are associated with clinical outcomes. Furthermore, this molecular knowledge helps to identify new actionable targets and develop novel therapeutic strategies. To advance the concept of precision patient care in the clinic, patient-derived xenograft (PDX) models have recently been developed. PDX models not only represent histology and genomic features, but also predict responsiveness to investigational drugs in patient tumors. Molecularly curated PDX cohorts will be instrumental in hypothesis generation, biomarker discovery, and drug screening and testing in proof-of-concept preclinical trials for precision therapy. In the era of precision medicine, molecularly tailored therapeutic strategies should be individualized for cancer patients. To improve the overall clinical outcome, a multimodal approach is indispensable for advanced cancer patients. Careful, oncological principle-based surgery, combined with a molecularly guided multidisciplinary approach, will open new horizons in surgical oncology. Copyright © 2017. Published by Elsevier Ltd.

  9. An active-optics image-motion compensation technology application for high-speed searching and infrared detection system

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Lu, Fei; Zou, Kai; Yan, Hong; Wan, Min; Kuang, Yan; Zhou, Yanqing

    2018-03-01

    An active-optics image-motion compensation technique for ultra-high angular velocity, small-aperture, high-precision stabilized control is put forward in this paper. The image blur caused by relative motion of several hundred °/s between the imaging system and the target is analyzed theoretically. A velocity-matching model of the detection system and the active-optics compensation system is built, and the experimental parameters of an active-optics image-motion compensation platform are designed. A high-velocity (several hundred °/s), high-precision optical compensation control technique is studied and implemented. At relative motion velocities of up to 250°/s, the image motion amplitude exceeds 20 pixels; after active-optics compensation, motion blur is less than one pixel. The bottleneck of combining ultra-high angular velocity with long exposure time in searching and infrared detection systems is thereby broken through.

  10. EXPLORING DATA-DRIVEN SPECTRAL MODELS FOR APOGEE M DWARFS

    NASA Astrophysics Data System (ADS)

    Birky, Jessica; Hogg, David; Burgasser, Adam J.

    2018-01-01

    The Cannon (Ness et al. 2015; Casey et al. 2016) is a flexible, data-driven spectral modeling and parameter inference framework, demonstrated on high-resolution Apache Point Galactic Evolution Experiment (APOGEE; λ/Δλ~22,500, 1.5-1.7 µm) spectra of giant stars to estimate stellar labels (Teff, log g, [Fe/H], and chemical abundances) to precisions higher than the model-grid pipeline. The lack of reliable stellar parameters reported by the APOGEE pipeline for temperatures below ~3550 K motivates extension of this approach to M dwarf stars. Using a training set of 51 M dwarfs with spectral types ranging from M0 to M9 obtained from SDSS optical spectra, we demonstrate that the Cannon can infer spectral types to a precision of ±0.6 types, making it an effective tool for classifying high-resolution near-infrared spectra. We discuss the potential for extending this work to determine the physical stellar labels Teff, log g, and [Fe/H]. This work is supported by the SDSS Faculty and Student (FAST) initiative.

  11. Precise Ages for the Benchmark Brown Dwarfs HD 19467 B and HD 4747 B

    NASA Astrophysics Data System (ADS)

    Wood, Charlotte; Boyajian, Tabetha; Crepp, Justin; von Braun, Kaspar; Brewer, John; Schaefer, Gail; Adams, Arthur; White, Tim

    2018-01-01

    Large uncertainty in the age of brown dwarfs, stemming from a mass-age degeneracy, makes it difficult to constrain substellar evolutionary models. To break the degeneracy, we need "benchmark" brown dwarfs (found in binary systems) whose ages can be determined independent of their masses. HD 19467 B and HD 4747 B are two benchmark brown dwarfs detected through the TRENDS (TaRgeting bENchmark objects with Doppler Spectroscopy) high-contrast imaging program for which we have dynamical mass measurements. To constrain their ages independently through isochronal analysis, we measured the radii of the host stars with interferometry using the Center for High Angular Resolution Astronomy (CHARA) Array. Assuming the brown dwarfs have the same ages as their host stars, we use these results to distinguish between several substellar evolutionary models. In this poster, we present new age estimates for HD 19467 and HD 4747 that are more accurate and precise, and show our preliminary comparisons to cooling models.

  12. Cosmic reionization on computers. Ultraviolet continuum slopes and dust opacities in high redshift galaxies

    DOE PAGES

    Khakhaleva-Li, Zimu; Gnedin, Nickolay Y.

    2016-03-30

    In this study, we compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing UV and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future JWST data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is therefore likely that, in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.

  14. -Omic and Electronic Health Record Big Data Analytics for Precision Medicine.

    PubMed

    Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D; Venugopalan, Janani; Hoffman, Ryan; Wang, May D

    2017-02-01

    Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous, complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of healthcare. In this paper, we present -omic and EHR data characteristics, associated challenges, and data analytics including data preprocessing, mining, and modeling. To demonstrate how big data analytics enables precision medicine, we provide two case studies: identifying disease biomarkers from multi-omic data and incorporating -omic information into EHRs. Big data analytics is able to address -omic and EHR data challenges for the paradigm shift toward precision medicine; it makes sense of -omic and EHR data to improve healthcare outcomes and has a long-lasting societal impact.

  15. Study on SOC wavelet analysis for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Improving the prediction accuracy of the state of charge (SOC) can reduce the conservatism and complexity of scheduling, optimization, and planning strategies for LiFePO4 battery systems. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the forecast step, while measured external stress data are used to update the parameter estimates of the model, implementing the correction step; this allows the forecast model to adapt as the operating point of the LiFePO4 battery moves across its rated charge and discharge region. Test results show that the method yields a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.
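
    The estimation-correction structure described above can be illustrated generically. In the sketch below the wavelet neural network is replaced by a simple linear predictor updated with recursive least squares, and the battery and its external stress signal are synthetic stand-ins; only the predict-then-correct loop itself reflects the abstract.

      import numpy as np

      rng = np.random.default_rng(1)
      theta = np.array([0.9, -0.05])     # initial model parameters [a, b]
      P = np.eye(2) * 100.0              # RLS covariance

      def true_soc_step(soc, current):   # synthetic "plant" (an assumption, not the paper's model)
          return 0.98 * soc - 0.04 * current + rng.normal(0, 0.002)

      soc_meas, current = 0.8, 0.5
      for t in range(200):
          x = np.array([soc_meas, current])
          soc_pred = theta @ x                         # estimation (forecast) step
          soc_meas = true_soc_step(soc_meas, current)  # new measurement arrives
          # correction step: recursive least-squares update of model parameters
          err = soc_meas - soc_pred
          K = P @ x / (1.0 + x @ P @ x)
          theta += K * err
          P -= np.outer(K, x @ P)
          current = 0.5 + 0.3 * np.sin(0.1 * t)        # operating point varies
      print("final parameters:", theta)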

  16. One-wire thermocouple

    NASA Technical Reports Server (NTRS)

    Goodrich, W. D.; Staimach, C. J.

    1977-01-01

    Nickel alloy/constantan device accurately measures surface temperature at precise locations. Device is moderate in cost and simplifies fabrication of highly-instrumented seamless-surface heat-transfer models. Device also applies to metal surfaces if constantan wire has insulative coat.

  17. Precisely detecting atomic position of atomic intensity images.

    PubMed

    Wang, Zhijun; Guo, Yaolin; Tang, Sai; Li, Junjie; Wang, Jincheng; Zhou, Yaohe

    2015-03-01

    We propose a quantitative method to detect atomic positions in atomic intensity images from experiments such as high-resolution transmission electron microscopy and atomic force microscopy, and from simulations such as phase-field crystal modeling. Evaluation of the detection accuracy demonstrates the excellent performance of the method. This method provides a way to precisely determine atomic interactions based on the atomic positions detected from the intensity image, and hence to investigate the related physical, chemical, and electrical properties. Copyright © 2014 Elsevier B.V. All rights reserved.
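
    As a point of reference, a common baseline for this task is integer-pixel peak detection followed by subpixel refinement with an intensity-weighted centroid. The sketch below (a generic baseline, not the authors' specific method) shows the idea on a synthetic image.

      import numpy as np

      def subpixel_positions(img, threshold, win=3):
          """Locate local maxima above `threshold`, then refine each to subpixel
          accuracy with an intensity-weighted centroid over a (2*win+1)^2 window."""
          peaks = []
          for i in range(win, img.shape[0] - win):
              for j in range(win, img.shape[1] - win):
                  patch = img[i-1:i+2, j-1:j+2]
                  if img[i, j] >= threshold and img[i, j] == patch.max():
                      w = img[i-win:i+win+1, j-win:j+win+1]
                      ii, jj = np.mgrid[i-win:i+win+1, j-win:j+win+1]
                      peaks.append(((w*ii).sum()/w.sum(), (w*jj).sum()/w.sum()))
          return peaks

      # synthetic test: one Gaussian "atom" centered at (20.3, 14.7)
      y, x = np.mgrid[0:40, 0:40]
      img = np.exp(-((y - 20.3)**2 + (x - 14.7)**2) / 4.0)
      print(subpixel_positions(img, threshold=0.5))   # close to (20.3, 14.7)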

  18. Dimensional Precision Research of Wax Molding Rapid Prototyping based on Droplet Injection

    NASA Astrophysics Data System (ADS)

    Mingji, Huang; Geng, Wu; Yan, Shan

    2017-11-01

    The traditional casting process is complex; the mold is essential to the product, and mold quality directly affects product quality. Rapid-prototyping 3D printing is used to produce the mold prototype. The wax model has the advantages of high speed, low cost, and the ability to form complex structures. Using orthogonal experiments as the main method, each factor affecting dimensional precision is analyzed. The purpose is to obtain the optimal process parameters and to improve the dimensional accuracy of production based on droplet injection molding.

  19. [Radiance Simulation of BUV Hyperspectral Sensor on Multi Angle Observation, and Improvement to Initial Total Ozone Estimating Model of TOMS V8 Total Ozone Algorithm].

    PubMed

    Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun

    2015-11-01

    New hyperspectral sensors for total ozone detection are expected to be carried on geostationary platforms in the future, as local tropospheric ozone pollution and the diurnal variation of ozone receive increasing attention. Sensors on geostationary satellites frequently acquire images at large observation angles, placing higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is well developed and widely used for low-orbit ozone-detecting sensors, but it still lacks accuracy at large observation geometries; how to improve the accuracy of total ozone retrieval therefore remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm is simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles), and 26 standard profiles, and the correlations and trends between atmospheric total ozone and backscattered UV radiance are analyzed from the resulting data. From these data, a modified initial total ozone estimation model for the TOMS V8 algorithm is constructed to improve the initial estimation accuracy at large observation geometries. The analysis of total ozone and simulated UV backscatter radiance shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises. At small solar zenith angle (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases at large SZA. Comparison of two fitting models shows that, except when both SZA and VZA are large (>80°), the exponential and logarithmic fitting models both achieve high fitting precision (R² > 0.90), with the precision of both decreasing as SZA and VZA rise. In most cases the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential model. With increasing VZA or SZA the fitting precision gradually decreases, and the drop is larger at larger VZA or SZA; in addition, the fitting precision exhibits a plateau at small SZA. The modified initial total ozone estimation model (ln(I) vs. Ω) is established from the logarithmic fitting model and compared with the traditional estimation model (I vs. ln(Ω)). The RMSE of both models trends downward as total ozone rises; in the low total ozone region (175-275 DU) the RMSE is clearly higher than in the high region (425-525 DU), with an RMSE peak and trough near 225 and 475 DU, respectively. With increasing VZA and SZA the RMSE of both initial estimation models rises overall, the increase being more pronounced for ln(I) vs. Ω. The modified model outperforms the traditional model over the whole total ozone range (RMSE 0.087%-0.537% lower), especially in the low total ozone region and at large observation geometries. The traditional estimation model relies on the precision of the exponential fit, while the modified model relies on the precision of the logarithmic fit. The improved estimation accuracy of the modified initial total ozone estimation model expands the application range of the TOMS V8 algorithm. For sensors carried on geostationary platforms, the modified estimation model can help improve inversion accuracy over wide spatial and temporal ranges, and it could support and inform future updates of the TOMS algorithm.
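
    The two initial-estimate models can be sketched numerically. Below, synthetic radiance-ozone pairs are fit with the traditional model (I vs. ln(Ω)) and the modified model (ln(I) vs. Ω), and each fit is inverted to produce an initial total ozone estimate from an observed radiance; the synthetic data and coefficients are placeholders, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      omega = np.linspace(175.0, 525.0, 50)     # total ozone (DU)
      # synthetic radiance, roughly exponential in Omega, with 1% noise
      I = 40.0 * np.exp(-0.004 * omega) * (1.0 + rng.normal(0, 0.01, omega.size))

      b1, a1 = np.polyfit(np.log(omega), I, 1)  # traditional: I ~ a1 + b1*ln(Omega)
      b2, a2 = np.polyfit(omega, np.log(I), 1)  # modified: ln(I) ~ a2 + b2*Omega

      def omega_traditional(I_obs):
          return np.exp((I_obs - a1) / b1)      # invert I = a1 + b1*ln(Omega)

      def omega_modified(I_obs):
          return (np.log(I_obs) - a2) / b2      # invert ln(I) = a2 + b2*Omega

      I_test = 40.0 * np.exp(-0.004 * 300.0)    # radiance for a "true" 300 DU column
      print(omega_traditional(I_test), omega_modified(I_test))  # both near 300 DU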

  20. Removing function model and experiments on ultrasonic polishing molding die

    NASA Astrophysics Data System (ADS)

    Huang, Qitai; Ni, Ying; Yu, Jingchi

    2010-10-01

    Low-temperature glass molding is expected to become the main method for volume production of high-precision, small- and medium-diameter optical elements. Since the accuracy of the molding die directly affects element precision, the development of high-precision molding dies is one of the most important parts of low-temperature glass molding technology. The molding die is manufactured from a hard, brittle metal alloy; with the high vibration frequency and concentrated energy distribution characteristic of ultrasonic vibration, abrasive particles impact the hard alloy surface at very high speed and remove material from the workpiece. Ultrasonic machining thus makes controllable polishing of the hard metal alloy molding die possible, reducing roughness and surface error. Unlike other ultrasonic fabrication methods, noncontact ultrasonic polishing is applied to the molding die: the tool does not touch the workpiece during polishing. The abrasive particles vibrate around their equilibrium positions at high speed and frequency, driven by the ultrasonic vibration in the liquid medium, and impact the workpiece surface; the energy of the abrasive particles comes from the ultrasonic vibration rather than from direct hammering by the tool. A disc-shaped vibrator in simple harmonic vibration on an infinite plane surface is therefore taken as the model of the ultrasonic polishing working condition. According to Huygens' principle, the sound field distribution on a plane surface is analyzed and calculated, and the tool removal function is deduced from this distribution. A single-point ultrasonic polishing experiment is then performed to verify the validity of the theory.
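
    As a sanity check on the disc-vibrator model, the on-axis pressure of a circular piston in an infinite baffle has the standard closed form p(z) = 2ρc·U0·|sin[(k/2)(√(z²+a²) − z)]|, whose last axial maximum sits near z = a²/λ − λ/4. The sketch below evaluates this textbook result with assumed water-like medium and piston parameters, not the paper's values or its full Huygens integral.

      import numpy as np

      rho, c, U0 = 1000.0, 1480.0, 1e-3   # water density, sound speed, piston velocity amplitude (assumed)
      f, a = 1.0e6, 0.01                   # 1 MHz drive, 1 cm piston radius (assumed)
      k, lam = 2.0 * np.pi * f / c, c / f

      z = np.linspace(1e-3, 0.1, 200_000)  # axial distance (m)
      p = 2.0 * rho * c * U0 * np.abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))

      near_peak = p > 0.9999 * p.max()     # points lying on the pressure maxima
      print("last axial maximum:", z[near_peak].max(),
            "predicted a^2/lam - lam/4:", a**2 / lam - lam / 4.0)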

  1. Spectroscopic Factors from the Single Neutron Pickup Reaction ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Ball, G. C.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Faestermann, T.; Finlay, P.; Green, K. L.; Hertenberger, R.; Krücken, R.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wirth, H.-F.; Wong, J.

    2009-10-01

    A great deal of attention has recently been paid to high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision of the individual Ft values is now dominated by the ~1% theoretical corrections [1]. This limitation is most evident in the heavier superallowed nuclei (e.g., ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for identifying the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented, and implications for calculations of ISB corrections in the superallowed β+ decay of ^62Ga will be discussed. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).

  2. A Precise Physical Orbit For The M-Dwarf Binary Gliese 268

    NASA Technical Reports Server (NTRS)

    Barry, R. K.; Demory, B.-O.; Segransan, D.; Forveille, T.; Danchi, W. C.; Di Folco, E.; Queloz, D.; Spooner, H. R.; Torres, G.; Traub, W. A.

    2012-01-01

    We report high-precision interferometric and radial velocity (RV) observations of the M-dwarf binary Gl 268. Combining measurements conducted using the IOTA interferometer and the ELODIE and Harvard Center for Astrophysics RV instruments leads to a mass of 0.22596 ± 0.00084 M☉ for component A and 0.19230 ± 0.00071 M☉ for component B. The system parallax as determined by these observations is 0.1560 ± 0.0030 arcsec, a measurement with 1.9% uncertainty in excellent agreement with Hipparcos (0.1572 ± 0.0033). The absolute H-band magnitudes of the component stars are not well constrained by these measurements; however, we can place approximate upper limits of 7.95 and 8.1 for Gl 268 A and B, respectively. We test these physical parameters against the predictions of theoretical models that combine stellar evolution with high-fidelity, non-gray atmospheric models. Measured and predicted values are compatible within 2σ. These results are among the most precise masses measured for visual binaries and compete with the best adaptive optics and eclipsing binary results.

  3. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
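
    The emulation step itself is easy to illustrate. In the sketch below a Gaussian-process emulator is trained on 26 "design" runs of a stand-in simulation over a two-parameter space; the actual Mira-Titan eight-dimensional design, sampling scheme, and simulation outputs are replaced by synthetic placeholders.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      def run_simulation(theta):
          """Stand-in for an expensive N-body run: returns a scalar observable
          (e.g., a power spectrum amplitude) as a smooth function of
          (omega_m, sigma_8). Purely synthetic."""
          om, s8 = theta
          return s8**2 * (1.0 + 2.0 * om)

      X_train = rng.uniform([0.25, 0.7], [0.40, 0.9], size=(26, 2))  # 26 design points
      y_train = np.array([run_simulation(t) for t in X_train])

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.1, 0.1]),
                                    normalize_y=True).fit(X_train, y_train)

      theta_test = np.array([[0.3089, 0.8159]])
      pred, std = gp.predict(theta_test, return_std=True)
      print(f"emulated: {pred[0]:.4f} +/- {std[0]:.4f}, "
            f"truth: {run_simulation(theta_test[0]):.4f}")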

  4. The Mira-Titan Universe: Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  5. High Precision Prediction of Functional Sites in Protein Structures

    PubMed Central

    Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin

    2014-01-01

    We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
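
    The classifier-plus-precision evaluation at the core of such a system can be sketched with standard tools. Below, FEATURE's physicochemical microenvironment vectors are replaced by a synthetic imbalanced dataset; the point is the pipeline (scaling, an RBF-kernel support vector machine, and cross-validated precision/recall), not the reported numbers.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_validate
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # synthetic stand-in for microenvironment feature vectors; 480 features
      # and a 10% positive class are arbitrary choices for illustration
      X, y = make_classification(n_samples=2000, n_features=480, n_informative=40,
                                 weights=[0.9, 0.1], random_state=0)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
      scores = cross_validate(clf, X, y, cv=5, scoring=["precision", "recall"])
      print("precision: %.3f  recall: %.3f" % (scores["test_precision"].mean(),
                                               scores["test_recall"].mean()))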

  6. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  7. Highlights from High Energy Neutrino Experiments at CERN

    NASA Astrophysics Data System (ADS)

    Schlatter, W.-D.

    2015-07-01

    Experiments with high energy neutrino beams at CERN provided early quantitative tests of the Standard Model. This article describes results from studies of the nucleon quark structure and of the weak current, together with the precise measurement of the weak mixing angle. These results have established a new quality for tests of the electroweak model. In addition, the measurements of the nucleon structure functions in deep inelastic neutrino scattering allowed first quantitative tests of QCD.

  8. Gas Electron Multiplier (GEM) detectors for parity-violating electron scattering experiments at Jefferson Lab

    NASA Astrophysics Data System (ADS)

    Matter, John; Gnanvo, Kondo; Liyanage, Nilanga; Solid Collaboration; Moller Collaboration

    2017-09-01

    The JLab Parity Violation In Deep Inelastic Scattering (PVDIS) experiment will use the upgraded 12 GeV beam and proposed Solenoidal Large Intensity Device (SoLID) to measure the parity-violating electroweak asymmetry in DIS of polarized electrons with high precision in order to search for physics beyond the Standard Model. Unlike many prior Parity-Violating Electron Scattering (PVES) experiments, PVDIS is a single-particle tracking experiment. Furthermore the experiment's high luminosity combined with the SoLID spectrometer's open configuration creates high-background conditions. As such, the PVDIS experiment has the most demanding tracking detector needs of any PVES experiment to date, requiring precision detectors capable of operating at high-rate conditions in PVDIS's full production luminosity. Developments in large-area GEM detector R&D and SoLID simulations have demonstrated that GEMs provide a cost-effective solution for PVDIS's tracking needs. The integrating-detector-based JLab Measurement Of Lepton Lepton Electroweak Reaction (MOLLER) experiment requires high-precision tracking for acceptance calibration. Large-area GEMs will be used as tracking detectors for MOLLER as well. The conceptual designs of GEM detectors for the PVDIS and MOLLER experiments will be presented.

  9. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  10. Two Mathematical Models of Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Bayard, David; Spanos, John; Breckenridge, William

    2007-01-01

    Two innovative mathematical models of nonlinear vibrations, and methods of applying them, have been conceived as byproducts of an effort to develop a Kalman filter for highly precise estimation of bending motions of a large truss structure deployed in outer space from a space-shuttle payload bay. These models are also applicable to modeling and analysis of vibrations in other engineering disciplines, on Earth as well as in outer space.

  11. Precise comparisons of bottom-pressure and altimetric ocean tides

    NASA Astrophysics Data System (ADS)

    Ray, R. D.

    2013-09-01

    A new set of pelagic tide determinations is constructed from seafloor pressure measurements obtained at 151 sites in the deep ocean. To maximize precision of estimated tides, only stations with long time series are used; median time series length is 567 days. Geographical coverage is considerably improved by use of the international tsunami network, but coverage in the Indian Ocean and South Pacific is still weak. As a tool for assessing global ocean tide models, the data set is considerably more reliable than older data sets: the root-mean-square difference with a recent altimetric tide model is approximately 5 mm for the M2 constituent. Precision is sufficiently high to allow secondary effects in altimetric and bottom-pressure tide differences to be studied. The atmospheric tide in bottom pressure is clearly detected at the S1, S2, and T2 frequencies. The altimetric tide model is improved if satellite altimetry is corrected for crustal loading by the atmospheric tide. Models of the solid body tide can also be constrained. The free core-nutation effect in the K1 Love number is easily detected, but the overall estimates are not as accurate as a recent determination with very long baseline interferometry.

  12. Precise Comparisons of Bottom-Pressure and Altimetric Ocean Tides

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.

    2013-01-01

    A new set of pelagic tide determinations is constructed from seafloor pressure measurements obtained at 151 sites in the deep ocean. To maximize precision of estimated tides, only stations with long time series are used; median time series length is 567 days. Geographical coverage is considerably improved by use of the international tsunami network, but coverage in the Indian Ocean and South Pacific is still weak. As a tool for assessing global ocean tide models, the data set is considerably more reliable than older data sets: the root-mean-square difference with a recent altimetric tide model is approximately 5 mm for the M2 constituent. Precision is sufficiently high to allow secondary effects in altimetric and bottom-pressure tide differences to be studied. The atmospheric tide in bottom pressure is clearly detected at the S1, S2, and T2 frequencies. The altimetric tide model is improved if satellite altimetry is corrected for crustal loading by the atmospheric tide. Models of the solid body tide can also be constrained. The free core-nutation effect in the K1 Love number is easily detected, but the overall estimates are not as accurate as a recent determination with very long baseline interferometry.

  13. Testing the white dwarf mass-radius relationship with eclipsing binaries

    NASA Astrophysics Data System (ADS)

    Parsons, S. G.; Gänsicke, B. T.; Marsh, T. R.; Ashley, R. P.; Bours, M. C. P.; Breedt, E.; Burleigh, M. R.; Copperwheat, C. M.; Dhillon, V. S.; Green, M.; Hardy, L. K.; Hermes, J. J.; Irawati, P.; Kerry, P.; Littlefair, S. P.; McAllister, M. J.; Rattanasoon, S.; Rebassa-Mansergas, A.; Sahman, D. I.; Schreiber, M. R.

    2017-10-01

    We present high-precision, model-independent, mass and radius measurements for 16 white dwarfs in detached eclipsing binaries and combine these with previously published data to test the theoretical white dwarf mass-radius relationship. We reach a mean precision of 2.4 per cent in mass and 2.7 per cent in radius, with our best measurements reaching a precision of 0.3 per cent in mass and 0.5 per cent in radius. We find excellent agreement between the measured and predicted radii across a wide range of masses and temperatures. We also find the radii of all white dwarfs with masses less than 0.48 M⊙ to be fully consistent with helium core models, but they are on average 9 per cent larger than those of carbon-oxygen core models. In contrast, white dwarfs with masses larger than 0.52 M⊙ all have radii consistent with carbon-oxygen core models. Moreover, we find that all but one of the white dwarfs in our sample have radii consistent with possessing thick surface hydrogen envelopes (10^-5 ≤ MH/MWD ≤ 10^-4), implying that the surface hydrogen layers of these white dwarfs are not obviously affected by common envelope evolution.

  14. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100,000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.

  15. Feedforward hysteresis compensation in trajectory control of piezoelectrically-driven nanostagers

    NASA Astrophysics Data System (ADS)

    Bashash, Saeid; Jalili, Nader

    2006-03-01

    Complex structural nonlinearities of piezoelectric materials drastically degrade their performance in a variety of micro- and nano-positioning applications. From the precision positioning and control perspective, the multi-path, time-history-dependent hysteresis phenomenon is the nonlinearity of greatest concern in piezoelectric actuators. To capture the underlying physics of this phenomenon and to develop an efficient compensation strategy, the properties of hysteresis with the effects of non-local memories are discussed. Through a set of experiments on a piezoelectrically-driven nanostager with a high-resolution capacitive position sensor, it is shown that for precise prediction of the hysteresis path, certain memory units are required to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the system's ever-present nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect if memory units are sufficiently chosen for the inverse model.

  16. A New Precision Measurement of the Small-scale Line-of-sight Power Spectrum of the Lyα Forest

    NASA Astrophysics Data System (ADS)

    Walther, Michael; Hennawi, Joseph F.; Hiss, Hector; Oñorbe, Jose; Lee, Khee-Gan; Rorai, Alberto; O’Meara, John

    2018-01-01

    We present a new measurement of the Lyα forest power spectrum at 1.8 < z < 3.4 using 74 Keck/HIRES and VLT/UVES high-resolution, high-signal-to-noise-ratio quasar spectra. We developed a custom pipeline to measure the power spectrum and its uncertainty, which fully accounts for finite resolution and noise and corrects for the bias induced by masking missing data, damped Lyα absorption systems, and metal absorption lines. Our measurement results in unprecedented precision on the small-scale modes k > 0.02 s km⁻¹, inaccessible to previous SDSS/BOSS analyses. It is well known that these high-k modes are highly sensitive to the thermal state of the intergalactic medium, but contamination by narrow metal lines is a significant concern. We quantify the effect of metals on the small-scale power and find a modest effect on modes with k < 0.1 s km⁻¹. As a result, by masking metals and restricting to k < 0.1 s km⁻¹, their impact is completely mitigated. We present an end-to-end Bayesian forward-modeling framework whereby mock spectra with the same noise, resolution, and masking as our data are generated from Lyα forest simulations. These mock spectra are used to build a custom emulator, enabling us to interpolate between a sparse grid of models and perform Markov chain Monte Carlo fits. Our results agree well with BOSS on scales k < 0.02 s km⁻¹, where the measurements overlap. The combination of the percent-level low-k precision of BOSS with our 5%–15% high-k measurements results in a powerful new data set for precisely constraining the thermal history of the intergalactic medium, cosmological parameters, and the nature of dark matter. The power spectra and their covariance matrices are provided as electronic tables.
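
    The core of such a measurement is the line-of-sight power spectrum estimator. A minimal version, ignoring the paper's noise, resolution, and masking corrections, FFTs the flux contrast on a uniform velocity grid:

      import numpy as np

      def flux_power_spectrum(flux, dv):
          """1D line-of-sight power spectrum of the Lya forest flux contrast.
          `flux` is continuum-normalized flux on a uniform velocity grid with
          pixel size `dv` (km/s). Returns wavenumbers k (s/km) and P(k) (km/s)."""
          delta = flux / flux.mean() - 1.0
          n = delta.size
          dk = np.fft.rfft(delta) * dv             # discrete -> continuous FT convention
          k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv)
          P = np.abs(dk)**2 / (n * dv)             # P(k) = |delta~(k)|^2 / L
          return k[1:], P[1:]                      # drop the k = 0 mode

      # synthetic check: a white-noise "forest" should give a flat P(k)
      rng = np.random.default_rng(0)
      k, P = flux_power_spectrum(1.0 + 0.1 * rng.normal(size=2**14), dv=4.0)
      print(k[:3], P.mean())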

  17. Application of GPS Measurements for Ionospheric and Tropospheric Modelling

    NASA Astrophysics Data System (ADS)

    Rajendra Prasad, P.; Abdu, M. A.; Furlan, Benedito. M. P.; Koiti Kuga, Hélio

    The Global Positioning System (GPS) was originally developed for military navigation: the DOD's primary purposes were precision weapon delivery and providing a capability that would help reverse the proliferation of navigation systems in the military. It was very quickly realized, however, that civil and scientific use would far outstrip military use. A variety of scientific applications are uniquely suited to precise positioning capabilities, and the relatively high precision, low cost, mobility, and convenience of GPS receivers make positioning attractive. Other applications include precise time measurement, surveying, and geodesy, as well as orbit and attitude determination and many user services. The system operates by transmitting radio waves from satellites to receivers on the ground, in aircraft, or on other satellites; these signals are used to calculate location very accurately. The Standard Positioning Service (SPS) restricts access to the Coarse/Acquisition (C/A) code and carrier signals on the L1 frequency only, and the accuracy provided by SPS falls short of most user requirements. The upper atmosphere is ionized by ultraviolet radiation from the Sun, and significant positioning errors can result when the signals are refracted and slowed by ionospheric conditions; the ionospheric parameter that produces most of the effect on GPS signals is the total number of electrons in the propagation path. This integrated number of electrons, called the Total Electron Content (TEC), varies not only from day to night, with time of year, and with the solar flux cycle, but also with geomagnetic latitude and longitude. Being a plasma, the ionosphere affects the radio waves propagating through it. The effects of scintillation on GPS navigation at the L1 (1.5754 GHz) and L2 (1.2276 GHz) frequencies have not been estimated accurately. It is generally recognized that GPS navigation is vulnerable in the polar and especially the equatorial regions during the solar maximum period. In the equatorial region the irregularity structures are highly elongated in the north-south direction and discrete in the east-west direction, with dimensions of several hundred km; given such a spatial distribution of irregularities, one needs to determine how often GPS receivers fail to provide navigation aid with the available constellation. The effects of scintillation on the performance of GPS navigation in the equatorial region can be analyzed by commissioning a few ground receivers, and there are already a few GPS receivers near these latitudes. Despite recent advances in ionospheric and tropospheric delay modeling for geodetic applications of GPS, the models currently in use are not very precise. The conventional operational ionosphere models, viz. the Klobuchar, Bent, and IRI models, have limitations in providing very precise corrections at all latitudes, and tropospheric delay modeling also suffers in accuracy. Advances in both computing power and knowledge of the atmosphere motivate an effort to upgrade some of these models to improve delay corrections in GPS navigation. Ionospheric group delay corrections for orbit determination can be minimized using dual-frequency measurements; in single-frequency measurements, however, the group delay correction is an involved task. In this paper an investigation is carried out to estimate the ionosphere model coefficients along with precise orbit determination modeling using GPS measurements.
    The locations of the ground-based receivers near the equator are known very precisely. Measurements from these ground stations to a precisely known satellite carrying a dual-frequency receiver are used for orbit determination, and the ionosphere model parameters can be refined using spatially distributed GPS receivers spread over Brazil. Tropospheric delay effects are not significant for the satellites when an appropriate elevation angle is chosen, but they need to be analyzed for users such as aircraft. This paper briefly describes GPS data utilization, the navigation message, orbit computation, precise orbit determination, and the ionosphere and troposphere models, and presents the methodology for refining the ionosphere model coefficients. Plots and results related to orbit determination are presented. The study demonstrated the feasibility of estimating ionospheric group delay at specific latitudes, which could be improved by refining some of the model coefficients using GPS measurements. It is also possible to determine the tropospheric delay accurately, which may be used by an aircraft in flight without access to real-time meteorological information.
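
    The dual-frequency group-delay correction mentioned above follows from the ionosphere's dispersive delay, 40.3·TEC/f² meters at frequency f; differencing the two pseudoranges isolates the TEC. The sketch below implements this standard combination, with hardware biases and multipath ignored and the example numbers invented:

      F1, F2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 carrier frequencies (Hz)

      def slant_tec(P1, P2):
          """Slant TEC (electrons/m^2) from dual-frequency pseudoranges (m).
          The ionospheric group delay at frequency f is 40.3*TEC/f^2 meters,
          so P2 - P1 = 40.3*TEC*(1/F2**2 - 1/F1**2). Receiver and satellite
          hardware biases are ignored in this sketch."""
          return (P2 - P1) / 40.3 * (F1**2 * F2**2) / (F1**2 - F2**2)

      # invented example: an 8.1 m L2-L1 differential delay -> tens of TEC units
      tecu = slant_tec(P1=20_000_000.0, P2=20_000_008.1) / 1e16
      print(f"{tecu:.1f} TECU")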

  18. Enhanced Precision of the New Hologic Horizon Model Compared With the Old Discovery Model Is Less Evident When Fewer Vertebrae Are Included in the Analysis.

    PubMed

    McNamara, Elizabeth A; Kilim, Holly P; Malabanan, Alan O; Whittaker, LaTarsha G; Rosen, Harold N

    The International Society for Clinical Densitometry guidelines recommend using locally derived precision data for spine bone mineral densities (BMDs), but do not specify whether data derived from L1-L4 spines correctly reflect the precision for spines reporting fewer than 4 vertebrae. Our experience suggested that the decrease in precision with successively fewer vertebrae is progressive as more vertebrae are excluded and that the precision for the newer Horizon Hologic model might be better than that for the previous model, and we sought to quantify these effects. Precision studies were performed on Hologic densitometers by acquiring spine BMD in fast array mode twice on 30 patients, according to International Society for Clinical Densitometry guidelines. This was done 10 different times on various Discovery densitometers, and once on a Horizon densitometer. When 1 vertebral body was excluded from analysis, there was no significant deterioration in precision. When 2 vertebrae were excluded, there was a nonsignificant trend to poorer precision, and when 3 vertebrae were excluded, there was significantly worse precision. When 3 or 4 vertebrae were reported, the precision of the spine BMD measurement was significantly better on the Hologic Horizon than on the Discovery, but the difference in precision between densitometers narrowed and was no longer significant when 1 or 2 vertebrae were reported. The results suggest that (1) the measurement of in vivo spine BMD on the new Hologic Horizon densitometer is significantly more precise than on the older Discovery model; (2) the difference in precision between the Horizon and Discovery models decreases as fewer vertebrae are included; (3) the measurement of spine BMD is less precise as more vertebrae are excluded, but still quite reasonable even when only 1 vertebral body is included; and (4) when 3 vertebrae are reported, L1-L4 precision data can reasonably be used to report significance of changes in BMD. When 1 or 2 vertebrae are reported, precision data for 1 or 2 vertebrae, respectively, should be used, because the exclusion of 2-3 vertebrae significantly worsens precision. Copyright © 2016 International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
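
    For reference, the precision figure in such studies is normally the root-mean-square standard deviation (RMS-SD) over paired scans, and the least significant change at 95% confidence is LSC = 2.77 × precision. A minimal sketch with synthetic duplicate scans (the noise level is an assumption, not a value from the study):

      import numpy as np

      def precision_and_lsc(bmd1, bmd2):
          """Short-term precision from duplicate spine BMD scans.
          For paired scans, SD_i^2 = d_i^2 / 2, and RMS-SD is the root mean
          of those variances; LSC at 95% confidence is 2.77 * RMS-SD."""
          d = np.asarray(bmd1) - np.asarray(bmd2)
          rms_sd = np.sqrt(np.sum(d**2 / 2.0) / d.size)
          return rms_sd, 2.77 * rms_sd

      rng = np.random.default_rng(0)
      true = rng.normal(1.00, 0.12, 30)          # 30 patients, g/cm^2
      scan1 = true + rng.normal(0, 0.010, 30)    # ~0.010 g/cm^2 machine noise (assumed)
      scan2 = true + rng.normal(0, 0.010, 30)
      p, lsc = precision_and_lsc(scan1, scan2)
      print(f"precision {p:.4f} g/cm^2, LSC {lsc:.4f} g/cm^2")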

  19. Evaluation of the accuracy of 7 digital scanners: An in vitro analysis based on 3-dimensional comparisons.

    PubMed

    Renne, Walter; Ludlow, Mark; Fryml, John; Schurch, Zach; Mennito, Anthony; Kessler, Ray; Lauer, Abigail

    2017-07-01

    As digital impressions become more common and more digital impression systems are released onto the market, it is essential to systematically and objectively evaluate their accuracy. The purpose of this in vitro study was to evaluate and compare the trueness and precision of 6 intraoral scanners and 1 laboratory scanner in both sextant and complete-arch scenarios. Furthermore, time of scanning was evaluated and correlated with trueness and precision. A custom complete-arch model was fabricated with a refractive index similar to that of tooth structure. Seven digital impression systems were used to scan the custom model for both posterior sextant and complete-arch scenarios. Analysis was performed using 3-dimensional metrology software to measure discrepancies between the master model and experimental casts. Of the intraoral scanners, the Planscan was found to have the best trueness and precision while the 3Shape Trios was found to have the poorest for sextant scanning (P<.001). The order of trueness for complete-arch scanning was as follows: 3Shape D800 > iTero > 3Shape TRIOS 3 > Carestream 3500 > Planscan > CEREC Omnicam > CEREC Bluecam. The order of precision for complete-arch scanning was as follows: CS3500 > iTero > 3Shape D800 > 3Shape TRIOS 3 > CEREC Omnicam > Planscan > CEREC Bluecam. For the secondary outcome evaluating the effect of time on trueness and precision, the complete-arch scan time was highly correlated with both trueness (r=0.771) and precision (r=0.771). For sextant scanning, the Planscan was found to be the most precise and true scanner. For complete-arch scanning, the 3Shape Trios was found to have the best balance of speed and accuracy. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  20. The upgraded ATLAS and CMS detectors and their physics capabilities.

    PubMed

    Wells, Pippa S

    2015-01-13

    The update of the European Strategy for Particle Physics from 2013 states that Europe's top priority should be the exploitation of the full potential of the LHC, including the high-luminosity upgrade of the machine and detectors with a view to collecting 10 times more data than in the initial design. The plans for upgrading the ATLAS and CMS detectors so as to maintain their performance and meet the challenges of increasing luminosity are presented here. A cornerstone of the physics programme is to measure the properties of the 125 GeV Higgs boson with the highest possible precision, to test its consistency with the Standard Model. The high-luminosity data will allow precise measurements of the dominant production and decay modes, and offer the possibility of observing rare modes including Higgs boson pair production. Direct and indirect searches for additional Higgs bosons beyond the Standard Model will also continue.

  1. Movement decoupling control for two-axis fast steering mirror

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Qiao, Yongming; Lv, Tao

    2017-02-01

    A two-axis fast steering mirror based on flexure hinges and piezoelectric actuators is a complex system that is time-varying, uncertain, and strongly coupled, and it is extremely difficult to achieve high-precision decoupling control with the traditional PID method. A feedback-error-learning method was used to establish an inverse hysteresis model, based on an inner-product dynamic neural network, for the nonlinear and non-smooth behavior of the piezo-ceramics. To improve actuator precision, a method based on adaptive control with two dynamic neural networks and the piezo-ceramic inverse model is proposed. Experimental results show that with the two-neural-network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the static coupling degree from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the dynamic coupling degree from 2.63% to 1.17%.

  2. Data-driven gradient algorithm for high-precision quantum control

    NASA Astrophysics Data System (ADS)

    Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel

    2018-04-01

    In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., grape) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that grape can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-grape) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-grape algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-not gate.
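
    The model-based starting point that d-GRAPE refines can be sketched as gradient ascent on gate fidelity over piecewise-constant controls. The toy below uses finite-difference gradients on a single qubit; grape proper uses analytic gradients, and d-GRAPE would further correct them with process tomography data, both of which are beyond this sketch. The drift and control Hamiltonians are assumed for illustration.

      import numpy as np

      sx = np.array([[0, 1], [1, 0]], dtype=complex)
      sz = np.array([[1, 0], [0, -1]], dtype=complex)
      U_target = sx                              # target gate: single-qubit NOT
      H0, Hc = 0.5 * sz, 0.5 * sx                # drift and control Hamiltonians (assumed)
      N, dt = 20, 0.1                            # piecewise-constant control slices

      def step(H):                               # exp(-i*H) for Hermitian H via eigendecomposition
          w, V = np.linalg.eigh(H)
          return (V * np.exp(-1j * w)) @ V.conj().T

      def fidelity(u):
          U = np.eye(2, dtype=complex)
          for uk in u:
              U = step((H0 + uk * Hc) * dt) @ U
          return abs(np.trace(U_target.conj().T @ U)) ** 2 / 4.0

      rng = np.random.default_rng(0)
      u = rng.normal(0.0, 0.5, N)
      eps, lr = 1e-6, 2.0
      for _ in range(500):                       # finite-difference gradient ascent
          f0 = fidelity(u)
          grad = np.array([(fidelity(u + eps * np.eye(N)[k]) - f0) / eps
                           for k in range(N)])
          u += lr * grad
      print(f"final gate fidelity: {fidelity(u):.6f}")   # climbs toward 1 for this toy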

  3. New Insights of High-precision Asteroseismology: Acoustic Radius and χ2-matching Method for Solar-like Oscillator KIC 6225718

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Li, Yan

    2017-10-01

    Asteroseismology is a powerful tool for probing stellar interiors and determining stellar fundamental parameters. In the present work, we adopt the χ2-minimization method but use only the observed high-precision seismic observations (i.e., oscillation frequencies) to constrain theoretical models of the solar-like oscillator KIC 6225718. Using exact analytical calculations and numerical simulations, we find that the acoustic radius τ0 is the only global parameter that can be accurately measured by χ2-matching between observed frequencies and theoretical model calculations for a pure p-mode oscillator, and we obtain a well-constrained value of τ0 (in seconds) for KIC 6225718. As a consequence, the mass and radius of the χ2-matched models are degenerate with each other. In addition, we find that the distribution range of the acoustic radius is slightly enlarged at its lower end by some extreme cases, which possess both a larger mass and a higher (or lower) metal abundance.
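
    For reference (a standard asteroseismic relation transcribed here, not the paper's derivation), the acoustic radius is the sound travel time from center to surface and is tied to the large frequency separation Δν:

      \tau_0 \;=\; \int_0^R \frac{\mathrm{d}r}{c_s}, \qquad \tau_0 \;\simeq\; \frac{1}{2\,\Delta\nu},

    where c_s is the adiabatic sound speed. This is why a frequency set that pins down Δν pins down τ0, while leaving mass and radius individually degenerate.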

  4. Technology of focus detection for 193nm projection lithographic tool

    NASA Astrophysics Data System (ADS)

    Di, Chengliang; Yan, Wei; Hu, Song; Xu, Feng; Li, Jinglong

    2012-10-01

    With the shrinking printing wavelength and increasing numerical aperture of lithographic tools, the depth of focus (DOF) drops rapidly, reaching a scale of several hundred nanometers, while the repeatable accuracy of focusing and leveling must be one-tenth of the DOF, approximately several tens of nanometers. Accordingly, this article first introduces several focus-detection technologies and compares their advantages and disadvantages. The accuracy of the dual-grating focusing method is then obtained through theoretical calculation. The dual-grating focusing method based on photoelastic modulation is divided into coarse and precise focusing stages for analysis, establishing an image-processing model for coarse focusing and a photoelastic-modulation model for accurate focusing. Finally, the focusing algorithm is simulated with MATLAB. In conclusion, the dual-grating focusing method offers high precision, high efficiency, and non-contact measurement of the focal plane, meeting the focusing demands of 193nm projection lithography.

  5. Inorganic Chlorine Partitioning in the Summer Lower Stratosphere: Modeled and Measured [ClONO2/HCl] During POLARIS

    NASA Technical Reports Server (NTRS)

    Voss, P. B.; Stimpfle, R. M.; Cohen, R. C.; Hanisco, T. F.; Bonne, G. P.; Perkins, K. K.; Lanzendorf, E. J.; Anderson, J. G.; Salawitch, R. J.

    2001-01-01

    We examine inorganic chlorine (Cly) partitioning in the summer lower stratosphere using in situ ER-2 aircraft observations made during the Photochemistry of Ozone Loss in the Arctic Region in Summer (POLARIS) campaign. New steady state and numerical models estimate [ClONO2]/[HCl] using currently accepted photochemistry. These models are tightly constrained by observations with OH (parameterized as a function of solar zenith angle) substituting for modeled HO2 chemistry. We find that inorganic chlorine photochemistry alone overestimates observed [ClONO2]/[HCl] by approximately 55-60% at mid and high latitudes. On the basis of POLARIS studies of the inorganic chlorine budget, [ClO]/[ClONO2], and an intercomparison with balloon observations, the most direct explanation for the model-measurement discrepancy in Cly partitioning is an error in the reactions, rate constants, and measured species concentrations linking HCl and ClO (simulated [ClO]/[HCl] too high) in combination with a possible systematic error in the ER-2 ClONO2 measurement (too low). The high precision of our simulation (+/-15% 1-sigma for [ClONO2]/[HCl], which is compared with observations) increases confidence in the observations, photolysis calculations, and laboratory rate constants. These results, along with other findings, should lead to improvements in both the accuracy and precision of stratospheric photochemical models.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Shisuo; Lockamy, Virginia; Zhou, Lin

    Purpose: To implement clinical stereotactic body radiation therapy (SBRT) using a small animal radiation research platform (SARRP) in a genetically engineered mouse model of lung cancer. Methods and Materials: A murine model of multinodular Kras-driven spontaneous lung tumors was used for this study. High-resolution cone beam computed tomography (CBCT) imaging was used to identify and target peripheral tumor nodules, whereas off-target lung nodules in the contralateral lung were used as a nonirradiated control. CBCT imaging helps localize tumors, facilitate high-precision irradiation, and monitor tumor growth. SBRT planning, prescription dose, and dose limits to normal tissue followed the guidelines set by RTOG protocols. Pathologic changes in the irradiated tumors were investigated using immunohistochemistry. Results: The image guided radiation delivery using the SARRP system effectively localized and treated lung cancer with precision in a genetically engineered mouse model of lung cancer. Immunohistochemical data confirmed the precise delivery of SBRT to the targeted lung nodules. The 60 Gy delivered in 3 weekly fractions markedly reduced the proliferation index, Ki-67, and increased apoptosis per staining for cleaved caspase-3 in irradiated lung nodules. Conclusions: It is feasible to use the SARRP platform to perform dosimetric planning and delivery of SBRT in mice with lung cancer. This allows for preclinical studies that provide a rationale for clinical trials involving SBRT, especially when combined with immunotherapeutics.

  7. One novel type of miniaturization FBG rotation angle sensor with high measurement precision and temperature self-compensation

    NASA Astrophysics Data System (ADS)

    Jiang, Shanchao; Wang, Jing; Sui, Qingmei

    2018-03-01

    In order to achieve rotation angle measurement, a novel type of miniaturized fiber Bragg grating (FBG) rotation angle sensor with high measurement precision and temperature self-compensation is proposed and studied in this paper. The FBG rotation angle sensor mainly contains two core sensing elements (FBG1 and FBG2), a triangular cantilever beam, and a rotation angle transfer element. In theory, the proposed sensor achieves temperature self-compensation through the complementary response of the two core sensing elements (FBG1 and FBG2), and it has a boundless angle measurement range with a 2π rad period due to the function of the rotation angle transfer element. After introducing the joint working processes, the theoretical calculation model of the FBG rotation angle sensor is established, and a calibration experiment on a prototype is carried out to obtain its measurement performance. The experimental data show that the measurement precision of the FBG rotation angle sensor prototype is 0.2° with excellent linearity, and the temperature sensitivities of FBG1 and FBG2 are 10 pm/°C and 10.1 pm/°C, respectively. All these experimental results confirm that the FBG rotation angle sensor can achieve large-range angle measurement with high precision and temperature self-compensation.
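
    A minimal sketch of the temperature self-compensation principle described above: the two gratings respond to rotation with opposite signs but share nearly the same temperature sensitivity, so the differential wavelength shift cancels the thermal term. The angle sensitivity K_ANGLE is an illustrative assumption; only the roughly 10 pm/°C temperature sensitivities echo the abstract.

      # Hedged sketch of differential FBG demodulation with temperature
      # self-compensation. K_ANGLE is an assumed placeholder; the temperature
      # sensitivities loosely follow the values reported in the abstract.
      K_ANGLE = 50.0  # pm per degree of rotation (assumed, equal magnitude on both FBGs)
      K_T1 = 10.0     # pm/degC, FBG1 temperature sensitivity
      K_T2 = 10.1     # pm/degC, FBG2 temperature sensitivity

      def rotation_angle(d_lambda1_pm, d_lambda2_pm):
          """Estimate the rotation angle (deg) from the two Bragg wavelength shifts.

          FBG1 stretches (+K_ANGLE) while FBG2 relaxes (-K_ANGLE), so subtracting
          the shifts doubles the angle signal and cancels the common thermal shift.
          """
          differential = d_lambda1_pm - d_lambda2_pm
          return differential / (2.0 * K_ANGLE)

      # Example: a 5 deg rotation combined with a 3 degC temperature drift
      dT = 3.0
      shift1 = K_ANGLE * 5.0 + K_T1 * dT
      shift2 = -K_ANGLE * 5.0 + K_T2 * dT
      print(rotation_angle(shift1, shift2))  # ~5.0 deg; tiny residual from the K_T mismatch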

  8. Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics

    PubMed Central

    2015-01-01

    We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
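
    A minimal sketch of the indirect (van't Hoff) route mentioned above: the binding enthalpy is recovered from the temperature dependence of the binding free energy. The input free energies are invented for illustration; only the relations ln K = -ΔG/(RT) and d(ln K)/d(1/T) = -ΔH/R are standard thermodynamics.

      # Hedged sketch of the indirect (van't Hoff) estimate of binding enthalpy:
      # ln K = -dG / (R T), and d(ln K)/d(1/T) = -dH / R, so a finite difference
      # over two temperatures gives dH. The input numbers are invented.
      R = 0.0019872  # kcal/(mol K)

      def vant_hoff_dH(T1, dG1, T2, dG2):
          """Finite-difference van't Hoff estimate of binding enthalpy (kcal/mol)."""
          lnK1 = -dG1 / (R * T1)
          lnK2 = -dG2 / (R * T2)
          return -R * (lnK2 - lnK1) / (1.0 / T2 - 1.0 / T1)

      # Illustrative binding free energies (kcal/mol) at 288 K and 308 K
      print(vant_hoff_dH(288.0, -10.2, 308.0, -9.4))  # ~ -21.7 kcal/mol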

  9. A classification model of Hyperion image base on SAM combined decision tree

    NASA Astrophysics Data System (ADS)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases the hypothesis space grows exponentially, which makes classification performance highly unreliable. Classification of hyperspectral images is therefore challenging for traditional algorithms, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how reasonable that threshold is. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It chooses the appropriate SAM threshold automatically and improves the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, improving the classification precision. Compared with likelihood classification validated by field survey data, the classification precision of this model is 9.9% higher.
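
    A minimal sketch of the Spectral Angle Mapper similarity measure on which the model is built; the spectra below are toy 4-band vectors, and the threshold-based assignment noted in the comments is exactly the step the paper automates with a decision tree.

      # Minimal sketch of the Spectral Angle Mapper (SAM) measure: the angle
      # between a pixel spectrum and a reference spectrum, each treated as a
      # vector with one component per band. Spectra below are toy data.
      import numpy as np

      def spectral_angle(pixel, reference):
          """Angle (radians) between two spectra; smaller means more similar."""
          cos_theta = np.dot(pixel, reference) / (
              np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos_theta, -1.0, 1.0))

      # A pixel is assigned to the class with the smallest angle, provided the
      # angle is below a threshold -- the quantity the paper picks automatically.
      pixel = np.array([0.12, 0.15, 0.30, 0.45])  # toy 4-band spectrum
      refs = {"vegetation": np.array([0.10, 0.14, 0.32, 0.48]),
              "soil": np.array([0.25, 0.28, 0.30, 0.33])}
      print({name: float(spectral_angle(pixel, r)) for name, r in refs.items()})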

  10. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed too (low precision). This aggravates 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roofs, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics, by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
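
    A minimal sketch of the per-point binary inlier/outlier classification described above, using scikit-learn's standard Random Forest. The eight-dimensional feature array and the labels are random placeholders; the paper's actual geometric and semantic point-cloud features differ.

      # Hedged sketch of per-point binary inlier/outlier classification with a
      # standard Random Forest. Features and labels are random placeholders;
      # real training would pair point-cloud features with reference labels.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(1000, 8))     # placeholder per-point features
      y_train = rng.integers(0, 2, size=1000)  # 1 = inlier, 0 = outlier

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_train, y_train)

      X_new = rng.normal(size=(5, 8))
      print(clf.predict(X_new))  # keep points predicted 1, drop those predicted 0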

  11. A Biologically Realistic Cortical Model of Eye Movement Control in Reading

    ERIC Educational Resources Information Center

    Heinzle, Jakob; Hepp, Klaus; Martin, Kevan A. C.

    2010-01-01

    Reading is a highly complex task involving a precise integration of vision, attention, saccadic eye movements, and high-level language processing. Although there is a long history of psychological research in reading, it is only recently that imaging studies have identified some neural correlates of reading. Thus, the underlying neural mechanisms…

  12. Uncertainty in elevation data and sensitivity to a sea-level rise estuary habitat model: Costs and benefits of high precision

    EPA Science Inventory

    Understanding the threats of sea-level rise (SLR) to ecosystem services in key estuarine habitats is a high priority for Pacific Northwest coastal biologists, fish and wildlife managers, and shellfish growers. The utility of decision support systems for understanding potential e...

  13. High precision wavefront control in point spread function engineering for single emitter localization

    NASA Astrophysics Data System (ADS)

    Siemons, M.; Hulleman, C. N.; Thorsen, R. Ø.; Smith, C. S.; Stallinga, S.

    2018-04-01

    Point Spread Function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 mλ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 μm (x/y) and 100 μm (z) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead z-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+λ' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in x, y, and λ, and below 20 nm in z over an axial range of 1 μm with 2000 signal photons and 12 background photons.

  14. Height Connections and Land Uplift Rates in West-Estonian Archipelago

    NASA Astrophysics Data System (ADS)

    Jürgenson, H.; Liibusk, A.; Kall, T.

    2012-04-01

    Land uplift rates are largest in the western part of Estonia. The uplift is due to post-glacial rebound. In 2001-2011, the Estonian national high-precision levelling network was completely renewed and levelled. This was the third precise levelling campaign in the region. The first one had taken place before the Second World War and the second one in the 1950s. The Estonian mainland was connected with the two largest islands (Saaremaa and Hiiumaa) in the west-Estonian archipelago using the water level monitoring (hydrodynamic levelling) method. Three pairs of automatic tide gauges were installed on opposite coasts of each waterway. The tide gauges were equipped with piezoresistive pressure sensors. This represented the first use of such equipment in Estonia. The hydrodynamic levelling series span up to two calendar years. Nevertheless, the obtained hydrodynamic levelling results need to be additionally verified using alternative geodetic methods. The obtained results were compared with the previous high-precision levelling data from the 1960s and 1970s. In addition, the new Estonian gravimetric geoid model and a GPS survey were used for GPS-levelling. All three methods were analyzed, and the preliminary results coincided within a 1-2 cm margin. Additionally, the tide gauges on the mainland and on both islands were connected using high-precision levelling. In this manner, three hydrodynamic and three digital levelling height differences formed a closed loop with a length of 250 km. The closing error of the loop was less than 1 cm. Finally, the Fennoscandian post-glacial rebound was determined from repeated levelling as well as from a repeated GPS survey. The time span between the two campaigns of the first-order GPS survey was almost 13 years. According to the new calculations, the relative land uplift rates within the study area reached up to +2 mm/year. This is an area with a relatively small amount of input data for the Nordic models. In addition, a comparison with the Fennoscandian land uplift model NKG2005LU is presented. The results coincided with this model within a 1-mm range. Keywords: hydrodynamic levelling, post-glacial land uplift, GPS-levelling, West-Estonian archipelago.
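
    A toy illustration of the loop-closure check reported above: the six height differences around the closed loop should sum to zero, and the residual is the misclosure. The leg values are invented; only the less-than-1-cm tolerance echoes the text.

      # Toy loop-closure check: height differences around a closed levelling
      # loop should sum to zero; the residual is the misclosure. Leg values
      # are invented; only the <1 cm tolerance echoes the record above.
      diffs_m = [1.234, -0.567, 0.890, -1.230, 0.560, -0.883]  # six loop legs
      misclosure = sum(diffs_m)
      print(f"misclosure = {misclosure * 100:.1f} cm")  # 0.4 cm here
      assert abs(misclosure) < 0.01  # paper reports a closing error below 1 cm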

  15. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  16. A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations.

    PubMed

    Tu, Rui; Zhang, Rui; Lu, Cuixian; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2017-03-03

    In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations is proposed. With this model, both Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) services can be realized by applying different corrections at the user end. The algorithm was assessed and validated with BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, a fast and high-precision RTK service can be achieved using the regional observation corrections, with a convergence time of a few seconds and a precision of about 2-3 cm. For users outside the regional reference network, the globally broadcast State-Space Representation (SSR) corrections can be utilized to realize a global PPP service, which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. This unified model not only integrates Network RTK (NRTK) and PPP into a seamless positioning service, but also recovers the ionospheric Vertical Total Electron Content (VTEC) and Differential Code Bias (DCB) values that are useful for ionosphere monitoring and modeling.

  17. A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations

    PubMed Central

    Tu, Rui; Zhang, Rui; Lu, Cuixian; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2017-01-01

    In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations is proposed. With this model, both Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) services can be realized by applying different corrections at the user end. The algorithm was assessed and validated with BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, a fast and high-precision RTK service can be achieved using the regional observation corrections, with a convergence time of a few seconds and a precision of about 2–3 cm. For users outside the regional reference network, the globally broadcast State-Space Representation (SSR) corrections can be utilized to realize a global PPP service, which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. This unified model not only integrates Network RTK (NRTK) and PPP into a seamless positioning service, but also recovers the ionospheric Vertical Total Electron Content (VTEC) and Differential Code Bias (DCB) values that are useful for ionosphere monitoring and modeling. PMID:28273814

  18. Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation

    PubMed Central

    Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.

    2013-01-01

    Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
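
    A minimal sketch of an explicit single-diode I-V evaluation in the spirit of the approximate model described above: neglecting the series-resistance term inside the exponential makes the current an explicit function of voltage, so no iteration is needed. All parameter values are illustrative, not the paper's.

      # Hedged sketch of an explicit single-diode I-V evaluation: dropping the
      # series-resistance term inside the exponential makes I an explicit
      # function of V (no iteration). All parameters are illustrative.
      import math

      def pv_current(v, i_ph=8.2, i_0=1e-7, n=1.3, n_cells=60, r_sh=300.0, t_k=298.15):
          """Approximate module current (A) at terminal voltage v (V)."""
          vt = 1.380649e-23 * t_k / 1.602176634e-19   # thermal voltage of one cell
          return i_ph - i_0 * (math.exp(v / (n * n_cells * vt)) - 1.0) - v / r_sh

      for v in (0.0, 15.0, 30.0, 36.0):
          print(v, round(pv_current(v), 3))  # current falls off near open circuit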

  19. Refine of Regional Ocean Tide Model Using GPS Data

    NASA Astrophysics Data System (ADS)

    Wang, F.; Zhang, P.; Sun, Z.; Jiang, Z.; Zhang, Q.

    2018-04-01

    Due to the lack of regional data constraints, global ocean tide models are not accurate enough in the offshore areas around China, and the displacements predicted by different models are not consistent. Ocean tide loading effects have therefore become a major source of error in high-precision GPS positioning, and it is important for high-precision GPS applications to build an appropriate regional ocean tide model. We first process the observation data of four offshore GPS tracking stations located in Guangdong province, China, using the PPP approach to obtain position time series. We then use the spectral inversion method to estimate the ocean tide loading parameters. We obtain estimates not only for the 12-hour-period tidal waves (M2, S2, N2, K2) but also for the 24-hour-period tidal waves (O1, K1, P1, Q1), which previous studies had not obtained. A comparison shows that the GPS estimates of M2 and K1 are consistent with the results of five well-known global ocean tide loading models, whereas the estimates of S2, N2, K2, O1, P1, and Q1 are noticeably larger.
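
    A minimal sketch of extracting one tidal constituent (here M2) from a displacement time series by least squares, a common alternative to the spectral inversion used in the paper; the series below is synthetic.

      # Hedged sketch: least-squares estimation of one tidal constituent (M2)
      # from a displacement time series. The series below is synthetic.
      import numpy as np

      M2_PERIOD_H = 12.4206012                     # M2 period in hours
      t = np.arange(0.0, 24 * 30, 1.0)             # 30 days of hourly epochs
      w = 2.0 * np.pi / M2_PERIOD_H
      truth = 4.0 * np.cos(w * t) + 2.5 * np.sin(w * t)             # mm
      obs = truth + np.random.default_rng(1).normal(0.0, 1.0, t.size)

      # Design matrix: in-phase term, quadrature term, and a constant offset
      A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
      coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
      amp = np.hypot(coef[0], coef[1])
      phase = np.degrees(np.arctan2(coef[1], coef[0]))
      print(f"M2 amplitude ~ {amp:.2f} mm, phase ~ {phase:.1f} deg")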

  20. Modeling and Positioning of a PZT Precision Drive System.

    PubMed

    Liu, Che; Guo, Yanling

    2017-11-08

    Piezoelectric ceramic transducer (PZT) precision drive systems used in 3D printing face nonlinear positioning problems, such as hysteresis and creep, which have an extremely negative impact on the precision of laser focusing systems. To eliminate the impact of PZT nonlinearity during precision drive movement, mathematical modeling and theoretical analyses of each module comprising the system were carried out in this study; a micro-displacement measurement circuit based on a Position Sensitive Detector (PSD) was constructed, followed by the establishment of system closed-loop control and creep control models. An XL-80 laser interferometer (Renishaw, Wotton-under-Edge, UK) was used to measure the performance of the precision drive system, showing that the system modeling and control algorithms were correct and that the requirements for precision positioning of the drive system were satisfied.
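
    A minimal sketch of the logarithmic creep law commonly used for PZT stages, the kind of drift the paper's creep control model compensates; gamma and t0 are illustrative placeholders, not the paper's identified values.

      # Hedged sketch of the logarithmic creep law commonly used for PZT
      # stages (the paper builds its own creep control model; this is the
      # textbook form). gamma and t0 are illustrative placeholders.
      import math

      def pzt_creep(x0_um, t_s, gamma=0.02, t0_s=0.1):
          """Drift after a step input: x(t) = x0 * (1 + gamma * log10(t / t0))."""
          return x0_um * (1.0 + gamma * math.log10(t_s / t0_s))

      # Drift of a 10 um step over three decades of time
      for t in (0.1, 1.0, 10.0, 100.0):
          print(t, round(pzt_creep(10.0, t), 3))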

  1. Modeling and Positioning of a PZT Precision Drive System

    PubMed Central

    Liu, Che; Guo, Yanling

    2017-01-01

    Piezoelectric ceramic transducer (PZT) precision drive systems used in 3D printing face nonlinear positioning problems, such as hysteresis and creep, which have an extremely negative impact on the precision of laser focusing systems. To eliminate the impact of PZT nonlinearity during precision drive movement, mathematical modeling and theoretical analyses of each module comprising the system were carried out in this study; a micro-displacement measurement circuit based on a Position Sensitive Detector (PSD) was constructed, followed by the establishment of system closed-loop control and creep control models. An XL-80 laser interferometer (Renishaw, Wotton-under-Edge, UK) was used to measure the performance of the precision drive system, showing that the system modeling and control algorithms were correct and that the requirements for precision positioning of the drive system were satisfied. PMID:29117140

  2. Displacements Study of an Earth Fill Dam Based on High Precision Geodetic Monitoring and Numerical Modeling.

    PubMed

    Acosta, Luis Enrique; de Lacy, M Clara; Ramos, M Isabel; Cano, Juan Pedro; Herrera, Antonio Manuel; Avilés, Manuel; Gil, Antonio José

    2018-04-27

    The aim of this paper is to study the behavior of an earth fill dam, analyzing the deformations determined by high-precision geodetic techniques and those obtained by the Finite Element Method (FEM). A large number of control points were established around the area of the dam, and their displacements were measured over several periods. In this study, high-precision levelling and GNSS (Global Navigation Satellite System) techniques were used to monitor vertical and horizontal displacements, respectively. Seven surveys were carried out: February and July 2008, March and July 2013, August 2014, September 2015 and September 2016. Deformations were predicted taking into account the general characteristics of an earth fill dam. A comparative evaluation of predicted (FEM) and observed deformations shows average differences of 20 cm for vertical displacements and 6 cm for horizontal displacements at the crest. These differences are probably due to the simplifications assumed during the FEM modeling process: critical sections are considered homogeneous along their length, and the properties of the materials were established according to the general characteristics of an earth fill dam, taken from national regulations and similar studies in the country. The differences could also be due to the geodetic control points being anchored in the superficial layer of the slope when the construction of the dam was finished.

  3. The NANOGrav 11-year Data Set: High-precision Timing of 45 Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Arzoumanian, Zaven; Brazier, Adam; Burke-Spolaor, Sarah; Chamberlin, Sydney; Chatterjee, Shami; Christy, Brian; Cordes, James M.; Cornish, Neil J.; Crawford, Fronefield; Thankful Cromartie, H.; Crowter, Kathryn; DeCesar, Megan E.; Demorest, Paul B.; Dolch, Timothy; Ellis, Justin A.; Ferdman, Robert D.; Ferrara, Elizabeth C.; Fonseca, Emmanuel; Garver-Daniels, Nathan; Gentile, Peter A.; Halmrast, Daniel; Huerta, E. A.; Jenet, Fredrick A.; Jessup, Cody; Jones, Glenn; Jones, Megan L.; Kaplan, David L.; Lam, Michael T.; Lazio, T. Joseph W.; Levin, Lina; Lommen, Andrea; Lorimer, Duncan R.; Luo, Jing; Lynch, Ryan S.; Madison, Dustin; Matthews, Allison M.; McLaughlin, Maura A.; McWilliams, Sean T.; Mingarelli, Chiara; Ng, Cherry; Nice, David J.; Pennucci, Timothy T.; Ransom, Scott M.; Ray, Paul S.; Siemens, Xavier; Simon, Joseph; Spiewak, Renée; Stairs, Ingrid H.; Stinebring, Daniel R.; Stovall, Kevin; Swiggum, Joseph K.; Taylor, Stephen R.; Vallisneri, Michele; van Haasteren, Rutger; Vigeland, Sarah J.; Zhu, Weiwei; The NANOGrav Collaboration

    2018-04-01

    We present high-precision timing data over time spans of up to 11 years for 45 millisecond pulsars observed as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project, aimed at detecting and characterizing low-frequency gravitational waves. The pulsars were observed with the Arecibo Observatory and/or the Green Bank Telescope at frequencies ranging from 327 MHz to 2.3 GHz. Most pulsars were observed with approximately monthly cadence, and six high-timing-precision pulsars were observed weekly. All were observed at widely separated frequencies at each observing epoch in order to fit for time-variable dispersion delays. We describe our methods for data processing, time-of-arrival (TOA) calculation, and the implementation of a new, automated method for removing outlier TOAs. We fit a timing model for each pulsar that includes spin, astrometric, and (for binary pulsars) orbital parameters; time-variable dispersion delays; and parameters that quantify pulse-profile evolution with frequency. The timing solutions provide three new parallax measurements, two new Shapiro delay measurements, and two new measurements of significant orbital-period variations. We fit models that characterize sources of noise for each pulsar. We find that 11 pulsars show significant red noise, with generally smaller spectral indices than typically measured for non-recycled pulsars, possibly suggesting a different origin. A companion paper uses these data to constrain the strength of the gravitational-wave background.

  4. Straightforward and precise approach to replicate complex hierarchical structures from plant surfaces onto soft matter polymer

    PubMed Central

    Speck, Thomas; Bohn, Holger F.

    2018-01-01

    The surfaces of plant leaves are rarely smooth and often possess a species-specific micro- and/or nano-structuring. These structures usually influence the surface functionality of the leaves such as wettability, optical properties, friction and adhesion in insect–plant interactions. This work presents a simple, convenient, inexpensive and precise two-step micro-replication technique to transfer surface microstructures of plant leaves onto highly transparent soft polymer material. Leaves of three different plants with variable size (0.5–100 µm), shape and complexity (hierarchical levels) of their surface microstructures were selected as model bio-templates. A thermoset epoxy resin was used at ambient conditions to produce negative moulds directly from fresh plant leaves. An alkaline chemical treatment was established to remove the entirety of the leaf material from the cured negative epoxy mould when necessary, i.e. for highly complex hierarchical structures. Obtained moulds were filled up afterwards with low viscosity silicone elastomer (PDMS) to obtain positive surface replicas. Comparative scanning electron microscopy investigations (original plant leaves and replicated polymeric surfaces) reveal the high precision and versatility of this replication technique. This technique has promising future application for the development of bioinspired functional surfaces. Additionally, the fabricated polymer replicas provide a model to systematically investigate the structural key points of surface functionalities. PMID:29765666

  5. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for a binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method is carried out by taking several photos, in different orientations, of a specially designed calibration template that has diverse encoded points. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, based on a camera model that includes radial and tangential lens distortion. A reference coordinate system based on the left camera coordinates is created to optimize the intrinsic parameters of the left camera through alternative bundle adjustment. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment with a reference coordinate system based on the right camera coordinates. All the acquired intrinsic parameters are then used to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters are obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with a standard deviation of 0.05 pixels. The real-data result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
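
    A minimal sketch of the radial plus tangential lens distortion model named in the abstract, in the usual Brown-Conrady form applied to normalized image coordinates; the coefficient values are illustrative, not the paper's calibration results.

      # Hedged sketch of the Brown-Conrady radial + tangential distortion model
      # applied to normalized image coordinates. Coefficients are illustrative.
      def distort(x, y, k1, k2, p1, p2):
          """Map undistorted normalized coordinates (x, y) to distorted ones."""
          r2 = x * x + y * y
          radial = 1.0 + k1 * r2 + k2 * r2 * r2                            # radial term
          x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)    # + tangential
          y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
          return x_d, y_d

      print(distort(0.1, -0.05, k1=-0.28, k2=0.07, p1=1e-4, p2=-2e-4))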

  6. Variational calculation of second-order reduced density matrices by strong N-representability conditions and an accurate semidefinite programming solver.

    PubMed

    Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji

    2008-04-28

    The reduced density matrix (RDM) method, a variational calculation based on the second-order reduced density matrix, is applied to the ground state energies and dipole moments of 57 different states of atoms and molecules, and to the ground state energies and 2-RDM elements of the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2' conditions. The T2' condition was recently rederived, and it implies the T2 condition. Using these N-representability conditions, we usually recover between 100% and 101% of the correlation energy, an accuracy similar to CCSD(T) and even better for high-spin states or anion systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or developing multiple precision arithmetic in the semidefinite programming (SDP) solver. Results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree. Additionally, replacing the T2 condition with the T2' condition typically yields improvements of 0.1-0.5 mhartree. The newly developed multiple-precision-arithmetic version of the SDP solver calculates extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom. It gives at least 16 significant digits for energies, whereas double-precision calculations give only two to eight digits. It also provides physically meaningful results for the Hubbard model in the high correlation limit.

  7. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4 × 0.4 × 2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation.

  8. Calibration of the Late Cretaceous to Paleocene geomagnetic polarity and astrochronological time scales: new results from high-precision U-Pb geochronology

    NASA Astrophysics Data System (ADS)

    Ramezani, Jahandar; Clyde, William; Wang, Tiantian; Johnson, Kirk; Bowring, Samuel

    2016-04-01

    Reversals of the Earth's magnetic polarity are geologically abrupt events of global magnitude, which makes them ideal timelines for stratigraphic correlation across a variety of depositional environments, especially where diagnostic marine fossils are absent. Accurate and precise calibration of the Geomagnetic Polarity Timescale (GPTS) is thus essential to the reconstruction of Earth history and to resolving the mode and tempo of biotic and environmental change in deep time. The Late Cretaceous - Paleocene GPTS is of particular interest as it encompasses a critical period of Earth history marked by the Cretaceous greenhouse climate, the peak of dinosaur diversity, the end-Cretaceous mass extinction and its paleoecological aftermath. Absolute calibration of the GPTS has traditionally been based on sea-floor spreading magnetic anomaly profiles combined with local magnetostratigraphic sequences for which a numerical age model could be established by interpolation between an often limited number of 40Ar/39Ar dates from intercalated volcanic ash deposits. Although the Neogene part of the GPTS has been adequately calibrated using cyclostratigraphy-based astrochronological schemes, the application of these approaches to pre-Neogene parts of the timescale has been complicated by the uncertainties of the orbital models and the chaotic behavior of the solar system this far back in time. Here we present refined chronostratigraphic frameworks based on high-precision U-Pb geochronology of ash beds from the Western Interior Basin of North America and the Songliao Basin of Northeast China that place tight temporal constraints on the Late Cretaceous to Paleocene GPTS, either directly or by testing their astrochronological underpinnings. Further application of high-precision radioisotope geochronology and calibrated astrochronology promises a complete and robust Cretaceous-Paleogene GPTS, entirely independent of sea-floor magnetic anomaly profiles.

  9. Predicting Document Retrieval System Performance: An Expected Precision Measure.

    ERIC Educational Resources Information Center

    Losee, Robert M., Jr.

    1987-01-01

    Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…

  10. Quantum metrology and estimation of Unruh effect

    PubMed Central

    Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng

    2014-01-01

    We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe-state preparation process. We show that the probe-state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information, and correspondingly set different ultimate limits on the precision of the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, there is a range of detector energy gaps that provides better precision. Thus we may adjust those parameters to attain higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772

  11. Superallowed Fermi β-Decay Studies with SCEPTAR and the 8π Gamma-Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Koopmans, K. A.

    2005-04-01

    The 8π Gamma-Ray Spectrometer, operating at TRIUMF in Vancouver Canada, is a high-precision instrument for detecting the decay radiations from exotic nuclei. In 2003, a new beta-scintillating array called SCEPTAR was installed within the 8π Spectrometer. With these two systems, precise measurements of half-lives and branching ratios can be made, specifically on certain nuclei which exhibit Superallowed Fermi 0+ → 0+ β-decay. These data can be used to determine the value of δC, an isospin symmetry-breaking (Coulomb) correction factor to good precision. As this correction factor is currently one of the leading sources of error in the unitarity test of the CKM matrix, a precise determination of its value could help to eliminate any possible "trivial" explanation of the seeming departure of current experimental data from Standard Model predictions.

  12. -Omic and Electronic Health Records Big Data Analytics for Precision Medicine

    PubMed Central

    Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D.; Venugopalan, Janani; Hoffman, Ryan; Wang, May D.

    2017-01-01

    Objective Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion Big data analytics is able to address -omic and EHR data challenges for a paradigm shift towards precision medicine. Significance Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has a long-lasting societal impact. PMID:27740470

  13. Motion and gravity effects in the precision of quantum clocks.

    PubMed

    Lindkvist, Joel; Sabín, Carlos; Johansson, Göran; Fuentes, Ivette

    2015-05-19

    We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions.

  14. 3He(α, γ)7Be cross section in a wide energy range

    NASA Astrophysics Data System (ADS)

    Szücs, Tamás; Gyürky, György; Halász, Zoltán; Kiss, Gábor Gy.; Fülöp, Zsolt

    2018-01-01

    The rate of the 3He(α,γ)7Be reaction is important both in Big Bang Nucleosynthesis (BBN) and in solar hydrogen burning. There have been many experimental and theoretical efforts to determine this reaction rate with high precision. Some long-standing issues have been resolved by the more precise investigations, such as the different S(0) values predicted by activation and in-beam measurements. However, recent, more detailed astrophysical model predictions require the reaction rate at even higher precision to unravel new issues such as the solar composition. One way to increase the precision is to provide a comprehensive dataset over a wide energy range, extending the experimental cross-section database of this reaction. This paper presents a new cross-section measurement between Ecm = 2.5 and 4.4 MeV, an energy range that extends above the 7Be proton separation threshold.

  15. Motion and gravity effects in the precision of quantum clocks

    PubMed Central

    Lindkvist, Joel; Sabín, Carlos; Johansson, Göran; Fuentes, Ivette

    2015-01-01

    We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions. PMID:25988238

  16. Development and Preliminary Testing of a High Precision Long Stroke Slit Change Mechanism for the SPICE Instrument

    NASA Technical Reports Server (NTRS)

    Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc

    2014-01-01

    In the frame of ESA's Solar Orbiter scientific mission, Almatech has been selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. In order to guarantee the optical cleanliness level while fulfilling stringent positioning accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement has been selected. The four different slits to be used for the SPICE instrument result in a total stroke of 16.5 mm in this linear slit-changer arrangement. The combination of long stroke and high-precision positioning requirements has been identified as the main design challenge, to be validated through breadboard-model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results have demonstrated the full adequacy of the flexible blade guiding system implemented in SPICE's Slit Change Mechanism in a stand-alone configuration. Further breadboard test results, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, have shown significant enhancements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including a detailed tolerance analysis, has shown the suitability of this satellite-roller-screw-based mechanism for the actuation of the tested flexible guiding system and compliant connection. The presented development and preliminary testing of the high-precision long-stroke Slit Change Mechanism for the SPICE instrument are considered fully successful, such that future tests considering the full Slit Change Mechanism can be performed, with the gained confidence, directly on a Qualification Model. The selected linear Slit Change Mechanism design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other special applications where contamination and high-precision positioning are dominant design drivers.

  17. Suitability of temperature sum models to simulate the flowering period of birches on regional scale as basis for realistic predictions of the allergenic potential of atmospheric pollen loads

    NASA Astrophysics Data System (ADS)

    Biernath, Christian; Hauck, Julia; Klein, Christian; Thieme, Christoph; Heinlein, Florian; Priesack, Eckart

    2014-05-01

    Persons susceptible to allergenic pollen need to take suppressive medication before the first allergy symptoms occur. Patient-targeted medication could be improved if forecasts of the allergenic potential of pollen (the biochemical composition of the pollen grain) and of the onset, duration, and end of the pollen season were precise on a regional scale. In plant tissue the biochemical composition may change within hours due to the resource availability for plant growth and plant-internal nutrient re-mobilization. As these processes depend strongly on both the environmental conditions and the development stage of a plant, precise simulations of the onset and duration of the flowering period are crucial to determine the allergenic potential of tissues and pollen. Here, dynamic plant models that consider the dependence of the chemical composition of tissue on the development stage of the plant, embedded in process-based ecosystem models, seem promising tools; however, dynamic plant growth is today widely ignored in simulations of atmospheric pollen loads. In this study we raise the question whether frequently applied temperature sum models (TSM) can precisely simulate the development stages of birches on a regional scale. These TSM integrate average temperatures above a base temperature below which no further plant development is assumed. We tested the ability of TSM to simulate the flowering period of birches at more than 100 sites in Bavaria, Germany over a period of three years (2010-2012). Our simulations indicate that the base temperatures of 2.3°C and 3.5°C often applied in Europe for the integration of daily or hourly average temperatures, respectively, are too high to adequately simulate the onset of birch flowering in Bavaria, where a base temperature of 1°C seems more suitable. A more regional calibration of the models to sub-regions of Bavaria with comparable climatic conditions could further improve the simulation results compared to simulations using a model adjusted to only one representative location in Bavaria. Our simulation results suggest that birch phenology needs to be modelled on a more regional scale to derive precise predictions of the flowering period. Some weak simulation results are suspected to be due to the high genetic diversity of birches and their high adaptive potential to a wide range of environmental conditions, which is indeed characteristic of many pioneer species. The high adaptive potential could explain why authors who calibrate their models to other climatic regions obtain better simulation results using higher base temperatures. However, our simulations indicate that the simulation results may be biased if the base temperature is assumed constant for one species and transferred to larger or smaller scales, to other regions with different climatic conditions, or when applied to extrapolate birch pollen seasons to future climate conditions.
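
    A minimal sketch of a temperature sum model of the kind tested in the study: daily mean temperatures above a base temperature are accumulated, and flowering is predicted once the sum crosses a threshold. The 1°C base echoes the abstract's finding for Bavaria, while the threshold and temperatures are invented.

      # Hedged sketch of a temperature sum model (TSM): accumulate daily mean
      # temperatures above a base temperature and predict flowering once the
      # sum crosses a threshold. The 1 degC base echoes the abstract; the
      # threshold and the temperature series are invented.
      def flowering_day(daily_mean_temps, t_base=1.0, threshold=70.0):
          """Return the first day index at which the temperature sum exceeds threshold."""
          total = 0.0
          for day, t in enumerate(daily_mean_temps):
              total += max(t - t_base, 0.0)   # only warmth above the base counts
              if total >= threshold:
                  return day
          return None                          # threshold never reached

      temps = [0.5, 2.0, 4.5, 6.0, 3.0, 8.0, 9.5, 11.0, 12.0, 10.5] * 5
      print(flowering_day(temps))  # -> 15 with these toy inputs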

  18. A highly precise frequency-based method for estimating the tension of an inclined cable with unknown boundary conditions

    NASA Astrophysics Data System (ADS)

    Ma, Lin

    2017-11-01

    This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
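
    A zeroth-order reference point for the method described above: for an ideal taut string (no sag, no flexural stiffness, pinned ends), the tension follows directly from a measured natural frequency. The paper's contribution is precisely to correct this textbook estimate for sag-extensibility, flexural stiffness, inclination, and end restraint; the numbers below are illustrative.

      # Zeroth-order taut-string estimate that the paper refines: for an ideal
      # string, f_n = (n / (2 L)) * sqrt(T / m), hence T = 4 m L^2 (f_n / n)^2.
      # Input values are illustrative.
      def taut_string_tension(m_kg_per_m, length_m, f_hz, mode=1):
          """Tension (N) from the n-th measured natural frequency."""
          return 4.0 * m_kg_per_m * length_m**2 * (f_hz / mode)**2

      # Illustrative cable: 60 kg/m, 100 m long, first-mode frequency 1.1 Hz
      print(taut_string_tension(60.0, 100.0, 1.1))  # ~2.9e6 N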

  19. Efficient generation of mouse models of human diseases via ABE- and BE-mediated base editing.

    PubMed

    Liu, Zhen; Lu, Zongyang; Yang, Guang; Huang, Shisheng; Li, Guanglei; Feng, Songjie; Liu, Yajing; Li, Jianan; Yu, Wenxia; Zhang, Yu; Chen, Jia; Sun, Qiang; Huang, Xingxu

    2018-06-14

    A recently developed adenine base editor (ABE) efficiently converts A to G and is potentially useful for clinical applications. However, its precision and efficiency in vivo remain to be addressed. Here we achieve A-to-G conversion in vivo at frequencies up to 100% by microinjection of ABE mRNA together with sgRNAs. We then generate mouse models harboring clinically relevant mutations at Ar and Hoxd13, which recapitulate the respective clinical defects. Furthermore, we achieve both C-to-T and A-to-G base editing by using a combination of ABE and SaBE3, thus creating a mouse model harboring multiple mutations. We also demonstrate the specificity of ABE by deep sequencing and whole-genome sequencing (WGS). Taken together, ABE is highly efficient and precise in vivo, making it feasible to model and potentially cure relevant genetic diseases.

  20. Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions

    NASA Astrophysics Data System (ADS)

    Robertson, Robert V.

    Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight passes through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirror past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s² precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the influence of various input parameters and features of the SOLAARS model on the results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study prove that penumbra SRP is not an explanation for or significant contributor to the Earth flyby anomaly.

  1. From descriptive to predictive distribution models: a working example with Iberian amphibians and reptiles.

    PubMed

    Arntzen, J W

    2006-05-04

    The aim of the study was to identify the conditions under which spatial-environmental models can be used for an improved understanding of species distributions, under the explicit criterion of model predictive performance. I constructed distribution models for 17 amphibian and 21 reptile species in Portugal from atlas data and 13 selected ecological variables with stepwise logistic regression and a geographic information system. Models constructed for Portugal were extrapolated over Spain and tested against range maps and atlas data. Descriptive model precision ranged from 'fair' to 'very good' for 12 species showing a range border inside Portugal ('edge species', kappa (k) 0.35-0.89, average 0.57) and was at best 'moderate' for 26 species with a countrywide Portuguese distribution ('non-edge species', k = 0.03-0.54, average 0.29). The accuracy of the prediction for Spain was significantly related to the precision of the descriptive model for the group of edge species, but not for the countrywide species. In the latter group the data were consistently better captured by the single variable search-effort than by the panel of environmental variables. Atlas data in presence-absence format are often inadequate for modelling the distribution of species if the area considered does not include part of the range border. Conversely, distribution models for edge species, especially those displaying high precision, may help in the correct identification of parameters underlying the species range and assist with the informed choice of conservation measures.
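
    A minimal sketch of the agreement statistic used above: Cohen's kappa between observed and predicted presence/absence, here computed with scikit-learn on toy data rather than the paper's atlas records.

      # Minimal sketch: Cohen's kappa between observed and predicted
      # presence/absence, the precision statistic reported in the abstract.
      # The two arrays are toy data, not the paper's atlas records.
      from sklearn.metrics import cohen_kappa_score

      observed  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # atlas presence/absence per grid cell
      predicted = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]  # model output per grid cell

      print(cohen_kappa_score(observed, predicted))  # ~0.58; 1 = perfect, 0 = chance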

  2. Penning trap mass spectrometry Q-value determinations for highly forbidden β-decays

    NASA Astrophysics Data System (ADS)

    Sandler, Rachel; Bollen, Georg; Eibach, Martin; Gamage, Nadeesha; Gulyuz, Kerim; Hamaker, Alec; Izzo, Chris; Kandegedara, Rathnayake; Redshaw, Matt; Ringle, Ryan; Valverde, Adrian; Yandow, Isaac; Low Energy Beam Ion Trap Team

    2017-09-01

    Over the last several decades, extremely sensitive, ultra-low background beta and gamma detection techniques have been developed. These techniques have enabled the observation of very rare processes, such as the highly forbidden beta decays of 113Cd, 50V, and 138La. Half-life measurements of highly forbidden beta decays provide a testing ground for theoretical nuclear models, and the comparison of calculated and measured energy spectra could enable a determination of the values of the weak coupling constants. Precision Q-value measurements also allow for systematic tests of the beta-particle detection techniques. We will present the results and current status of Q-value determinations for highly forbidden beta decays. The Q values, i.e., the mass differences between parent and daughter nuclides, are measured using the high precision Penning trap mass spectrometer LEBIT at the National Superconducting Cyclotron Laboratory.
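
    For context, the textbook relations behind a Penning trap Q-value determination (standard physics, not LEBIT-specific details): the trap measures cyclotron frequencies, whose ratio gives the parent-to-daughter mass ratio, and the Q value is the mass difference,

      \nu_c = \frac{qB}{2\pi m}, \qquad
      \frac{m_\mathrm{parent}}{m_\mathrm{daughter}} = \frac{\nu_{c,\mathrm{daughter}}}{\nu_{c,\mathrm{parent}}}, \qquad
      Q = \left(m_\mathrm{parent} - m_\mathrm{daughter}\right) c^2 .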

  3. Target Discovery for Precision Medicine Using High-Throughput Genome Engineering.

    PubMed

    Guo, Xinyi; Chitale, Poonam; Sanjana, Neville E

    2017-01-01

    Over the past few years, programmable RNA-guided nucleases such as the CRISPR/Cas9 system have ushered in a new era of precision genome editing in diverse model systems and in human cells. Functional screens using large libraries of RNA guides can interrogate a large hypothesis space to pinpoint particular genes and genetic elements involved in fundamental biological processes and disease-relevant phenotypes. Here, we review recent high-throughput CRISPR screens (e.g. loss-of-function, gain-of-function, and screens targeting noncoding elements) and highlight their potential for uncovering novel therapeutic targets, such as those involved in cancer resistance to small-molecule drugs and immunotherapies, tumor evolution, infectious disease, inborn genetic disorders, and other therapeutic challenges.

  4. Molecular diagnosis and precision medicine in allergy management.

    PubMed

    Riccio, Anna Maria; De Ferrari, Laura; Chiappori, Alessandra; Ledda, Sabina; Passalacqua, Giovanni; Melioli, Giovanni; Canonica, Giorgio Walter

    2016-11-01

    Precision medicine (PM) can be defined as a structural model aimed at customizing healthcare, with medical decisions and products tailored to the individual patient at a highly detailed level. In this sense, allergy diagnostics based on molecular allergen components allows the patient's IgE repertoire to be defined accurately. The availability of highly specialized singleplexed and multiplexed platforms provides allergists with an advanced diagnostic armamentarium. The therapeutic intervention, driven by the standard diagnostic approach but further supported by these innovative tools, may result, for instance, in a more appropriate prescription of allergen immunotherapy (AIT). Also, the phenotyping of patients, which may have relevant effects on the treatment strategy, could benefit from molecular allergy diagnosis.

  5. Wave processes in the human cardiovascular system: The measuring complex, computing models, and diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.

    2017-03-01

    A description is given of an integrated approach to the investigation of nonlinear wave processes in the human cardiovascular system, based on a combination of high-precision methods for measuring the pulse wave, mathematical methods for processing the empirical data, and direct numerical modeling of hemodynamic processes in an arterial tree.

  6. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
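
    A minimal illustration of the Class Two situation (the function and truncation are invented for illustration, not taken from the paper): when the objective is only good to a few significant digits, finite-difference derivatives are dominated by rounding, which is why simple direct-search codes can outperform gradient-based ones.

      import numpy as np

      def rosenbrock(x):
          return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

      def class_two(x, digits=3):
          # simulate an analysis code returning only ~3 significant digits
          f = rosenbrock(x)
          if f == 0.0:
              return 0.0
          scale = 10.0 ** (np.floor(np.log10(abs(f))) - digits + 1)
          return np.round(f / scale) * scale

      # a forward-difference "derivative" of the truncated objective is mostly
      # rounding noise for small steps, stalling gradient-based optimizers
      x = np.array([0.9, 0.8]); h = 1e-6
      print((class_two(x + [h, 0.0]) - class_two(x)) / h)   # 0.0 here, not the true ~3.4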

  7. A new numerically stable implementation of the T-matrix method for electromagnetic scattering by spheroidal particles

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2013-07-01

    We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10^-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect-ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had previously been modelled only using quadruple-precision arithmetic codes.
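
    The failure mode described here, a leading part of the integrand that cancels analytically, is the classic catastrophic-cancellation pattern; a generic illustration of the problem and of the "remove the cancelling part analytically" cure (not the EBCM integrals themselves):

      import numpy as np

      x = 1e-8
      naive = (1 - np.cos(x)) / x**2        # catastrophic cancellation: ~0.0
      stable = 2 * (np.sin(x / 2) / x)**2   # algebraically identical: 0.5
      print(naive, stable)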

  8. Determining Empirical Stellar Masses and Radii from Transits and Gaia Parallaxes as Illustrated by Spitzer Observations of KELT-11b

    NASA Astrophysics Data System (ADS)

    Beatty, Thomas G.; Stevens, Daniel J.; Collins, Karen A.; Colón, Knicole D.; James, David J.; Kreidberg, Laura; Pepper, Joshua; Rodriguez, Joseph E.; Siverd, Robert J.; Stassun, Keivan G.; Kielkopf, John F.

    2017-07-01

    Using the Spitzer Space Telescope, we observed a transit at 3.6 μm of KELT-11b. We also observed three partial planetary transits from the ground. We simultaneously fit these observations, ground-based photometry from Pepper et al., radial velocity data from Pepper et al., and a spectral energy distribution (SED) model using catalog magnitudes and the Hipparcos parallax to the system. The only significant difference between our results and those of Pepper et al. is that we find the orbital period to be shorter by 37 s, 4.73610 ± 0.00003 versus 4.73653 ± 0.00006 days, and we measure a transit center time of BJD_TDB = 2457483.4310 ± 0.0007, which is 42 minutes earlier than predicted. Using our new photometry, we precisely measure the density of the star KELT-11 to 4%. By combining the parallax and catalog magnitudes of the system, we are able to measure the radius of KELT-11b essentially empirically. Coupled with the stellar density, this gives a parallactic mass and radius of 1.8 M⊙ and 2.9 R⊙, which are each approximately 1σ higher than the adopted model-estimated mass and radius. If we conduct the same fit using the expected parallax uncertainty from the final Gaia data release, this difference increases to 4σ. The differences between the model and parallactic masses and radii for KELT-11 demonstrate the role that precise Gaia parallaxes, coupled with simultaneous photometric, radial velocity, and SED fitting, can play in determining stellar and planetary parameters. With high-precision photometry of transiting planets and high-precision Gaia parallaxes, the parallactic mass and radius uncertainties of stars become 1% and 3%, respectively. TESS is expected to discover 60-80 systems where these measurements will be possible. These parallactic mass and radius measurements have uncertainties small enough that they may provide observational input into the stellar models themselves.
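
    As context for the "essentially empirical" mass and radius, one standard chain from observables to parallactic values (a textbook outline, not necessarily the authors' exact procedure; F_bol and T_eff come from the SED fit, the parallax ϖ from Hipparcos/Gaia, and a/R* and P from the transit photometry; the small planet-mass term in the density is neglected):

      \theta_* = \sqrt{\frac{F_\mathrm{bol}}{\sigma_\mathrm{SB} T_\mathrm{eff}^4}}, \qquad
      R_* = \frac{\theta_*}{\varpi}, \qquad
      \rho_* \simeq \frac{3\pi}{G P^2}\left(\frac{a}{R_*}\right)^3, \qquad
      M_* = \frac{4\pi}{3} R_*^3 \rho_* .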

  9. [Influence of trabecular microstructure modeling on finite element analysis of dental implant].

    PubMed

    Shen, M J; Wang, G G; Zhu, X H; Ding, X

    2016-09-01

    To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface using a three-dimensional finite element mandible model with trabecular structure. Dental implants were embedded in the mandible of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models were built: one with trabecular microstructure (precise model) and one with macrostructure only (simplified model). The stress and strain values at the implant-bone interface were calculated using Ansys 14.0. Compared with the simplified model, the precise models' average implant-bone interface stress values increased considerably, while the maximum values did not change greatly. The maximum equivalent stress values of the precise models were 80% and 110% of those of the simplified model, and the average values were 170% and 290%. The maximum and average equivalent strain values of the precise models were clearly decreased: the maximum values were 17% and 26% of those of the simplified model, and the average values were 21% and 16%, respectively. Stress and strain concentrations at the implant-bone interface were obvious in the simplified model, whereas the distributions of stress and strain were uniform in the precise model. Trabecular microstructure modeling has a significant effect on the computed distribution of stress and strain at the implant-bone interface.

  10. Micro-Arcsec mission: implications of the monitoring, diagnostic and calibration of the instrument response in the data reduction chain. .

    NASA Astrophysics Data System (ADS)

    Busonero, D.; Gai, M.

    The goals of 21st century high angular precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and, in general, high accuracy. We will refer to the framework of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow angle and global astrometry fields, respectively. We will focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, highlighting their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.

  11. High-precision cryogenic wheel mechanisms of the JWST/MIRI instrument: performance of the flight models

    NASA Astrophysics Data System (ADS)

    Krause, O.; Müller, F.; Birkmann, S.; Böhm, A.; Ebert, M.; Grözinger, U.; Henning, Th.; Hofferbert, R.; Huber, A.; Lemke, D.; Rohloff, R.-R.; Scheithauer, S.; Gross, T.; Fischer, T.; Luichtel, G.; Merkle, H.; Übele, M.; Wieland, H.-U.; Amiaux, J.; Jager, R.; Glauser, A.; Parr-Burman, P.; Sykes, J.

    2010-07-01

    The Mid Infrared Instrument (MIRI) aboard JWST is equipped with one filter wheel and two dichroic-grating wheel mechanisms to reconfigure the instrument between observing modes such as broad/narrow-band imaging, coronagraphy and low/medium resolution spectroscopy. Key requirements for the three mechanisms with up to 18 optical elements on the wheel include: (1) reliable operation at T = 7 K, (2) high positional accuracy of 4 arcsec, (3) low power dissipation, (4) high vibration capability, (5) functionality at 7 K < T < 300 K and (6) long lifetime (5-10 years). To meet these requirements a space-proven wheel concept consisting of a central MoS2-lubricated integrated ball bearing, a central torque motor for actuation, a ratchet system with monolithic CuBe flexural pivots for precise and powerless positioning and a magnetoresistive position sensor has been implemented. We report here the final performance and lessons learnt from the successful acceptance test program of the MIRI wheel mechanism flight models. The mechanisms have meanwhile been integrated into the flight model of the MIRI instrument, ready for launch in 2014 by an Ariane 5 rocket.

  12. 3D Reconstruction and Standardization of the Rat Vibrissal Cortex for Precise Registration of Single Neuron Morphology

    PubMed Central

    Egger, Robert; Narayanan, Rajeevan T.; Helmstaedter, Moritz; de Kock, Christiaan P. J.; Oberlaender, Marcel

    2012-01-01

    The three-dimensional (3D) structure of neural circuits is commonly studied by reconstructing individual or small groups of neurons in separate preparations. Investigation of structural organization principles or quantification of dendritic and axonal innervation thus requires integration of many reconstructed morphologies into a common reference frame. Here we present a standardized 3D model of the rat vibrissal cortex and introduce an automated registration tool that allows for precise placement of single neuron reconstructions. We (1) developed an automated image processing pipeline to reconstruct 3D anatomical landmarks, i.e., the barrels in Layer 4, the pia and white matter surfaces and the blood vessel pattern from high-resolution images, (2) quantified these landmarks in 12 different rats, (3) generated an average 3D model of the vibrissal cortex and (4) used rigid transformations and stepwise linear scaling to register 94 neuron morphologies, reconstructed from in vivo stainings, to the standardized cortex model. We find that anatomical landmarks vary substantially across the vibrissal cortex within an individual rat. In contrast, the 3D layout of the entire vibrissal cortex remains remarkably preserved across animals. This allows for precise registration of individual neuron reconstructions with approximately 30 µm accuracy. Our approach could be used to reconstruct and standardize other anatomically defined brain areas and may ultimately lead to a precise digital reference atlas of the rat brain. PMID:23284282

  13. O1.3. A COMPUTATIONAL TRIAL-BY-TRIAL EEG ANALYSIS OF HIERARCHICAL PRECISION-WEIGHTED PREDICTION ERRORS

    PubMed Central

    Tomiello, Sara; Schöbi, Dario; Weber, Lilian; Haker, Helene; Sandra, Iglesias; Stephan, Klaas Enno

    2018-01-01

    Background: Action optimisation relies on learning about past decisions and on accumulated knowledge about the stability of the environment. In Bayesian models of learning, belief updating is informed by multiple, hierarchically related, precision-weighted prediction errors (pwPEs). Recent work suggests that hierarchically different pwPEs may be encoded by specific neurotransmitters such as dopamine (DA) and acetylcholine (ACh). Abnormal dopaminergic and cholinergic modulation of N-methyl-D-aspartate (NMDA) receptors plays a central role in the dysconnection hypothesis, which considers impaired synaptic plasticity a central mechanism in the pathophysiology of schizophrenia. Methods: To probe the dichotomy between DA and ACh and to investigate timing parameters of pwPEs, we tested 74 healthy male volunteers performing a probabilistic reward associative learning task in which the contingency between cues and rewards changed over 160 trials between 0.8 and 0.2. Furthermore, the study employed pharmacological interventions (amisulpride / biperiden / placebo) and genetic analyses (COMT and ChAT) to probe DA and ACh modulation of these computational quantities. The study was double-blind and between-subject. We inferred, from subject-specific behavioural data, a low-level choice PE about the reward outcome and a high-level PE about the probability of the outcome, as well as the respective precision weights (uncertainties), and used them, in a trial-by-trial analysis, to explain electroencephalogram (EEG) signals (64 channels). Behavioural data were modelled using three versions of the Hierarchical Gaussian Filter (HGF), a Rescorla-Wagner model, and a Sutton model with a dynamic learning rate. The computational trajectories of the winning model were used as regressors in single-subject trial-by-trial GLM analyses at the sensor level. The resulting parameter estimates were entered into 2nd-level ANOVAs. The reported results were family-wise error corrected at the peak level (p<0.05) across the whole brain and time window (outcome phase: 0-500 ms). Results: A three-level HGF best explained the data and was used to compute the computational regressors for the EEG analyses. We found a significant interaction between pharmacology and COMT for the high-level precision weight (uncertainty). Specifically, at 276 ms after outcome presentation the difference between Met/Met and Val/Met was more positive for amisulpride than for biperiden over occipital electrodes, and at 274 ms and 278 ms after outcome presentation the difference between Met/Met and Val/Met was more negative over fronto-temporal electrodes for amisulpride than for placebo, and for amisulpride than for biperiden, respectively. No significant results were detected for the other computational quantities or for the ChAT gene. Discussion: The differential effects of pharmacology on the processing of the high-level precision weight (uncertainty) were modulated by the DA-related gene COMT. Previous results linked high-level PEs to the cholinergic basal forebrain. One possible explanation for the current results is that high-level computational quantities are represented in cholinergic regions, which in turn are influenced by dopaminergic projections. In order to disentangle dopaminergic and cholinergic effects on synaptic plasticity, further analyses will concentrate on biophysical models (e.g. DCM). This may prove useful in detecting pathophysiological subgroups and might therefore be of high relevance in a clinical setting.

  14. Use of genome editing tools in human stem cell-based disease modeling and precision medicine.

    PubMed

    Wei, Yu-da; Li, Shuang; Liu, Gai-gai; Zhang, Yong-xian; Ding, Qiu-rong

    2015-10-01

    Precision medicine emerges as a new approach that takes into account individual variability. The successful conduct of precision medicine requires the use of precise disease models. Human pluripotent stem cells (hPSCs), as well as adult stem cells, can be differentiated into a variety of human somatic cell types that can be used for research and drug screening. The development of genome editing technology over the past few years, especially the CRISPR/Cas system, has made it feasible to precisely and efficiently edit the genetic background. Therefore, disease modeling by using a combination of human stem cells and genome editing technology has offered a new platform to generate "personalized" disease models, which allow the study of the contribution of individual genetic variabilities to disease progression and the development of precise treatments. In this review, recent advances in the use of genome editing in human stem cells and the generation of stem cell models for rare diseases and cancers are discussed.

  15. Performance Analysis of BDS Medium-Long Baseline RTK Positioning Using an Empirical Troposphere Model.

    PubMed

    Shu, Bao; Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong

    2018-04-14

    For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively.

  16. Performance Analysis of BDS Medium-Long Baseline RTK Positioning Using an Empirical Troposphere Model

    PubMed Central

    Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong

    2018-01-01

    For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively. PMID:29661999

  17. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Zavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect artificial and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time, pixel-individual and precisely timed, produced only if new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.

  18. Application of Template Matching for Improving Classification of Urban Railroad Point Clouds

    PubMed Central

    Arastounia, Mostafa; Oude Elberink, Sander

    2016-01-01

    This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stem from the great impact of the employed template matching method on excluding false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452

  19. Constraining regional scale carbon budgets at the US West Coast using a high-resolution atmospheric inverse modeling approach

    NASA Astrophysics Data System (ADS)

    Goeckede, M.; Michalak, A. M.; Vickers, D.; Turner, D.; Law, B.

    2009-04-01

    The study presented is embedded within the NACP (North American Carbon Program) West Coast project ORCA2, which aims at determining the regional carbon balance of the US states Oregon, California and Washington. Our work specifically focuses on the effects of disturbance history and climate variability, aiming to improve our understanding of how factors such as drought stress and stand age influence carbon sources and sinks in complex terrain with fine-scale variability in land cover types. The ORCA2 atmospheric inverse modeling approach has been set up to capture flux variability on the regional scale at high temporal and spatial resolution. Atmospheric transport is simulated by coupling the mesoscale model WRF (Weather Research and Forecast) with the STILT (Stochastic Time Inverted Lagrangian Transport) footprint model. This setup allows identifying sources and sinks that influence atmospheric observations with highly resolved mass transport fields and realistic turbulent mixing. Terrestrial biosphere carbon fluxes are simulated at spatial resolutions of up to 1 km and subdaily timesteps, considering effects of ecoregion, land cover type and disturbance regime on the carbon budgets. Our approach assimilates high-precision atmospheric CO2 concentration measurements and eddy-covariance data from several sites throughout the model domain, as well as high-resolution remote sensing products (e.g. LandSat, MODIS) and interpolated surface meteorology (DayMet, SOGS, PRISM). We present top-down modeling results that have been optimized using Bayesian inversion, reflecting the information on regional-scale carbon processes provided by the network of high-precision CO2 observations. We address the level of detail (e.g. spatial and temporal resolution) that can be resolved by top-down modeling on the regional scale, given the uncertainties introduced by various sources of model-data mismatch. Our results demonstrate the importance of accurate modeling of carbon-water coupling, with the representation of water availability and drought stress playing a dominant role in capturing spatially variable CO2 exchange rates in a region characterized by strong climatic gradients.

  20. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    PubMed

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal models. Overall results suggest that adaptation and anticipation mechanisms both play an important role during sensorimotor synchronization with tempo-changing sequences. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.
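
    A minimal sketch of the adaptation-plus-anticipation idea (illustrative gains and a linear tempo extrapolation; not the authors' exact ADAM equations):

      import numpy as np

      def simulate_taps(onsets, alpha=0.5, beta=1.0):
          # onsets: pacing-tone times; alpha: error-correction (adaptation)
          # gain; beta: tempo-extrapolation (anticipation) gain
          onsets = np.asarray(onsets, dtype=float)
          taps = [onsets[0]]                      # assume the first tap in sync
          for n in range(1, len(onsets) - 1):
              asyn = taps[-1] - onsets[n - 1]     # tap-minus-tone asynchrony
              ioi1 = onsets[n] - onsets[n - 1]    # last inter-onset interval
              ioi0 = onsets[n - 1] - onsets[n - 2] if n >= 2 else ioi1
              predicted = ioi1 + beta * (ioi1 - ioi0)           # anticipation
              taps.append(taps[-1] + predicted - alpha * asyn)  # adaptation
          return np.array(taps)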

  1. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which was not observed in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
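
    For intuition, the alternating structure in the simplest (two-mode, matrix-variate) case, with the sparsity penalty the paper actually imposes omitted for brevity; each covariance factor is re-estimated while the other is held fixed:

      import numpy as np

      def flip_flop(X, iters=10):
          # X: (n, p, q) array, n samples of p x q matrix-valued data
          n, p, q = X.shape
          sigma_row, sigma_col = np.eye(p), np.eye(q)
          for _ in range(iters):
              inv_col = np.linalg.inv(sigma_col)
              sigma_row = sum(x @ inv_col @ x.T for x in X) / (n * q)
              inv_row = np.linalg.inv(sigma_row)
              sigma_col = sum(x.T @ inv_row @ x for x in X) / (n * p)
          # return the two precision matrices
          return np.linalg.inv(sigma_row), np.linalg.inv(sigma_col)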

  2. Millisecond Pulsar Timing Precision with NICER

    NASA Astrophysics Data System (ADS)

    Deneva, Julia; Ray, Paul S.; Ransom, Scott; Wood, Kent S.; Kerr, Matthew T.; Lommen, Andrea; Arzoumanian, Zaven; Black, Kevin; Gendreau, Keith C.; Lewandowska, Natalia; Markwardt, Craig B.; Price, Samuel; Winternitz, Luke

    2018-01-01

    The Neutron Star Interior Composition Explorer (NICER) is an array of 56 X-ray detectors mounted on the outside of the International Space Station. It allows high-precision timing of millisecond pulsars (MSPs) without the pulse broadening effects due to dispersion and scattering by the interstellar medium that plague radio timing. We present initial timing results from four months of NICER data on the MSPs B1937+21, B1821-24, and J0218+4232, and compare them to simulations and theoretical models for X-ray times-of-arrival, and radio observations.

  3. Application of high precision two-way S-band ranging to the navigation of the Galileo Earth encounters

    NASA Technical Reports Server (NTRS)

    Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.

    1993-01-01

    The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that, under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. When compared to post-flyby reconstructions, the accuracy achieved with the precision range filtering strategy proved markedly better than that of solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.

  4. Application of Raytracing Through the High Resolution Numerical Weather Model HIRLAM for the Analysis of European VLBI

    NASA Technical Reports Server (NTRS)

    Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco

    2010-01-01

    An important limitation on the precision of results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere; see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions which are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited-area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).
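
    A schematic of how such delays enter the analysis (a crude 1/sin(e) mapping function for illustration only; the point of the raytracing is precisely to replace this kind of approximation):

      import numpy as np

      def slant_delay(zenith_delay_m, elevation_deg):
          # map a zenith delay to an elevation-dependent slant delay
          return zenith_delay_m / np.sin(np.radians(elevation_deg))

      print(slant_delay(2.3, 10.0))   # ~13.2 m for a typical 2.3 m zenith delay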

  5. Modelling the subsurface geomorphology of an active landslide using LIDAR.

    DOT National Transportation Integrated Search

    2014-07-01

    The focus of this research was twofold: 1. To determine millimeter/sub-millimeter movement within a slide body using high precision terrestrial LIDAR and artificial targets. This allows movement not apparent to the naked eye to be verified. 2. ...

  6. Fragment-based modelling of single stranded RNA bound to RNA recognition motif containing proteins

    PubMed Central

    de Beauchene, Isaure Chauvot; de Vries, Sjoerd J.; Zacharias, Martin

    2016-01-01

    Protein-RNA complexes are important for many biological processes. However, structural modeling of such complexes is hampered by the high flexibility of RNA. Particularly challenging is the docking of single-stranded RNA (ssRNA). We have developed a fragment-based approach to model the structure of ssRNA bound to a protein, based on only the protein structure, the RNA sequence and conserved contacts. The conformational diversity of each RNA fragment is sampled by an exhaustive library of trinucleotides extracted from all known experimental protein–RNA complexes. The method was applied to ssRNA with up to 12 nucleotides which bind to dimers of the RNA recognition motifs (RRMs), a highly abundant eukaryotic RNA-binding domain. The fragment based docking allows a precise de novo atomic modeling of protein-bound ssRNA chains. On a benchmark of seven experimental ssRNA–RRM complexes, near-native models (with a mean heavy-atom deviation of <3 Å from experiment) were generated for six out of seven bound RNA chains, and even more precise models (deviation <2 Å) were obtained for five out of seven cases, a significant improvement compared to the state of the art. The method is not restricted to RRMs but was also successfully applied to Pumilio RNA binding proteins. PMID:27131381

  7. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born

    PubMed Central

    2012-01-01

    We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
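
    A minimal illustration of the SPDP idea on synthetic numbers (illustrative data, not AMBER code): per-interaction contributions held in single precision, but accumulated in double precision.

      import numpy as np

      contribs = (np.random.randn(10_000_000) * 1e-3).astype(np.float32)

      spsp = contribs.sum(dtype=np.float32)   # SPSP: accumulate in float32
      spdp = contribs.sum(dtype=np.float64)   # SPDP: accumulate in float64
      print(abs(spsp - spdp))                 # accumulated rounding error of SPSP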

  8. Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.

    PubMed

    Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim

    2017-09-01

    Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). Conventional impressions had higher accuracy with a mean trueness of 17.0 ±6.6 μm and mean precision of 16.9 ±5.8 μm than optical impressions with a mean trueness of 46.2 ±11.4 μm and mean precision of 61.1 ±4.9 μm. Complete arch (first molar-to-first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  9. High-resolution chronology of sediment below CCD based on Holocene paleomagnetic secular variations in the Tohoku-oki earthquake rupture zone

    NASA Astrophysics Data System (ADS)

    Kanamatsu, Toshiya; Usami, Kazuko; McHugh, Cecilia M. G.; Ikehara, Ken

    2017-08-01

    Using high-resolution paleomagnetic data, we examined the potential for obtaining precise ages from sediment core samples recovered from deep-sea basins close to rupture zones of the 2011 and earlier earthquakes off Tohoku, Japan. Obtaining detailed stratigraphic ages from deep-sea sediments below the calcium compensation depth (CCD) is difficult, but we found that the samples contain excellent paleomagnetic secular variation records with which to constrain age models. Variations in paleomagnetic directions obtained from the sediments reveal systematic changes in the cores. A stacked paleomagnetic profile closely matches the Lake Biwa data sets in southwest Japan for the past 7000 years, so age models based on secular variations of the geomagnetic field can be established even for sediments recovered below the CCD. Comparison of paleomagnetic directions near a tephra with the paleomagnetic direction of contemporaneous pyroclastic flow deposits, acquired by different magnetization processes, yields precise depositional ages reflecting the magnetization delay of the marine sediment record. Plain Language Summary: Obtaining detailed ages from deep-sea sediments is generally difficult, because the available dating methods are very limited. We found that the deep-sea sediment off northern Japan recorded past sequential geomagnetic directions. Because those records correlate well with the reference record over the past 7000 years, we can estimate the age of the sediment by pattern matching. Additionally, a volcanic ash emitted in 915 A.D., which was intercalated in our samples, indicates a time lag in our age model. This observation makes our age model more precise.

  10. An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images

    NASA Astrophysics Data System (ADS)

    Chen, Qi; Wang, Shugen; Liu, Xiuguo

    2016-06-01

    Building roof contours are considered very important geometric data, which have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of the texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with the constraints of the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the total number of contours are updated with a higher quality index.

  11. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    NASA Astrophysics Data System (ADS)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ−1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ−1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  12. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, thus ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high frequency illumination and the bucket detector needs nanosecond trigger precision, so the development of the synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art, and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
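
    As a companion to the correlation idea above, a toy computational ghost imaging reconstruction (sizes, patterns, and the test scene are invented for illustration; this is not the authors' hardware pipeline). With enough patterns the correlation image converges to the scene up to an offset and scale.

      import numpy as np

      rng = np.random.default_rng(0)
      scene = np.zeros((32, 32))
      scene[8:24, 12:20] = 1.0                       # hidden object

      M = 5000
      patterns = rng.random((M, 32, 32))             # SLM illumination patterns
      bucket = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))  # photodiode

      # second-order correlation <B*P> - <B><P> recovers the scene
      recon = (np.tensordot(bucket, patterns, axes=(0, 0)) / M
               - bucket.mean() * patterns.mean(axis=0))
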
  13. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ/μp, lepton mass ratio mμ/me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  14. First Results of Field Absolute Calibration of the GPS Receiver Antenna at Wuhan University.

    PubMed

    Hu, Zhigang; Zhao, Qile; Chen, Guo; Wang, Guangxing; Dai, Zhiqiang; Li, Tao

    2015-11-13

    GNSS receiver antenna phase center variations (PCVs), which arise from the non-spherical phase response of GNSS signals, have to be well corrected for high-precision GNSS applications. Without a precise antenna phase center correction (PCC) model, the estimated position of a station monument will be biased by up to several centimeters. The Chinese large-scale research project "Crustal Movement Observation Network of China" (CMONOC), which requires high-precision positions in a comprehensive GPS observational network, motivated the establishment of a set of absolute field calibration facilities for the GPS receiver antenna at Wuhan University. In this paper the calibration facilities are first introduced, and then the multipath elimination and PCV estimation strategies currently used are elaborated. The estimated PCV values of a test antenna are finally validated against the International GNSS Service (IGS) type values. Examples of TRM57971.00 NONE antenna calibrations from our facility demonstrate that the derived PCVs and the IGS type mean values agree at the 1 mm level.

  15. Proton Radii of 4,6,8He Isotopes from High-Precision Nucleon-Nucleon Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caurier, E; Navratil, P

    2005-11-16

    Recently, precision laser spectroscopy on 6He atoms accurately determined the isotope shift between 4He and 6He and, consequently, the charge radius of 6He. A similar experiment for 8He is under way. We have performed large-scale ab initio calculations for the 4,6,8He isotopes using high-precision nucleon-nucleon (NN) interactions within the no-core shell model (NCSM) approach. With the CD-Bonn 2000 NN potential we found point-proton root-mean-square (rms) radii of 4He and 6He of 1.45(1) fm and 1.89(4) fm, respectively, in agreement with experiment, and predict the 8He point-proton rms radius to be 1.88(6) fm. At the same time, our calculations show that the recently developed nonlocal INOY NN potential gives binding energies closer to experiment, but underestimates the charge radii.
  295. Proton Radii of 4,6,8He Isotopes from High-Precision Nucleon-Nucleon Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caurier, E; Navratil, P

    2005-11-16

    Recently, precision laser spectroscopy on 6He atoms accurately determined the isotope shift between 4He and 6He and, consequently, the charge radius of 6He. A similar experiment for 8He is under way. We have performed large-scale ab initio calculations for the 4,6,8He isotopes using high-precision nucleon-nucleon (NN) interactions within the no-core shell model (NCSM) approach. With the CD-Bonn 2000 NN potential we found point-proton root-mean-square (rms) radii of 4He and 6He of 1.45(1) fm and 1.89(4) fm, respectively, in agreement with experiment, and predict the 8He point-proton rms radius to be 1.88(6) fm. At the same time, our calculations show that the recently developed nonlocal INOY NN potential gives binding energies closer to experiment, but underestimates the charge radii.

  296. Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses

    PubMed

    Das, Jayajit

    2016-03-08

    Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells, marked by the emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints on energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results.

  297. Precision half-life measurement of 11C: The most precise mirror transition Ft value

    NASA Astrophysics Data System (ADS)

    Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.

    2018-03-01

    Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. 11C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The presently dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world-average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t_1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t_1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard-model correlation parameters. Conclusions: The new 11C world-average half-life allows the calculation of an Ft value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow the determination of V_ud from this decay.
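The world-average arithmetic behind a result like t_1/2(world) = 1220.41(32) s is an error-weighted mean, with the uncertainty inflated by the Birge ratio when the inputs scatter more than their quoted errors suggest. In the sketch below, only the 1220.27(26) s value comes from the abstract; the two earlier measurements are invented placeholders.

```python
import numpy as np

# New measurement from the abstract plus hypothetical earlier half-lives (s).
values = np.array([1220.27, 1221.0, 1219.9])
sigmas = np.array([0.26, 1.0, 0.9])

w = 1.0 / sigmas**2
mean = np.sum(w * values) / np.sum(w)
sigma_mean = np.sqrt(1.0 / np.sum(w))

# Birge-ratio inflation: widen the error if chi^2 per dof exceeds 1.
chi2_dof = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
sigma_mean *= max(1.0, np.sqrt(chi2_dof))

print(f"world average: {mean:.2f} +/- {sigma_mean:.2f} s")
```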
  298. Phase inversion and frequency doubling of reflection high-energy electron diffraction intensity oscillations in the layer-by-layer growth of complex oxides

    NASA Astrophysics Data System (ADS)

    Mao, Zhangwen; Guo, Wei; Ji, Dianxiang; Zhang, Tianwei; Gu, Chenyi; Tang, Chao; Gu, Zhengbin; Nie, Yuefeng; Pan, Xiaoqing

    In situ reflection high-energy electron diffraction (RHEED) and its intensity oscillations are extremely important for the growth of epitaxial thin films with atomic precision. The RHEED intensity oscillations of complex oxides are, however, rather complicated, and a general model is still lacking. Here, we report the unusual phase inversion and frequency doubling of RHEED intensity oscillations observed in the layer-by-layer growth of SrTiO3 using oxide molecular beam epitaxy. In contrast to the common understanding that the maximum (minimum) intensity occurs at SrO (TiO2) termination, we found that both maximum and minimum intensities can occur at SrO, TiO2, or even incomplete terminations, depending on the incident angle of the electron beam. This raises a fundamental question of whether one can rely on RHEED intensity oscillations to precisely control the growth of thin films. A general model including surface roughness and termination-dependent mean inner potential qualitatively explains the observed phenomena and answers the question of how to prepare atomically and chemically precise surfaces and interfaces using RHEED oscillations for complex oxides. We thank the National Basic Research Program of China (No. 11574135, 2015CB654901) and the National Thousand-Young-Talents Program.
  299. Synchronized motion control and precision positioning compensation of a 3-DOFs macro-micro parallel manipulator fully actuated by piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Zhang, Quan; Li, Chaodong; Zhang, Jiantao; Zhang, Xu

    2017-11-01

    The macro-micro combined approach, as an effective way to realize trans-scale nano-precision positioning with multiple dimensions and high velocity, plays a significant role in the integrated circuit manufacturing field. A 3-degrees-of-freedom (3-DOF) macro-micro manipulator is designed and analyzed to reconcile the conflicting requirements of large stroke, high precision, and multiple DOFs. The macro manipulator is a 3-Prismatic-Revolute-Revolute (3-PRR) parallel manipulator driven by three linear ultrasonic motors. The dynamic model and the cross-coupling-error-based synchronized motion controller of the 3-PRR parallel manipulator are theoretically analyzed and experimentally tested. To further improve the positioning accuracy, a 3-DOF monolithic compliant manipulator actuated by three piezoelectric stack actuators is designed. A multilayer BP neural network based inverse kinematic model identifier is then developed to perform the positioning control. Finally, by forming the macro-micro structure, the dual-stage manipulator successfully achieved the positioning task from the point (2 mm, 2 mm, 0 rad) back to the origin (0 mm, 0 mm, 0 rad), with translation errors in the X and Y directions of less than ±50 nm and a rotation error around the Z axis of less than ±1 μrad.
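The division of labor in such a dual-stage system is easy to caricature: the macro stage takes coarse, noisy steps toward the target, and the micro (piezo) stage cancels the residual error within its small travel range. The loop below is a toy kinematic sketch, not the authors' controller; the gain, travel limit, and noise level are invented.

```python
import random

MICRO_RANGE = 5e-6   # assumed piezo travel limit: +/-5 um

def dual_stage_move(target_m: float, steps: int = 20) -> float:
    """Toy dual-stage positioning: coarse macro moves, micro cleanup."""
    macro = micro = 0.0
    for _ in range(steps):
        # Macro stage: proportional coarse move with ~1 um actuation noise.
        macro += 0.9 * (target_m - (macro + micro)) + random.gauss(0.0, 1e-6)
        # Micro stage: absorbs the remaining error, clipped to its range.
        desired = micro + (target_m - (macro + micro))
        micro = max(-MICRO_RANGE, min(MICRO_RANGE, desired))
    return macro + micro

random.seed(0)
pos = dual_stage_move(2e-3)   # 2 mm point-to-point move
print(f"residual error: {abs(pos - 2e-3):.2e} m")
```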
  300. A new simple asymmetric hysteresis operator and its application to inverse control of piezoelectric actuators

    PubMed

    Badel, A; Qiu, J; Nakano, T

    2008-05-01

    Piezoelectric actuators (PEAs) are commonly used as micropositioning devices due to their high resolution, high stiffness, and fast frequency response. Because piezoceramic materials are ferroelectric, they fundamentally exhibit hysteresis in their response to an applied electric field. The positioning precision can be significantly reduced by nonlinear hysteresis effects when PEAs are used in relatively long-range applications. This paper describes a new, precise, and simple asymmetric hysteresis operator dedicated to PEAs. The complex hysteretic transfer characteristic is treated in a purely phenomenological way, without taking into account the underlying physics. The operator is based on two curves. The first corresponds to the main ascending branch and is modeled by the function f1; the second corresponds to the main reversal branch and is modeled by the function g2. The functions f1 and g2 are two very simple hyperbola functions with only three parameters. Particular ascending and reversal branches are deduced from appropriate translations of f1 and g2. The efficiency and precision of the proposed approach are demonstrated, in practice, by a real-time inverse feed-forward controller for piezoelectric actuators. Advantages and drawbacks of the proposed approach compared with classical hysteresis operators are discussed.
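The operator's structure, two master branches translated to pass through the latest turning point, can be sketched compactly. The branch form below, h(x) = x/(a + b·|x|) + c·x, is a guessed three-parameter hyperbola-like shape (the paper's exact f1 and g2 are not given here), so treat this as a structural illustration only.

```python
def make_branch(a: float, b: float, c: float):
    """Three-parameter saturating 'hyperbola' branch (illustrative form)."""
    return lambda x: x / (a + b * abs(x)) + c * x

f1 = make_branch(1.2, 0.8, 0.1)   # main ascending branch (invented params)
g2 = make_branch(0.9, 1.1, 0.1)   # main reversal branch (invented params)

def hysteresis(inputs):
    """Follow f1 while the input rises and g2 while it falls; on each
    direction reversal, translate the active branch so it passes through
    the current (x, y) point -- the 'appropriate translation' above."""
    x0 = y0 = 0.0
    prev, rising = inputs[0], True
    out = []
    for x in inputs:
        if (x > prev) != rising and x != prev:   # direction reversal
            rising = x > prev
            x0, y0 = prev, out[-1] if out else 0.0
        branch = f1 if rising else g2
        out.append(y0 + branch(x - x0))
        prev = x
    return out

print(hysteresis([0.0, 0.5, 1.0, 0.5, 0.0, 0.75]))
```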
  301. COSMIC REIONIZATION ON COMPUTERS. ULTRAVIOLET CONTINUUM SLOPES AND DUST OPACITIES IN HIGH REDSHIFT GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khakhaleva-Li, Zimu; Gnedin, Nickolay Y.

    We compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing ultraviolet (UV) and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future James Webb Space Telescope (JWST) data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is therefore likely that, in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.

  302. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited)

    PubMed

    Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E

    2014-11-01

    At the anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/(me c²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson-scattered radiation is solved analytically, exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson-scattered light as a method of Te measurement relevant to ITER operational scenarios.
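The size of these finite-temperature corrections is easy to estimate: for ITER-grade electron temperatures the expansion parameter τ = Te/(me c²) is a few percent, so a τ²-order term shifts results at the 10⁻³ level, exactly where sub-percent polarimetry accuracy starts to matter. A quick arithmetic check (representative temperatures only):

```python
ME_C2_KEV = 511.0          # electron rest energy, keV

for te_kev in (10.0, 20.0, 30.0):   # representative ITER-like temperatures
    tau = te_kev / ME_C2_KEV
    print(f"Te = {te_kev:4.0f} keV: tau = {tau:.4f}, tau^2 = {tau**2:.6f}")
# tau ~ 0.02-0.06, so tau^2 terms matter once ~0.1% accuracy is required.
```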
  303. ExpertEyes: open-source, high-definition eyetracking

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.

  304. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique.

  305. Starspot detection and properties

    NASA Astrophysics Data System (ADS)

    Savanov, I. S.

    2013-07-01

    I review the currently available techniques for starspot detection, including one-dimensional spot modelling of photometric light curves. Special attention is paid to the modelling of photospheric activity based on the high-precision light curves obtained with the space missions MOST, CoRoT, and Kepler. Physical spot parameters (temperature, sizes, and variability time scales, including short-term activity cycles) are discussed.
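One-dimensional spot modelling of the kind reviewed above reduces, in its simplest form, to a dark region rotating in and out of view. The toy model below (all parameters invented) generates the rotational flux modulation a photometric mission would record; real codes add limb darkening, spot geometry, and spot evolution.

```python
import numpy as np

def spot_light_curve(t, period=5.0, contrast=0.7, area=0.03, phase0=0.0):
    """Relative flux of a star with one dark spot (1-D toy model).
    Spot visibility ~ cos of its rotational phase, zero on the far side."""
    phase = 2.0 * np.pi * (t / period - phase0)
    visibility = np.clip(np.cos(phase), 0.0, None)
    return 1.0 - area * (1.0 - contrast) * visibility

t = np.linspace(0.0, 15.0, 300)       # days
flux = spot_light_curve(t)
print(f"depth of modulation: {1.0 - flux.min():.4f}")   # ~0.9% dip
```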
  306. A novel algorithm for a precise analysis of subchondral bone alterations

    PubMed Central

    Gao, Liang; Orth, Patrick; Goebel, Lars K. H.; Cucchiarini, Magali; Madry, Henning

    2016-01-01

    Subchondral bone alterations are emerging as considerable clinical problems associated with articular cartilage repair. Their analysis exposes a pattern of variable changes, including intra-lesional osteophytes, residual microfracture holes, peri-hole bone resorption, and subchondral bone cysts. A precise distinction between them is becoming increasingly important. Here, we present a tailored algorithm based on continuous data to analyse subchondral bone changes using micro-CT images, allowing for a clear definition of each entity. We evaluated this algorithm using data sets originating from two large animal models of osteochondral repair. Intra-lesional osteophytes were detected in 3 of 10 defects in the minipig and in 4 of 5 defects in the sheep model. Peri-hole bone resorption was found in 22 of 30 microfracture holes in the minipig and in 17 of 30 microfracture holes in the sheep model. Subchondral bone cysts appeared in 1 microfracture hole in the minipig and in 5 microfracture holes in the sheep model (n = 30 holes each). Calculation of inter-rater agreement (90% agreement) and Cohen's kappa (kappa = 0.874) revealed that the novel algorithm is highly reliable, reproducible, and valid. A comparison with the best existing semi-quantitative evaluation method was also performed, supporting the enhanced precision of this algorithm. PMID:27596562
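The agreement statistics quoted above (90% raw agreement, kappa = 0.874) follow from a standard contingency-table computation. As a reminder of the arithmetic, here is a minimal Cohen's kappa for two raters classifying the same items; the counts are invented, not the study's data.

```python
import numpy as np

# Rows: rater A's categories; columns: rater B's (hypothetical counts).
confusion = np.array([[20, 2, 0],
                      [1, 15, 1],
                      [0, 1, 10]], dtype=float)

n = confusion.sum()
p_observed = np.trace(confusion) / n
p_expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n**2
kappa = (p_observed - p_expected) / (1.0 - p_expected)
print(f"agreement = {p_observed:.2f}, kappa = {kappa:.3f}")
```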
  307. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred by reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half-precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double-precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day of extra skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half-precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References: [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829. [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18.
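The experimental setup described, a Lorenz '96 model integrated at emulated half precision, is simple to reproduce in outline. In the sketch below, NumPy's float16 storage type stands in for the study's precision emulator (which is more faithful about intermediate rounding), and F = 8 is the standard chaotic forcing.

```python
import numpy as np

def l96_tendency(x, forcing=8.0):
    """Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt, dtype):
    """One RK4 step; the state is rounded to `dtype` after each step."""
    k1 = l96_tendency(x)
    k2 = l96_tendency(x + 0.5 * dt * k1)
    k3 = l96_tendency(x + 0.5 * dt * k2)
    k4 = l96_tendency(x + dt * k3)
    return (x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0).astype(dtype)

def integrate(x0, dt=0.01, n_steps=1000, dtype=np.float64):
    x = x0.astype(dtype)
    for _ in range(n_steps):
        x = step_rk4(x, dt, dtype)
    return x

x0 = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(40)
x_double = integrate(x0)                     # 64-bit reference
x_half = integrate(x0, dtype=np.float16)     # emulated half precision
rms = np.sqrt(np.mean((x_double - x_half.astype(np.float64)) ** 2))
print(f"RMS difference after 10 time units: {rms:.3f}")  # chaos amplifies rounding
```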
  308. A high precision method for length-based separation of carbon nanotubes using bio-conjugation, SDS-PAGE and silver staining

    PubMed

    Borzooeian, Zahra; Taslim, Mohammad E; Ghasemi, Omid; Rezvani, Saina; Borzooeian, Giti; Nourbakhsh, Amirhasan

    2018-01-01

    Parametric separation of carbon nanotubes, especially based on their length, is a challenge for many nanotechnology researchers. We demonstrate a method combining bio-conjugation, SDS-PAGE, and silver staining to separate carbon nanotubes on the basis of length. Egg-white lysozyme was conjugated covalently onto single-walled carbon nanotube surfaces using the carbodiimide method. The proposed conjugation of a biomolecule onto the carbon nanotube surface is a novel idea and a significant step forward in creating an indicator for length-based carbon nanotube separation. The conjugation step was followed by SDS-PAGE, and the nanotube fragments were precisely visualized using silver staining. This high-precision, inexpensive, rapid and simple separation method obviates the need for centrifugation, additional chemical analyses, and expensive spectroscopic techniques such as Raman spectroscopy to visualize carbon nanotube bands. We measured the length of the nanotubes using different image-analysis techniques based on a simplified hydrodynamic model. The method has high precision and resolution and is effective in separating nanotubes by length, which would be a valuable quality-control tool for the manufacture of carbon nanotubes of specific lengths in bulk quantities. To this end, we were also able to measure carbon nanotubes of different lengths produced from different sonication time intervals.

  309. Pikalert(R) System Vehicle Data Translator (VDT) Utilizing Integrated Mobile Observations Pikalert VDT Enhancements, Operations, & Maintenance

    DOT National Transportation Integrated Search

    2017-03-24

    The Pikalert System provides high-precision road weather guidance. It assesses current weather and road conditions based on observations from connected vehicles, road weather information stations, radar, and weather model analysis fields. It also for...
  310. Validity of the "Laplace Swindle" in Calculation of Giant-Planet Gravity Fields

    NASA Astrophysics Data System (ADS)

    Hubbard, William B.

    2014-11-01

    Jupiter and Saturn have large rotation-induced distortions, providing an opportunity to constrain interior structure via precise measurement of external gravity. Anticipated high-precision gravity measurements close to the surfaces of Jupiter (Juno spacecraft) and Saturn (Cassini spacecraft), possibly detecting zonal harmonics to J10 and beyond, will place unprecedented requirements on gravitational modeling via the theory of figures (TOF). It is not widely appreciated that the traditional TOF employs a formally nonconvergent expansion attributed to Laplace. This suspect expansion is intimately related to the standard zonal harmonic (J-coefficient) expansion of the external gravity potential. It can be shown (Hubbard, Schubert, Kong, and Zhang: Icarus, in press) that both Jupiter and Saturn are in the domain where Laplace's "swindle" works exactly, or at least as well as necessary. More highly distorted objects such as rapidly spinning asteroids may not be in this domain, however. I present a numerical test for the validity and precision of TOF via polar "audit points". I extend the audit-point test to objects rotating differentially on cylinders, obtaining zonal harmonics to J20 and beyond. Models with only low-order differential rotation do not exhibit dramatic effects in the shape of the zonal harmonic spectrum. However, a model with Jupiter-like zonal winds exhibits a break in the zonal harmonic spectrum above about J10, and generally follows the shallower Kaula power rule at higher orders. This confirms an earlier result obtained by a different method (Hubbard: Icarus 137, 357-359, 1999).
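The zonal-harmonic expansion being audited here is the standard exterior potential U(r, θ) = -(GM/r)·[1 - Σₙ Jₙ (a/r)ⁿ Pₙ(cos θ)]. A direct evaluation for a Jupiter-like body is sketched below; GM, the reference radius, and the even J values are approximate textbook figures, and higher harmonics are omitted.

```python
import numpy as np
from scipy.special import eval_legendre

GM = 1.26687e17        # Jupiter GM, m^3/s^2 (approximate)
A_EQ = 7.1492e7        # reference equatorial radius, m (approximate)
J = {2: 1.4697e-2, 4: -5.87e-4, 6: 3.4e-5}   # approximate zonal harmonics

def potential(r, colat_rad):
    """External gravitational potential, zonal terms only."""
    mu = np.cos(colat_rad)
    series = 1.0 - sum(Jn * (A_EQ / r) ** n * eval_legendre(n, mu)
                       for n, Jn in J.items())
    return -GM / r * series

# Fractional effect of the zonal terms at the pole, r = a:
u0 = -GM / A_EQ
print(f"U(pole)/U0 - 1 = {potential(A_EQ, 0.0) / u0 - 1.0:.5f}")  # ~ -0.014
```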
  311. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions

    PubMed

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-09-01

    Quadrant impressions are commonly used as an alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 μm (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface, whereas high-frame-rate impression systems differed more in gingival areas. Triple-tray impressions displayed higher local deviation at the occlusal contact areas of the upper and lower jaw. Digital quadrant impression methods achieve a level of precision comparable to conventional impression techniques; however, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time-efficient capture of quadrant impressions is possible. The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications, yet the precision differs significantly between the digital impression systems.
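The precision metric used here, Perc90_10, is the spread between the 90th and 10th percentiles of the signed surface deviations obtained when repeated impressions of the same quadrant are superimposed. A minimal version of that computation, with simulated deviations standing in for real scan data:

```python
import numpy as np

def perc90_10(deviations_um):
    """Perc90_10: 90th minus 10th percentile of signed surface deviations."""
    p10, p90 = np.percentile(deviations_um, [10, 90])
    return p90 - p10

rng = np.random.default_rng(42)
scan_pair_deviation = rng.normal(0.0, 12.0, size=100_000)  # um, simulated
print(f"Perc90_10 = {perc90_10(scan_pair_deviation):.1f} um")  # ~31 um here
```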
  312. An efficient mixed-precision, hybrid CPU-GPU implementation of a nonlinearly implicit one-dimensional particle-in-cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Barnes, Daniel C

    2012-01-01

    Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility and dramatically improves solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations, with the aid of an analytical performance model (the roofline model), are employed. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on a Nvidia GeForce GTX580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with a double-precision CPU JFNK field solver, an overall performance gain of about 100× vs. the double-precision CPU-only serial version is obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.
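The precision split described, a single-precision particle mover inside a double-precision solver, can be caricatured in a few lines: push particles in float32 for speed, but accumulate solver-facing quantities in float64 so the nonlinear iteration retains its accuracy. This is a schematic of the precision split only, not of the implicit PIC algorithm itself; the field and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1_000_000).astype(np.float32)   # positions (single)
v = rng.normal(0, 1, 1_000_000).astype(np.float32)    # velocities (single)

def push(x, v, efield, dt=np.float32(0.01)):
    """Particle push entirely in float32 -- the GPU-friendly hot loop."""
    v = v + dt * efield(x)
    x = (x + dt * v) % np.float32(1.0)   # periodic unit-length domain
    return x, v

efield = lambda x: np.sin(2 * np.pi * x).astype(np.float32)  # toy field
x, v = push(x, v, efield)

# Solver-facing accumulation in float64: e.g., a moment for the residual.
mean_position = np.sum(x.astype(np.float64)) / x.size
print(f"mean position (double accumulation): {mean_position:.12f}")
```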
  313. CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100×-1000×) and even faster on GPU (up to ~10⁵×). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.

  314. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    USGS Publications Warehouse

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear-sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s⁻¹ were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
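The underlying free-water metabolism model is a diel oxygen budget: production tracks the PAR curve, respiration is a steady drain, and gas exchange pulls the water toward saturation; GPP and ER are the parameters fit to observed curves. The forward model below uses invented parameter values simply to show what is being estimated.

```python
import numpy as np

def simulate_do(gpp, er, k, par, o2_sat=10.0, o2_init=9.0):
    """Hourly diel dissolved-oxygen model (mg O2 / L): production follows
    the PAR curve; respiration and gas exchange are constant-rate terms."""
    o2 = [o2_init]
    for p in par:
        d_o2 = gpp * p / par.sum() - er / 24.0 + k * (o2_sat - o2[-1])
        o2.append(o2[-1] + d_o2)
    return np.array(o2)

hours = np.arange(24)
par = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)   # daylight 6-18 h
do = simulate_do(gpp=3.0, er=2.5, k=0.05, par=par)         # invented values
print(f"diel DO range: {do.min():.2f}-{do.max():.2f} mg/L")
```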
  315. Evaluation of leaf wetness duration models for operational use in strawberry disease-warning systems in four US states

    PubMed

    Montone, Verona O; Fraisse, Clyde W; Peres, Natalia A; Sentelhas, Paulo C; Gleason, Mark; Ellis, Michael; Schnabel, Guido

    2016-11-01

    Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called the "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and the timing of spray recommendations in comparison to on-site LWD measurements. Specific objectives were to (i) compare model-estimated and observed LWD and the resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effect of weather-station sensor precision on LWD model performance, and (iii) compare LWD model performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal to or greater than 90% (NHRH ≥90%), and Penman-Monteith (P-M). The P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE < 2 h) at a weather station with high-precision sensors, but was the least accurate when lower-precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control, and is therefore the model we recommend for expanding the strawberry disease-warning system beyond Florida, to other locations where weather stations may be deployed with lower-precision sensors and net radiation observations are not available.
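Of the models compared, NHRH ≥90% is the simplest to state: the estimated leaf wetness duration is just the count of hours with relative humidity at or above 90%. A sketch, with one invented day of hourly RH values:

```python
def lwd_nhrh(rh_hourly, threshold=90.0):
    """Leaf wetness duration (h) as hours with RH >= threshold (%)."""
    return sum(1 for rh in rh_hourly if rh >= threshold)

# Invented hourly RH (%): humid overnight, dry afternoon.
rh = [96, 97, 98, 98, 97, 95, 93, 91, 88, 80, 72, 65,
      60, 58, 60, 66, 74, 82, 88, 91, 93, 94, 95, 96]
print(f"estimated LWD = {lwd_nhrh(rh)} h")   # 13 h for this example day
```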
  316. Evaluation of leaf wetness duration models for operational use in strawberry disease-warning systems in four US states

    NASA Astrophysics Data System (ADS)

    Montone, Verona O.; Fraisse, Clyde W.; Peres, Natalia A.; Sentelhas, Paulo C.; Gleason, Mark; Ellis, Michael; Schnabel, Guido

    2016-11-01

    Duplicate record of entry 315; the abstract is identical to the PubMed entry above.

  317. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally, an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
  318. High-precision U-Pb geochronologic constraints on the Late Cretaceous terrestrial cyclostratigraphy and geomagnetic polarity from the Songliao Basin, Northeast China

    NASA Astrophysics Data System (ADS)

    Wang, Tiantian; Ramezani, Jahandar; Wang, Chengshan; Wu, Huaichun; He, Huaiyu; Bowring, Samuel A.

    2016-07-01

    The Cretaceous continental sedimentary records are essential to our understanding of how the terrestrial geologic and ecologic systems responded to past climate fluctuations under greenhouse conditions, and to our ability to forecast climate change in the future. The Songliao Basin of Northeast China preserves a near-complete, predominantly lacustrine, Cretaceous succession, with sedimentary cyclicity that has been tied to Milankovitch forcing of the climate. Over 900 meters of drill core recovered from the Upper Cretaceous (Turonian to Campanian) of the Songliao Basin has provided a unique opportunity for detailed analyses of its depositional and paleoenvironmental records through integrated, high-resolution cyclostratigraphic, magnetostratigraphic and geochronologic investigations. Here we report high-precision U-Pb zircon dates (CA-ID-TIMS method) from four interbedded bentonites from the drill core that offer substantial improvements in accuracy, and a ten-fold enhancement in precision, compared to the previous U-Pb SIMS geochronology, and allow a critical evaluation of the Songliao astrochronological time scale. The results indicate appreciable deviations of the astrochronologic model from the absolute radioisotopic geochronology, which more likely reflect cyclostratigraphic tuning inaccuracies and omitted cycles due to depositional hiatuses, rather than suspected limitations of astronomical models applied to distant geologic time. Age interpolation based on our new high-resolution geochronologic framework and the calibrated cyclostratigraphy places the end of the Cretaceous Normal Superchron (C34n-C33r chron boundary) in the Songliao Basin at 83.07 ± 0.15 Ma. This date also serves as a new and improved estimate for the global Santonian-Campanian stage boundary.
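Age interpolation from dated bentonites to undated horizons is, at its core, a depth-age model. A minimal linear version is sketched below; the depths and tie-point ages are invented placeholders (only the ~83.07 Ma boundary age above is from the record), and real work would also propagate the date uncertainties.

```python
import numpy as np

# Hypothetical bentonite tie points: depth in core (m) vs. U-Pb age (Ma).
depth_m = np.array([120.0, 340.0, 610.0, 870.0])
age_ma = np.array([80.1, 81.9, 83.8, 85.6])

def age_at(depth):
    """Linear depth-age interpolation between dated horizons."""
    return np.interp(depth, depth_m, age_ma)

# The depth whose interpolated age matches the reported boundary age:
print(f"age at 506 m: {age_at(506.0):.2f} Ma")   # ~83.07 Ma in this toy model
```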
  319. Impulsivity modulates performance under response uncertainty in a reaching task

    PubMed

    Tzagarakis, C; Pellizzer, G; Rogers, R D

    2013-03-01

    We sought to explore the interaction of the impulsivity trait with response uncertainty. To this end, we used a reaching task (Pellizzer and Hedges in Exp Brain Res 150:276-289, 2003) in which a motor response direction was cued at different levels of uncertainty (one cue, i.e., no uncertainty, two cues, or three cues). Data from 95 healthy adults (54 F, 41 M) were analysed. Impulsivity was measured using the Barratt Impulsiveness Scale version 11 (BIS-11). The behavioural variables recorded were reaction time (RT), errors of commission (referred to as 'early errors') and errors of precision. Data analysis employed generalised linear mixed models and generalised additive mixed models. For the early errors, there was an interaction of impulsivity with uncertainty and gender, with increased errors for high impulsivity in the one-cue condition for women and the three-cue condition for men. There was no effect of impulsivity on precision errors or RT. However, the analysis of the effect of RT and impulsivity on precision errors showed a different pattern for high versus low impulsives in the high-uncertainty (three-cue) condition. In addition, there was a significant early-error speed-accuracy trade-off for women, primarily in low uncertainty, and a 'reverse' speed-accuracy trade-off for men in high uncertainty. These results extend those of past studies of impulsivity, which help define it as a behavioural trait that modulates speed-versus-accuracy response styles depending on environmental constraints, and highlight once more the importance of gender in the interplay of personality and behaviour.

  320. A new model for yaw attitude of Global Positioning System satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Y. E.

    1995-01-01

    Proper modeling of the Global Positioning System (GPS) satellite yaw attitude is important in high-precision applications. A new model for the GPS satellite yaw attitude is introduced that constitutes a significant improvement over the previously available model in terms of efficiency, flexibility, and portability. The model is described in detail, and implementation issues, including the proper estimation strategy, are addressed. The performance of the new model is analyzed, and an error budget is presented. This is the first self-contained description of the GPS yaw attitude model.
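For context, the nominal yaw of a GPS satellite (the quantity such models correct during eclipse and noon turns) is commonly written as ψ = atan2(−tan β, sin μ), where β is the Sun's elevation above the orbital plane and μ the satellite's orbit angle from the midnight point. The transcription below is of this standard textbook nominal-attitude law, not of the specific model in the paper.

```python
import math

def nominal_yaw_deg(beta_deg: float, mu_deg: float) -> float:
    """Nominal GPS yaw angle: psi = atan2(-tan(beta), sin(mu))."""
    beta = math.radians(beta_deg)
    mu = math.radians(mu_deg)
    return math.degrees(math.atan2(-math.tan(beta), math.sin(mu)))

# Small beta -> rapid yaw swings near orbit noon/midnight (the hard regime):
for mu in (1.0, 45.0, 90.0, 179.0):
    print(f"beta=0.5 deg, mu={mu:5.1f} deg -> yaw = {nominal_yaw_deg(0.5, mu):7.2f} deg")
```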
  321. A farm-level precision land management framework based on integer programming

    PubMed Central

    Li, Qi; Hu, Guiping; Jubery, Talukder Zaki; Ganapathysubramanian, Baskar

    2017-01-01

    Farmland management involves several planning and decision-making tasks, including seed selection and irrigation management. A farm-level precision farmland management model based on mixed integer linear programming is proposed in this study. Optimal decisions are designed for pre-season planning of crops and irrigation water allocation. The model captures the effect of the size and shape of the decision scale as well as special irrigation patterns. The authors illustrate the model with a case study on a farm in the state of California in the U.S. and show that the model can capture the impact of precision farm management on profitability. The results show that a threefold increase in annual net profit for farmers could be achieved by carefully choosing irrigation and seed selection. Although farmers could increase profits by applying precision management to seed or irrigation alone, the profit increase is more significant if farmers apply precision management to seed and irrigation simultaneously. The proposed model can also serve as a risk-analysis tool for farmers facing seasonal irrigation water limits, as well as a quantitative tool to explore the impact of precision agriculture. PMID:28346499
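The decision structure here, discrete seed and irrigation choices per field maximizing profit under a seasonal water limit, is classic mixed-integer territory. Rather than pull in a MILP solver, the sketch below brute-forces a tiny instance to show the shape of the optimization; all coefficients and option names are invented.

```python
from itertools import product

# Hypothetical per-field options: (seed, irrigation) -> (profit $, water af).
fields = [
    {("conventional", "deficit"): (400, 1.0), ("conventional", "full"): (650, 2.0),
     ("drought_tolerant", "deficit"): (550, 1.0), ("drought_tolerant", "full"): (600, 2.0)},
    {("conventional", "deficit"): (300, 1.2), ("conventional", "full"): (700, 2.4),
     ("drought_tolerant", "deficit"): (520, 1.2), ("drought_tolerant", "full"): (640, 2.4)},
]
WATER_LIMIT = 3.5   # acre-feet available for the season (assumed)

best = None
for choice in product(*[list(f.keys()) for f in fields]):
    profit = sum(fields[i][c][0] for i, c in enumerate(choice))
    water = sum(fields[i][c][1] for i, c in enumerate(choice))
    if water <= WATER_LIMIT and (best is None or profit > best[0]):
        best = (profit, water, choice)

print(best)   # the water cap forces a joint seed/irrigation trade-off
```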
  322. Research on the tool holder mode in high speed machining

    NASA Astrophysics Data System (ADS)

    Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao

    2018-03-01

    High-speed machining technology can improve processing efficiency and precision while reducing processing cost, and is therefore widely regarded in industry. With the extensive application of high-speed machining, high-speed tool systems place ever higher requirements on the tool chuck. At present, several new kinds of chucks are used in high-speed precision machining, including the heat-shrinkage tool holder, the high-precision spring chuck, the hydraulic tool holder, and the three-rib deformation chuck. Among them, the heat-shrinkage tool holder has the advantages of high precision, high clamping force, high bending rigidity, and good dynamic balance, and is widely used. It is therefore of great significance to study the new requirements placed on the machining tool system. In order to meet the demands of high-speed precision machining, this paper surveys the common tool-holder technologies for high-precision machining and proposes how to correctly select a tool clamping system in practice. The characteristics of, and existing problems in, current tool clamping systems are analyzed.

  323. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study of the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
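The precision effect reported above has a clean conjugate-normal illustration: with a normal likelihood, the posterior precision is the sum of the prior and data precisions, so an informative prior always tightens the posterior, while accuracy depends on whether the prior's mean is right. A sketch with invented numbers (this is the general mechanism, not the study's mortality models):

```python
import math

def posterior(mu0, sd0, ybar, sd_data, n):
    """Normal-normal conjugate update for a mean."""
    prec = 1 / sd0**2 + n / sd_data**2
    mean = (mu0 / sd0**2 + n * ybar / sd_data**2) / prec
    return mean, math.sqrt(1 / prec)

ybar, sd_data, n = 0.12, 0.30, 25        # invented "observed" mortality data
for label, mu0, sd0 in [("vague", 0.0, 10.0), ("informative", 0.10, 0.02)]:
    mean, sd = posterior(mu0, sd0, ybar, sd_data, n)
    print(f"{label:11s} prior -> posterior {mean:.3f} +/- {sd:.3f}")
```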
325. A geometric modeler based on a dual-geometry representation: polyhedra and rational B-splines

    NASA Technical Reports Server (NTRS)

    Klosterman, A. L.

    1984-01-01

    For speed and database reasons, solid geometric modeling of large complex practical systems is usually approximated by a polyhedral representation. Precise parametric surface and implicit algebraic modelers are available, but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast, the GEOMOD geometric modeling system was built so that a polyhedral abstraction of the geometry is available for interactive modeling without losing the precise definition of the geometry. Part of the reason that polyhedral modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons), which permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation for describing a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.
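    To make the "one canonical format" point concrete, the sketch below evaluates a nonuniform rational B-spline curve directly from the Cox-de Boor recursion (a minimal Python illustration, not GEOMOD's implementation). With the weights shown, a quadratic NURBS segment reproduces an exact quarter circle, something no polynomial representation can do.

        import numpy as np

        def bspline_basis(i, p, u, knots):
            # Cox-de Boor recursion for the i-th degree-p basis function at u.
            if p == 0:
                return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
            left = right = 0.0
            denom = knots[i + p] - knots[i]
            if denom > 0:
                left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
            denom = knots[i + p + 1] - knots[i + 1]
            if denom > 0:
                right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
            return left + right

        def nurbs_point(u, ctrl, weights, knots, p=2):
            # Rational B-spline: weight the basis functions, then normalize.
            N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
            w = N * weights
            return (w[:, None] * ctrl).sum(axis=0) / w.sum()

        # Quadratic NURBS arc; this weight choice traces an exact quarter circle.
        ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
        weights = np.array([1.0, np.sqrt(2) / 2, 1.0])
        knots = [0, 0, 0, 1, 1, 1]
        for u in (0.0, 0.25, 0.5, 0.75, 0.999):
            x, y = nurbs_point(u, ctrl, weights, knots)
            print(f"u={u:5.3f}  point=({x:.4f}, {y:.4f})  radius={np.hypot(x, y):.6f}")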
326. Determination of the Kinematics of the Qweak Experiment and Investigation of an Atomic Hydrogen Moller Polarimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Valerie M.

    The Qweak experiment has tested the Standard Model by making a precise measurement of the weak charge of the proton ($Q_W^p$). This was done by measuring the parity-violating asymmetry for polarized electrons scattering off of unpolarized protons. The parity-violating asymmetry measured is directly proportional to the four-momentum transfer ($Q^2$) from the electron to the proton, so the extraction of $Q_W^p$ from the measured asymmetry requires a precise $Q^2$ determination. The Qweak experiment had $Q^2 = (24.8 \pm 0.1) \times 10^{-3}$ GeV$^2$, which achieved the goal of an uncertainty of $\leq 0.5\%$. From the measured asymmetry and $Q^2$, $Q_W^p$ was determined to be 0.0719 ± 0.0045, in good agreement with the Standard Model prediction. This puts a 7.5 TeV lower limit on possible "new physics". This dissertation describes the analysis of $Q^2$ for the Qweak experiment.

    Future parity-violating electron scattering experiments similar to the Qweak experiment will measure asymmetries to high precision in order to test the Standard Model. These measurements will require the beam polarization to be measured to sub-0.5% precision. Presently the electron beam polarization is measured through Moller scattering off of a ferromagnetic foil or through Compton scattering, both of which can have difficulty reaching this precision. A novel Atomic Hydrogen Moller Polarimeter has been proposed as a non-invasive way to measure the polarization of an electron beam via Moller scattering off of polarized monatomic hydrogen gas. This dissertation describes the development and initial analysis of a Monte Carlo simulation of an Atomic Hydrogen Moller Polarimeter.
327. Precision measurement of the weak charge of the proton

    PubMed

    2018-05-01

    Large experimental programmes in the fields of nuclear and particle physics search for evidence of physics beyond that explained by current theories. The observation of the Higgs boson completed the set of particles predicted by the standard model, which currently provides the best description of fundamental particles and forces. However, this theory's limitations include a failure to predict fundamental parameters, such as the mass of the Higgs boson, and the inability to account for dark matter and energy, gravity, and the matter-antimatter asymmetry in the Universe, among other phenomena. These limitations have inspired searches for physics beyond the standard model in the post-Higgs era through the direct production of additional particles at high-energy accelerators, which have so far been unsuccessful. Examples include searches for supersymmetric particles, which connect bosons (integer-spin particles) with fermions (half-integer-spin particles), and for leptoquarks, which mix the fundamental quarks with leptons. Alternatively, indirect searches using precise measurements of well predicted standard-model observables allow highly targeted tests for physics beyond the standard model, because they can reach mass and energy scales beyond those directly accessible by today's high-energy accelerators. Such an indirect search aims to determine the weak charge of the proton, which defines the strength of the proton's interaction with other particles via the well known neutral electroweak force. Because parity symmetry (invariance under the spatial inversion (x, y, z) → (-x, -y, -z)) is violated only in the weak interaction, it provides a tool with which to isolate the weak interaction and thus to measure the proton's weak charge [1]. Here we report the value 0.0719 ± 0.0045, where the uncertainty is one standard deviation, derived from our measured parity-violating asymmetry in the scattering of polarized electrons on protons, which is -226.5 ± 9.3 parts per billion (the uncertainty is one standard deviation). Our value for the proton's weak charge is in excellent agreement with the standard model [2] and sets multi-teraelectronvolt-scale constraints on any semi-leptonic parity-violating physics not described within the standard model. Our results show that precision parity-violating measurements enable searches for physics beyond the standard model that can compete with direct searches at high-energy accelerators and, together with astronomical observations, can provide fertile approaches to probing higher mass scales.

328. Study on digital closed-loop system of silicon resonant micro-sensor

    NASA Astrophysics Data System (ADS)

    Xu, Yefeng; He, Mengke

    2008-10-01

    Designing a miniature, highly reliable weak-signal extraction system is a critical problem to be solved in the application of silicon resonant micro-sensors. The closed-loop testing system based on an FPGA uses software in place of hardware circuitry, which dramatically decreases the system's mass and power consumption and makes the system more compact. Both correlation theory and a frequency scanning scheme are used in extracting the weak signal, and the adaptive frequency scanning algorithm ensures real-time operation. An error model was analyzed to show how to enhance the system's measurement precision. The experimental results show that the FPGA-based closed-loop testing system offers low power consumption, high precision, high speed and real-time operation, and is suitable for different kinds of silicon resonant micro-sensors.
329. Bitwise efficiency in chaotic models

    PubMed Central

    Jeffress, Stephen; Düben, Peter; Palmer, Tim

    2017-09-01

    Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz's prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit 'double' floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to a 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required, and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model. PMID:28989303
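    The flavour of the result is easy to reproduce: the sketch below integrates Lorenz 1963 from identical initial conditions in 64-, 32- and 16-bit floating point (numpy; an explicit Euler step with an illustrative dt = 0.01, not the paper's integrator). The trajectories decorrelate after a few time units, and whether that matters depends on the size of the initial-condition uncertainty, which is the paper's point.

        import numpy as np

        def lorenz_step(xyz, dt, dtype):
            # One explicit-Euler step of Lorenz 1963, carried out in the given dtype.
            x, y, z = (dtype(v) for v in xyz)
            s, r, b = dtype(10.0), dtype(28.0), dtype(8.0 / 3.0)
            dt = dtype(dt)
            return np.array([x + dt * s * (y - x),
                             y + dt * (x * (r - z) - y),
                             z + dt * (x * y - b * z)], dtype=dtype)

        for dtype in (np.float64, np.float32, np.float16):
            state = np.array([1.0, 1.0, 1.0], dtype=dtype)
            for _ in range(2000):          # 20 time units at dt = 0.01
                state = lorenz_step(state, 0.01, dtype)
            print(dtype.__name__, state)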
330. Quantitative volumetric imaging of normal, neoplastic and hyperplastic mouse prostate using ultrasound

    PubMed

    Singh, Shalini; Pan, Chunliu; Wood, Ronald; Yeh, Chiuan-Ren; Yeh, Shuyuan; Sha, Kai; Krolewski, John J; Nastiuk, Kent L

    2015-09-21

    Genetically engineered mouse models are essential to the investigation of the molecular mechanisms underlying human prostate pathology and the effects of therapy on the diseased prostate. Serial in vivo volumetric imaging expands the scope and accuracy of experimental investigations of models of normal prostate physiology, benign prostatic hyperplasia and prostate cancer, which are otherwise limited by the anatomy of the mouse prostate. Moreover, accurate imaging of hyperplastic and tumorigenic prostates is now recognized as essential to rigorous preclinical trials of new therapies. Bioluminescent imaging has been widely used to determine prostate tumor size, but is semi-quantitative at best. Magnetic resonance imaging can determine prostate volume very accurately, but is expensive and has low throughput. We therefore sought to develop and implement a high-throughput, low-cost, and accurate serial imaging protocol for the mouse prostate. We developed a high frequency ultrasound imaging technique employing 3D reconstruction that allows rapid and precise assessment of mouse prostate volume. Wild-type mouse prostates were examined (n = 4) for reproducible baseline imaging, treatment effects on volume were compared, and blinded data were analyzed for intra- and inter-operator reproducibility by correlation and Bland-Altman analysis. Examples of a benign prostatic hyperplasia mouse model prostate (n = 2) and orthotopic implantation of a human prostate cancer tumor and its growth (n = ) are also demonstrated. Serial measurement of mouse prostate volume revealed that high frequency ultrasound is very precise. Following endocrine manipulation, regression and regrowth of the prostate could be monitored with very low intra- and inter-observer variability. This technique was also valuable for monitoring the development of prostate growth in a model of benign prostatic hyperplasia. Additionally, we demonstrate accurate ultrasound image-guided implantation of orthotopic tumor xenografts and monitoring of subsequent tumor growth from ~10 to ~750 mm³ volume. High frequency ultrasound imaging allows precise determination of normal, neoplastic and hyperplastic mouse prostate volume. Low cost and small image size allow incorporation of this imaging modality inside clean animal facilities, and thereby imaging of immunocompromised models. 3D reconstruction for volume determination is easily mastered, and both small and large relative changes in volume are accurately visualized. Ultrasound imaging does not rely on penetration of exogenous imaging agents, and so may better measure poorly vascularized or necrotic diseased tissue relative to bioluminescent imaging (IVIS). Our method is precise and reproducible, with very low inter- and intra-observer variability. Because it is non-invasive, mouse models of prostatic disease states can be imaged serially, reducing inter-animal variability and enhancing the power to detect small volume changes following therapeutic intervention.
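    The volumetric step of such a 3D reconstruction reduces to integrating traced cross-sectional areas along the scan axis. A minimal sketch (the slice areas are hypothetical, not data from the study):

        import numpy as np

        def volume_from_slices(areas_mm2, spacing_mm):
            # Approximate organ volume by integrating traced cross-sections
            # along the scan axis (trapezoidal rule).
            return np.trapz(areas_mm2, dx=spacing_mm)

        # Hypothetical traced prostate cross-sections at 0.5 mm steps.
        areas = np.array([0.0, 2.1, 5.8, 9.6, 11.2, 10.4, 7.3, 3.5, 0.8, 0.0])
        print(f"estimated volume: {volume_from_slices(areas, 0.5):.1f} mm^3")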
331. [Research on airborne hyperspectral identification of red tide organism dominant species based on SVM]

    PubMed

    Ma, Yi; Zhang, Jie; Cui, Ting-wei

    2006-12-01

    Airborne hyperspectral identification of red tide dominant species can provide a technique for distinguishing red tides and their toxins, and provide support for assessing the scale of the disaster. Based on the support vector machine (SVM), the present paper provides an identification model for red tide dominant species. Utilizing this model, the authors carried out three identification experiments with hyperspectral data obtained on 16 July, 19 August and 25 August 2001. The identification results show that the model has high precision and is not restricted by the high dimensionality of the hyperspectral data.
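    The classification setup maps naturally onto any standard SVM implementation. A toy sketch with scikit-learn (synthetic spectra standing in for the airborne hyperspectral data; the band count and class structure are assumptions):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n_bands = 120                      # hyperspectral: many bands, few samples
        # Two hypothetical dominant species with slightly shifted spectral signatures.
        base = np.sin(np.linspace(0, 3 * np.pi, n_bands))
        X = np.vstack([base + 0.05 * k + 0.2 * rng.standard_normal(n_bands)
                       for k in (0, 1) for _ in range(40)])
        y = np.repeat([0, 1], 40)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        print("test accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))

    The RBF kernel works in the induced feature space rather than the raw 120-band space, which is one reason SVMs are not restricted by the dimensionality of hyperspectral data.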
332. Comparison of water vapor from observations and models in the Asian Monsoon UTLS region

    NASA Astrophysics Data System (ADS)

    Singer, C. E.; Clouser, B.; Gaeta, D. C.; Moyer, E. J.

    2017-12-01

    As part of the StratoClim campaign in July/August 2017, the Chicago Water Isotope Spectrometer (ChiWIS) made water vapor measurements from the mid-troposphere through the lower stratosphere (to 21 km altitude). We compare in-situ measurements with remote sensing observations and model projections, both to validate the measurements and to evaluate the added value of high-precision in-situ sampling. Preliminary results and comparison with other StratoClim tracer measurements suggest that the UTLS region is highly structured, beyond what models or satellite instruments can capture, and that ChiWIS accurately captures these variations.

333. Development of a Hydrogen Møller Polarimeter for Precision Parity-Violating Electron Scattering

    NASA Astrophysics Data System (ADS)

    Gray, Valerie M.

    2013-10-01

    Parity-violating electron scattering experiments allow for testing the Standard Model at low energy accelerators. Future parity-violating electron scattering experiments, like the P2 experiment at the Johannes Gutenberg University, Mainz, Germany, and the MOLLER and SoLID experiments at Jefferson Lab, will measure observables predicted by the Standard Model to high precision. In order to make these measurements, we will need to determine the polarization of the electron beam to sub-percent precision. The present ways of measuring the polarization, with Møller scattering in iron foils or using Compton laser backscattering, will not easily reach this precision. The novel Hydrogen Møller Polarimeter presents a non-invasive way to measure the electron polarization by scattering the electron beam off of atomic hydrogen gas polarized in a 7 Tesla solenoidal magnetic trap. This apparatus is expected to be operational by 2016 in Mainz. Currently, simulations of the polarimeter are used to develop the detection system at the College of William & Mary, while the hydrogen trap and superconducting solenoid magnet are being developed at the Johannes Gutenberg University, Mainz. I will discuss the progress of the design and development of this novel polarimeter system. This material is based upon work supported by the National Science Foundation under Grant No. PHY-1206053.

334. High precision determination of the melting points of water TIP4P/2005 and water TIP4P/Ice models by the direct coexistence technique

    NASA Astrophysics Data System (ADS)

    Conde, M. M.; Rovere, M.; Gallo, P.

    2017-12-01

    An exhaustive molecular dynamics study has been performed to analyze the factors that enhance the precision of the direct coexistence technique for a system of ice and liquid water. The factors analyzed are the stochastic nature of the method, finite size effects, and the influence of the initial ice configuration used. The results show that the precision of estimates obtained through the direct coexistence technique is markedly affected by finite size effects, requiring systems with a large number of molecules to reduce the error bar of the melting point. This increase in size lengthens the simulation time, but estimating the melting point with great accuracy is important, for example, in studies of the ice surface. We also verified that the choice of the initial ice Ih configuration with different proton arrangements does not significantly affect the estimate of the melting point. Importantly, this study leads us to estimate the melting point at ambient pressure of two of the most popular models of water, TIP4P/2005 and TIP4P/Ice, with the greatest precision to date.
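    Operationally, direct coexistence amounts to running an ice-liquid slab at trial temperatures, watching which phase grows, and bracketing the melting point. The toy sketch below replaces the molecular dynamics run with a noisy stand-in (interface_drift, a hypothetical helper) purely to show the bisection logic and why the method's stochastic scatter limits precision:

        import numpy as np

        T_MELT_TRUE = 252.0   # hidden "true" melting point of the toy model (K)

        def interface_drift(T, rng):
            # Stand-in for an MD coexistence run: returns the growth rate of the
            # liquid fraction (positive -> ice melts, negative -> ice grows).
            # A real implementation would run molecular dynamics; this toy adds
            # noise to mimic the method's finite-size stochastic scatter.
            return (T - T_MELT_TRUE) * 0.01 + rng.normal(0.0, 0.002)

        def bisect_melting_point(lo, hi, n_iter, rng):
            for _ in range(n_iter):
                mid = 0.5 * (lo + hi)
                if interface_drift(mid, rng) > 0.0:
                    hi = mid    # sample melted: melting point is below mid
                else:
                    lo = mid    # sample froze: melting point is above mid
            return 0.5 * (lo + hi)

        rng = np.random.default_rng(2)
        print(f"estimated melting point: {bisect_melting_point(200.0, 300.0, 12, rng):.1f} K")

    Near the melting point the drift signal falls below the noise, so repeated runs scatter; larger systems shrink that noise, which is the finite-size effect the paper quantifies.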
335. Valley Network Morphology and Topographic Gradients on Mars

    NASA Technical Reports Server (NTRS)

    Aharonson, Oded; Zuber, Maria T.; Rothman, Daniel H.; Schorghofer, Norbert; Phillips, Roger J.; Williams, Rebecca M. E.

    2001-01-01

    Data returned from the Mars Orbiter Laser Altimeter allows construction of a high precision digital elevation model. Quantitative investigations into the geomorphic properties of drainage features, similar to ones carried out on Earth, are now possible. Additional information is contained in the original extended abstract.

336. Testing of advanced technique for linear lattice and closed orbit correction by modeling its application for IOTA ring at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, A.

    Many modern and most future accelerators rely on precise configuration of the lattice and trajectory. The Integrable Optics Test Accelerator (IOTA) at Fermilab, which is coming to the final stages of construction, will be used to test advanced approaches to control over particle dynamics. Various experiments planned at IOTA require high flexibility of the lattice configuration as well as high precision of lattice and closed orbit control. Dense element placement does not allow an ideal configuration of diagnostics and correctors for all planned experiments. To overcome these limitations, an advanced method of lattice and closed orbit correction was developed, which could also be beneficial for other machines. The developed algorithm is based on the LOCO approach, extended with various sets of other experimental data, such as dispersion, BPM-to-BPM phase advances, beam shape information from synchrotron light monitors, responses of closed orbit bumps to variations of focusing elements, and others. Extensive modeling of corrections for a large number of random seed errors is used to illustrate the benefits of the developed approach.
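    At its core, a LOCO-type correction is a linear least-squares fit: measured orbit and optics responses are matched against a model Jacobian to infer gradient errors, whose negatives are then applied as corrections. A minimal sketch under invented dimensions and a random stand-in Jacobian (not IOTA's lattice model):

        import numpy as np

        rng = np.random.default_rng(3)
        n_bpm, n_quad = 40, 12

        # Hypothetical sensitivity (Jacobian) of readings at each BPM to each
        # quadrupole gradient error, as a lattice model would provide it.
        J = rng.standard_normal((n_bpm, n_quad))

        k_true = 0.01 * rng.standard_normal(n_quad)   # hidden gradient errors
        noise = 1e-4 * rng.standard_normal(n_bpm)     # BPM measurement noise
        measured = J @ k_true + noise

        # LOCO-style linear least squares: find the gradient errors that best
        # reproduce the measurements; applying -k_fit corrects the lattice.
        k_fit, *_ = np.linalg.lstsq(J, measured, rcond=None)
        print("max residual gradient error:", np.abs(k_true - k_fit).max())

    Appending extra data sets (dispersion, phase advances, bump responses) simply stacks more rows onto J and the measurement vector, which is how the extended method gains precision.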
337. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is the precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperforms existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while that of the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260

338. Space-based active optical remote sensing of carbon dioxide column using high-energy two-micron pulsed IPDA lidar

    NASA Astrophysics Data System (ADS)

    Singh, Upendra N.; Refaat, Tamer F.; Ismail, Syed; Petros, Mulugeta; Davis, Kenneth J.; Kawa, Stephan R.; Menzies, Robert T.

    2018-04-01

    Modeling of a space-based high-energy 2-μm triple-pulse Integrated Path Differential Absorption (IPDA) lidar was conducted to demonstrate carbon dioxide (CO2) measurement capability and to evaluate random and systematic errors. A high pulse energy laser and an advanced MCT e-APD detector were incorporated in this model. Projected performance shows 0.5 ppm precision and 0.3 ppm bias in low-tropospheric column CO2 mixing ratio measurements from space for 10 second signal averaging over the Railroad Valley (RRV) reference surface.
339. Differential porosimetry and permeametry for random porous media

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of the geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations: the findings uncover small oscillatory correlations with a period of roughly 850 μm, implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to enforce the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
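    The kind of analysis described, differential porosity profiles plus Fourier analysis, can be sketched in a few lines. Here a toy porosity profile with a weak 850 μm oscillation buried in noise (all amplitudes are assumptions) is recovered from its spectrum:

        import numpy as np

        rng = np.random.default_rng(4)
        dx = 10.0                           # window spacing (micrometres)
        x = np.arange(4096) * dx
        # Toy porosity profile: stationary noise plus a weak 850 um oscillation
        # like the one the paper attributes to the deposition algorithm.
        phi = (0.15 + 0.002 * np.sin(2 * np.pi * x / 850.0)
                    + 0.004 * rng.standard_normal(x.size))

        spectrum = np.abs(np.fft.rfft(phi - phi.mean()))
        freqs = np.fft.rfftfreq(x.size, d=dx)
        peak = freqs[np.argmax(spectrum)]
        print(f"dominant period: {1.0 / peak:.0f} um")

    Even though the oscillation amplitude is half the pointwise noise level, it concentrates in a single Fourier bin and stands far above the noise floor, which is why the periodicity is detectable.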
340. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise, on a time scale of 10-25 ms, both within single cells and across cells within a population. This time scale, established by non-stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with the spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrast and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
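    The structure of such a GLM is compact enough to sketch: a stimulus filter and a suppressive spike-history filter feed an exponential nonlinearity that sets the conditional intensity. The toy simulation below (all filter shapes and constants are assumptions, not the paper's fitted values) shows how history dependence shapes fine spike timing:

        import numpy as np

        rng = np.random.default_rng(5)
        dt, n = 0.001, 5000                     # 1 ms bins, 5 s of stimulus
        stimulus = rng.standard_normal(n)

        t = np.arange(40) * dt
        k = np.exp(-t / 0.01) - 0.5 * np.exp(-t / 0.02)   # stimulus filter
        h = -8.0 * np.exp(-t / 0.005)                      # suppressive spike history

        drive = np.convolve(stimulus, k)[:n]
        spikes = np.zeros(n)
        history = np.zeros(n)
        for i in range(n):
            rate = np.exp(2.5 + drive[i] + history[i])     # conditional intensity
            if rng.random() < min(rate * dt, 1.0):         # Bernoulli approximation
                spikes[i] = 1.0
                j = min(n, i + 1 + len(h))
                history[i + 1:j] += h[:j - i - 1]          # refractory feedback
        print(int(spikes.sum()), "spikes simulated with history-dependent timing")

    The strongly negative history filter suppresses firing for a few milliseconds after each spike, which is the mechanism by which non-stimulus-driven dynamics sharpen timing precision in the model.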
341. Tumor resection at the pelvis using three-dimensional planning and patient-specific instruments: a case series

    PubMed

    Jentzsch, Thorsten; Vlachopoulos, Lazaros; Fürnstahl, Philipp; Müller, Daniel A; Fuchs, Bruno

    2016-09-21

    Sarcomas are associated with a relatively high local recurrence rate of around 30% in the pelvis, with inadequate surgical margins the most important reason. However, obtaining adequate margins is particularly difficult in this anatomically demanding region. Recently, three-dimensional (3D) planning, printed models, and patient-specific instruments (PSI) with cutting blocks have been introduced to improve precision during surgical tumor resection. This case series illustrates these modern 3D tools in pelvic tumor surgery. The first consecutive patients with 3D-planned tumor resection around the pelvis were included in this retrospective study at a university hospital in 2015. Detailed information about the clinical presentation, imaging techniques, preoperative planning, intraoperative surgical procedures, and postoperative evaluation is provided for each case. The primary outcome was tumor-free resection margins as assessed by a postoperative computed tomography (CT) scan of the specimen; the secondary outcomes were the precision of preoperative planning and complications. Four patients with pelvic sarcomas were included, with a mean follow-up of 7.8 (range, 6.0-9.0) months. The combined use of preoperative planning with 3D techniques, 3D-printed models, and PSI for osteotomies led to higher precision (maximal error of 0.4 cm) than conventional 3D planning and freehand osteotomies (maximal error of 2.8 cm). Tumor-free margins were obtained where measurable (n = 3; margins were not assessable in a patient with curettage). Two insufficiency fractures were noted postoperatively. Three-dimensional planning as well as the intraoperative use of 3D-printed models and PSI are valuable for complex sarcoma resection at the pelvis. Three-dimensionally printed models of the patient anatomy may aid visualization and precision, and PSI with cutting blocks help perform very precise osteotomies for adequate resection margins.

342. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) to forecast the closing price and high price of stocks simultaneously. The modified k-nearest neighbors (KNN) algorithm has increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal the characteristic information of the signal with much accuracy, as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, resolves this weakness by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed EEMD-MKNN model has high predictive precision for short-term forecasting. Moreover, we extend this methodology to the two-dimensional case to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than EMD-KNN, the plain KNN method and ARIMA.
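    The k-nearest-neighbour stage of such a model is easily sketched: embed the series in delay vectors, find the k historical patterns closest to the current one, and average their successors. A minimal one-dimensional version without the EEMD stage (toy signal, uniform weights):

        import numpy as np

        def knn_forecast(series, m=4, k=5):
            # Predict the next value from the k nearest length-m delay vectors.
            X = np.array([series[i:i + m] for i in range(len(series) - m)])
            targets = series[m:]                 # successor of each window
            query = series[-m:]                  # the most recent pattern
            dist = np.linalg.norm(X - query, axis=1)
            nearest = np.argsort(dist)[:k]
            return targets[nearest].mean()

        rng = np.random.default_rng(6)
        t = np.arange(600) * 0.1
        series = np.sin(t) + 0.05 * rng.standard_normal(t.size)   # toy "price" signal
        pred = knn_forecast(series, m=4, k=5)
        print(f"next-step forecast: {pred:.3f}  (true ~ {np.sin(t[-1] + 0.1):.3f})")

    In the paper's two-stage scheme this matching would be applied to the EEMD components, and the delay vectors would span both the closing-price and high-price series.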
343. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on this research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of orbit accuracy and the future work needed. Several aspects need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to the force modeling aspect are addressed.
344. Modelling heterogeneity variances in multiple treatment comparison meta-analysis -- are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets, comparing in particular between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

345. Memory for a single object has differently variable precisions for relevant and irrelevant features

    PubMed

    Swan, Garrett; Collins, John; Wyble, Brad

    2016-01-01

    Working memory is a limited resource. To further characterize its limitations, it is vital to understand exactly what is encoded about a visual object beyond the "relevant" features probed in a particular task. We measured the memory quality of a task-irrelevant feature of an attended object by coupling a delayed estimation task with a surprise test. Participants were presented with a single colored arrow and were asked to retrieve just its color for the first half of the experiment, before unexpectedly being asked to report its direction. Mixture modeling of the data revealed that participants had highly variable precision on the surprise test, indicating a coarse-grained memory for the irrelevant feature. Following the surprise test, all participants could precisely recall the arrow's direction; however, this improvement in direction memory came at a cost in precision for color memory, even though only a single object was being remembered. We attribute these findings to varying levels of attention to different features during memory encoding.
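    The mixture modeling referred to in such delayed-estimation studies is commonly a von Mises ("memory") component plus a uniform ("guessing") component over circular report errors. A minimal maximum-likelihood sketch with scipy (the errors are synthetic and the generating values are assumptions, not the study's data):

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(7)
        # Toy report errors (radians): most trials remember the feature with
        # some concentration kappa, a fraction are random guesses.
        errors = np.concatenate([stats.vonmises.rvs(8.0, size=300, random_state=rng),
                                 rng.uniform(-np.pi, np.pi, 100)])

        def neg_log_lik(params):
            logit_w, log_kappa = params
            w = 1.0 / (1.0 + np.exp(-logit_w))      # memory probability in (0, 1)
            pdf = (w * stats.vonmises.pdf(errors, np.exp(log_kappa))
                   + (1.0 - w) / (2.0 * np.pi))     # uniform guessing floor
            return -np.log(pdf).sum()

        res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        w = 1.0 / (1.0 + np.exp(-res.x[0]))
        print(f"memory prob ~ {w:.2f}, concentration kappa ~ {np.exp(res.x[1]):.1f}")

    A low fitted kappa corresponds to the coarse, variable-precision memory reported for the irrelevant feature; a high kappa to the precise memory for the probed feature.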
346. Basic Geometric Support of Systems for Earth Observation from Geostationary and Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.

    2017-12-01

    A set of standardized models and algorithms for geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of high-precision ground calibration of the imaging equipment using reference objects, as well as issues of in-flight calibration and refinement of geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies was performed in the calibration of sensors for spacecraft of the Electro-L series and in simulations of the prospective Arktika system.
347. Constraining cosmic scatter in the Galactic halo through a differential analysis of metal-poor stars

    NASA Astrophysics Data System (ADS)

    Reggiani, Henrique; Meléndez, Jorge; Kobayashi, Chiaki; Karakas, Amanda; Placco, Vinicius

    2017-12-01

    Context: The chemical abundances of metal-poor halo stars are important to understanding key aspects of Galactic formation and evolution. Aims: We aim to constrain Galactic chemical evolution with precise chemical abundances of metal-poor stars (-2.8 ≤ [Fe/H] ≤ -1.5). Methods: Using high resolution, high S/N UVES spectra of 23 stars and employing the differential analysis technique, we estimated stellar parameters and obtained precise LTE chemical abundances. Results: We present the abundances of Li, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Zn, Sr, Y, Zr, and Ba. The differential technique allowed us to obtain an unprecedentedly low level of scatter in our analysis, with standard deviations as low as 0.05 dex and mean errors as low as 0.05 dex for [X/Fe]. Conclusions: By expanding our metallicity range with precise abundances from other works, we were able to precisely constrain Galactic chemical evolution models over a wide metallicity range (-3.6 ≤ [Fe/H] ≤ -0.4). The agreements and discrepancies found are key to further improvement of both models and observations. We also show that the LTE analysis of Cr II is a much more reliable source of abundances for chromium, as Cr I has important NLTE effects. These effects can be clearly seen when we compare the observed abundances of Cr I and Cr II with GCE models: while Cr I shows a clear disagreement between model and observations, Cr II is very well modeled. We confirm tight increasing trends of Co and Zn toward lower metallicities, and a tight flat evolution of Ni relative to Fe. Our results strongly suggest inhomogeneous enrichment from hypernovae. Our precise stellar parameters result in a low star-to-star scatter (0.04 dex) in the Li abundances of our sample, with a mean value about 0.4 dex lower than the prediction from standard Big Bang nucleosynthesis. We also study the relation between lithium depletion and stellar mass, but it is difficult to assess a correlation due to the limited mass range. We find two blue straggler stars, based on their very depleted Li abundances; one of them shows intriguing abundance anomalies, including a possible zinc enhancement, suggesting that zinc may have also been produced by a former AGB companion. Tables A.1-A.6 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A46
348. Error analysis of high-rate GNSS precise point positioning for seismic wave measurement

    NASA Astrophysics Data System (ADS)

    Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan

    2017-06-01

    High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported, with experiments, to reach a few millimeters in the horizontal components and sub-centimeter level in the vertical component for measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly mystifying performance of high-rate PPP over short periods, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP, which is further affirmed here by real data experiments: high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component for measuring motion within a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the centimeter level in both the horizontal and vertical components if the geometry of satellites is poor, with a large DOP value.
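    The geometry dependence in the last sentence is captured by the dilution of precision (DOP), computed from the pseudorange design matrix. A small sketch with hypothetical line-of-sight vectors (position precision scales roughly as DOP times the range-error sigma):

        import numpy as np

        # Hypothetical receiver-to-satellite unit line-of-sight vectors (ENU).
        los = np.array([[ 0.3,  0.4, 0.87],
                        [-0.5,  0.2, 0.84],
                        [ 0.1, -0.6, 0.79],
                        [-0.2, -0.3, 0.93],
                        [ 0.7,  0.1, 0.70]])
        los /= np.linalg.norm(los, axis=1, keepdims=True)

        # Geometry matrix of the pseudorange model: direction cosines + clock column.
        G = np.hstack([los, np.ones((len(los), 1))])
        Q = np.linalg.inv(G.T @ G)   # cofactor matrix of the position/clock solution
        hdop = np.sqrt(Q[0, 0] + Q[1, 1])
        vdop = np.sqrt(Q[2, 2])
        print(f"HDOP={hdop:.2f}  VDOP={vdop:.2f}")

    Because all satellites sit above the horizon, the vertical column of G is one-sided and VDOP typically exceeds HDOP, matching the weaker vertical precision reported for PPP.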
349. Precision Optics Curriculum

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

350. MultiGeMS: detection of SNVs from multiple samples using model selection on high-throughput sequencing data

    PubMed

    Murillo, Gabriel H; You, Na; Su, Xiaoquan; Cui, Wei; Reilly, Muredach P; Li, Mingyao; Ning, Kang; Cui, Xinping

    2016-05-15

    Single nucleotide variant (SNV) detection procedures are being utilized as never before to analyze the recent abundance of high-throughput DNA sequencing data, on both single and multiple sample datasets. Building on previously published work with the single-sample SNV caller genotype model selection (GeMS), a multiple-sample version of GeMS (MultiGeMS) is introduced. Unlike other popular multiple-sample SNV callers, the MultiGeMS statistical model accounts for enzymatic substitution sequencing errors. It also addresses the multiple testing problem endemic to multiple-sample SNV calling and utilizes high performance computing (HPC) techniques. A simulation study demonstrates that MultiGeMS ranks highest in precision among a selection of popular multiple-sample SNV callers, while showing exceptional recall in calling common SNVs. Further, both simulation studies and real data analyses indicate that MultiGeMS is robust to low-quality data. We also demonstrate that accounting for enzymatic substitution sequencing errors not only improves SNV call precision at low mapping quality regions, but also improves recall at reference allele-dominated sites with high mapping quality. The MultiGeMS package can be downloaded from https://github.com/cui-lab/multigems. Supplementary data are available at Bioinformatics online.

351. Eliminating barriers to personalized medicine

    PubMed Central

    2014-01-01

    With the emergence of high-throughput discovery platforms, robust preclinical small-animal models, and efficient clinical trial pipelines, it is becoming possible to envision a time when the treatment of human neurologic diseases will become personalized. The emergence of precision medicine will require the identification of subgroups of patients most likely to respond to specific biologically based therapies. This stratification only becomes possible when the determinants that contribute to disease heterogeneity become more fully elucidated. This review discusses the defining factors that underlie disease heterogeneity relevant to the potential for individualized brain tumor (optic pathway glioma) treatments arising in the common single-gene cancer predisposition syndrome, neurofibromatosis type 1 (NF1). In this regard, NF1 is posited as a model genetic condition for establishing a workable paradigm to actualize precision therapeutics for other neurologic disorders. PMID:24975854
Eliminating barriers to personalized medicine

    PubMed Central

    2014-01-01

    With the emergence of high-throughput discovery platforms, robust preclinical small-animal models, and efficient clinical trial pipelines, it is becoming possible to envision a time when the treatment of human neurologic diseases will become personalized. The emergence of precision medicine will require the identification of subgroups of patients most likely to respond to specific biologically based therapies. This stratification only becomes possible when the determinants that contribute to disease heterogeneity become more fully elucidated. This review discusses the defining factors that underlie disease heterogeneity relevant to the potential for individualized brain tumor (optic pathway glioma) treatments arising in the common single-gene cancer predisposition syndrome, neurofibromatosis type 1 (NF1). In this regard, NF1 is posited as a model genetic condition to establish a workable paradigm for actualizing precision therapeutics for other neurologic disorders. PMID:24975854

Operations research applications in nuclear energy

    NASA Astrophysics Data System (ADS)

    Johnson, Benjamin Lloyd

    This dissertation consists of three papers; the first is published in Annals of Operations Research, the second is nearing submission to INFORMS Journal on Computing, and the third is the predecessor of a paper nearing submission to Progress in Nuclear Energy. We apply operations research techniques to nuclear waste disposal and nuclear safeguards. Although these fields are different, they allow us to showcase some benefits of using operations research techniques to enhance nuclear energy applications. The first paper, "Optimizing High-Level Nuclear Waste Disposal within a Deep Geologic Repository," presents a mixed-integer programming model that determines where to place high-level nuclear waste packages in a deep geologic repository to minimize heat load concentration. We develop a heuristic that increases the size of solvable model instances. The second paper, "Optimally Configuring a Measurement System to Detect Diversions from a Nuclear Fuel Cycle," introduces a simulation-optimization algorithm and an integer-programming model to find the best, or near-best, resource-limited nuclear fuel cycle measurement system with a high degree of confidence. Given location-dependent measurement method precisions, we (i) optimize the configuration of n methods at n locations of a hypothetical nuclear fuel cycle facility, (ii) find the most important location at which to improve method precision, and (iii) determine the effect of measurement frequency on near-optimal configurations and objective values. Our results correspond to existing outcomes, but we obtain them at least an order of magnitude faster. The third paper, "Optimizing Nuclear Material Control and Accountability Measurement Systems," extends the integer program from the second paper to locate measurement methods in a larger, hypothetical nuclear fuel cycle scenario given fixed purchase and utilization budgets. This paper also presents two mixed-integer quadratic programming models to increase the precision of existing methods given a fixed improvement budget and to reduce the measurement uncertainty in the system while limiting improvement costs. We quickly obtain similar or better solutions compared to several intuitive analyses that take much longer to perform.
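The waste-placement problem in the first paper is solved there with a mixed-integer program plus a heuristic. As a toy illustration of the underlying idea of avoiding heat-load concentration, a simple hot/cold interleaving heuristic might look like this (package heat values and the linear slot layout are invented for the example; this is not the authors' model):

```python
def place_packages(heats, n_slots):
    """Alternate hottest and coolest packages along a line of slots
    (assumes n_slots == len(heats)) so adjacent heat loads are balanced."""
    ordered = sorted(heats, reverse=True)
    slots = [None] * n_slots
    left, right = 0, len(ordered) - 1
    for i in range(n_slots):
        if i % 2 == 0:
            slots[i] = ordered[left]; left += 1    # place next-hottest package
        else:
            slots[i] = ordered[right]; right -= 1  # place next-coolest package
    return slots

print(place_packages([9.0, 7.5, 4.0, 2.0, 1.0, 0.5], 6))
# -> [9.0, 0.5, 7.5, 1.0, 4.0, 2.0]: hot packages separated by cool ones
```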
Model-independent curvature determination with 21 cm intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda

    2018-06-01

    Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21 cm intensity mapping experiments such as the Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on 'avoiding' the DE-dominated regime and on non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.

Electrode Models for Electric Current Computed Tomography

    PubMed Central

    Cheng, Kuo-Sheng; Isaacson, David; Newell, J. C.; Gisser, David G.

    2016-01-01

    This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, and 29.5 Ω·cm were studied. Values of "effective" contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω·cm², respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured, in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an "effective" contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
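For reference, the complete electrode model named in this abstract is conventionally written as a conduction problem with contact-impedance boundary conditions on the electrodes. A sketch in the standard notation (u the interior potential, U_l and I_l the potential and total current on electrode e_l, z_l the effective contact impedance; notation assumed, following the usual formulation):

```latex
% Complete electrode model: conduction in the body \Omega with
% contact impedance z_l on each electrode e_l.
\nabla \cdot \left( \sigma \nabla u \right) = 0 \quad \text{in } \Omega, \qquad
u + z_l \, \sigma \frac{\partial u}{\partial n} = U_l \quad \text{on } e_l,
\qquad
\int_{e_l} \sigma \frac{\partial u}{\partial n} \, ds = I_l, \qquad
\sigma \frac{\partial u}{\partial n} = 0 \quad \text{between electrodes},
\qquad \textstyle\sum_l I_l = 0.
```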
Note: High precision measurements using high frequency gigahertz signals

    NASA Astrophysics Data System (ADS)

    Jin, Aohan; Fu, Siyuan; Sakurai, Atsunori; Liu, Liang; Edman, Fredrik; Pullerits, Tõnu; Öwall, Viktor; Karki, Khadga Jung

    2014-12-01

    Generalized lock-in amplifiers use digital cavities with Q-factors as high as 5 × 10^8 to measure signals with very high precision. In this Note, we show that generalized lock-in amplifiers can be used to analyze microwave (gigahertz) signals with a precision of a few tens of hertz. We propose that physical changes in the medium of propagation can be measured precisely by such ultra-high precision measurement of the signal. We provide evidence for this proposition by verifying Newton's law of cooling, measuring the effect of temperature changes on the phase and amplitude of signals propagating through two calibrated cables. The technique could be used to precisely measure different physical properties of the propagation medium, for example changes in length, resistance, etc. Real-time implementation of the technique can open up new methodologies of in situ virtual metrology in material design.
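The core of any digital lock-in measurement of this kind is mixing the signal with quadrature references at the known frequency and averaging, which isolates the amplitude and phase of that one spectral component. A minimal sketch with synthetic data (sampling rate, reference frequency and noise level are arbitrary choices, not values from the Note):

```python
import numpy as np

fs, f_ref = 1e6, 100e3                       # sample rate and reference frequency (Hz)
t = np.arange(100_000) / fs                  # integer number of reference periods
signal = 0.8 * np.sin(2 * np.pi * f_ref * t + 0.3) + 0.05 * np.random.randn(t.size)

# Mix with quadrature references and average (the digital low-pass step).
i_comp = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # in-phase
q_comp = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # quadrature
amplitude = np.hypot(i_comp, q_comp)         # ~0.8, the signal amplitude
phase = np.arctan2(q_comp, i_comp)           # ~0.3 rad, the signal phase
print(amplitude, phase)
```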
Improved ceramic slip casting technique [application to aircraft model fabrication]

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block or flask molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique: detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation, which adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation; these areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration to the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell mold of the calcium sulfate-bonded investment material. The shell mold is cooled to room temperature, and a ceramic slip is poured therein. After a ceramic shell of desired thickness has set up in the shell mold, excess ceramic slip is poured out. While still wet, the shell mold is peeled from the ceramic shell to expose any delicate or detailed parts, after which the ceramic shell is cured to provide a complete, detailed, precision ceramic article without parting lines.
Nucleation by rRNA Dictates the Precision of Nucleolus Assembly.

    PubMed

    Falahati, Hanieh; Pelham-Webb, Bobbie; Blythe, Shelby; Wieschaus, Eric

    2016-02-08

    Membrane-less organelles are intracellular compartments specialized to carry out specific cellular functions. There is growing evidence supporting the possibility that such organelles form as a new phase, separating from cytoplasm or nucleoplasm. However, a main challenge to such phase separation models is that the initial assembly, or nucleation, of the new phase is typically a highly stochastic process and does not allow for the spatiotemporal precision observed in biological systems. Here, we investigate the initial assembly of the nucleolus, a membrane-less organelle involved in different cellular functions including ribosomal biogenesis. We demonstrate that nucleolus formation is precisely timed in D. melanogaster embryos and follows the transcription of rRNA. We provide evidence that transcription of rRNA is necessary for overcoming the highly stochastic nucleation step in the formation of the nucleolus, through a seeding mechanism. In the absence of rDNA, the nucleolar proteins studied are able to form high-concentration assemblies. However, unlike the nucleolus, these assemblies are highly variable in number, location, and time at which they form. In addition, a quantitative study of the changes in the nucleoplasmic concentration and distribution of these nucleolar proteins in wild-type embryos is consistent with the role of rRNA in seeding nucleolus formation.
Precise orbit determination of the Sentinel-3A altimetry satellite using ambiguity-fixed GPS carrier phase observations

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Hackel, Stefan; Jäggi, Adrian

    2017-11-01

    The Sentinel-3 mission takes routine measurements of sea surface heights and depends crucially on accurate and precise knowledge of the spacecraft. Orbit determination with a targeted uncertainty of less than 2 cm in radial direction is supported through an onboard Global Positioning System (GPS) receiver, a Doppler Orbitography and Radiopositioning Integrated by Satellite instrument, and a complementary laser retroreflector for satellite laser ranging. Within this study, the potential of ambiguity fixing for GPS-only precise orbit determination (POD) of the Sentinel-3 spacecraft is assessed. A refined strategy for carrier phase generation out of low-level measurements is employed to cope with half-cycle ambiguities in the tracking of the Sentinel-3 GPS receiver that have so far inhibited ambiguity-fixed POD solutions. Rather than explicitly fixing double-difference phase ambiguities with respect to a network of terrestrial reference stations, a single-receiver ambiguity resolution concept is employed that builds on dedicated GPS orbit, clock, and wide-lane bias products provided by the CNES/CLS (Centre National d'Études Spatiales/Collecte Localisation Satellites) analysis center of the International GNSS Service. Compared to float ambiguity solutions, a notably improved precision can be inferred from laser ranging residuals. These decrease from roughly 9 mm down to 5 mm standard deviation for high-grade stations on average over low and high elevations. Furthermore, the ambiguity-fixed orbits offer a substantially improved cross-track accuracy and help to identify lateral offsets in the GPS antenna or center-of-mass (CoM) location. With respect to altimetry, the improved orbit precision also benefits the global consistency of sea surface measurements. However, modeling of the absolute height continues to rely on proper dynamical models for the spacecraft motion as well as ground calibrations for the relative position of the altimeter reference point and the CoM.
[Evaluation of the quality of three-dimensional data acquired by using two kinds of structured light intra-oral scanner to scan the crown preparation model].

    PubMed

    Zhang, X Y; Li, H; Zhao, Y J; Wang, Y; Sun, Y C

    2016-07-01

    To quantitatively evaluate the quality and accuracy of three-dimensional (3D) data acquired by using two kinds of structured light intra-oral scanner to scan typical tooth crown preparations. Eight typical tooth crown preparation models were each scanned 3 times with two kinds of structured light intra-oral scanner (A, B) as the test groups. A high-precision model scanner was used to scan the models as the true-value group. The data above the cervical margin were extracted. Quality indexes, including non-manifold edges, self-intersections, highly-creased edges, spikes, small components, small tunnels, small holes and the number of triangles, were measured with the Mesh Doctor tool in Geomagic Studio 2012. The scanned data of the test groups were aligned to the data of the true-value group, and 3D deviations of the test groups relative to the true-value group were measured for each scanned point, each preparation and each group. The independent-samples Mann-Whitney U test was applied to analyze the 3D deviations for each scanned point of the A and B groups, and correlation analysis was applied to index values and 3D deviation values. The total number of spikes in the A group was 96, while those in the B group and the true-value group were 5 and 0, respectively. Trueness: A group 8.0 (8.3) μm, B group 9.5 (11.5) μm (P>0.05). The correlation of the number of spikes with data precision in the A group was r=0.46. In this study, the quality of scanner B was better than that of scanner A, while the difference in accuracy was not statistically significant. There is a correlation between the quality and the precision of the data scanned with scanner A.
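The per-point 3D deviation measured here is, in essence, a nearest-neighbour distance between the aligned test scan and the reference scan. A rough sketch under that simplification (a production workflow would instead measure point-to-surface distance against the reference mesh):

```python
import numpy as np
from scipy.spatial import cKDTree

def deviations(test_points, reference_points):
    """test_points, reference_points: (n, 3) arrays of already-aligned scans."""
    tree = cKDTree(reference_points)
    d, _ = tree.query(test_points)          # nearest-neighbour distance per point
    return np.median(d), np.percentile(d, 90)

# Example: median and 90th-percentile deviation of an intra-oral scan
# against the high-precision model scan, both as (n, 3) point arrays.
```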
Three-frequency BDS precise point positioning ambiguity resolution based on raw observables

    NASA Astrophysics Data System (ADS)

    Li, Pan; Zhang, Xiaohong; Ge, Maorong; Schuh, Harald

    2018-02-01

    All BeiDou navigation satellite system (BDS) satellites transmit signals on three frequencies, which brings new opportunities and challenges for high-accuracy precise point positioning (PPP) with ambiguity resolution (AR). This paper proposes an effective uncalibrated phase delay (UPD) estimation and AR strategy based on a raw PPP model. First, triple-frequency raw PPP models are developed; the observation model and stochastic model are designed and extended to accommodate the third frequency. Then, the UPD is parameterized in raw frequency form while being estimated with the high-precision, low-noise integer linear combinations of float ambiguities derived by ambiguity decorrelation. Third, with UPDs corrected, the LAMBDA method is used to resolve the full or partial set of ambiguities that can be fixed. This method can be easily and flexibly extended to dual-, triple- or even more frequencies. To verify the effectiveness and performance of triple-frequency PPP AR, tests with real BDS data from 90 stations over 21 days were performed in static mode. Data were processed with three strategies: BDS triple-frequency ambiguity-float PPP, and BDS triple-frequency PPP with dual-frequency (B1/B2) and three-frequency AR, respectively. The results showed that, compared with the ambiguity-float solution, performance in terms of convergence time and positioning biases can be significantly improved by AR. Among the three groups of solutions, triple-frequency PPP AR achieved the best performance. Compared with dual-frequency AR, the additional third frequency apparently improves the position estimates during the initialization phase and in constrained environments where dual-frequency PPP AR is limited by a small number of visible satellites.

High precision in protein contact prediction using fully convolutional neural networks and minimal sequence features.

    PubMed

    Jones, David T; Kandathil, Shaun M

    2018-04-26

    In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov.
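The covariance features DeepCov consumes can be sketched directly from the abstract's description: pair frequencies from the alignment minus the outer product of single-site frequencies. A compact numpy version for an integer-encoded alignment (the encoding and the 21-letter alphabet size are assumptions; DeepCov's own feature code is at the URL above):

```python
import numpy as np

def pair_covariance(msa, a_size=21):
    """msa: (n_seqs, n_cols) integer-encoded alignment, values in [0, a_size)."""
    n, L = msa.shape
    one_hot = np.eye(a_size)[msa]                             # (n, L, 21)
    f_i = one_hot.mean(axis=0)                                # single-site freqs (L, 21)
    f_ij = np.einsum('nia,njb->ijab', one_hot, one_hot) / n   # pair freqs (L, L, 21, 21)
    return f_ij - np.einsum('ia,jb->ijab', f_i, f_i)          # covariance features

# Toy usage: 5 sequences of length 8 drawn at random.
msa = np.random.randint(0, 21, size=(5, 8))
print(pair_covariance(msa).shape)   # (8, 8, 21, 21)
```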
A spatial scan statistic for nonisotropic two-level risk cluster.

    PubMed

    Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie

    2012-01-30

    Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic can model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economic, or transport factors, the real high-risk kernel will not necessarily take the central place in the whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which can be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision in two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. The proposed method is illustrated using hand-foot-mouth disease data from Pingdu City, Shandong, China in May 2009, in comparison with the two other methods. In this practical study, the nonisotropic two-level method is the only one that precisely detects a high-risk area within a detected whole cluster.
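Scan statistics of this family evaluate, for each candidate zone, a likelihood ratio of elevated risk inside versus outside the zone. A generic sketch of the common Poisson formulation (this is the standard single-level statistic, not the authors' two-level extension):

```python
import numpy as np

def poisson_llr(n, mu, N):
    """Log-likelihood ratio for a zone with n observed cases, mu expected
    under the null, out of N total cases; scanned only for elevated risk."""
    if n <= mu:
        return 0.0
    inside = n * np.log(n / mu)
    outside = (N - n) * np.log((N - n) / (N - mu))
    return inside + outside

# A zone with 30 observed vs 12 expected cases out of 200 total:
print(poisson_llr(n=30, mu=12.0, N=200))
```

The zone maximizing this statistic is the detected cluster; significance is then usually assessed by Monte Carlo replication.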
Numerical Simulation Analysis of High-precision Dispensing Needles for Solid-liquid Two-phase Grinding

    NASA Astrophysics Data System (ADS)

    Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming

    2018-03-01

    In order to investigate the effect of abrasive flow polishing on variable-diameter pipe parts, with high-precision dispensing needles as the research object, a numerical simulation of the process of polishing a high-precision dispensing needle was carried out. The distributions of dynamic pressure and turbulence viscosity of the abrasive flow field in the needle were analyzed under different volume fraction conditions. Comparative analysis shows that abrasive-grain polishing of high-precision dispensing needles is effective, and that controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during polishing, so that the polishing quality of the abrasive grains can be controlled.

Strangeness S = -1 hyperon-nucleon interactions: Chiral effective field theory versus lattice QCD

    NASA Astrophysics Data System (ADS)

    Song, Jing; Li, Kai-Wen; Geng, Li-Sheng

    2018-06-01

    Hyperon-nucleon interactions serve as basic inputs to studies of hypernuclear physics and dense (neutron) stars. Unfortunately, a precise understanding of these important quantities has lagged far behind that of the nucleon-nucleon interaction due to the lack of high-precision experimental data. Historically, hyperon-nucleon interactions have been formulated either in quark models or in meson exchange models. In recent years, lattice QCD simulations and chiral effective field theory approaches have started to offer new insights from first principles. In the present work, we contrast the state-of-the-art lattice QCD simulations with the latest chiral hyperon-nucleon forces and show that the leading-order relativistic chiral results can already describe the lattice QCD data reasonably well. Given that the lattice QCD simulations are performed with pion masses ranging from the (almost) physical point to 700 MeV, such studies provide a useful check on both the chiral effective field theory approaches and the lattice QCD simulations. Nevertheless, more precise lattice QCD simulations are eagerly needed to refine our understanding of hyperon-nucleon interactions.

PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an aberration-free system suffer defocus blur due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic model of the PSF precisely due to the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the probability and statistics fields, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Different from the conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control output and adopts 2 texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that this method offers high precision and strong generalization ability.
Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach

    NASA Astrophysics Data System (ADS)

    Yu, Bing; Shu, Wenjun

    2017-03-01

    A nonlinear model for a turbofan engine above idle state, based on NARX, is studied. First, data sets for the JT9D engine are obtained via simulation from an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are carried out to verify the precision and dynamic performance of the models; the results show that the NARX model can reflect the dynamic characteristics of the turbofan engine with high accuracy.
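The structure of a NARX model, the output regressed on lagged outputs and lagged exogenous inputs, can be illustrated with a linear-in-parameters toy version fitted by least squares (the paper's engine model is a nonlinear network; the signal names and coefficients here are invented):

```python
import numpy as np

def fit_narx(u, y, n_lags=2):
    """Fit y[k] ~ const + lagged y + lagged u by ordinary least squares."""
    rows = [np.concatenate(([1.0], y[k - n_lags:k], u[k - n_lags:k]))
            for k in range(n_lags, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n_lags:], rcond=None)
    return theta

# Synthetic plant: e.g. spool speed y driven by fuel flow input u.
u = np.random.rand(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6 * y[k - 1] + 0.2 * y[k - 2] + 0.3 * u[k - 1]

print(fit_narx(u, y))   # recovers [~0, 0.2, 0.6, ~0, 0.3]
```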
Research on motion model for the hypersonic boost-glide aircraft

    NASA Astrophysics Data System (ADS)

    Xu, Shenda; Wu, Jing; Wang, Xueying

    2015-11-01

    A motion model for the hypersonic boost-glide (HBG) aircraft is proposed in this paper, and the precision of the model is analyzed through simulation. First, the trajectory of the HBG aircraft is analyzed and a scheme is adopted that divides the trajectory into two parts and builds a motion model for each part. Second, a constrained model of the boosting stage and a constrained model of J2 perturbation are established, and the observation model is set up. Finally, the simulation results show the feasibility and high accuracy of the model and motivate further research.

GNSS global real-time augmentation positioning: Real-time precise satellite clock estimation, prototype system construction and performance analysis

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang

    2018-01-01

    The large number of ambiguities in the un-differenced (UD) model leads to low computational efficiency, which is not appropriate for high-frequency (e.g., 1 Hz) real-time GNSS clock estimation. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize multi-GNSS real-time high-frequency clock updating, and a rigorous comparison and analysis under the same conditions are performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the Multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h predicted real-time orbits shows that the root mean square (RMS) error in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites, and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias, suitable for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that determines users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis shows that the real-time augmentation message SISRE is about 4-7 cm for GPS, about 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results prove that GPS + BeiDou + Galileo RT-PPP, compared to GPS-only, can effectively accelerate convergence time by about 60%, improve positioning accuracy by about 30%, and obtain an average RMS of 4 cm in the horizontal and 6 cm in the vertical; additionally, RT-SPP in the prototype system achieves a positioning accuracy of about 1 m RMS in the horizontal and 1.5-2 m in the vertical, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.
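SISRE combines orbit and clock errors into a single user-range figure. A sketch using the commonly quoted GPS weighting factors (the weights are constellation-dependent and are an assumption here, not values taken from this paper):

```python
import numpy as np

def sisre(dr, da, dc, dt_m, w_r=0.98, w_ac=1.0 / 49.0):
    """dr/da/dc: radial/along-track/cross-track orbit errors (m);
    dt_m: clock error expressed in metres; weights for GPS-like orbits."""
    return np.sqrt((w_r * dr - dt_m) ** 2 + w_ac * (da ** 2 + dc ** 2))

# Example: 3 cm radial, 5/4 cm along/cross, 2 cm clock error -> ~1.3 cm... 
print(sisre(dr=0.03, da=0.05, dc=0.04, dt_m=0.02))  # ~0.013 m SISRE
```

The radial error and clock error partially cancel in the first term, which is why SISRE can be smaller than either component alone.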
Lotus-on-chip: computer-aided design and 3D direct laser writing of bioinspired surfaces for controlling the wettability of materials and devices.

    PubMed

    Lantada, Andrés Díaz; Hengsbach, Stefan; Bade, Klaus

    2017-10-16

    In this study we present the combination of a math-based design strategy with direct laser writing as a high-precision technology for promoting solid free-form fabrication of multi-scale biomimetic surfaces. Results show a remarkable control of surface topography and wettability properties. Different examples of surfaces inspired by the lotus leaf, which to our knowledge are obtained for the first time following a computer-aided design with this degree of precision, are presented. Design and manufacturing strategies towards microfluidic systems whose fluid-driving capabilities are obtained just by promoting a design-controlled wettability of their surfaces are also discussed and illustrated by means of conceptual proofs. According to our experience, the synergies between the presented computer-aided design strategy and the capabilities of direct laser writing, supported by innovative writing strategies to promote final size while maintaining high precision, constitute a relevant step towards materials and devices with design-controlled multi-scale and micro-structured surfaces for advanced functionalities. To our knowledge, the surface geometry of the lotus leaf, which has relevant industrial applications thanks to its hydrophobic and self-cleaning behavior, had not previously been modeled and manufactured additively with the degree of precision presented here.

Nanomaterials for Cancer Precision Medicine.

    PubMed

    Wang, Yilong; Sun, Shuyang; Zhang, Zhiyuan; Shi, Donglu

    2018-04-01

    Medical science has recently advanced to the point where diagnosis and therapeutics can be carried out with high precision, even at the molecular level. A new field of "precision medicine" has consequently emerged with specific clinical implications and challenges that can be well-addressed by newly developed nanomaterials. Here, a nanoscience approach to precision medicine is provided, with a focus on cancer therapy, based on a new concept of "molecularly-defined cancers." "Next-generation sequencing" is introduced to identify the oncogene that is responsible for a class of cancers. This new approach is fundamentally different from all conventional cancer therapies that rely on diagnosis of the anatomic origins where the tumors are found. To treat cancers at the molecular level, a recently developed "microRNA replacement therapy" is applied, utilizing nanocarriers, in order to regulate the driver oncogene, which is the core of cancer precision therapeutics. Furthermore, the outcome of the nanomediated oncogenic regulation has to be accurately assessed by genetically characterized, patient-derived xenograft models. Cancer therapy in this fashion is a quintessential example of precision medicine, presenting many challenges to the materials communities with new issues in structural design, surface functionalization, gene/drug storage and delivery, cell targeting, and medical imaging.
Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially.
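The essence of the joint method is that the variable-flip-angle (VFA) and dynamic signal models share precontrast T1, so one stacked residual vector constrains both at once. A conceptual sketch (the SPGR protocol values are assumptions, and the enhancement model is a deliberately trivial placeholder, not the 2CXM used in the study):

```python
import numpy as np
from scipy.optimize import least_squares

TR, angles = 5e-3, np.deg2rad([2.0, 5.0, 10.0, 15.0])  # assumed SPGR protocol
t = np.linspace(0.0, 5.0, 20)                           # DCE time points (min)

def spgr(m0, t1, a):
    """Steady-state spoiled gradient echo signal."""
    e1 = np.exp(-TR / t1)
    return m0 * np.sin(a) * (1.0 - e1) / (1.0 - e1 * np.cos(a))

def residuals(p, vfa_meas, dce_meas):
    m0, t1, k = p                                   # k: placeholder uptake rate
    r_vfa = spgr(m0, t1, angles) - vfa_meas         # VFA model residuals
    r_dce = m0 * k * t / t1 - dce_meas              # toy enhancement residuals
    return np.concatenate([r_vfa, r_dce])           # one joint residual vector

truth = (1.0, 1.4, 0.05)
vfa = spgr(truth[0], truth[1], angles) + 1e-4 * np.random.randn(4)
dce = truth[0] * truth[2] * t / truth[1] + 1e-4 * np.random.randn(t.size)
fit = least_squares(residuals, x0=[0.8, 1.0, 0.1], args=(vfa, dce))
print(fit.x)    # jointly recovered (M0, T1, k)
```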
[Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ with portable near-infrared spectroscopy, modeling and analysis methods for in-situ detection were researched using 66 rock core samples from well drilling No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and the K-M function) were acquired. Modeling and analysis experiments were performed with 4 different modeling-data optimization methods, namely principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; 2 modeling methods, partial least squares (PLS) and back-propagation artificial neural network (BPANN); and the same data pre-processing, in order to determine the optimal analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the proper spectrum format of the modeling database for both modeling methods. Using the two modeling methods and four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD optimization methods can improve the modeling precision of a database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE optimization methods can improve the modeling precision of a database using any of the 3 spectrum formats. Except when using reflectance spectra with the PCA-MD optimization method, modeling precision with BPANN is better than with PLS. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method yields the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
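The PLS branch of such a modeling comparison is straightforward to sketch with scikit-learn's PLSRegression (the spectra and oil-yield values below are synthetic stand-ins; the study used its own implementation and preprocessing):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# 66 core samples x 500 spectral points (reflectance), synthetic stand-ins.
X = np.random.rand(66, 500)
y = 5.0 + 10.0 * X[:, 100] + 0.5 * np.random.randn(66)   # synthetic oil yield (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)

sep = np.std(y_te - pls.predict(X_te).ravel())   # standard error of prediction
print(f"SEP = {sep:.2f}%")
```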
Study on the cutting mechanism and the brittle-ductile transition model of isotropic pyrolytic graphite

    NASA Astrophysics Data System (ADS)

    Wang, Minghai; Wang, Hujun; Liu, Zhonghai

    2011-05-01

    Isotropic pyrolytic graphite (IPG) is a new kind of brittle material used for sealing aero-engine turbine shafts and high-temperature ethylene equipment. It not only has the general advantages of ordinary carbonaceous materials, such as high-temperature resistance, lubrication and abrasion resistance, but also has the advantages of impermeability and machinability that carbon/carbon composites lack. Therefore, it has broad prospects for development. The mechanism of brittle-ductile transition of IPG is the foundation of precision cutting, while the plastic deformation of IPG is the essential and most important mechanical behavior in precision cutting. Using strain gradient theory, the mechanism of material removal during precision cutting is analyzed, and the critical cutting thickness of IPG is calculated for the first time. Furthermore, the cutting process parameters, such as cutting depth and feed rate, corresponding to the scale of brittle-ductile transition deformation of IPG are calculated. Finally, based on micromechanics, the deformation behaviors of IPG, such as brittle fracture, plastic deformation and their mutual transformation, are simulated under the Sih G.C. fracture criterion, with the material under pressure-shear loading conditions. The results show that the optimum angle for IPG precision cutting is -30°.

Evaluation of the Coherence of the DORIS, SLR and GPS Reference Frames with Jason-1

    NASA Astrophysics Data System (ADS)

    Berthias, J.-P.; Broca, P.; Ferrier, C.; Gratton, S.; Guitart, A.; Houry, S.; Mercier, F.; Piuzzi, A.

    The French-American satellite Jason-1 was launched in December 2001 to continue the high-precision altimetry mission of TOPEX/Poseidon. The goal for Jason-1 is to outperform TOPEX in terms of orbit precision and to bring the radial orbit error down to the 1 cm level. Great care was taken to reduce spacecraft-related error sources: the shape of the spacecraft is simple and symmetrical, thermal blankets cover potential light traps, and the tanks are designed to keep the center of mass moving along a single axis as precisely as possible. Thus, equipped with the most advanced second-generation miniaturized DORIS receiver, a quality laser retroreflector array and a high-performance dual-frequency GPS receiver, Jason-1 should become the new laboratory for precision orbit determination. Preliminary results indicate that all systems perform remarkably well. The first orbits computed using each of the data types separately agree astonishingly well, a clear sign that good coherence between the reference frames has been achieved with the ITRF 2000. We present the details of these results, as well as the status of our efforts to combine the various data types to improve orbit precision. In addition, we present the time evolution of the various empirical corrections over a nearly complete solar angle cycle, which provides an evaluation of the quality of the pre-launch spacecraft surface force model.
Accounting for regional variation in both natural environment and human disturbance to improve performance of multimetric indices of lotic benthic diatoms.

    PubMed

    Tang, Tao; Stevenson, R Jan; Infante, Dana M

    2016-10-15

    Regional variation in both natural environment and human disturbance can influence the performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for such variation: site groups defined by ecoregions or by diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy, respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, the difference between MMIs at reference and highly disturbed sites, the percent of highly disturbed sites properly classified, and the relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site-specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site-specific modeling significantly improved MMI performance irrespective of the site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve the overall performance of MMIs once natural variation in metrics was accounted for with site-specific models. We conclude that using different metrics among ecoregions and site-specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduce a tradeoff with assessment consistency when metrics differ across site groups, but they justify the convenient and consistent use of ecoregions.

Potential for noninvasive assessment of lung inhomogeneity using highly precise, highly time-resolved measurements of gas exchange

    PubMed Central

    Mountain, James E.; Santer, Peter; O'Neill, David P.; Smith, Nicholas M. J.; Ciaffoni, Luca; Couper, John H.; Ritchie, Grant A. D.; Hancock, Gus; Whiteley, Jonathan P.

    2018-01-01

    Inhomogeneity in the lung impairs gas exchange and can be an early marker of lung disease. We hypothesized that highly precise measurements of gas exchange contain sufficient information to quantify many aspects of the inhomogeneity noninvasively. Our aim was to explore whether one parameterization of lung inhomogeneity could both fit such data and provide reliable parameter estimates. A mathematical model of gas exchange in an inhomogeneous lung was developed, containing inhomogeneity parameters for compliance, vascular conductance, and dead space, all relative to lung volume. Inputs were respiratory flow, cardiac output, and the inspiratory and pulmonary arterial gas compositions. Outputs were expiratory and pulmonary venous gas compositions. All values were specified every 10 ms. Some parameters were set to physiologically plausible values. To estimate the remaining unknown parameters and inputs, the model was embedded within a nonlinear estimation routine to minimize the deviations between model and data for CO2, O2, and N2 flows during expiration. Three groups, each of six individuals, were studied: young (20-30 yr); old (70-80 yr); and patients with mild to moderate chronic obstructive pulmonary disease (COPD). Each participant undertook a 15-min measurement protocol six times. For all parameters reflecting inhomogeneity, highly significant differences were found between the three participant groups (P < 0.001, ANOVA). Intraclass correlation coefficients were 0.96, 0.99, and 0.94 for the parameters reflecting inhomogeneity in dead space, compliance, and vascular conductance, respectively. We conclude that, for the particular participants selected, highly repeatable estimates for parameters reflecting inhomogeneity could be obtained from noninvasive measurements of respiratory gas exchange. NEW & NOTEWORTHY This study describes a new method, based on highly precise measures of gas exchange, that quantifies three distributions that are intrinsic to the lung. These distributions represent three fundamentally different types of inhomogeneity that together give rise to ventilation-perfusion mismatch and result in impaired gas exchange. The measurement technique has potentially broad clinical applicability because it is simple for both patient and operator, it does not involve ionizing radiation, and it is completely noninvasive. PMID:29074714
Peer Assessment with Online Tools to Improve Student Modeling

    ERIC Educational Resources Information Center

    Atkins, Leslie J.

    2012-01-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to…
  383. High-precision photometry by telescope defocussing - VI. WASP-24, WASP-25 and WASP-26

    NASA Astrophysics Data System (ADS)

    Southworth, John; Hinse, T. C.; Burgdorf, M.; Calchi Novati, S.; Dominik, M.; Galianni, P.; Gerner, T.; Giannini, E.; Gu, S.-H.; Hundertmark, M.; Jørgensen, U. G.; Juncher, D.; Kerins, E.; Mancini, L.; Rabus, M.; Ricci, D.; Schäfer, S.; Skottfelt, J.; Tregloan-Reed, J.; Wang, X.-B.; Wertz, O.; Alsubai, K. A.; Andersen, J. M.; Bozza, V.; Bramich, D. M.; Browne, P.; Ciceri, S.; D'Ago, G.; Damerdji, Y.; Diehl, C.; Dodds, P.; Elyiv, A.; Fang, X.-S.; Finet, F.; Figuera Jaimes, R.; Hardis, S.; Harpsøe, K.; Jessen-Hansen, J.; Kains, N.; Kjeldsen, H.; Korhonen, H.; Liebig, C.; Lund, M. N.; Lundkvist, M.; Mathiasen, M.; Penny, M. T.; Popovas, A.; Prof., S.; Rahvar, S.; Sahu, K.; Scarpetta, G.; Schmidt, R. W.; Schönebeck, F.; Snodgrass, C.; Street, R. A.; Surdej, J.; Tsapras, Y.; Vilela, C.

    2014-10-01

    We present time series photometric observations of 13 transits in the planetary systems WASP-24, WASP-25 and WASP-26. All three systems have orbital obliquity measurements, WASP-24 and WASP-26 have been observed with Spitzer, and WASP-25 was previously comparatively neglected. Our light curves were obtained using the telescope-defocussing method and have scatters of 0.5-1.2 mmag relative to their best-fitting geometric models. We use these data to measure the physical properties and orbital ephemerides of the systems to high precision, finding that our improved measurements are in good agreement with previous studies. High-resolution Lucky Imaging observations of all three targets show no evidence for faint stars close enough to contaminate our photometry. We confirm the eclipsing nature of the star closest to WASP-24 and present the detection of a detached eclipsing binary within 4.25 arcmin of WASP-26.
  384. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    PubMed

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-12-30

    Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. For point clouds with non-overlapping areas or curvature-invariant surfaces, however, it is difficult to achieve high precision. This paper presents a high-precision registration method based on sphere feature constraints to overcome the difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function, so that the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method.

  385. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    PubMed Central

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-01-01

    Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. For point clouds with non-overlapping areas or curvature-invariant surfaces, however, it is difficult to achieve high precision. This paper presents a high-precision registration method based on sphere feature constraints to overcome the difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function, so that the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method.

    PMID:28042846
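Both records above rest on turning fitted sphere centres into corresponding point pairs. Once centres are matched, the rigid transform follows in closed form; the sketch below shows the standard SVD-based (Kabsch) solution on synthetic centres. The papers' specific weight function is not reproduced here, so this is an unweighted illustration.

```python
# Sketch: closed-form rigid transform from corresponding sphere centres.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src + t ≈ dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

centres_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° about z
centres_b = centres_a @ R_true.T + np.array([0.5, -0.2, 0.1])
R, t = rigid_transform(centres_a, centres_b)
print(np.round(R, 3), np.round(t, 3))
```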
  386. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
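The core of the Pareto optimal schemes named above is forming residuals in Box-Cox-transformed flow space, where error variance is roughly constant across low and high flows. A minimal sketch, with invented flow values and a fixed λ = 0.2:

```python
# Sketch: residuals in Box-Cox-transformed space to handle heteroscedasticity.
import numpy as np

def box_cox(q, lam=0.2):
    # lam = 0 reduces to the log scheme mentioned in the record.
    return (q**lam - 1.0) / lam if lam != 0.0 else np.log(q)

observed  = np.array([0.8, 2.0, 15.0, 120.0, 640.0])   # streamflow (illustrative)
simulated = np.array([1.1, 1.7, 18.0, 100.0, 700.0])
eta = box_cox(observed) - box_cox(simulated)           # transformed residuals
print(eta.round(3))
# A probabilistic forecast then samples eta ~ N(mu, sigma^2) and back-
# transforms: q = (lam * z + 1)**(1 / lam), where z = box_cox(simulated) + eta.
```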
  387. Diode laser spectroscopy: precise spectral line shape measurements

    NASA Astrophysics Data System (ADS)

    Nadezhdinskii, A. I.

    1996-07-01

    Among modern trends in tunable diode laser spectroscopy (TDLS), precise line shape measurements have become one of the most promising applications of diode lasers in high resolution molecular spectroscopy. Accuracy limitations of TDL spectrometers are considered in this paper, proving the ability to measure spectral line profiles with precision better than 1%. A four-parameter Voigt profile is used to fit the experimental spectrum, and the possibility of line shift measurements with an accuracy of 2 × 10⁻⁵ cm⁻¹ is shown. Test experiments demonstrate the error in line intensity ratios to be less than 0.3% for the proposed approach. Differences between "soft" and "hard" models of line shape have been observed experimentally for the first time. Some observed resonance effects are considered with respect to collisional adiabaticity.

  388. Precision timing measurements of PSR J1012+5307

    NASA Astrophysics Data System (ADS)

    Lange, Ch.; Camilo, F.; Wex, N.; Kramer, M.; Backer, D. C.; Lyne, A. G.; Doroshenko, O.

    2001-09-01

    We present results and applications of high-precision timing measurements of the binary millisecond pulsar J1012+5307. Combining our radio timing measurements with results based on optical observations, we derive complete 3D velocity information for this system. Correcting for Doppler effects, we derive the intrinsic spin parameters of this pulsar and a characteristic age of 8.6 ± 1.9 Gyr. Our upper limit for the orbital eccentricity of only 8×10⁻⁷ (68 per cent confidence level) is the smallest ever measured for a binary system. We demonstrate that this makes the pulsar an ideal laboratory in which to test certain aspects of alternative theories of gravitation. Our precision measurements suggest deviations from a simple pulsar spin-down timing model, which are consistent with timing noise and the extrapolation of the known behaviour of slowly rotating pulsars.

  389. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method

    PubMed

    Wang, Dan; Silkie, Sarah S.; Nelson, Kara L.; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement in the precision of sample processing and qPCR reactions would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists.
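The Monte Carlo correction in the record above can be caricatured in a few lines: model the measured marker concentration as the true concentration degraded by false negatives and inflated by false-positive signal, plus lognormal measurement error, then sample those nuisance distributions and invert. The relation and every distribution parameter below are simplified placeholders, not the paper's fitted distributions.

```python
# Sketch: Monte Carlo inversion of a total-probability relation for qPCR.
import numpy as np

rng = np.random.default_rng(1)
measured = 5.0e4            # marker copies per 100 mL (illustrative)
n = 100_000

p_fn = rng.beta(2, 18, n)                    # false-negative probability
c_fp = rng.lognormal(np.log(500), 0.5, n)    # false-positive contribution
err  = rng.lognormal(0.0, 0.1, n)            # precision error (sd 0.1 in log)

# Assumed relation: measured ≈ (true * (1 - p_fn) + c_fp) * err
true = (measured / err - c_fp) / (1.0 - p_fn)
true = true[true > 0]                        # discard non-physical draws
print(np.percentile(true, [2.5, 50, 97.5]).round(0))  # distribution of true conc.
```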
  390. Highly efficient classification and identification of human pathogenic bacteria by MALDI-TOF MS

    PubMed

    Hsieh, Sen-Yung; Tseng, Chiao-Li; Lee, Yun-Shien; Kuo, An-Jing; Sun, Chien-Feng; Lin, Yen-Hsiu; Chen, Jen-Kun

    2008-02-01

    Accurate and rapid identification of pathogenic microorganisms is of critical importance in disease treatment and public health. Conventional work flows are time-consuming, and procedures are multifaceted. MS can be an alternative but is limited by low efficiency for amino acid sequencing as well as low reproducibility for spectrum fingerprinting. We systematically analyzed the feasibility of applying MS for rapid and accurate bacterial identification. Directly applying bacterial colonies without further protein extraction to MALDI-TOF MS analysis revealed rich peak contents and high reproducibility. The MS spectra derived from 57 isolates comprising six human pathogenic bacterial species were analyzed using both unsupervised hierarchical clustering and supervised model construction via the Genetic Algorithm. Hierarchical clustering analysis categorized the spectra into six groups precisely corresponding to the six bacterial species. Precise classification was also maintained in an independently prepared set of bacteria even when the number of m/z values was reduced to six. In parallel, classification models were constructed via Genetic Algorithm analysis. A model containing 18 m/z values accurately classified independently prepared bacteria and identified those species originally not used for model construction. Moreover, samples of fewer than 10⁴ bacterial cells and different species in bacterial mixtures were identified using the classification model approach. In conclusion, the application of MALDI-TOF MS in combination with a suitable model construction provides a highly accurate method for bacterial classification and identification. The approach can identify bacteria with low abundance even in mixed flora, suggesting that rapid and accurate bacterial identification using MS techniques even before culture can be attained in the near future.
  391. High precision measurements of 26Na β⁻ decay

    NASA Astrophysics Data System (ADS)

    Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A.; Ball, G. C.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hardy, J. C.; Hyland, B.; Iacob, V. E.; Koopmans, K. A.; Kulp, W. D.; Leslie, J. R.; MacDonald, J. A.; Morton, A. C.; Ormand, W. E.; Osborne, C. J.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Scraggs, H. C.; Schwarzenberg, J.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Wood, J. L.; Zganjar, E. F.

    2005-04-01

    High-precision measurements of the half-life and β-branching ratios for the β⁻ decay of 26Na to 26Mg have been made in β-counting and γ-decay experiments, respectively. A 4π proportional counter and fast tape transport system were employed for the half-life measurement, whereas the γ rays emitted by the daughter nucleus 26Mg were detected with the 8π γ-ray spectrometer, both located at TRIUMF's isotope separator and accelerator radioactive beam facility. The half-life of 26Na was determined to be T1/2 = 1.07128 ± 0.00013 ± 0.00021 s, where the first error is statistical and the second systematic. The logft values derived from these experiments are compared with theoretical values from a full sd-shell model calculation.
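At its simplest, a half-life measurement of the kind reported above reduces to fitting an exponential decay plus flat background to binned counting data. The sketch below does exactly that on synthetic Poisson counts; the real analysis handles dead time and systematics far more carefully, and all numbers here are invented apart from the nominal T1/2.

```python
# Sketch: extracting a half-life from beta-counting data by weighted fit.
import numpy as np
from scipy.optimize import curve_fit

def model(t, n0, t_half, bkg):
    # Exponential decay with constant background.
    return n0 * np.exp(-np.log(2.0) * t / t_half) + bkg

t = np.linspace(0.0, 10.0, 200)                       # seconds
rng = np.random.default_rng(2)
counts = rng.poisson(model(t, 1.0e4, 1.07128, 50.0))  # synthetic data
popt, pcov = curve_fit(model, t, counts, p0=(9.0e3, 1.0, 10.0),
                       sigma=np.sqrt(counts + 1.0))   # Poisson weights
print(popt[1], np.sqrt(pcov[1, 1]))                   # T1/2 and its error
```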
  392. An approach for real-time fast point positioning of the BeiDou Navigation Satellite System using augmentation information

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Rui; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2018-07-01

    This study proposes an approach to facilitate real-time fast point positioning of the BeiDou Navigation Satellite System (BDS) based on regional augmentation information. We term this the precise positioning based on augmentation information (BPP) approach. The coordinates of the reference stations were highly constrained to extract the augmentation information, which contained not only the satellite orbit and clock error correlated with the satellite running state, but also the atmospheric and unmodeled errors correlated with the spatial and temporal states. Based on these mixed augmentation corrections, a precise point positioning (PPP) model could be used for the coordinate estimation of the user stations, and the float ambiguity could be easily fixed for the single-difference between satellites. Thus, this technique provided a quick and high-precision positioning service. Three different datasets with small, medium, and large baselines (0.6 km, 30 km and 136 km) were used to validate the feasibility and effectiveness of the proposed BPP method. The validations showed that, using the BPP model, a 1–2 cm positioning service can be provided over a 100 km wide area after just 2 s of initialization. As the proposed approach capitalizes on the strengths of both PPP and RTK while providing consistent application, it can be used for area augmentation positioning.

  393. Aspects of Particle Physics Beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaochuan

    This dissertation describes a few aspects of particles beyond the Standard Model, with a focus on the remaining questions after the discovery of a Standard Model-like Higgs boson. In specific, three topics are discussed in sequence: neutrino mass and baryon asymmetry, the naturalness problem of the Higgs mass, and placing constraints on theoretical models from precision measurements. First, the consequence of neutrino mass anarchy on cosmology is studied. Attention is paid in particular to the total mass of neutrinos and baryon asymmetry through leptogenesis. With the assumption of independence among mass matrix entries in addition to basis independence, the Gaussian measure is the only choice. On top of the Gaussian measure, a simple approximate U(1) flavor symmetry makes leptogenesis highly successful. Correlations between the baryon asymmetry and the light-neutrino quantities are investigated. Also discussed are possible implications of the recently suggested large total mass of neutrinos by the SDSS/BOSS data. Second, the Higgs mass implies fine-tuning for minimal theories of weak-scale supersymmetry (SUSY). Non-decoupling effects can boost the Higgs mass when new states interact with the Higgs, but new sources of SUSY breaking that accompany such extensions threaten naturalness. I will show that two singlets with a Dirac mass can increase the Higgs mass while maintaining naturalness in the presence of large SUSY breaking in the singlet sector. The modified Higgs phenomenology of this scenario, termed "Dirac NMSSM", is also studied. Finally, the sensitivities of future precision measurements in probing physics beyond the Standard Model are studied. A practical three-step procedure is presented for using the Standard Model effective field theory (SM EFT) to connect ultraviolet (UV) models of new physics with weak scale precision observables. With this procedure, one can interpret precision measurements as constraints on the UV model concerned. A detailed explanation is given for calculating the effective action up to one-loop order in a manifestly gauge covariant fashion. This covariant derivative expansion method dramatically simplifies the process of matching a UV model with the SM EFT, and also makes available a universal formalism that is easy to use for a variety of UV models. A few general aspects of RG running effects and choosing operator bases are discussed. Mapping results are provided between the bosonic sector of the SM EFT and a complete set of precision electroweak and Higgs observables to which present and near future experiments are sensitive. Many results and tools which should prove useful to those wishing to use the SM EFT are detailed in several appendices.
  394. High-Precision U-Pb Geochronology of Ice River Perovskite: A Possible Interlaboratory and Intertechnique EARTHTIME Standard

    NASA Astrophysics Data System (ADS)

    Burgess, S. D.; Bowring, S. A.; Heaman, L. M.

    2012-12-01

    Accurate and precise U-Pb geochronology of accessory phases other than zircon is required for dating some LIP basalts or determining the temporal patterns of kimberlite pipes, for example. Advances in precision and accuracy lead directly to an increase in the complexity of questions that can be posed. U-Pb geochronology of perovskite (CaTiO3) has been applied to silica-undersaturated basalts, carbonatites, alkaline igneous rocks, and kimberlites. Most published IDTIMS perovskite dates have 2-sigma precisions at the ~0.2% level for weighted mean 206Pb/238U dates, much poorer than is possible with IDTIMS analyses of zircon, which limits the applicability of perovskite in high-precision applications. Precision on perovskite dates is lower than for zircon because of common Pb, which in some cases can be up to 50% of the total Pb and must be corrected for and accurately partitioned between blank and initial. Relatively small changes in the composition of common Pb can result in inaccurate but precise dates. In many cases minerals with significant common Pb are corrected using the Stacey and Kramers (1975) two-stage Pb evolution model. This can be done without serious consequence to the final date for minerals with high U/Pb ratios. In the more common case where U/Pb ratios are relatively low and the proportion of common Pb is large, applying a model-derived Pb isotopic composition rather than measuring it directly can introduce percent-level inaccuracy to dates calculated with precisely known U/Pb ratios. Direct measurement of the common Pb composition can be done on a U-poor mineral that co-crystallized with perovskite; feldspar and clinopyroxene are commonly used. Clinopyroxene can contain significant in-grown radiogenic Pb and our experiments indicate that it is not eliminated by aggressive step-wise leaching. The U/Pb ratio in clinopyroxene is generally low (20 < μ < 50) but significant. Other workers (e.g. Kamo et al., 2003; Corfu and Dahlgren, 2008) have used two methods to determine the amount of ingrown Pb. First, by measuring the U/Pb ratio in clinopyroxene and assuming a crystallization age, the amount of ingrown Pb can be calculated. Second, by assuming that perovskite and clinopyroxene (± other phases) are isochronous, the initial Pb isotopic composition can be calculated using the y-intercept on 206Pb/238U, 207Pb/235U, and 3-D isochron diagrams. To further develop a perovskite mineral standard for use in high-precision dating applications, we have focused on single grains/fragments of perovskite and multi-grain clinopyroxene fractions from a melteigite sample (IR90.3) within the Ice River complex, a zoned alkaline-ultramafic intrusion in southeastern British Columbia. Perovskite from this sample has variable measured 206Pb/204Pb (22-263), making this an ideal sample on which to test the sensitivity of the date on grains with variable amounts of radiogenic Pb to changes in common Pb composition. Using co-existing clinopyroxene for the initial common Pb composition, by both direct measurement and the isochron method, allows us to calculate an accurate weighted-mean 206Pb/238U date on perovskite at the < 0.1% level, which overlaps within uncertainty for the two different methods. We recommend the Ice River 90.3 perovskite as a suitable EARTHTIME standard for interlaboratory and intertechnique comparison.
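The arithmetic behind a common-Pb-corrected 206Pb/238U date is compact: the measured 206Pb is reduced by the initial (common) component inferred from 204Pb, and the date follows from the decay equation t = ln(1 + 206Pb*/238U) / λ238. The sketch below works through that correction with illustrative numbers, not the Ice River data.

```python
# Sketch: common-Pb-corrected 206Pb/238U date from the decay equation.
import math

lam238 = 1.55125e-10            # 238U decay constant, 1/yr
pb206_pb204_meas = 60.0         # measured in perovskite (illustrative)
pb206_pb204_init = 18.5         # initial common Pb, e.g. from clinopyroxene
pb206_u238_meas = 0.06          # measured mol ratio (illustrative)

f_common = pb206_pb204_init / pb206_pb204_meas    # fraction of 206Pb that is common
pb206_star_u238 = pb206_u238_meas * (1.0 - f_common)  # radiogenic 206Pb*/238U
t = math.log(1.0 + pb206_star_u238) / lam238
print(f"{t / 1e6:.1f} Ma")
```

The sensitivity the abstract describes follows directly: when 206Pb/204Pb is low, f_common is large, and a small change in the assumed initial composition shifts the computed date by far more than the analytical precision.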
  395. Direct Comparison of the Precision of the New Hologic Horizon Model With the Old Discovery Model

    PubMed

    Whittaker, LaTarsha G.; McNamara, Elizabeth A.; Vath, Savoun; Shaw, Emily; Malabanan, Alan O.; Parker, Robert A.; Rosen, Harold N.

    2017-11-22

    Previous publications suggested that the precision of the new Hologic Horizon densitometer might be better than that of the previous Discovery model, but these observations were confounded by not using the same participants and technologists on both densitometers. We sought to study this issue methodically by measuring in vivo precision on both densitometers using the same patients and technologists. Precision studies for the Horizon and Discovery models were done by acquiring spine, hip, and forearm bone mineral density twice on 30 participants. The set of 4 scans on each participant (2 on the Discovery, 2 on the Horizon) was acquired by the same technologist using the same scanning mode. The pairs of data were used to calculate the least significant change according to the International Society for Clinical Densitometry guidelines. The significance of the difference between least significant changes was assessed using a Wilcoxon signed-rank test of the difference between the mean square error of the absolute value of the differences between paired measurements on the Discovery (Δ-Discovery) and the mean square error of the absolute value of the differences between paired measurements on the Horizon (Δ-Horizon). At virtually all anatomic sites, there was a nonsignificant trend for the precision to be better for the Horizon than for the Discovery. As more vertebrae were excluded from analysis, the precision deteriorated on both densitometers. The precision between densitometers was almost identical when reporting only 1 vertebral body. In conclusion: (1) there was a nonsignificant trend for greater precision on the new Hologic Horizon compared with the older Discovery model; (2) the difference in precision of the spine bone mineral density between the Horizon and the Discovery models decreases as fewer vertebrae are included; and (3) these findings are substantially similar to previously published results, which had not controlled as well for confounding from using different subjects and technologists.
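The least-significant-change calculation referenced above is simple to reproduce: for participants scanned twice, the root-mean-square standard deviation of the duplicate pairs gives the precision error, and the LSC at 95% confidence is 2.77 times that value. A minimal sketch with synthetic BMD values:

```python
# Sketch: ISCD-style precision error and least significant change (LSC).
import numpy as np

rng = np.random.default_rng(3)
scan1 = rng.normal(1.00, 0.10, 30)             # BMD, g/cm^2, 30 participants
scan2 = scan1 + rng.normal(0.0, 0.012, 30)     # repeat scan on same device

pair_sd = np.abs(scan1 - scan2) / np.sqrt(2.0)   # per-pair SD for duplicates
rms_sd = np.sqrt(np.mean(pair_sd**2))            # precision error (RMS-SD)
lsc = 2.77 * rms_sd                              # 95%-confidence LSC
print(f"precision {rms_sd:.4f} g/cm^2, LSC {lsc:.4f} g/cm^2")
```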
  396. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems

    PubMed Central

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-01-01

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time required to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs.

    PMID:27999351

  397. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems

    PubMed

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-12-18

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time required to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs.
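The first compensation method named in the two records above, interpolation from an off-line database, amounts to pre-computing deflection-of-the-vertical components on a latitude/longitude grid and interpolating them at the navigation rate. The sketch below shows that lookup step only; the grid is random placeholder data, not real EGM2008 values.

```python
# Sketch: run-time interpolation of DOV corrections from an off-line grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lat = np.arange(30.0, 31.01, 0.05)      # degrees, 0.05-deg grid (illustrative)
lon = np.arange(120.0, 121.01, 0.05)
rng = np.random.default_rng(4)
xi = rng.normal(0.0, 5.0, (lat.size, lon.size))    # N-S DOV component, arcsec
eta = rng.normal(0.0, 5.0, (lat.size, lon.size))   # E-W DOV component, arcsec

interp_xi = RegularGridInterpolator((lat, lon), xi)
interp_eta = RegularGridInterpolator((lat, lon), eta)
pos = np.array([30.4123, 120.7361])                # current INS position
print(interp_xi(pos), interp_eta(pos))             # corrections to apply
```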
  398. Physics with e+e− Linear Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Timothy L.

    2003-05-05

    We describe the physics potential of e+e− linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e− linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.
  399. A two-phase model of resource allocation in visual working memory

    PubMed

    Ye, Chaoxiong; Hu, Zhonghua; Li, Hong; Ristaniemi, Tapani; Liu, Qiang; Liu, Taosheng

    2017-10-01

    Two broad theories of visual working memory (VWM) storage have emerged from current research, a discrete slot-based theory and a continuous resource theory. However, neither the discrete slot-based theory nor the continuous resource theory clearly stipulates how the mental commodity for VWM (discrete slot or continuous resource) is allocated. Allocation may be based on the number of items via stimulus-driven factors, or it may be based on task demands via voluntary control. Previous studies have obtained conflicting results regarding the automaticity versus controllability of such allocation. In the current study, we propose a two-phase allocation model, in which the mental commodity can be allocated only by stimulus-driven factors in the early consolidation phase. However, when there is sufficient time to complete the early phase, allocation can enter the late consolidation phase, where it can be flexibly and voluntarily controlled according to task demands. In an orientation recall task, we instructed participants to store either fewer items at high precision or more items at low precision. In 3 experiments, we systematically manipulated memory set size and exposure duration. We did not find an effect of task demands when the set size was high and the exposure duration was short. However, when we either decreased the set size or increased the exposure duration, we found a trade-off between the number and precision of VWM representations. These results can be explained by a two-phase model, which can also account for previous conflicting findings in the literature.

  400. Numerical simulation of three-component multiphase flows at high density and viscosity ratios using lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Haghani Hassan Abadi, Reza; Fakhari, Abbas; Rahimian, Mohammad Hassan

    2018-03-01

    In this paper, we propose a multiphase lattice Boltzmann model for numerical simulation of ternary flows at high density and viscosity ratios, free from spurious velocities. The proposed scheme, which is based on phase-field modeling, employs the Cahn-Hilliard theory to track the interfaces among three different fluid components. Several benchmarks, such as the spreading of a liquid lens, binary droplets, and the head-on collision of two droplets in binary- and ternary-fluid systems, are conducted to assess the reliability and accuracy of the model. The proposed model can successfully simulate both partial and total spreading while reducing the parasitic currents to machine precision.
  401. Superallowed nuclear beta decay: Precision measurements for basic physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, J. C.

    2012-11-20

    For 60 years, superallowed 0+ → 0+ nuclear beta decay has been used to probe the weak interaction, currently verifying the conservation of the vector current (CVC) to high precision (±0.01%) and anchoring the most demanding available test of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix (±0.06%), a fundamental pillar of the electroweak standard model. Each superallowed transition is characterized by its ft-value, a result obtained from three measured quantities: the total decay energy of the transition, its branching ratio, and the half-life of the parent state. Today's data set is composed of some 150 independent measurements of 13 separate superallowed transitions covering a wide range of parent nuclei from 10C to 74Rb. Excellent consistency among the average results for all 13 transitions - a prediction of CVC - also confirms the validity of the small transition-dependent theoretical corrections that have been applied to account for isospin symmetry breaking. With CVC consistency established, the value of the vector coupling constant, G_V, has been extracted from the data and used to determine the top left element of the CKM matrix, V_ud. With this result the top-row unitarity test of the CKM matrix yields the value 0.99995(61), a result that sets a tight limit on possible new physics beyond the standard model. To have any impact on these fundamental weak-interaction tests, any measurement must be made with a precision of 0.1% or better - a substantial experimental challenge well beyond the requirements of most nuclear physics measurements. I overview the current state of the field and outline some of the requirements that need to be met by experimentalists if they aim to make measurements with this high level of precision.
  402. Inexpensive, Low Power, Open-Source Data Logging hardware development

    NASA Astrophysics Data System (ADS)

    Sandell, C. T.; Schulz, B.; Wickert, A. D.

    2017-12-01

    Over the past six years, we have developed a suite of open-source, low-cost, and lightweight data loggers for scientific research. These loggers employ the popular and easy-to-use Arduino programming environment, but consist of custom hardware optimized for field research. They may be connected to a broad and expanding range of off-the-shelf sensors, with software support built directly into the "ALog" library. Three main models exist: the ALog (for Autonomous or Arduino Logger) is the extreme low-power model for years-long deployments with only primary AA or D batteries. The ALog shield is a stripped-down ALog that nests with a standard Arduino board for prototyping or education. The TLog (for Telemetering Logger) contains an embedded radio with 500 m range and a GPS for communications and precision timekeeping. This enables meshed networks of loggers that can send their data back to an internet-connected "home base" logger for near-real-time field data retrieval. All boards feature a high-precision clock, a full-size SD card slot for high-volume data storage, large screw terminals to connect sensors, interrupts, SPI and I2C communication capability, and 3.3V/5V power outputs. The ALog and TLog have fourteen 16-bit analog inputs with a precision voltage reference for precise analog measurements. Their components are rated from -40 to +85 degrees C, and they have been tested in harsh field conditions. These low-cost and open-source data loggers have enabled our research group to collect field data across North and South America on a limited budget, support student projects, and build toward better future scientific data systems.
  403. Revolution of Alzheimer Precision Neurology: Passageway of Systems Biology and Neurophysiology

    PubMed

    Hampel, Harald; Toschi, Nicola; Babiloni, Claudio; Baldacci, Filippo; Black, Keith L.; Bokde, Arun L. W.; Bun, René S.; Cacciola, Francesco; Cavedo, Enrica; Chiesa, Patrizia A.; Colliot, Olivier; Coman, Cristina-Maria; Dubois, Bruno; Duggento, Andrea; Durrleman, Stanley; Ferretti, Maria-Teresa; George, Nathalie; Genthon, Remy; Habert, Marie-Odile; Herholz, Karl; Koronyo, Yosef; Koronyo-Hamaoui, Maya; Lamari, Foudil; Langevin, Todd; Lehéricy, Stéphane; Lorenceau, Jean; Neri, Christian; Nisticò, Robert; Nyasse-Messene, Francis; Ritchie, Craig; Rossi, Simone; Santarnecchi, Emiliano; Sporns, Olaf; Verdooner, Steven R.; Vergallo, Andrea; Villain, Nicolas; Younesi, Erfan; Garaci, Francesco; Lista, Simone

    2018-03-16

    The Precision Neurology development process implements systems theory with systems biology and neurophysiology in a parallel, bidirectional research path: a combined hypothesis-driven investigation of systems dysfunction within distinct molecular, cellular, and large-scale neural network systems in both animal models as well as through tests for the usefulness of these candidate dynamic systems biomarkers in different diseases and subgroups at different stages of pathophysiological progression. This translational research path is paralleled by an "omics"-based, hypothesis-free, exploratory research pathway, which will collect multimodal data from progressing asymptomatic, preclinical, and clinical neurodegenerative disease (ND) populations, within the wide continuous biological and clinical spectrum of ND, applying high-throughput and high-content technologies combined with powerful computational and statistical modeling tools, aimed at identifying novel dysfunctional systems and predictive marker signatures associated with ND. The goals are to identify common biological denominators or differentiating classifiers across the continuum of ND during detectable stages of pathophysiological progression, characterize systems-based intermediate endophenotypes, validate multi-modal novel diagnostic systems biomarkers, and advance clinical intervention trial designs by utilizing systems-based intermediate endophenotypes and candidate surrogate markers. Achieving these goals is key to the ultimate development of early and effective individualized treatment of ND, such as Alzheimer's disease. The Alzheimer Precision Medicine Initiative (APMI) and cohort program (APMI-CP), as well as the Paris-based core of the Sorbonne University Clinical Research Group "Alzheimer Precision Medicine" (GRC-APM), were recently launched to facilitate the passageway from conventional clinical diagnostics and drug development toward breakthrough innovation based on the investigation of the comprehensive biological nature of aging individuals. The APMI movement is gaining momentum to systematically apply both systems neurophysiology and systems biology in exploratory translational neuroscience research on ND.
  404. Revolution of Alzheimer Precision Neurology: Passageway of Systems Biology and Neurophysiology

    PubMed Central

    Hampel, Harald; Toschi, Nicola; Babiloni, Claudio; Baldacci, Filippo; Black, Keith L.; Bokde, Arun L. W.; Bun, René S.; Cacciola, Francesco; Cavedo, Enrica; Chiesa, Patrizia A.; Colliot, Olivier; Coman, Cristina-Maria; Dubois, Bruno; Duggento, Andrea; Durrleman, Stanley; Ferretti, Maria-Teresa; George, Nathalie; Genthon, Remy; Habert, Marie-Odile; Herholz, Karl; Koronyo, Yosef; Koronyo-Hamaoui, Maya; Lamari, Foudil; Langevin, Todd; Lehéricy, Stéphane; Lorenceau, Jean; Neri, Christian; Nisticò, Robert; Nyasse-Messene, Francis; Ritchie, Craig; Rossi, Simone; Santarnecchi, Emiliano; Sporns, Olaf; Verdooner, Steven R.; Vergallo, Andrea; Villain, Nicolas; Younesi, Erfan; Garaci, Francesco; Lista, Simone

    2018-01-01

    The Precision Neurology development process implements systems theory with systems biology and neurophysiology in a parallel, bidirectional research path: a combined hypothesis-driven investigation of systems dysfunction within distinct molecular, cellular and large-scale neural network systems in both animal models as well as through tests for the usefulness of these candidate dynamic systems biomarkers in different diseases and subgroups at different stages of pathophysiological progression. This translational research path is paralleled by an "omics"-based, hypothesis-free, exploratory research pathway, which will collect multimodal data from progressing asymptomatic, preclinical and clinical neurodegenerative disease (ND) populations, within the wide continuous biological and clinical spectrum of ND, applying high-throughput and high-content technologies combined with powerful computational and statistical modeling tools, aimed at identifying novel dysfunctional systems and predictive marker signatures associated with ND. The goals are to identify common biological denominators or differentiating classifiers across the continuum of ND during detectable stages of pathophysiological progression, characterize systems-based intermediate endophenotypes, validate multi-modal novel diagnostic systems biomarkers, and advance clinical intervention trial designs by utilizing systems-based intermediate endophenotypes and candidate surrogate markers. Achieving these goals is key to the ultimate development of early and effective individualized treatment of ND, such as Alzheimer's disease (AD). The Alzheimer Precision Medicine Initiative (APMI) and cohort program (APMI-CP), as well as the Paris-based core of the Sorbonne University Clinical Research Group "Alzheimer Precision Medicine" (GRC-APM), were recently launched to facilitate the passageway from conventional clinical diagnostics and drug development towards breakthrough innovation based on the investigation of the comprehensive biological nature of aging individuals. The APMI movement is gaining momentum to systematically apply both systems neurophysiology and systems biology in exploratory translational neuroscience research on ND.

    PMID:29562524
  405. The Use of Instructional Objectives: A Model for Second-Year Podiatric Surgical Residency

    ERIC Educational Resources Information Center

    Lepow, Gary M.; Levy, Leonard A.

    1980-01-01

    The use of highly specific objectives can be the basis for a second-year podiatric surgical residency program. They show both residents and attending staff precisely the knowledge and skills to be achieved and aid evaluation of students. A series of objectives is provided. (MSE)

  406. Step-by-Step: A Model for Practice-Based Learning

    ERIC Educational Resources Information Center

    Kane, Gabrielle M.

    2007-01-01

    Introduction: Innovative technology has led to high-precision radiation therapy that has dramatically altered the practice of radiation oncology. This qualitative study explored the implementation of this innovation into practice from the perspective of the practitioners in a large academic radiation medicine program and aimed to improve…

  407. Performance Evaluation of the Honeywell GG1308 Miniature Ring Laser Gyroscope

    DTIC Science & Technology

    1993-01-01

    …The final display line provides the current DSB configuration status. An external strobe was established between the Contraves motion…components and systems. The core of the facility is a Contraves-Goerz Model 57CD 2-axis motion simulator capable of highly precise position, rate and…

  408. Precision agricultural systems: a model of integrative science and technology

    USDA-ARS Scientific Manuscript database

    In the world of science research, long gone are the days when investigations are done in isolation. More often than not, science funding starts with one or more well-defined challenges or problems, judged by society as high-priority and needing immediate attention. As such, problems are not defined...
As such, problems are not defined...

409. Benefits of Model Updating: A Case Study Using the Micro-Precision Interferometer Testbed

NASA Technical Reports Server (NTRS)

Neat, Gregory W.; Kissil, Andrew; Joshi, Sanjay S.

1997-01-01

This paper presents a case study on the benefits of model updating using the Micro-Precision Interferometer (MPI) testbed, a full-scale model of a future spaceborne optical interferometer located at JPL.

410. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

NASA Astrophysics Data System (ADS)

Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

2017-05-01

In the classical photogrammetric processing pipeline, automatic tie-point extraction plays a key role in the quality of the achieved results. Image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters; both the relative and the absolute orientation of the 3D model can therefore be affected. By improving the precision of image tie-point measurement, one can enhance the quality of image orientation. The quality of image tie points depends on several factors, such as their multiplicity, their measurement precision and their distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios, such as indoor applications and oblique aerial imagery, tie-point extraction is limited when only image information can be exploited. Hence, we propose a method that improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline: the result of the first iteration is used as a priori information to guide the extraction of new tie points of better quality. Evaluated on multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

411. Light leptonic new physics at the precision frontier

DOE Office of Scientific and Technical Information (OSTI.GOV)

Le Dall, Matthias, E-mail: mledall@uvic.ca

2016-06-21

Precision probes of new physics are often interpreted through their indirect sensitivity to short-distance scales.
In this proceedings contribution, we focus on the question of which precision observables, at current sensitivity levels, allow for an interpretation via either short-distance new physics or consistent models of long-distance new physics weakly coupled to the Standard Model. The electroweak scale is chosen to set the dividing line between these scenarios. In particular, we find that inverse see-saw models of neutrino mass allow for light new physics interpretations of most precision leptonic observables, such as lepton universality and lepton flavor violation, but not for the electron EDM.

412. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons

PubMed Central

Cemgil, Ali Taylan

2017-01-01

We introduce a high-precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest-neighbor fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking. PMID:29109375
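Editorial note: the record above names two generic ingredients, a Wasserstein-distance observation model over RSSI fingerprints and SMC tracking. The following minimal sketch (not the authors' code) illustrates both on synthetic data; the corridor geometry, path-loss constants and the exponential likelihood kernel are invented for the example.

    # illustrative sketch: 1-D Wasserstein RSSI likelihood inside a bootstrap particle filter
    import numpy as np

    def wasserstein_1d(a, b):
        """1-D Wasserstein (earth mover's) distance between two RSSI sample sets,
        computed from sorted empirical quantiles."""
        qs = np.linspace(0, 1, 64)
        return np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs)))

    # hypothetical fingerprint database: RSSI samples collected at known positions
    positions = np.linspace(0.0, 10.0, 21)                   # 1-D corridor, metres
    fingerprints = [-40 - 2.5 * p + np.random.randn(200) for p in positions]

    def likelihood(rssi_window, x):
        """Score a window of live RSSI samples at candidate position x against the
        nearest stored fingerprint; an exponential kernel turns distance into weight."""
        idx = np.argmin(np.abs(positions - x))
        return np.exp(-wasserstein_1d(rssi_window, fingerprints[idx]))

    n = 500
    particles = np.random.uniform(0, 10, n)
    weights = np.full(n, 1.0 / n)
    true_x = 3.0
    for step in range(20):
        true_x = np.clip(true_x + 0.2, 0, 10)                # sensor walks along corridor
        obs = -40 - 2.5 * true_x + np.random.randn(50)       # live RSSI window
        particles += np.random.normal(0.2, 0.3, n)           # motion model
        weights *= np.array([likelihood(obs, x) for x in particles])
        weights /= weights.sum()
        if 1.0 / np.sum(weights**2) < n / 2:                 # systematic resampling
            idx = np.searchsorted(np.cumsum(weights), (np.arange(n) + 0.5) / n)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
    print("estimate:", np.sum(weights * particles), "truth:", true_x)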
413. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.

PubMed

Daniş, F Serhan; Cemgil, Ali Taylan

2017-10-29

We introduce a high-precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest-neighbor fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.

414. Defense Science Board 2006 Summer Study on 21st Century Strategic Technology Vectors, Volume 2: Critical Capabilities and Enabling Technologies

DTIC Science & Technology

2007-02-01

... neurosciences, particularly those analytic elements that create models to assist in understanding individual and... precision geo-location; 10. Cause-effect models (environment, infrastructure, socio-cultural, DIME, PMESII); 11. Storytelling, gisting and advanced... sources/TRL 5; storytelling, gisting and advanced visualization/TRL 2-5; high-fidelity, socio-culturally relevant immersive games, training and mission

415. Illuminating necrosis: From mechanistic exploration to preclinical application using fluorescence molecular imaging with indocyanine green

PubMed Central

Fang, Cheng; Wang, Kun; Zeng, Chaoting; Chi, Chongwei; Shang, Wenting; Ye, Jinzuo; Mao, Yamin; Fan, Yingfang; Yang, Jian; Xiang, Nan; Zeng, Ning; Zhu, Wen; Fang, Chihua; Tian, Jie

2016-01-01

Tissue necrosis commonly accompanies the development of a wide range of serious diseases. Therefore, highly sensitive detection and precise boundary delineation of necrotic tissue via effective imaging techniques are crucial for clinical treatments; however, no imaging modality has achieved satisfactory results to date. Although fluorescence molecular imaging (FMI) shows potential in this regard, no effective necrosis-avid fluorescent probe has been developed for clinical applications. Here, we demonstrate that indocyanine green (ICG) can achieve high avidity for necrotic tissue owing to its interaction with lipoprotein (LP) and phospholipids. The mechanism was explored at the cellular and molecular levels through a series of in vitro studies. Detection of necrotic tissue and real-time image-guided surgery were successfully achieved in different organs of different animal models with the help of FMI using in-house-designed imaging devices. The results indicated that necrotic tissue 0.6 mm in diameter could be effectively detected with precise boundary definition. We believe that the new discovery and the associated imaging techniques will improve personalized and precise surgery in the near future. PMID:26864116
416. Influence of non-ideal performance of lasers on displacement precision in single-grating heterodyne interferometry

NASA Astrophysics Data System (ADS)

Wang, Guochao; Xie, Xuedong; Yan, Shuhua

2010-10-01

The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision and good stability is presented. Because the displacement measurement is at nano-level precision, errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the nonlinear error caused by elliptic polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and, on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptic polarization are 1.49 nm, 2.99 nm and 4.49 nm when the non-orthogonal angle is 1°, 2° and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.

417. First Results of Field Absolute Calibration of the GPS Receiver Antenna at Wuhan University

PubMed Central

Hu, Zhigang; Zhao, Qile; Chen, Guo; Wang, Guangxing; Dai, Zhiqiang; Li, Tao

2015-01-01

GNSS receiver antenna phase center variations (PCVs), which arise from the non-spherical phase response of GNSS signals, have to be well corrected for high-precision GNSS applications. Without a precise antenna phase center correction (PCC) model, the estimated position of a station monument can be biased by up to several centimeters. The Chinese large-scale research project "Crustal Movement Observation Network of China" (CMONOC), which requires high-precision positions in a comprehensive GPS observational network, motivated the establishment of a facility for absolute field calibration of GPS receiver antennas at Wuhan University. In this paper the calibration facilities are first introduced, and the multipath elimination and PCV estimation strategies currently used are elaborated. The estimated PCV values of a test antenna are then validated against the International GNSS Service (IGS) type values. Examples of TRM57971.00 NONE antenna calibrations from our calibration facility demonstrate that the derived PCVs and the IGS type mean values agree at the 1 mm level. PMID:26580616
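Editorial note: as a generic illustration of how a calibrated phase center model is applied (not the Wuhan procedure itself), the sketch below corrects a range observation with an interpolated zenith-dependent PCV plus a projected phase center offset; the table values and the 60 mm up-offset are invented for the example.

    # illustrative sketch: applying an antenna phase center correction to a range
    import numpy as np

    # hypothetical calibrated PCV table: zenith angle (deg) -> PCV (mm)
    zenith_grid = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80])
    pcv_mm = np.array([0.0, 0.3, 0.9, 1.8, 2.9, 4.1, 5.2, 6.0, 6.5])

    def phase_center_correction(zenith_deg, pco_up_mm=60.0):
        """Range correction (mm): assumed up-component of the phase center offset
        projected on the line of sight, plus the interpolated PCV."""
        pcv = np.interp(zenith_deg, zenith_grid, pcv_mm)
        return pco_up_mm * np.cos(np.radians(zenith_deg)) + pcv

    raw_range_mm = 2.1e10                      # ~21,000 km carrier-phase range, in mm
    corrected = raw_range_mm - phase_center_correction(35.0)
    print(f"correction at 35 deg zenith: {raw_range_mm - corrected:.2f} mm")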
418. On the use of particle filters for electromagnetic tracking in high dose rate brachytherapy.

PubMed

Götz, Th I; Lahmer, G; Brandt, T; Kallis, K; Strnad, V; Bert, Ch; Hensel, B; Tomé, A M; Lang, E W

2017-09-12

Modern radiotherapy of female breast cancers often employs high-dose-rate brachytherapy, where a radioactive source is moved inside catheters implanted in the breast according to a prescribed treatment plan. Source localization relative to the patient's anatomy is determined with solenoid sensors whose spatial positions are measured with an electromagnetic tracking (EMT) system. Precise determination of sensor dwell positions is of utmost importance to assure irradiation of the cancerous tissue according to the treatment plan. We present a hybrid data analysis system which combines multi-dimensional scaling with particle filters to precisely determine sensor dwell positions in the catheters during subsequent radiation treatment sessions. Both techniques are complemented with empirical mode decomposition for the removal of superimposed breathing artifacts. We show that the hybrid model robustly and reliably determines the spatial positions of all catheters used during the treatment and precisely determines any deviations of actual sensor dwell positions from the treatment plan. The hybrid system relies only on sensor positions measured with the EMT system and relates them to the spatial positions of the implanted catheters as initially determined with computed x-ray tomography.

419. Envirotyping for deciphering environmental impacts on crop plants.

PubMed

Xu, Yunbi

2016-04-01

Global climate change imposes increasing impacts on our environments and crop production. To decipher environmental impacts on crop plants, the concept of "envirotyping" is proposed as a third "typing" technology, complementing genotyping and phenotyping. Environmental factors can be collected through multiple environmental trials, geographic and soil information systems, measurement of soil and canopy properties, and evaluation of companion organisms. Envirotyping contributes to crop modeling and phenotype prediction through its functional components, including genotype-by-environment interaction (GEI), genes responsive to environmental signals, biotic and abiotic stresses, and integrative phenotyping. Envirotyping, driven by information and support systems, has a wide range of applications, including environmental characterization, GEI analysis, phenotype prediction, near-iso-environment construction, agronomic genomics, precision agriculture and breeding, and the development of a four-dimensional profile of crop science involving genotype (G), phenotype (P), envirotype (E) and time (T) (developmental stage).
In the future, envirotyping needs to zoom into specific experimental plots and individual plants, along with the development of high-throughput and precision envirotyping platforms, to integrate genotypic, phenotypic and envirotypic information for establishing a high-efficient precision breeding and sustainable crop production system based on deciphered environmental impacts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AdSpR..60..865B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AdSpR..60..865B"><span>Overview of galactic cosmic ray solar modulation in the AMS-02 era</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bindi, V.; Corti, C.; Consolandi, C.; Hoffman, J.; Whitman, K.</p> <p>2017-08-01</p> <p>A new era in cosmic rays physics has started thanks to the precise and continuous observations from space experiments such as PAMELA and AMS-02. Invaluable results are coming out from these new data that are rewriting the theory of acceleration and propagation of cosmic rays. Both at high energies, where several new behaviors have been measured, challenging the accuracy of theoretical models, and also at low energies, in the region affected by the solar modulation. Precise measurements are increasing our knowledge of the effects of solar modulation on low energy cosmic rays, allowing a detailed study of propagation and composition as it has never been done before. These measurements will serve as a high-precision baseline for continued studies of GCR composition, GCR modulation over the solar cycle, space radiation hazards, and other topics. In this review paper, the status of the latest measurements of the cosmic rays in the context of solar modulation are presented together with the current open questions and the future prospects. 
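Editorial note: the solar modulation discussed in the record above is often summarized, to first order, by the force-field approximation. The sketch below applies it to a toy power-law local interstellar spectrum; the LIS shape and the modulation potential value are illustrative assumptions, not results from the paper.

    # force-field approximation for solar modulation of GCR protons (illustrative)
    import numpy as np

    M_P = 0.938  # proton rest energy, GeV

    def lis_flux(E_kin):
        """Toy power-law local interstellar spectrum (arbitrary units)."""
        return 1.0e4 * (E_kin + M_P) ** -2.7

    def modulated_flux(E_kin, phi=0.5):
        """Force-field approximation:
        J(E) = J_LIS(E + Phi) * E(E + 2m) / ((E + Phi)(E + Phi + 2m)),
        with Phi = (Z/A)*phi (= phi for protons); phi in GV is an assumed
        epoch-dependent modulation potential."""
        E_lis = E_kin + phi
        num = E_kin * (E_kin + 2 * M_P)
        den = E_lis * (E_lis + 2 * M_P)
        return lis_flux(E_lis) * num / den

    E = np.logspace(-1, 2, 7)  # 0.1 .. 100 GeV kinetic energy
    for e, j0, j1 in zip(E, lis_flux(E), modulated_flux(E)):
        print(f"E = {e:8.2f} GeV   LIS = {j0:10.3e}   modulated = {j1:10.3e}")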
421. Precision determination of the weak charge of 133Cs from atomic parity violation

DOE Office of Scientific and Technical Information (OSTI.GOV)

Porsev, S. G.; School of Physics, University of New South Wales, Sydney, New South Wales 2052; Petersburg Nuclear Physics Institute, Gatchina, Leningrad District 188300

2010-08-01

We discuss results of the most accurate to-date test of the low-energy electroweak sector of the standard model of elementary particles. Combining previous measurements with our high-precision calculations, we extracted the weak charge of the 133Cs nucleus, Q_W = -73.16(29)_exp(20)_th [S. G. Porsev, K. Beloy, and A. Derevianko, Phys. Rev. Lett. 102, 181601 (2009)]. The result is in perfect agreement with the standard-model prediction, Q_W^SM = -73.16(3); it confirms the energy dependence (or running) of the electroweak interaction and places constraints on a variety of new physics scenarios beyond the standard model. In particular, we increase the lower limit on the masses of extra Z-bosons predicted by models of grand unification and string theories. This paper provides additional details to the earlier paper. We discuss large-scale calculations in the framework of the coupled-cluster method, including full treatment of single, double, and valence triple excitations. To determine the accuracy of the calculations we computed energies, electric-dipole amplitudes, and hyperfine-structure constants.
An extensive comparison with high-accuracy experimental data was carried out.

422. A stratigraphic network across the Subtropical Front in the central South Atlantic: Multi-parameter correlation of magnetic susceptibility, density, X-ray fluorescence and δ18O records

NASA Astrophysics Data System (ADS)

Hofmann, Daniela I.; Fabian, Karl; Schmieder, Frank; Donner, Barbara; Bleil, Ulrich

2005-12-01

Computer-aided multi-parameter signal correlation is used to develop a common high-precision age model for eight gravity cores from the subtropical and subantarctic South Atlantic. Since correlations between all pairs of multi-parameter sequences are used, and correlation errors between core pairs (A, B) and (B, C) are controlled by comparison with (A, C), the resulting age model is called a stratigraphic network. Precise inter-core correlation is achieved using high-resolution records of magnetic susceptibility κ, wet bulk density ρ and X-ray fluorescence scans of elemental composition. Additional δ18O records are available for two cores. The data indicate nearly undisturbed sediment series and the absence of significant hiatuses or turbidites. After establishing a high-precision common depth scale by synchronously correlating four densely measured parameters (Fe, Ca, κ, ρ), the final age model is obtained by simultaneously fitting the aligned δ18O and κ records of the stratigraphic network to orbitally tuned oxygen isotope stacks [J. Imbrie, J. D. Hays, D. G. Martinson, A. McIntyre, A. C. Mix, J. J. Morley, N. G. Pisias, W. L. Prell, N. J. Shackleton, The orbital theory of Pleistocene climate: support from a revised chronology of the marine δ18O record, in: A. Berger, J. Imbrie, J. Hays, G. Kukla, B. Saltzman (Eds.), Milankovitch and Climate: Understanding the Response to Orbital Forcing, Reidel, Dordrecht, 1984, pp. 269-305; D. Martinson, N. Pisias, J. Hays, J. Imbrie, T. C. Moore Jr., N. Shackleton, Age dating and the orbital theory of the Ice Ages: development of a high-resolution 0 to 300,000-year chronostratigraphy, Quat. Res. 27 (1987) 1-29] or susceptibility stacks [T. von Dobeneck, F. Schmieder, Using rock magnetic proxy records for orbital tuning and extended time series analyses into the super- and sub-Milankovitch bands, in: G. Fischer, G. Wefer (Eds.), Use of Proxies in Paleoceanography: Examples from the South Atlantic, Springer-Verlag, Berlin, 1999, pp. 601-633]. Besides the detection and elimination of errors in single records, the stratigraphic network approach allows the intrinsic consistency of the final result to be checked by comparing it to the outcome of more restricted alignment procedures. The final South Atlantic stratigraphic network covers the last 400 kyr south and the last 1200 kyr north of the Subtropical Front (STF) and provides a highly precise age model across the STF, representing extremely different sedimentary regimes. This allows temporal shifts of the STF to be detected by mapping δMn/Fe. It turns out that the apparent STF movements of about 200 km are not directly related to marine oxygen isotope stages.
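Editorial note: the record above rests on inter-core signal correlation. As a toy illustration of the core idea only (the authors' multi-parameter network software is far more elaborate), the sketch below aligns two synthetic proxy records by maximizing normalized cross-correlation over a depth lag.

    # toy depth-lag alignment of two proxy records by maximizing cross-correlation
    import numpy as np

    rng = np.random.default_rng(0)
    depth = np.arange(0.0, 20.0, 0.02)  # metres
    signal = np.sin(2 * np.pi * depth / 3.0) + 0.5 * np.sin(2 * np.pi * depth / 0.7)

    core_a = signal + 0.1 * rng.standard_normal(depth.size)
    true_shift = 0.8                    # core B's features sit 0.8 m deeper
    core_b = np.interp(depth - true_shift, depth, signal) \
             + 0.1 * rng.standard_normal(depth.size)

    def best_lag(a, b, dz, max_lag_m=2.0):
        """Depth shift of b relative to a that maximizes normalized correlation;
        positive means b's features sit deeper than a's."""
        max_k = int(max_lag_m / dz)
        def corr(k):
            if k >= 0:
                x, y = a[k:], b[:b.size - k]
            else:
                x, y = a[:a.size + k], b[-k:]
            return np.corrcoef(x, y)[0, 1]
        k_best = max(range(-max_k, max_k + 1), key=corr)
        return -k_best * dz

    print("recovered shift:", best_lag(core_a, core_b, 0.02), "m; true:", true_shift)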
423. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited)

DOE Office of Scientific and Technical Information (OSTI.GOV)

Mirnov, V. V.; Hartog, D. J. Den; Duff, J.

2014-11-15

At the anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = T_e/(m_e c²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions, which allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.

424. Optimization of single photon detection model based on GM-APD

NASA Astrophysics Data System (ADS)

Chen, Yu; Yang, Yi; Hao, Peiyu

2017-11-01

Hundred-kilometer high-precision laser ranging requires a detector with very strong detection capability for very weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving the photon detection efficiency, and design optimization requires a good model. In this paper, we study the existing Poisson-distribution model and consider the important detector parameters of dark count rate, dead time, quantum efficiency and so on. We improve the optimization of the detection model and select appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results, verifying the rationality of the model. It has certain reference value in engineering applications.
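Editorial note: the standard Poisson detection model the record refers to can be stated in a few lines. The sketch below computes gate-trigger probabilities from quantum efficiency, dark count rate and gate width (dead time is omitted for brevity); all parameter values are illustrative.

    # Poisson model of GM-APD triggering within a range gate (illustrative values)
    import numpy as np

    def detection_probability(n_signal, eta=0.3, dark_rate=1e3, gate_s=100e-9):
        """P(at least one primary event in the gate) for mean signal photon
        number n_signal, quantum efficiency eta, dark count rate (Hz) and gate
        width (s): P = 1 - exp(-(eta*n_signal + dark_rate*gate_s))."""
        mu = eta * n_signal + dark_rate * gate_s
        return 1.0 - np.exp(-mu)

    def false_alarm_probability(dark_rate=1e3, gate_s=100e-9):
        """Dark counts alone can fire the diode: P_fa = 1 - exp(-dark_rate*gate_s)."""
        return 1.0 - np.exp(-dark_rate * gate_s)

    for n in (0.1, 1.0, 5.0):
        print(f"mean signal photons {n:4.1f}: "
              f"P_det = {detection_probability(n):.4f}, "
              f"P_fa = {false_alarm_probability():.6f}")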
425. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

NASA Astrophysics Data System (ADS)

Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

2018-05-01

Annually resolved (varved) lake sequences are important palaeoenvironmental archives, as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remain an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology, and the uncertainties account for both layer-counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore a statistically more robust chronology.
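Editorial note: a full Bayesian age model, as used in the record above, handles stratigraphic ordering and non-Gaussian calibration; the back-of-the-envelope sketch below shows only the simplest special case, merging two independent Gaussian age estimates by inverse-variance weighting, with invented numbers.

    # simplest Gaussian fusion of a layer-count age and an independent radiometric age
    import numpy as np

    def combine(age_a, sigma_a, age_b, sigma_b):
        """Inverse-variance weighted mean of two independent age estimates."""
        w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
        age = (w_a * age_a + w_b * age_b) / (w_a + w_b)
        sigma = np.sqrt(1.0 / (w_a + w_b))
        return age, sigma

    # hypothetical: a varve count of 5200 +/- 150 yr vs a 14C date of 5050 +/- 80 yr
    age, sigma = combine(5200, 150, 5050, 80)
    print(f"combined: {age:.0f} +/- {sigma:.0f} yr")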
426. Experimental validation of flexible robot arm modeling and control

NASA Technical Reports Server (NTRS)

Ulsoy, A. Galip

1989-01-01

Flexibility is important for high-speed, high-precision operation of lightweight manipulators, so accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid-body motions (i.e., no effect of flexible motion on rigid-body motion), and little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible-arm control. The researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions and of axial shortening, and they consider the control of flexible arms using only additional sensors.

427. Ultra-precise tracking control of piezoelectric actuators via a fuzzy hysteresis model.

PubMed

Li, Pengzhi; Yan, Feng; Ge, Chuan; Zhang, Mingchao

2012-08-01

In this paper, a novel Takagi-Sugeno (T-S) fuzzy-system-based model is proposed for hysteresis in piezoelectric actuators. The antecedent and consequent structures of the fuzzy hysteresis model (FHM) can be identified on-line through a uniform partition approach and a recursive least squares (RLS) algorithm, respectively. With respect to controller design, the inverse of the FHM is used to develop a feedforward controller that cancels out the hysteresis effect. A hybrid controller is then designed for high-performance tracking; it combines the feedforward controller with a proportional-integral-differential (PID) controller favourable for stabilization and disturbance compensation. To achieve nanometer-scale tracking precision, an enhanced adaptive hybrid controller is further developed, which uses real-time input and output data to update the FHM, thus adapting the feedforward controller to the on-site hysteresis character of the piezoelectric actuator. Finally, for three cases of trajectory tracking (50 Hz sinusoidal, multi-frequency sinusoidal and 50 Hz triangular), experimental results demonstrate the efficiency of the proposed controllers. In particular, the maximum error of 50 Hz sinusoidal tracking is greatly reduced to 5.8 nm, only 0.35% of the maximum desired displacement, which clearly shows the ultra-precise nanometer-scale tracking performance of the developed adaptive hybrid controller.
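Editorial note: a skeletal sketch of the hybrid scheme the record outlines, feedforward inversion of a nonlinear actuator model plus a discrete PID loop. A memoryless cubic stands in for the true rate-dependent hysteresis (the paper's T-S fuzzy model identified online is omitted); all gains and constants are invented.

    # skeletal hybrid control loop: feedforward inverse model + discrete PID (toy)
    import numpy as np

    def plant(u):
        """Assumed actuator: displacement as a memoryless cubic in voltage."""
        return 0.8 * u + 0.02 * u**3

    def feedforward(y_ref):
        """Numerically invert the assumed model (Newton iterations)."""
        u = y_ref
        for _ in range(20):
            u -= (plant(u) - y_ref) / (0.8 + 0.06 * u**2)
        return u

    kp, ki, kd, dt = 0.5, 40.0, 1e-4, 1e-4
    integral, prev_err, y = 0.0, 0.0, 0.0
    for k in range(2000):
        y_ref = 5.0 * np.sin(2 * np.pi * 50 * k * dt)   # 50 Hz sinusoidal trajectory
        err = y_ref - y
        integral += err * dt
        u = feedforward(y_ref) + kp * err + ki * integral \
            + kd * (err - prev_err) / dt                 # feedforward + PID correction
        prev_err = err
        y = plant(u) + 1e-3 * np.random.randn()          # plant with small disturbance
    print("last tracking error:", abs(prev_err))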
428. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.

PubMed

Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Friston, Karl

2015-10-01

Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do and how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.

429. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes

PubMed Central

Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Friston, Karl

2015-01-01

Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do and how confident they are in their choices, where confidence may be encoded by dopaminergic firing. PMID:25056572

430. A defocus-information-free autostereoscopic three-dimensional (3D) digital reconstruction method using direct extraction of disparity information (DEDI)

NASA Astrophysics Data System (ADS)

Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu

2016-10-01

Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture, precision measurement and many other areas.
The 3D digital model of the target can be reconstructed from a series of two-dimensional (2D) images acquired by the autostereoscopic system, which consists of multiple lenses and can provide information about the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method is highly efficient in performing direct full 3D digital model construction, based on a tomography-like operation upon every depth plane with the exclusion of defocused information. With the absolutely focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.

431. Monitoring groundwater variation by satellite and implications for in-situ gravity measurements.

PubMed

Fukuda, Yoichi; Yamamoto, Keiko; Hasegawa, Takashi; Nakaegawa, Toshiyuki; Nishijima, Jun; Taniguchi, Makoto

2009-04-15

In order to establish a new technique for monitoring groundwater variations in urban areas, the applicability of precise in-situ gravity measurements and extremely high-precision satellite gravity data via GRACE (Gravity Recovery and Climate Experiment) was tested. Using the GRACE data, regional-scale water mass variations in four major river basins of the Indochina Peninsula were estimated. The estimated variations were compared with Soil-Vegetation-Atmosphere Transfer Scheme (SVATS) models coupled to a river flow model with 1) globally uniform river velocity, 2) river velocity tuned for each river basin, 3) globally uniform river velocity considering groundwater storage, and 4) river velocity tuned for each river basin considering groundwater storage. Model 3) attained the best fit to the GRACE data, and model 4) yielded almost the same values. This implies that groundwater plays an important role in estimating the variation of total terrestrial storage. It also indicates that tuning the river velocity, which is based on the in-situ measurements, needs further investigation in combination with the GRACE data. The relationships among GRACE data, SVATS models and in-situ measurements are also discussed briefly.
432. Using sensors to measure activity in people with stroke.

PubMed

Fulk, George D; Sazonov, Edward

2011-01-01

The purpose of this study was to determine the ability of a novel shoe-based sensor, which uses accelerometers, pressure sensors and pattern recognition with a support vector machine (SVM), to accurately identify sitting, standing and walking postures in people with stroke. Subjects with stroke wore the shoe-based sensor while randomly assuming three main postures: sitting, standing and walking. An SVM classifier was used to train and validate the data to develop individual and group models, which were tested for accuracy, recall and precision. Eight subjects participated. Both individual and group models were able to accurately identify the different postures (99.1% to 100% for individual models and 76.9% to 100% for group models). Recall and precision were also high for both individual (0.99 to 1.00) and group (0.82 to 0.99) models. The unique combination of accelerometer and pressure sensors built into the shoe was able to accurately identify postures. This shoe sensor could be used to provide accurate information on community performance of activities in people with stroke, as well as to provide behavior-enhancing feedback as part of a telerehabilitation intervention.
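Editorial note: the classifier described in the record above is a standard supervised SVM over sensor features. The sketch below reproduces the pipeline shape on synthetic data; the four-feature layout and the cluster centers are invented for the example.

    # minimal sketch of an SVM posture classifier, on synthetic sensor features
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(1)

    # invented feature layout: [mean accel magnitude, accel variance,
    #                           heel pressure, toe pressure]
    def make_samples(n, center):
        return center + 0.1 * rng.standard_normal((n, 4))

    X = np.vstack([make_samples(200, [1.0, 0.01, 0.2, 0.2]),   # sitting
                   make_samples(200, [1.0, 0.02, 0.8, 0.7]),   # standing
                   make_samples(200, [1.1, 0.30, 0.6, 0.9])])  # walking
    y = np.repeat(["sit", "stand", "walk"], 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)            # RBF-kernel SVM
    print(classification_report(y_te, clf.predict(X_te)))      # accuracy/recall/precision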
433. Random fluctuations of optical signal path delay in the atmosphere

NASA Astrophysics Data System (ADS)

Kral, L.; Prochazka, I.; Hamal, K.

2006-09-01

Atmospheric turbulence induces random delay fluctuations in any optical signal transmitted through the air. These fluctuations can influence, for example, the measurement precision of laser rangefinders. We have found an appropriate theoretical model, based on geometrical optics, that allows us to predict the amplitude of the random delay fluctuations for different observing conditions, and we have successfully proved its applicability by a series of experiments, directly determining the amplitude of the turbulence-induced pulse delay fluctuations by analysis of high-precision laser ranging data. Moreover, we have also shown that a standard theoretical approach based on diffractive propagation of light through inhomogeneous media, implemented using the GLAD software, is not suitable for modeling the optical signal delay fluctuations caused by the atmosphere: models based on diffractive propagation predict turbulence-induced optical path length fluctuations of the order of micrometers, whereas the fluctuations predicted by the geometrical optics model (in agreement with our experimental data) are generally larger by two orders of magnitude, i.e. in the submillimeter range. The reason for this discrepancy remains a subject of discussion.

434. A simple model of mechanotransduction in primate glabrous skin

PubMed Central

Dong, Yi; Mihalas, Stefan; Kim, Sung Soo; Yoshioka, Takashi; Niebur, Ernst

2013-01-01

Tactile stimulation of the hand evokes highly precise and repeatable patterns of activity in mechanoreceptive afferents; the strength (i.e., firing rate) and timing of these responses have been shown to convey stimulus information. To achieve an understanding of the mechanisms underlying the representation of tactile stimuli in the nerve, we developed a two-stage computational model consisting of a nonlinear mechanical transduction stage followed by a generalized integrate-and-fire mechanism. The model improves upon a recently published counterpart in two important ways. First, complexity is dramatically reduced (at least one order of magnitude fewer parameters). Second, the model comprises a saturating nonlinearity and can therefore be applied to a much wider range of stimuli. We show that both the rate and timing of afferent responses are predicted with remarkable precision and that observed adaptation patterns and threshold behavior are well captured. We conclude that the responses of mechanoreceptive afferents can be understood using a very parsimonious mechanistic model, which can then be used to accurately simulate the responses of afferent populations. PMID:23236001

435. An evaluation of the accuracy and precision of methane prediction equations for beef cattle fed high-forage and high-grain diets.

PubMed

Escobar-Bahamondes, P; Oba, M; Beauchemin, K A

2017-01-01

The study determined the performance of equations that predict enteric methane (CH4) from beef cattle fed forage- and grain-based diets. Many equations are available to predict CH4 from beef cattle, and the predictions vary substantially among equations. The aims were to (1) construct a database of CH4 emissions for beef cattle from the published literature, and (2) identify the most precise and accurate extant CH4 prediction models for beef cattle fed diets varying in forage content. The database comprised treatment means of CH4 production from in vivo beef studies published from 2000 to 2015. The criteria for including data in the database were: animal description, intakes, diet composition and CH4 production. In all, 54 published equations that predict CH4 production from diet composition were evaluated. Precision and accuracy of the equations were evaluated using the concordance correlation coefficient (r_c), root mean square prediction error (RMSPE), model efficiency and analysis of errors.
Equations were ranked using a combined index of the various statistical assessments based on principal component analysis. The final database contained 53 studies and 207 treatment means, which were divided into two data sets: diets containing ⩾400 g/kg dry matter (DM) forage (n=116) and diets containing ⩽200 g/kg DM forage (n=42). Diets containing between 200 and 400 g/kg DM forage were not included in the analysis because of their limited numbers (n=6). Outliers, treatment means where feed was fed restrictively, and diets with CH4 mitigation additives were omitted (n=43). Using the high-forage data set, the best-fit equations were the Intergovernmental Panel on Climate Change Tier 2 method, three equations for steers that considered gross energy intake (GEI) and body weight, and an equation that considered dry matter intake and starch:neutral detergent fiber, with r_c ranging from 0.60 to 0.73 and RMSPE from 35.6 to 45.9 g/day. For the high-grain diets, the five best-fit equations considered intakes of metabolisable energy, cellulose, hemicellulose and fat or, for steers, GEI and body weight, with r_c ranging from 0.35 to 0.52 and RMSPE from 47.4 to 62.9 g/day. The ranking of extant CH4 prediction equations for accuracy and precision differed with the forage content of the diet. When used for cattle fed high-grain diets, extant CH4 prediction models were generally imprecise and lacked accuracy.
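Editorial note: the two headline statistics used in the record above have compact standard definitions. The sketch below computes both on toy numbers; the synthetic "observations" and the biased "equation" are invented for the example.

    # RMSPE and Lin's concordance correlation coefficient, on toy numbers
    import numpy as np

    def rmspe(obs, pred):
        """Root mean square prediction error, in the observation units (g/day)."""
        return np.sqrt(np.mean((obs - pred) ** 2))

    def concordance_ccc(obs, pred):
        """Lin's concordance correlation coefficient r_c, penalizing both poor
        correlation and systematic bias relative to the 1:1 line."""
        mo, mp = obs.mean(), pred.mean()
        cov = np.mean((obs - mo) * (pred - mp))
        return 2 * cov / (obs.var() + pred.var() + (mo - mp) ** 2)

    rng = np.random.default_rng(2)
    obs = rng.normal(200, 40, 100)                   # toy CH4 emissions, g/day
    pred = 0.9 * obs + 15 + rng.normal(0, 20, 100)   # a biased, noisy "equation"
    print(f"RMSPE = {rmspe(obs, pred):.1f} g/day, r_c = {concordance_ccc(obs, pred):.2f}")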
436. Study on high-precision measurement of long radius of curvature

NASA Astrophysics Data System (ADS)

Wu, Dongcheng; Peng, Shijun; Gao, Songtao

2016-09-01

It is hard to achieve high-precision measurement of the radius of curvature (ROC) because many factors affect the measurement accuracy, and for long radii of curvature some of these factors are more important than others. This paper first investigates which factors are related to the long measurement distance and analyses the uncertainty of the measurement accuracy. It then studies the influence of the support conditions and of the adjustment error at the cat's-eye and confocal positions. Finally, a convex surface with a 1055 mm radius of curvature is measured in a high-precision laboratory. Experimental results show that a proper steady support (three-point support) can guarantee high-precision measurement of the radius of curvature, and that calibrating the gain at the cat's-eye and confocal positions helps to locate these positions precisely and thus increases the measurement accuracy. After completing the above process, high-precision long-ROC measurement is realized.

437. Proceedings of the Annual Symposium on Frequency Control (45th) held in Los Angeles, California on May 29-31, 1991

DTIC Science & Technology

1991-05-31

High Precision Nonlinear Computer Modelling Technique for Quartz Crystal Oscillators, R. Brendel, F. Djian, E. Robert (CNRS)... Results of the computations for resonators having circular electrodes: the model was applied to compute the resonance frequencies of the fundamental mode and of its anharmonics...

438. Application of troposphere model from NWP and GNSS data into real-time precise positioning

NASA Astrophysics Data System (ADS)

Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw

2016-04-01

Empirical tropospheric delay models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard-atmosphere parameters or global models, such as the GPT (global pressure/temperature) model or the UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. A possible solution is to use regional troposphere models based on real-time or near-real-time measurements. We implement a regional troposphere model in the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-Time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing modes and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of the PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. An alternative solution is to introduce an external high-quality regional troposphere delay model to constrain the troposphere estimates.
The proposed model consists of zenith total delays (ZTDs) and mapping functions calculated from meteorological parameters of the Numerical Weather Prediction model WRF (Weather Research and Forecasting) and from ZTDs of ground-based GNSS stations, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.

439. Optimetrics for Precise Navigation

NASA Technical Reports Server (NTRS)

Yang, Guangning; Heckler, Gregory; Gramling, Cheryl

2017-01-01

Optimetrics for Precise Navigation will be implemented on existing optical communication links, with the ranging and Doppler measurements conducted over the communication data frame and clock. The measurement accuracy is two orders of magnitude better than TDRSS. Further advantages follow from the high optical carrier frequency, which provides immunity from the ionosphere and interplanetary plasma noise floor that limits the performance of RF tracking, and from the high antenna gain, which reduces terminal size and volume and enables high-precision tracking on a Cubesat and on a deep-space smallsat. High optical pointing precision provides spacecraft orientation, and minimal additional hardware is needed to implement precise optimetrics over an optical comm link. Continuous optical carrier phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.

440. Graph-based signal integration for high-throughput phenotyping

PubMed Central

2012-01-01

Background: Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. Results: MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had an accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. Conclusions: We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping. PMID:23320851
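Editorial note: the F1 scores in the record above follow directly from the reported precision/recall pairs; the sketch below checks them (the graph-based value reproduces 0.527-0.528 up to rounding).

    # sanity-checking the reported F1 scores from the stated precision/recall pairs
    def f1(precision, recall):
        """Harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    print(f"MetaMap:     F1 = {f1(0.262, 0.854):.3f}")   # reported: 40.1%
    print(f"graph-based: F1 = {f1(0.612, 0.463):.3f}")   # reported: 52.8%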
  441. Model improvements and validation of TerraSAR-X precise orbit determination

    NASA Astrophysics Data System (ADS)

    Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.

    2017-05-01

    The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models for orbit accuracy as well as estimated empirical accelerations, and compares different dynamic models in an RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models, and the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particularly high correlation between solar radiation pressure modeling and the estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations.
    The estimated empirical accelerations are substantially reduced, by 30% in the tangential direction, when working with the refined dynamical models. Likewise, the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, over the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.

  442. The Photometric and Kinematic Structure of Face-on Disk Galaxies. III. Kinematic Inclinations from Hα Velocity Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, David R.; Bershady, Matthew A.

    2013-05-01

    Using the integral field unit DensePak on the WIYN 3.5 m telescope we have obtained Hα velocity fields of 39 nearly face-on disks at echelle resolutions. High-quality, uniform kinematic data and a new modeling technique enabled us to derive accurate and precise kinematic inclinations, with mean i_kin = 23° for 90% of these galaxies. Modeling the kinematic data as single, inclined disks in circular rotation improves upon the traditional tilted-ring method. We measure kinematic inclinations with a precision in sin i of 25% at 20° and 6% at 30°. Kinematic inclinations are consistent with photometric and inverse Tully-Fisher inclinations when the sample is culled of galaxies with kinematic asymmetries, for which we give two specific prescriptions. Kinematic inclinations can therefore be used in statistical "face-on" Tully-Fisher studies. A weighted combination of multiple, independent inclination measurements yields the most precise and accurate inclination. Combining inverse Tully-Fisher inclinations with kinematic inclinations yields joint-probability inclinations with a precision in sin i of 10% at 15° and 5% at 30°. This level of precision makes accurate mass decompositions of galaxies possible even at low inclination. We find scaling relations between rotation speed and disk scale length identical to results from more inclined samples. We also observe the trend of more steeply rising rotation curves with increased rotation speed and light concentration. This trend appears to be uncorrelated with disk surface brightness.
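The record above notes that a weighted combination of multiple, independent inclination measurements yields the most precise result. A minimal sketch of inverse-variance weighting, the standard way to combine independent estimates; the paper combines probability distributions in sin i, which this scalar version only approximates, and the example values are invented.

```python
import numpy as np

def combine_inclinations(values_deg, sigmas_deg):
    """Inverse-variance weighted mean of independent inclination estimates."""
    v = np.asarray(values_deg, dtype=float)
    s = np.asarray(sigmas_deg, dtype=float)
    w = 1.0 / s**2                       # weight = 1 / variance
    mean = np.sum(w * v) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))     # uncertainty of the weighted mean
    return mean, sigma

# e.g. kinematic, photometric and inverse Tully-Fisher inclinations
print(combine_inclinations([23.0, 27.0, 24.5], [2.0, 4.0, 3.0]))
```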
  443. Small field models with gravitational wave signature supported by CMB data

    PubMed Central

    Brustein, Ramy

    2018-01-01

    We study the scale dependence of the cosmic microwave background (CMB) power spectrum in a class of small, single-field models of inflation which lead to a high value of the tensor-to-scalar ratio. The inflaton potentials that we consider are degree-5 polynomials, for which we precisely calculate the power spectrum and extract the cosmological parameters: the scalar index n_s, the running of the scalar index n_run and the tensor-to-scalar ratio r. We find that for non-vanishing n_run and for r as small as r = 0.001, the precisely calculated values of n_s and n_run deviate significantly from what the standard analytic treatment predicts. We study in detail, and discuss, the probable reasons for such deviations. As such, all previously considered models (of this kind) are based upon inaccurate assumptions. We scan the possible values of the potential parameters for which the cosmological parameters are within the range allowed by observations. The 5-parameter class is able to reproduce all of the allowed values of n_s and n_run for values of r as high as 0.001. This study at once refutes previous such models built using the analytical Stewart-Lyth term and revives the small-field brand, by building models that do yield an appreciable r while conforming to known CMB observables. PMID:29795608

  444. High Precision Determination of the β Decay Q_EC Value of ¹¹C and Implications on the Tests of the Standard Model

    NASA Astrophysics Data System (ADS)

    Gulyuz, K.; Bollen, G.; Brodeur, M.; Bryce, R. A.; Cooper, K.; Eibach, M.; Izzo, C.; Kwan, E.; Manukyan, K.; Morrissey, D. J.; Naviliat-Cuncic, O.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Sumithrarachchi, C. S.; Valverde, A. A.; Villari, A. C. C.

    2016-01-01

    We report the determination of the Q_EC value of the mirror transition of ¹¹C by measuring the atomic masses of ¹¹C and ¹¹B using Penning trap mass spectrometry. More than an order of magnitude improvement in precision is achieved as compared to the 2012 Atomic Mass Evaluation (Ame2012) [Chin. Phys. C 36, 1603 (2012)]. This leads to a factor of 3 improvement in the calculated Ft value. Using the new value, Q_EC = 1981.690(61) keV, the uncertainty on Ft is no longer dominated by the uncertainty on the Q_EC value.
    Based on this measurement, we provide an updated estimate of the Gamow-Teller to Fermi mixing ratio and standard model values of the correlation coefficients.

  445. The Eclipsing Central Stars of the Planetary Nebulae Lo 16 and PHR J1040-5417

    NASA Astrophysics Data System (ADS)

    Hillwig, Todd C.; Frew, David; Jones, David; Crispo, Danielle

    2017-01-01

    Binary central stars of planetary nebulae are a valuable tool in understanding common envelope evolution. In these cases both the resulting close binary system and the expanding envelope (the planetary nebula) can be studied directly. In order to compare observed systems with common envelope evolution models we need to determine precise physical parameters of the binaries and the nebulae. Eclipsing central stars provide us with the best opportunity to determine high-precision values for the mass, radius, and temperature of the component stars in these close binaries. We present photometry and spectroscopy for two of these eclipsing systems, the central stars of Lo 16 and PHR 1040-5417. Using light curves and radial velocity curves along with binary modeling, we provide physical parameters for the stars in both of these systems.

  446. High Precision Determination of the β Decay Q_EC Value of ¹¹C and Implications on the Tests of the Standard Model

    PubMed

    Gulyuz, K; Bollen, G; Brodeur, M; Bryce, R A; Cooper, K; Eibach, M; Izzo, C; Kwan, E; Manukyan, K; Morrissey, D J; Naviliat-Cuncic, O; Redshaw, M; Ringle, R; Sandler, R; Schwarz, S; Sumithrarachchi, C S; Valverde, A A; Villari, A C C

    2016-01-08

    We report the determination of the Q_EC value of the mirror transition of ¹¹C by measuring the atomic masses of ¹¹C and ¹¹B using Penning trap mass spectrometry. More than an order of magnitude improvement in precision is achieved as compared to the 2012 Atomic Mass Evaluation (Ame2012) [Chin. Phys. C 36, 1603 (2012)]. This leads to a factor of 3 improvement in the calculated Ft value. Using the new value, Q_EC = 1981.690(61) keV, the uncertainty on Ft is no longer dominated by the uncertainty on the Q_EC value.
    Based on this measurement, we provide an updated estimate of the Gamow-Teller to Fermi mixing ratio and standard model values of the correlation coefficients.

  447. Consistent calculation of the screening and exchange effects in allowed β⁻ transitions

    NASA Astrophysics Data System (ADS)

    Mougeot, X.; Bisch, C.

    2014-07-01

    The atomic exchange effect has previously been demonstrated to have a great influence at low energy on the ²⁴¹Pu β⁻ transition. The screening effect has been given as a possible explanation for a remaining discrepancy. Improved calculations have been made to evaluate these two atomic effects consistently; they are compared here to recent high-precision measurements of the ²⁴¹Pu and ⁶³Ni β spectra. In this paper a screening correction has been defined to account for the spatial extension of the electron wave functions. Excellent overall agreement, of about 1% from 500 eV to the end-point energy, has been obtained for both β spectra, which demonstrates that a rather simple β decay model for allowed transitions, including atomic effects within an independent-particle model, is sufficient to describe the current most precise measurements well.

  448. Precision measures of the primordial deuterium abundance

    NASA Astrophysics Data System (ADS)

    Cooke, R. J.; Pettini, M.; Jorgenson, R. A.; Murphy, M. T.; Steidel, C. C.

    Near-pristine damped Lyman-alpha systems (DLAs) are the ideal environments in which to measure the primordial abundance of deuterium. In this conference report, I summarise our ongoing research programme to obtain the most precise determination of the primordial deuterium abundance from five high-redshift DLAs. From this sample, we derive (D/H)_p = (2.53 ± 0.04) × 10⁻⁵, corresponding to a baryon density 100 Ω_b,0 h² = 2.202 ± 0.046 assuming the standard model of Big Bang Nucleosynthesis. This value is in striking agreement with that measured from the temperature fluctuations imprinted on the cosmic microwave background.
    Although we find no strong evidence for new physics beyond the standard model, this line of research shows great promise for the near future, when the next generation of 30+ m telescopes equipped with echelle spectrographs comes online.

  449. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    The utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented in double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient algorithm, implemented with iteration on data, a diagonal preconditioner, and double precision, may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
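For the record above, a compact dense-matrix sketch of conjugate gradients with a diagonal (Jacobi) preconditioner. Production animal-breeding solvers apply the same recurrence with "iteration on data" over large sparse systems, which is not reproduced here; the test matrix is invented.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
    m_inv = 1.0 / np.diag(A)            # preconditioner: inverse diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update search direction
        rz = rz_new
    return x

# Small SPD system standing in for mixed-model equations
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(pcg(A, b), np.linalg.solve(A, b))
```

Note that NumPy arrays default to double precision, which lines up with the paper's finding that the single-precision variant can fail to converge on large models.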
  450. Study on manufacturing method of optical surface with high precision in angle and surface

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Li, Xin; Yu, Ze; Zhao, Bin; Zhang, Xuebin; Sun, Lipeng; Tong, Yi

    2016-10-01

    This paper studies a manufacturing process for optical surfaces with high precision in both angle and surface figure. Through theoretical analysis of the relationship between angular precision and surface figure, conversion of the technical indicators into measurable quantities, application of the optical-cement method, and design of the optical-cement tooling, the experiment was finished successfully and the processing method was verified; the method can also be used in the manufacture of other optical surfaces with similarly high precision in angle and surface.

  451. Method of high precision interval measurement in pulse laser ranging system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging offers high measurement precision, fast measurement speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. This paper introduces the principal structure of a laser ranging system and establishes a method for high-precision time interval measurement in a pulsed laser ranging system. Based on an analysis of the factors affecting ranging precision, a pulse rising-edge discriminator is adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper achieves higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies requirements for low cost and miniaturization.

  452. Precision medicine in oncology: New practice models and roles for oncology pharmacists

    PubMed

    Walko, Christine; Kiel, Patrick J; Kolesar, Jill

    2016-12-01

    Three different precision medicine practice models developed by oncology pharmacists are described, including strategies for implementation and recommendations for educating the next generation of oncology pharmacy practitioners. Oncology is unique in that somatic mutations can both drive the development of a tumor and serve as a therapeutic target for treating the cancer. Precision medicine practice models are a forum through which interprofessional teams, including pharmacists, discuss tumor somatic mutations to guide patient-specific treatment. The University of Wisconsin, Indiana University, and Moffitt Cancer Center have implemented precision medicine practice models developed and led by oncology pharmacists.
    Different practice models, including a clinic, a clinical consultation service, and a molecular tumor board (MTB), were adopted to enhance integration into health systems and payment structures. Although the practice models vary, commonalities of the three models include leadership by a clinical pharmacist, specific therapeutic recommendations, procurement of medications for off-label use, and a research component. These three practice models function as interprofessional training sites for pharmacy and medical students and residents, providing an important training resource at these institutions. Key implementation strategies include interprofessional involvement, institutional support, integration into clinical workflow, and selection of the model by payer mix. MTBs are a pathway for clinical implementation of genomic medicine in oncology and are an emerging practice model for oncology pharmacists. Because pharmacists must be prepared to participate fully in contemporary practice, oncology pharmacy residents must be trained in genomic oncology, schools of pharmacy should expand precision medicine and genomics education, and opportunities for continuing education in precision medicine should be made available to practicing pharmacists. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  453. High Resolution Seamless DOM Generation over Chang'e-5 Landing Area Using LROC NAC Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Jia, M.; Xin, X.; Liu, B.; Liu, Z.; Peng, M.; Yue, Z.

    2018-04-01

    Chang'e-5, China's first lunar sample-return mission, will be launched in 2019, and the planned landing area is near Mons Rümker in Oceanus Procellarum. High-resolution and high-precision mapping of the landing area is of great importance for supporting scientific analysis and safe landing. This paper proposes a systematic method for large-area seamless digital orthophoto map (DOM) generation and presents the mapping result for the Chang'e-5 landing area using over 700 LROC NAC images. The developed method consists of two stages of data processing: stage 1 comprises subarea block adjustment with the rational function model (RFM) and seamless subarea DOM generation; stage 2 comprises whole-area adjustment through registration of the subarea DOMs with a thin plate spline model and seamless DOM mosaicking. The resultant seamless DOM covers a large area (20° of longitude × 4° of latitude) and is tied to the widely used reference DEM, SLDEM2015. As a result, the RMS errors of the tie points are all around half a pixel in image space, indicating high internal precision, and the RMS errors of the control points are about one grid cell of SLDEM2015, indicating that the resultant DOM is well tied to SLDEM2015.
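The stage-2 registration in the record above uses a thin plate spline (TPS) model. A minimal sketch of TPS warping of tie-point coordinates, assuming SciPy's RBFInterpolator; the coordinates and offsets are fabricated, and the paper's actual adjustment pipeline is more involved.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control points: (lon, lat) in a subarea DOM and their positions in the
# reference frame (e.g. tied to SLDEM2015). All coordinates are made up.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.4]])
dst = src + np.array([0.002, -0.001])   # toy systematic shift to absorb

# One thin-plate-spline model mapping source to target coordinates
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# Map arbitrary subarea-DOM coordinates into the reference frame
query = np.array([[0.25, 0.75], [0.80, 0.20]])
print(warp(query))
```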
  454. Potassium isotopic evidence for a high-energy giant impact origin of the Moon

    PubMed

    Wang, Kun; Jacobsen, Stein B

    2016-10-27

    The Earth-Moon system has unique chemical and isotopic signatures compared with other planetary bodies; any successful model for the origin of this system therefore has to satisfy these chemical and isotopic constraints. The Moon is substantially depleted in volatile elements such as potassium compared with the Earth and the bulk solar composition, and this has long been thought to be the result of a catastrophic Moon-forming giant impact event. Volatile-element-depleted bodies such as the Moon were expected to be enriched in heavy potassium isotopes during the loss of volatiles; however, such enrichment was never found. Here we report new high-precision potassium isotope data for the Earth, the Moon and chondritic meteorites. We found that the lunar rocks are significantly (>2σ) enriched in the heavy isotopes of potassium compared to the Earth and chondrites (by around 0.4 parts per thousand). The enrichment of the heavy isotope of potassium in lunar rocks compared with those of the Earth and chondrites can be best explained as the result of the incomplete condensation of a bulk silicate Earth vapour at an ambient pressure that is higher than 10 bar. We used these coupled constraints of the chemical loss and isotopic fractionation of K to compare two recent dynamic models that were used to explain the identical non-mass-dependent isotope composition of the Earth and the Moon. Our K isotope result is inconsistent with the low-energy disk equilibration model, but supports the high-energy, high-angular-momentum giant impact model for the origin of the Moon. High-precision potassium isotope data can also be used as a 'palaeo-barometer' to reveal the physical conditions during the Moon-forming event.
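The ~0.4 parts-per-thousand enrichment in the record above is conventionally expressed in delta notation. A one-line sketch of that convention, with illustrative (not measured) 41K/39K ratios:

```python
def delta_k41(r_sample, r_standard):
    """delta-41K in parts per thousand (per mil) relative to a standard.

    R is the 41K/39K isotope ratio; an enrichment of ~0.4 per mil
    corresponds to r_sample / r_standard of about 1.0004.
    """
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative ratios only (natural 41K/39K is roughly 0.072)
print(delta_k41(0.0721988, 0.0721700))  # ~0.4 per mil
```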
  455. Precision mechatronics based on high-precision measuring and positioning systems and machines

    NASA Astrophysics Data System (ADS)

    Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert

    2007-06-01

    Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represents an important field of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, and free-form surface measurement, as well as measurement of micro-optics, precision molds, microgears, ring gauges and small holes.

  456. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    Generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is LiDAR, because it provides highly dense point cloud data with three-dimensional positions, intensities and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a LiDAR on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a machine learning classification algorithm. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine (SVM) algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and they will be utilized to generate a highly precise road map.
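A toy end-to-end sketch of the pipeline the record above describes: per-point normals from a PCA of local neighbourhoods, then an SVM on normal-based features. It assumes scikit-learn; the scene, neighbourhood size and feature choice are invented for illustration, and a real implementation would use a spatial index rather than brute-force neighbour search.

```python
import numpy as np
from sklearn.svm import SVC

def normals_pca(points, k=10):
    """Per-point surface normals from PCA of the k nearest neighbours."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]          # brute-force kNN (toy only)
        eigval, eigvec = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = eigvec[:, 0]                 # smallest-eigenvalue direction
    return normals

# Toy scene: a flat road patch (normals near vertical) and a vertical pole
rng = np.random.default_rng(1)
road = np.c_[rng.uniform(0, 5, 200), rng.uniform(0, 5, 200),
             0.02 * rng.standard_normal(200)]
pole = np.c_[0.02 * rng.standard_normal(100), 0.02 * rng.standard_normal(100),
             rng.uniform(0, 3, 100)]
pts = np.vstack([road, pole])
labels = np.array([0] * 200 + [1] * 100)

# Geometric feature: |normal components|; |nz| separates flat from vertical
feats = np.abs(normals_pca(pts))
clf = SVC(kernel='rbf').fit(feats, labels)
print(clf.score(feats, labels))
```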
  457. Computer aided flexible envelope designs

    NASA Technical Reports Server (NTRS)

    Resch, R. D.

    1975-01-01

    Computer-aided design methods are presented for the design and construction of strong, lightweight structures which require complex and precise geometric definition. The first, flexible structures, is a unique system for modeling folded-plate structures and space frames; it is possible to continuously vary the geometry of a space frame to produce large, clear spans with curvature. The second method deals with developable surfaces, where both folding and bending are explored within the observed constraints of available building materials, asking what minimal distortion results in maximum design capability. Alternative inexpensive fabrication techniques are being developed to achieve computer-defined enclosures which are extremely lightweight and mathematically highly precise.

  458. Efficient Genome Editing in Induced Pluripotent Stem Cells with Engineered Nucleases In Vitro

    PubMed

    Termglinchan, Vittavat; Seeger, Timon; Chen, Caressa; Wu, Joseph C; Karakikes, Ioannis

    2017-01-01

    Precision genome engineering is rapidly advancing the application of induced pluripotent stem cell (iPSC) technology for in vitro disease modeling of cardiovascular diseases. Targeted genome editing using engineered nucleases is a powerful tool that allows reverse genetics, genome engineering, and targeted transgene integration experiments to be performed in a precise and predictable manner. However, nuclease-mediated homologous recombination is an inefficient process. Herein, we describe the development of an optimized method combining site-specific nucleases and the piggyBac transposon system for "seamless" genome editing in pluripotent stem cells with high efficiency and fidelity in vitro.

  459. Gravity model improvement using the DORIS tracking system on the SPOT 2 satellite

    NASA Technical Reports Server (NTRS)

    Nerem, R. S.; Lerch, F. J.; Williamson, R. G.; Klosko, S. M.; Robbins, J. W.; Patel, G. B.

    1994-01-01

    A high-precision radiometric satellite tracking system, the Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS) system, has recently been developed by the French space agency, Centre National d'Etudes Spatiales (CNES). DORIS was designed to provide tracking support for missions such as the joint United States/French TOPEX/Poseidon. As part of the flight testing process, a DORIS package was flown on the French SPOT 2 satellite. A substantial quantity of geodetic-quality tracking data was obtained on SPOT 2 from an extensive international DORIS tracking network. These data were analyzed to assess their accuracy and to evaluate the gravitational modeling enhancements they provide in combination with the Goddard Earth Model-T3 (GEM-T3) gravitational model. These observations have noise levels of 0.4 to 0.5 mm/s, with few residual systematic effects. Although the SPOT 2 satellite experiences high atmospheric drag forces, the precision and global coverage of the DORIS tracking data have enabled more extensive orbit parameterization to mitigate these effects. As a result, the SPOT 2 orbital errors have been reduced to an estimated radial accuracy in the 10-20 cm RMS range.
    The addition of these data, which encompass many regions heretofore lacking precision satellite tracking, has significantly improved GEM-T3 and allowed greatly improved orbit accuracies for Sun-synchronous satellites like SPOT 2 (such as ERS 1 and EOS). Comparison of the ensuing gravity model with other contemporary fields (GRIM-4C2, TEG2B, and OSU91A) provides a means to assess the current state of knowledge of the Earth's gravity field. Thus, the DORIS experiment on SPOT 2 has provided a strong basis for evaluating this new orbit tracking technology and has demonstrated the important contribution of the DORIS network to the success of the TOPEX/Poseidon mission.

  460. On the convergence and accuracy of the FDTD method for nanoplasmonics

    PubMed

    Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora

    2015-04-20

    Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna; for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size down to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two-critical-points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with (at least) a two-critical-points correction must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ~1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near field does not necessarily produce large errors but reduces the computational resources required.
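The record above recommends a Drude model augmented with critical-points corrections. A sketch of that permittivity function; the functional form follows the usual Drude-plus-critical-points dispersion model, but the parameter values below are placeholders rather than a fitted gold or silver data set.

```python
import numpy as np

def drude_2cp(omega, eps_inf, omega_p, gamma, cps):
    """Drude permittivity augmented with critical-point (CP) terms.

    omega is the angular frequency (rad/s); cps is a list of
    (A, phi, Omega, Gamma) tuples, one per critical point.
    """
    eps = eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)
    for A, phi, Omega, Gamma in cps:
        eps += A * Omega * (np.exp(1j * phi) / (Omega - omega - 1j * Gamma)
                            + np.exp(-1j * phi) / (Omega + omega + 1j * Gamma))
    return eps

omega = 2 * np.pi * 3e8 / 500e-9               # angular frequency at 500 nm
cps = [(0.5, -np.pi / 4, 4.0e15, 1.0e15),      # illustrative CP parameters
       (1.0, -np.pi / 4, 5.0e15, 2.0e15)]
print(drude_2cp(omega, 1.1, 1.3e16, 1.1e14, cps))
```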
  461. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up

    PubMed

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker-based or a model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and of CAD and RE model-based RSA on an HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results for the total translation (TT) and the total rotation (TR) from marker- and CAD model-based RSA were compared. The phantom study showed, comparing the precision (SD_diff), that marker-based RSA analysis was more precise than model-based RSA analysis in TT (p_CAD < 0.001; p_RE = 0.04) and TR (p_CAD = 0.01; p_RE < 0.001). The clinical precision (double examination in 8 patients), comparing the precision SD_diff, was better for the TT using the marker-based RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analyses regarding the TR (p = 0.91).
    Comparing the mean signed values for the TT and the TR at the 5-year follow-up in 13 patients, the TT was lower (p = 0.03) and the TR higher (p = 0.04) in the marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than that of model-based RSA. However, problems with occluded markers led to the exclusion of many patients, which was not a problem with model-based RSA. The HRA implants were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based RSA and 0.5 mm TT and 1° TR for CAD model-based RSA.

  462. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer-derived densities, also considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE-derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS against accelerometer-derived densities. Drag is the largest error source for estimating and predicting orbits of low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer-derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer-derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE-derived density estimates is around 20-30 minutes, which is significantly coarser than that of accelerometer-derived density estimates. However, major variations in density are observed in the POE-derived densities. These POE-derived densities, in combination with other data sources, can be assimilated into physics-based general circulation models of the thermosphere and ionosphere, with the possibility of providing improved density forecasts for satellite drag analysis. POE-derived density estimates were initially developed using CHAMP and GRACE data so that comparisons could be made with accelerometer-derived density estimates. This paper presents the results of the most extensive calibration of POE-derived densities against accelerometer-derived densities and gives the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer-derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE-derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expanding the POE-derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
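The record above stitches 14-hour density solutions with a "linear weighted blending technique" but does not define the blend. The sketch below is one plausible reading, with a weight that ramps linearly across the overlap window; the function name and all numbers are illustrative.

```python
import numpy as np

def blend_overlap(t, rho_a, rho_b, t0, t1):
    """Linearly blend two density series across their overlap [t0, t1].

    Outside the overlap the individual solutions are used unchanged;
    inside, the weight ramps linearly from solution A to solution B.
    A guess at the 'linear weighted blending' named in the abstract.
    """
    w = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return (1.0 - w) * rho_a + w * rho_b

t = np.linspace(0.0, 14.0, 15)              # hours into the overlap day
rho_a = 4.0e-12 * np.ones_like(t)           # kg/m^3, end of solution A
rho_b = 4.4e-12 * np.ones_like(t)           # kg/m^3, start of solution B
print(blend_overlap(t, rho_a, rho_b, t0=10.0, t1=14.0))
```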
  463. Modeling and Assessment of Precise Time Transfer by Using BeiDou Navigation Satellite System Triple-Frequency Signals

    PubMed

    Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun

    2018-03-29

    This study proposes two models for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals: an ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and an ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time-frequency reference) and one with a long baseline are used for performance assessment. The results show that the IF-PPP1 and IF-PPP2 models can both be used for precise time transfer with BDS triple-frequency signals, and the accuracy and stability of the time transfer are the same in both cases, except for a constant systematic bias caused by the hardware delays of the different frequencies, which can be removed by parameter estimation and prediction with long-term datasets or by a priori calibration.
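The IF-PPP models in the record above rest on the ionosphere-free combination, which cancels the first-order ionospheric delay because that delay scales as 1/f². A minimal sketch using BDS B1I/B3I frequencies; the pseudorange values are invented.

```python
def ionosphere_free(p1, p2, f1, f2):
    """Dual-frequency ionosphere-free (IF) combination of code or phase.

    First-order ionospheric delay scales with 1/f^2, so this weighted
    difference cancels it. A triple-frequency receiver can form two
    such pairs, as in the IF-PPP1 model of the record above.
    """
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# BDS B1I / B3I carrier frequencies (Hz) and example pseudoranges (m)
f_b1, f_b3 = 1561.098e6, 1268.520e6
print(ionosphere_free(21_000_003.25, 21_000_004.87, f_b1, f_b3))
```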
  464. Precise methane absorption measurements in the 1.64 μm spectral region for the MERLIN mission

    PubMed

    Delahaye, T; Maxwell, S E; Reed, Z D; Lin, H; Hodges, J T; Sung, K; Devi, V M; Warneke, T; Spietz, P; Tran, H

    2016-06-27

    In this article we describe a high-precision laboratory measurement targeting the R(6) manifold of the 2ν₃ band of ¹²CH₄. Accurate physical models of this absorption spectrum will be required by the Franco-German Methane Remote Sensing LIDAR (MERLIN) space mission for retrievals of atmospheric methane. The analysis uses the Hartmann-Tran profile for modeling line shape and also includes line-mixing effects. To this end, six high-resolution, high signal-to-noise absorption spectra of air-broadened methane were recorded using a frequency-stabilized cavity ring-down spectroscopy apparatus. Sample conditions corresponded to room temperature and spanned total sample pressures of 40 hPa to 1013 hPa, with methane molar fractions between 1 μmol mol⁻¹ and 12 μmol mol⁻¹. All spectroscopic model parameters were adjusted simultaneously in a multispectrum nonlinear least-squares fit to the six measured spectra. Comparison of the fitted model to the measured spectra reveals the ability to calculate the room-temperature methane absorption coefficient to better than 0.1% at the on-line position of the MERLIN mission. This is the first time that such fidelity has been reached in modeling methane absorption in the investigated spectral region, fulfilling the accuracy requirements of the MERLIN mission. We also found excellent agreement when comparing the present results with measurements obtained under different pressure conditions and using other laboratory techniques. Finally, we evaluated the impact of these new spectral parameters on calculations of atmospheric transmission spectra.

  465. Precise methane absorption measurements in the 1.64 μm spectral region for the MERLIN mission

    PubMed Central

    Delahaye, T.; Maxwell, S.E.; Reed, Z.D.; Lin, H.; Hodges, J.T.; Sung, K.; Devi, V.M.; Warneke, T.; Spietz, P.; Tran, H.

    2016-01-01

    In this article we describe a high-precision laboratory measurement targeting the R(6) manifold of the 2ν₃ band of ¹²CH₄. Accurate physical models of this absorption spectrum will be required by the Franco-German Methane Remote Sensing LIDAR (MERLIN) space mission for retrievals of atmospheric methane. The analysis uses the Hartmann-Tran profile for modeling line shape and also includes line-mixing effects. To this end, six high-resolution, high signal-to-noise absorption spectra of air-broadened methane were recorded using a frequency-stabilized cavity ring-down spectroscopy apparatus. Sample conditions corresponded to room temperature and spanned total sample pressures of 40 hPa to 1013 hPa, with methane molar fractions between 1 μmol mol⁻¹ and 12 μmol mol⁻¹. All spectroscopic model parameters were adjusted simultaneously in a multispectrum nonlinear least-squares fit to the six measured spectra. Comparison of the fitted model to the measured spectra reveals the ability to calculate the room-temperature methane absorption coefficient to better than 0.1% at the on-line position of the MERLIN mission. This is the first time that such fidelity has been reached in modeling methane absorption in the investigated spectral region, fulfilling the accuracy requirements of the MERLIN mission. We also found excellent agreement when comparing the present results with measurements obtained under different pressure conditions and using other laboratory techniques. Finally, we evaluated the impact of these new spectral parameters on calculations of atmospheric transmission spectra. PMID:27551656
  466. Genome Editing Redefines Precision Medicine in the Cardiovascular Field

    PubMed Central

    Lahm, Harald; Dreßen, Martina; Lange, Rüdiger; Wu, Sean M.; Krane, Markus

    2018-01-01

    Genome editing is a powerful tool to study the function of specific genes and proteins important for development or disease. Recent technologies, especially CRISPR/Cas9, which is characterized by convenient handling and high precision, have revolutionized the field of genome editing. Such tools have enormous potential for basic science as well as for regenerative medicine. Nevertheless, there are still several hurdles that have to be overcome, but patient-tailored therapies, termed precision medicine, seem to be within reach. In this review, we focus on the achievements and limitations of genome editing in the cardiovascular field. We explore different areas of cardiac research and highlight the most important developments: (1) the potential of genome editing in human pluripotent stem cells in basic research for disease modelling, drug screening, or reprogramming approaches and (2) the potential and remaining challenges of genome editing for regenerative therapies. Finally, we discuss social and ethical implications of these new technologies. PMID:29731778

  467. A precision device needs precise simulation: Software description of the CBM Silicon Tracking System

    NASA Astrophysics Data System (ADS)

    Malygina, Hanna; Friese, Volker; for the CBM Collaboration

    2017-10-01

    Precise modelling of detectors in simulations is the key to understanding their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report, we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS makes use of double-sided silicon micro-strip sensors with double metal layers. We present a description of the transport and detector response simulation, including all relevant physical effects such as charge creation and drift, charge collection, cross-talk and digitization. Of particular importance and novelty is the description of the time behaviour of the detector, since its readout will not be externally triggered but continuous.
    We also cover some aspects of local reconstruction, which in the CBM case has to be performed in real time and thus requires high-speed algorithms.

  468. PROSPECT - A Precision Oscillation and Spectrum Experiment

    NASA Astrophysics Data System (ADS)

    Zhang, Xianyi; PROSPECT Collaboration

    2017-01-01

    PROSPECT, the PRecision Oscillation and SPECTrum Experiment, is a multi-phased short-baseline reactor antineutrino experiment that aims to precisely measure the U-235 antineutrino spectrum and probe for oscillation effects involving a possible sterile neutrino at the Δm² ~ 1 eV² scale. In PROSPECT Phase-I, an optically segmented ⁶Li-loaded liquid scintillator detector will be deployed at a baseline of 7-12 m from the High Flux Isotope Reactor at Oak Ridge National Laboratory. PROSPECT will measure the spectrum of U-235 to aid in resolving the unexplained inconsistency between predictive spectral models and recent experimental measurements using LEU cores, while the oscillation measurement will probe the best-fit region suggested by global fitting studies within 1 year of data taking. This talk will introduce the design of PROSPECT Phase-I, the discovery potential of the experiment, and the progress the collaboration has made toward realizing PROSPECT Phase-I. Department of Energy.

  469. Technologies for Future Precision Strike Missile Systems (les Technologies des futurs systemes de missiles pour frappe de precision)

    DTIC Science & Technology

    2001-07-01

    [Only fragments of this record survive extraction.] Hardware-in-the-loop (HWL) simulation is also developed. The surviving diagram residue lists simulation and test elements: flight firings/engine tests, structure test, hardware-in-the-loop simulation, subsystem test, lab tests (seeker, actuators, sensors, electronics), propulsion model, and aero model.

  470. Process membership in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1993-01-01

    The development of reliable distributed software is simplified by the ability to assume a fail-stop failure model. The emulation of such a model in an asynchronous distributed environment is discussed. The solution proposed, called Strong-GMP, can be supported through a highly efficient protocol, and was implemented as part of a distributed systems software project at Cornell University.
The precise definition of the problem, the protocol, correctness proofs, and an analysis of costs are addressed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4299107"><span>Effects of Reduced Terrestrial LiDAR Point Density on High-Resolution Grain Crop Surface Models in Precision Agriculture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hämmerle, Martin; Höfle, Bernhard</p> <p>2014-01-01</p> <p>3D geodata play an increasingly important role in precision agriculture, e.g., for modeling in-field variations of grain crop features such as height or biomass. A common data capturing method is LiDAR, which often requires expensive equipment and produces large datasets. This study contributes to the improvement of 3D geodata capturing efficiency by assessing the effect of reduced scanning resolution on crop surface models (CSMs). The analysis is based on high-end LiDAR point clouds of grain crop fields of different varieties (rye and wheat) and nitrogen fertilization stages (100%, 50%, 10%). Lower scanning resolutions are simulated by keeping every n-th laser beam with increasing step widths n. For each iteration step, high-resolution CSMs (0.01 m² cells) are derived and assessed regarding their coverage relative to a seamless CSM derived from the original point cloud, standard deviation of elevation and mean elevation. Reducing the resolution to, e.g., 25% still leads to a coverage of >90% and a mean CSM elevation of >96% of measured crop height. CSM types (maximum elevation or 90th-percentile elevation) react differently to reduced scanning resolutions in different crops (variety, density). The results can help to assess the trade-off between CSM quality and minimum requirements regarding equipment and capturing set-up. PMID:25521383</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2015MNRAS.447..711S"><span>High-precision photometry by telescope defocusing - VII. The ultrashort period planet WASP-103</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Southworth, John; Mancini, L.; Ciceri, S.; Budaj, J.; Dominik, M.; Figuera Jaimes, R.; Haugbølle, T.; Jørgensen, U. G.; Popovas, A.; Rabus, M.; Rahvar, S.; von Essen, C.; Schmidt, R. W.; Wertz, O.; Alsubai, K. A.; Bozza, V.; Bramich, D. M.; Calchi Novati, S.; D'Ago, G.; Hinse, T. C.; Henning, Th.; Hundertmark, M.; Juncher, D.; Korhonen, H.; Skottfelt, J.; Snodgrass, C.; Starkey, D.; Surdej, J.</p> <p>2015-02-01</p> <p>We present 17 transit light curves of the ultrashort period planetary system WASP-103, a strong candidate for the detection of tidally-induced orbital decay. We use these to establish a high-precision reference epoch for transit timing studies. The time of the reference transit mid-point is now measured to an accuracy of 4.8 s, versus 67.4 s in the discovery paper, aiding future searches for orbital decay.
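</p> <p>The value of the tighter epoch follows from simple error propagation on the linear ephemeris T_c(n) = T_0 + n·P. The two timing accuracies below are the ones quoted above; the period uncertainty is a hypothetical placeholder:</p> <pre><code class="language-python">
from math import sqrt

def sigma_tc(n_orbits, sigma_t0_s, sigma_p_s):
    """1-sigma uncertainty of the predicted mid-transit time after
    n orbits, for T_c(n) = T_0 + n * P with uncorrelated errors."""
    return sqrt(sigma_t0_s**2 + (n_orbits * sigma_p_s)**2)

SIGMA_P = 0.01  # hypothetical period uncertainty in seconds
for n in (0, 1000, 5000):
    old = sigma_tc(n, 67.4, SIGMA_P)  # discovery-paper epoch
    new = sigma_tc(n, 4.8, SIGMA_P)   # refined epoch from this work
    print(f"n={n:5d}  old={old:6.1f} s  new={new:6.1f} s")
</code></pre> <p>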
With the help of published spectroscopic measurements and theoretical stellar models, we determine the physical properties of the system to high precision and present a detailed error budget for these calculations. The planet has a Roche lobe filling factor of 0.58, leading to a significant asphericity; we correct its measured mass and mean density for this phenomenon. A high-resolution Lucky Imaging observation shows no evidence for faint stars close enough to contaminate the point spread function of WASP-103. Our data were obtained in the Bessell RI and the SDSS griz passbands and yield a larger planet radius at bluer optical wavelengths, to a confidence level of 7.3σ. Interpreting this as an effect of Rayleigh scattering in the planetary atmosphere leads to a measurement of the planetary mass which is too small by a factor of 5, implying that Rayleigh scattering is not the main cause of the variation of radius with wavelength.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/26703614"><span>Instantaneous Real-Time Kinematic Decimeter-Level Positioning with BeiDou Triple-Frequency Signals over Medium Baselines.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke</p> <p>2015-12-22</p> <p>Many applications, such as marine navigation, land vehicle location, etc., require real-time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is formed from the two EWL combinations for positioning. Theoretical and empirical analyses are given of the ambiguity fixing rate and the positioning accuracy of the presented method. The results indicate that the ambiguity fixing rate can be more than 98% when using BDS medium baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal-interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results.
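</p> <p>The reason EWL ambiguities can be fixed in a single epoch is their long effective wavelength. A quick check from the BDS-2 B1/B2/B3 carrier frequencies (the frequencies are standard values; the pairings below are the generic wide-lane definition, not necessarily the exact combinations used by the authors):</p> <pre><code class="language-python">
C = 299_792_458.0  # speed of light, m/s

# BDS-2 carrier frequencies, Hz
F_B1 = 1561.098e6
F_B2 = 1207.140e6
F_B3 = 1268.520e6

def wide_lane_wavelength(f_a, f_b):
    """Wavelength of the (1, -1) dual-frequency phase combination."""
    return C / abs(f_a - f_b)

print(f"EWL(B2,B3): {wide_lane_wavelength(F_B2, F_B3):.3f} m")  # ~4.88 m
print(f"WL (B1,B2): {wide_lane_wavelength(F_B1, F_B2):.3f} m")  # ~0.85 m
print(f"WL (B1,B3): {wide_lane_wavelength(F_B1, F_B3):.3f} m")  # ~1.02 m
</code></pre> <p>An ambiguity spacing of roughly 4.9 m is easy to resolve against code noise, which is what makes the instantaneous EWL fixing rate so high; the WL formed from the EWLs then carries the decimeter-level precision.</p> <p>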
We conclude that relatively high accuracy and a high fixing rate can be achieved with the triple-frequency WL method using single-epoch observations, a significant advantage compared to the traditional carrier-smoothed code differential positioning method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3446201"><span>Precision of working memory for visual motion sequences and transparent motion surfaces</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zokaei, Nahid; Gorgoraptis, Nikos; Bahrami, Bahador; Bays, Paul M; Husain, Masud</p> <p>2012-01-01</p> <p>Recent studies investigating working memory for location, colour and orientation support a dynamic resource model. We examined whether this might also apply to motion, using random dot kinematograms (RDKs) presented sequentially or simultaneously. Mean precision for motion direction declined as sequence length increased, with precision being lower for earlier RDKs. Two alternative models of working memory were compared specifically to distinguish between the contributions of different sources of error that corrupt memory (Zhang & Luck (2008) vs. Bays et al (2009)). The latter provided a significantly better fit for the data, revealing that the decrease in memory precision for earlier items is explained by an increase in interference from other items in a sequence, rather than random guessing or a temporal decay of information. Misbinding feature attributes is an important source of error in working memory. Precision of memory for motion direction decreased when two RDKs were presented simultaneously as transparent surfaces, compared to sequential RDKs. However, precision was enhanced when one motion surface was prioritized, demonstrating that selective attention can improve recall precision. These results are consistent with a resource model that can be used as a general conceptual framework for understanding working memory across a range of visual features. PMID:22135378</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=345592"><span>Using experimental design and spatial analyses to improve the precision of NDVI estimates in upland cotton field trials</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS's Scientific Manuscript database</a></p> <p></p> <p></p> <p>Controlling for spatial variability is important in high-throughput phenotyping studies that enable large numbers of genotypes to be evaluated across time and space.
In the current study, we compared the efficacy of different experimental designs and spatial models in the analysis of canopy spectral...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://eric.ed.gov/?q=incredibles&pg=5&id=EJ990082"><span>Expert Anticipatory Skill in Striking Sports: A Review and a Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Muller, Sean; Abernethy, Bruce</p> <p>2012-01-01</p> <p>Expert performers in striking sports can hit objects moving at high speed with incredible precision. Exceptionally well developed anticipation skills are necessary to cope with the severe constraints on interception. In this paper, we provide a review of the empirical evidence regarding expert interception in striking sports and propose a…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2017LPICo1987.6355F"><span>Oxygen, Magnesium, and Aluminum Isotopes in the Ivuna CAI: Re-Examining High-Temperature Fractionations in CI Chondrites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Frank, D. R.; Huss, G. R.; Nagashima, K.; Zolensky, M. E.; Le, L.</p> <p>2017-07-01</p> <p>The only whole CAI preserved in the aqueously altered CI chondrites is 16O-rich and has no resolvable radiogenic Mg. Accretion of CAIs by the CI parent object(s) may limit the precision of cosmochemical models that require a CI starting composition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://ntrs.nasa.gov/search.jsp?R=19990092486&hterms=macmillan&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dmacmillan"><span>Global Velocities from VLBI</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ma, Chopo; Gordon, David; MacMillan, Daniel</p> <p>1999-01-01</p> <p>Precise geodetic Very Long Baseline Interferometry (VLBI) measurements have been made since 1979 at about 130 points on all major tectonic plates, including stable interiors and deformation zones. From the data set of about 2900 observing sessions and about 2.3 million observations, useful three-dimensional velocities can be derived for about 80 sites using an incremental least-squares adjustment of terrestrial, celestial, Earth rotation and site/session-specific parameters. The long history and high precision of the data yield formal errors for horizontal velocity as low as 0.1 mm/yr, but the limitation on the interpretation of individual site velocities is the tie to the terrestrial reference frame. Our studies indicate that the effect of converting precise relative VLBI velocities to individual site velocities is an error floor of about 0.4 mm/yr.
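</p> <p>The two quoted error sources combine in quadrature, so the frame tie dominates the budget for individual site velocities; a one-line check:</p> <pre><code class="language-python">
from math import hypot

formal = 0.1     # mm/yr, best formal error of a relative VLBI velocity
frame_tie = 0.4  # mm/yr, floor from the terrestrial reference frame tie

print(f"effective site-velocity error: {hypot(formal, frame_tie):.2f} mm/yr")
# ~0.41 mm/yr: improving the formal error further changes the
# individual-site error only marginally until the frame tie improves.
</code></pre> <p>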
Most VLBI horizontal velocities in stable plate interiors agree with the NUVEL-1A model, but there are significant departures in Africa and the Pacific. Vertical precision is worse by a factor of 2-3, and there are significant non-zero values that can be interpreted as post-glacial rebound, regional effects, and local disturbances.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4581206"><span>Molecular Classification and Pharmacogenetics of Primary Plasma Cell Leukemia: An Initial Approach toward Precision Medicine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Simeon, Vittorio; Todoerti, Katia; La Rocca, Francesco; Caivano, Antonella; Trino, Stefania; Lionetti, Marta; Agnelli, Luca; De Luca, Luciana; Laurenzana, Ilaria; Neri, Antonino; Musto, Pellegrino</p> <p>2015-01-01</p> <p>Primary plasma cell leukemia (pPCL) is a rare and aggressive variant of multiple myeloma (MM) which may represent a valid model for high-risk MM. This disease is associated with a very poor prognosis which, unfortunately, has not improved significantly during the last three decades. New high-throughput technologies have allowed a better understanding of the molecular basis of this disease and have moved the field toward risk stratification, providing insights for targeted therapy studies. This knowledge, together with the pharmacogenetic profiles of new and old agents with respect to efficacy and safety, could help move clinical decisions toward precision medicine and a better clinical outcome for these patients. In this review, we describe the available literature concerning the genomic characterization and pharmacogenetics of plasma cell leukemia (PCL). PMID:26263974</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/26263974"><span>Molecular Classification and Pharmacogenetics of Primary Plasma Cell Leukemia: An Initial Approach toward Precision Medicine.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Simeon, Vittorio; Todoerti, Katia; La Rocca, Francesco; Caivano, Antonella; Trino, Stefania; Lionetti, Marta; Agnelli, Luca; De Luca, Luciana; Laurenzana, Ilaria; Neri, Antonino; Musto, Pellegrino</p> <p>2015-07-30</p> <p>Primary plasma cell leukemia (pPCL) is a rare and aggressive variant of multiple myeloma (MM) which may represent a valid model for high-risk MM. This disease is associated with a very poor prognosis which, unfortunately, has not improved significantly during the last three decades. New high-throughput technologies have allowed a better understanding of the molecular basis of this disease and have moved the field toward risk stratification, providing insights for targeted therapy studies. This knowledge, together with the pharmacogenetic profiles of new and old agents with respect to efficacy and safety, could help move clinical decisions toward precision medicine and a better clinical outcome for these patients.
In this review, we describe the available literature concerning the genomic characterization and pharmacogenetics of plasma cell leukemia (PCL).</p> </li> </ol> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/27775671"><span>Moving Object Detection Using Scanning Camera on a High-Precision Intelligent Holder.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Shuoyang; Xu, Tingfa; Li, Daqun; Zhang, Jizhou; Jiang, Shenwang</p> <p>2016-10-21</p> <p>In moving object detection for intelligent visual surveillance, scenes with complex backgrounds inevitably appear. Traditional methods such as "frame difference" and "optical flow" may not be able to deal with such scenes well, so we use a modified background modeling algorithm. In this paper, we use edge detection to obtain an edge-difference image, which enhances robustness to illumination variation. We then use a "multi-block temporal-analyzing LBP (Local Binary Pattern)" algorithm for segmentation. Finally, connected-component analysis is used to locate the object.
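</p> <p>A minimal sketch of this style of pipeline using OpenCV; note that a plain edge-difference plus threshold stands in for the authors' multi-block temporal-analyzing LBP segmentation, which is not reproduced here:</p> <pre><code class="language-python">
import cv2

def detect_moving_objects(prev_gray, curr_gray, min_area=200):
    """Edge-difference detection followed by connected-component labelling."""
    # Edge maps make the differencing more robust to illumination changes.
    edges_prev = cv2.Canny(prev_gray, 100, 200)
    edges_curr = cv2.Canny(curr_gray, 100, 200)
    diff = cv2.absdiff(edges_curr, edges_prev)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY)

    # Connected components locate candidate moving objects.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
</code></pre> <p>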
We also present a hardware platform whose core consists of DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) boards together with the high-precision intelligent holder.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2015AGUFM.V53H..02W"><span>Calibrating Late Cretaceous Terrestrial Cyclostratigraphy with High-precision U-Pb Zircon Geochronology: Qingshankou Formation of the Songliao Basin, China</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, T.; Ramezani, J.; Wang, C.</p> <p>2015-12-01</p> <p>A continuous succession of Late Cretaceous lacustrine strata has been recovered from the SK-I south (SK-Is) and SK-I north (SK-In) boreholes in the long-lived Cretaceous Songliao Basin in Northeast China. Establishing a high-resolution chronostratigraphic framework is a prerequisite for integrating the Songliao record with the global marine Cretaceous. We present high-precision U-Pb zircon geochronology by the chemical abrasion isotope dilution thermal-ionization mass spectrometry method from multiple bentonite core samples from the Late Cretaceous Qingshankou Formation in order to assess the astrochronological model for the Songliao Basin cyclostratigraphy. Our results from the SK-Is core present major improvements in precision and accuracy over the previously published geochronology and allow a cycle-level calibration of the cyclostratigraphy. The resulting chronostratigraphy suggests a good first-order agreement between the radioisotope geochronology and the established astrochronological time scale over the corresponding interval. The dated bentonite beds near the 1780 m depth straddle a prominent oil shale layer of the Qingshankou Formation, which records a basin-wide lake anoxic event (LAE1), providing a direct age constraint for the LAE1. The latter appears to coincide in time with the Late Cretaceous (Turonian) global sea level change event Tu4, presently constrained at 91.8 Ma.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2016IJSyS..47.3078M"><span>A robust and high precision optimal explicit guidance scheme for solid motor propelled launch vehicles with thrust and drag uncertainty</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.</p> <p>2016-10-01</p> <p>A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. This was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal-accuracy window for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point.
A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. The guidance law has been successfully validated in nonlinear six-degree-of-freedom simulation studies, including the design of an inner-loop autopilot, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to have good robustness in perturbed cases as well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2017PhRvD..95a5023L"><span>New neutrino physics and the altered shapes of solar neutrino spectra</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lopes, Ilídio</p> <p>2017-01-01</p> <p>Neutrinos coming from the Sun's core have been measured with high precision, and fundamental neutrino oscillation parameters have been determined with good accuracy. In this work, we estimate the impact that a new neutrino physics model, the so-called generalized Mikheyev-Smirnov-Wolfenstein (MSW) oscillation mechanism, has on the shapes of some of the leading solar neutrino spectra, some of which will be partially tested by the next generation of solar neutrino experiments. In these calculations, we use a high-precision standard solar model in good agreement with helioseismology data. We found that the neutrino spectra of the different solar nuclear reactions of the pp chains and carbon-nitrogen-oxygen cycle have quite distinct sensitivities to the new neutrino physics. The hep and 8B neutrino spectra are the ones whose shapes are most affected when neutrinos interact with quarks in addition to electrons. The shapes of the 15O and 17F neutrino spectra are also modified, although in these cases the impact is much smaller. Finally, the impact on the shapes of the pp and 13N neutrino spectra is practically negligible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/26833260"><span>COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun</p> <p>2016-05-05</p> <p>An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure.
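</p> <p>For reference, the F-measure used in this comparison is the harmonic mean of precision and recall; the caller values below are placeholders, not numbers from the paper:</p> <pre><code class="language-python">
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical comparison of two SV callers on the same truth set:
print(f"caller A: F1 = {f_measure(0.85, 0.80):.3f}")
print(f"caller B: F1 = {f_measure(0.70, 0.78):.3f}")
</code></pre> <p>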
We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while that of the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/21155802"><span>Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tarone, Aaron M; Foran, David R</p> <p>2011-01-01</p> <p>Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/docs/citations/ADA237073"><span>Technology Insertion (TI)/Industrial Process Improvement (IPI) Task Order Number 1. Quick Fix Plan for WR-ALC, 7 RCC's</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1989-09-25</p> <p>Orders and test specifications. Some mandatory replacement of high-failure items is directed by Technical Orders to extend MTBF. Precision bearing and...Experience is very high, but natural attrition is reducing the numbers faster than training is furnishing younger mechanics. Surge conditions would be...model validation run output revealed that utilization of equipment is very low and manpower is high.
Based on this analysis and the brainstorming…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.osti.gov/biblio/22612671-overall-picture-cascade-gamma-decay-neutron-resonances-within-modified-practical-model"><span>Overall picture of the cascade gamma decay of neutron resonances within a modified practical model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Jovancevic, N., E-mail: nikola.jovancevic@uns.ac.rs</p> <p></p> <p>The intensities of two-step cascades in 43 nuclei of mass number in the range of 28 ≤ A ≤ 200 were approximated to a high degree of precision within a modified version of the practical cascade-gamma-decay model introduced earlier. In this version, the rate of the decrease in the model-dependent density of vibrational levels has the same value for any Cooper pair undergoing breakdown. The most probable values of radiative strength functions both for E1 and for M1 transitions are determined by using one or two peaks against a smooth model dependence on the gamma-transition energy. The statement that the thresholds for the breaking of Cooper pairs are higher for spherical than for deformed nuclei is a basic result of the respective analysis. The parameters of the cascade-decay process are now determined to a precision that makes it possible to observe the systematic distinctions between them for nuclei characterized by different parities of neutrons and protons.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2013ESASP.722E.202J"><span>Precise Orbit Determination Of Low Earth Satellites At AIUB Using GPS And SLR Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jaggi, A.; Bock, H.; Thaller, D.; Sosnica, K.; Meyer, U.; Baumann, C.; Dach, R.</p> <p>2013-12-01</p> <p>An ever increasing number of low Earth orbiting (LEO) satellites is, or will be, equipped with retro-reflectors for Satellite Laser Ranging (SLR) and on-board receivers to collect observations from Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and, in the future, the Russian GLONASS and the European Galileo systems. At the Astronomical Institute of the University of Bern (AIUB), LEO precise orbit determination (POD) using either GPS or SLR data is performed for a wide range of applications for satellites at different altitudes. For this purpose, the classical numerical integration techniques, as also used for dynamic orbit determination of satellites at high altitudes, are extended by pseudo-stochastic orbit modeling techniques to efficiently cope with potential force model deficiencies for satellites at low altitudes.
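</p> <p>The idea behind pseudo-stochastic modelling can be shown in one dimension: besides the deterministic initial state, small velocity pulses at preset epochs are estimated in the same least-squares adjustment and absorb unmodelled forces. A toy sketch with made-up numbers, not the AIUB implementation:</p> <pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 201)

# "Truth": constant velocity plus an unmodelled acceleration after t=50.
x_true = 5.0 + 0.2 * t + np.where(t > 50.0, 0.002 * (t - 50.0) ** 2, 0.0)
obs = x_true + rng.normal(0.0, 0.05, t.size)

# Design matrix: initial position, initial velocity, and pseudo-stochastic
# velocity pulses dv_k switched on at fixed epochs t_k (ramp basis).
pulse_epochs = np.arange(10.0, 100.0, 10.0)
A = np.column_stack(
    [np.ones_like(t), t] + [np.clip(t - tk, 0.0, None) for tk in pulse_epochs]
)
params, *_ = np.linalg.lstsq(A, obs, rcond=None)

x0, v0, pulses = params[0], params[1], params[2:]
print(f"x0={x0:.2f}  v0={v0:.3f}")
print("estimated pulses:", np.round(pulses, 4))  # non-zero mainly after t=50
print(f"rms residual: {(obs - A @ params).std():.3f}")
</code></pre> <p>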
Accuracies of better than 2 cm may be achieved by pseudo-stochastic orbit modeling for satellites at very low altitudes, such as for the GPS-based POD of the Gravity field and steady-state Ocean Circulation Explorer (GOCE).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/27721568"><span>Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Petscher, Yaacov; Mitchell, Alison M; Foorman, Barbara R</p> <p>2015-01-01</p> <p>A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared to a conditional item response model which includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5% when using the conditional item response model, with greater improvements for those of average or high ability. Implications for measurement models of speeded assessments are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5053774"><span>Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Petscher, Yaacov; Mitchell, Alison M.; Foorman, Barbara R.</p> <p>2016-01-01</p> <p>A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared to a conditional item response model which includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5% when using the conditional item response model, with greater improvements for those of average or high ability.
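</p> <p>A toy version of the idea: augment a standard 2PL item-response likelihood with a lognormal response-time term, in the spirit of hierarchical speed-accuracy models. All item parameters below are made up, and this is not the specific conditional model of the paper:</p> <pre><code class="language-python">
import numpy as np

def loglik(theta, responses, log_times, a, b, beta, sigma=0.4):
    """2PL accuracy term plus a lognormal response-time term.

    a, b  : 2PL discrimination / difficulty (hypothetical values)
    beta  : item time intensities; ability crudely shifts times down
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    acc = responses * np.log(p) + (1 - responses) * np.log(1 - p)
    rt = -0.5 * ((log_times - (beta - 0.3 * theta)) / sigma) ** 2
    return acc.sum() + rt.sum()

a = np.array([1.2, 0.8, 1.5]); b = np.array([-0.5, 0.0, 0.7])
beta = np.array([1.0, 1.2, 1.4])
resp = np.array([1, 1, 0]); logt = np.log([2.1, 3.0, 4.5])

# Grid search for the ability estimate; the response-time term sharpens
# the likelihood, which is the precision gain reported above.
grid = np.linspace(-3, 3, 601)
ll = [loglik(th, resp, logt, a, b, beta) for th in grid]
print(f"theta-hat = {grid[int(np.argmax(ll))]:.2f}")
</code></pre> <p>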
Implications for measurement models of speeded assessments are discussed. PMID:27721568</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2017PhRvE..96c3301H"><span>Numerical estimation of structure constants in the three-dimensional Ising conformal field theory through Markov chain uv sampler</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Herdeiro, Victor</p> <p>2017-09-01</p> <p>Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the most precise recent estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://ntrs.nasa.gov/search.jsp?R=20150006613&hterms=Cady&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAuthor-Name%26N%3D0%26No%3D10%26Ntt%3DCady"><span>High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg</p> <p>2011-01-01</p> <p>The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping.
This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2018EPJC...78..155B"><span>CP-violating top quark couplings at future linear e^+e^- colliders</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bernreuther, W.; Chen, L.; García, I.; Perelló, M.; Poeschl, R.; Richard, F.; Ros, E.; Vos, M.</p> <p>2018-02-01</p> <p>We study the potential of future lepton colliders to probe violation of the CP symmetry in the top quark sector. In certain extensions of the Standard Model, such as the two-Higgs-doublet model (2HDM), sizeable anomalous top quark dipole moments can arise, which may be revealed by a precise measurement of top quark pair production. We present results from detailed Monte Carlo studies for the ILC at 500 GeV and CLIC at 380 GeV and use parton-level simulations to explore the potential of high-energy operation. We find that precise measurements in e^+e^- → tt̄ production with subsequent decay to lepton plus jets final states can provide sufficient sensitivity to detect Higgs-boson-induced CP violation in a viable two-Higgs-doublet model. The potential of a linear e^+e^- collider to detect CP-violating electric and weak dipole form factors of the top quark exceeds the prospects of the HL-LHC by over an order of magnitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/28080962"><span>'Bodily precision': a predictive coding account of individual differences in interoceptive accuracy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ainley, Vivien; Apps, Matthew A J; Fotopoulou, Aikaterini; Tsakiris, Manos</p> <p>2016-11-19</p> <p>Individuals differ in their awareness of afferent information from within their bodies, which is typically assessed by a heartbeat perception measure of 'interoceptive accuracy' (IAcc). Neural and behavioural correlates of this trait have been investigated, but a theoretical explanation has yet to be presented. Building on recent models that describe interoception within the free energy/predictive coding framework, this paper applies similar principles to IAcc, proposing that individual differences in IAcc depend on 'precision' in interoceptive systems, i.e. the relative weight accorded to 'prior' representations and 'prediction errors' (that part of incoming interoceptive sensation not accounted for by priors), at various levels within the cortical hierarchy and between modalities. Attention has the effect of optimizing precision both within and between sensory modalities.
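</p> <p>In the simplest Gaussian case, the precision-weighting this account relies on reduces to scaling the prediction error by the relative precision of the incoming signal, with attention acting to raise that precision. A minimal sketch; parameter values are illustrative only:</p> <pre><code class="language-python">
def precision_weighted_update(mu_prior, pi_prior, obs, pi_obs):
    """One Gaussian belief update: the posterior mean moves toward the
    observation in proportion to the relative precision of the input."""
    gain = pi_obs / (pi_prior + pi_obs)
    return mu_prior + gain * (obs - mu_prior)

mu, obs = 60.0, 72.0          # e.g. expected vs sensed heart rate
for attention in (1.0, 4.0):  # attention scales interoceptive precision
    post = precision_weighted_update(mu, pi_prior=2.0, obs=obs,
                                     pi_obs=0.5 * attention)
    print(f"attention x{attention:.0f}: posterior = {post:.1f}")
</code></pre> <p>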
Our central assumption is that people with high IAcc are able, with attention, to prioritize interoception over other sensory modalities and can thus adjust the relative precision of their interoceptive priors and prediction errors, where appropriate, given their personal history. This characterization explains key findings within the interoception literature; links results previously seen as unrelated or contradictory; and may have important implications for understanding cognitive, behavioural and psychopathological consequences of both high and low interoceptive awareness. This article is part of the themed issue 'Interoception beyond homeostasis: affect, cognition and mental health'. © 2016 The Author(s).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3554418"><span>Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background: Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods: In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results: In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach.
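</p> <p>One way to read the informative-prior idea above: summarize the between-trial variances observed in the pairwise meta-analyses and moment-match a lognormal prior for each comparison's heterogeneity. A schematic sketch with made-up numbers, not the priors of the paper:</p> <pre><code class="language-python">
import numpy as np

# Hypothetical between-trial variance estimates from pairwise meta-analyses
tau2_pairwise = np.array([0.02, 0.05, 0.08, 0.11, 0.04])

# Moment-match a lognormal prior for tau^2 on the log scale
log_tau2 = np.log(tau2_pairwise)
mu, sd = log_tau2.mean(), log_tau2.std(ddof=1)
print(f"prior: log(tau^2) ~ Normal({mu:.2f}, {sd:.2f}^2)")

# Draws from this prior stay in a plausible range instead of wandering to
# the extremes a vague prior would permit when trial data are sparse.
rng = np.random.default_rng(7)
print("prior draws of tau^2:", np.round(np.exp(rng.normal(mu, sd, 5)), 3))
</code></pre> <p>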
Conclusions: MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/pubmed/27605429"><span>Comparison of linear and nonlinear implementation of the compartmental tissue uptake model for dynamic contrast-enhanced MRI.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kallehauge, Jesper F; Sourbron, Steven; Irving, Benjamin; Tanderup, Kari; Schnabel, Julia A; Chappell, Michael A</p> <p>2017-06-01</p> <p>Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive and compare the performance of the linear compartmental tissue uptake (CTU) model with its nonlinear version with respect to their percentage error and precision. The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU models were highly correlated (ρ ≥ 0.95). The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise. Magn Reson Med 77:2414-2423, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.</p>
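<p>The speed argument generalizes beyond the CTU model: any model that can be rewritten in linear-in-parameters form is solved in one least-squares step instead of an iterative search. A generic illustration with a log-linearized exponential, not the CTU equations themselves:</p> <pre><code class="language-python">
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 50)
y = 3.0 * np.exp(-0.8 * t) * np.exp(rng.normal(0.0, 0.02, t.size))

# Nonlinear: iterative fit of y = A * exp(-k t)
(A_nl, k_nl), _ = curve_fit(lambda t, A, k: A * np.exp(-k * t), t, y,
                            p0=(1.0, 1.0))

# Linear: a single lstsq solve of log y = log A - k t
M = np.column_stack([np.ones_like(t), -t])
(logA, k_lin), *_ = np.linalg.lstsq(M, np.log(y), rcond=None)

print(f"nonlinear: A={A_nl:.3f}, k={k_nl:.3f}")
print(f"linear   : A={np.exp(logA):.3f}, k={k_lin:.3f}")
# Near-identical parameters, but the linear solve is one closed-form step,
# which is the source of the large speedups reported above.
</code></pre>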
</li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/1998SPIE.3369...14D"><span>Introduction to multiresolution modeling (MRM) with an example involving precision fires</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Davis, Paul K.; Bigelow, James H.</p> <p>1998-08-01</p> <p>In this paper we review motivations for multilevel resolution modeling (MRM) within a single model, an integrated hierarchical family of models, or both. We then present a new depiction of consistency criteria for models at different levels. After describing our hypotheses for studying the process of MRM with examples, we define a simple but policy-relevant problem involving the use of precision fires to halt an invading army. We then illustrate MRM with a sequence of abstractions suggested by formal theory, visual representation, and approximation. We milk the example for insights about why MRM is different and often difficult, and how it might be accomplished more routinely. It should be feasible even in complex systems such as JWARS and JSIMS, but it is by no means easy. Comprehensive MRM designs are unlikely. It is useful to take the view that some MRM is a great deal better than none and that approximate MRM relationships are often quite adequate. Overall, we conclude that high-quality MRM requires new theory, design practices, modeling tools, and software tools, all of which will take some years to develop. Current object-oriented programming practices may actually be a hindrance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abs/2014JPhCS.486a2002V"><span>High precision spectroscopy and imaging in THz frequency range</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vaks, Vladimir L.</p> <p>2014-03-01</p> <p>Application of microwave methods to the development of the THz frequency range has resulted in the elaboration of high-precision THz spectrometers based on nonstationary effects. The spectrometers' characteristics (spectral resolution and sensitivity) meet the requirements for high-precision analysis. The gas analyzers, based on the high-precision spectrometers, have been successfully applied for analytical investigations of gas impurities in high-purity substances. These investigations can be carried out both in an absorption cell and in a reactor. The devices can be used for ecological monitoring and for detecting the components of chemical weapons and explosives in the atmosphere. A further broad field for THz investigations is medical application.
Using the THz spectrometers developed, one can detect markers of some diseases in exhaled air.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" href="https://eric.ed.gov/?q=gaussian&pg=4&id=EJ920294"><span>A Linear Variable-θ Model for Measuring Individual Differences in Response Precision</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ferrando, Pere J.</p> <p>2011-01-01</p> <p>Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…</p> </li> </ol> </body> </html>