Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math than does time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
Copyright © 2015 Elsevier B.V. All rights reserved.
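The study's key claim, that approximate number precision predicts math even when time precision and working memory are controlled for, rests on partial correlation. A minimal sketch of that statistic on simulated scores (all variable names and effect sizes below are illustrative assumptions, not the study's data):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing the covariates
    out of both via ordinary least squares."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 200
wm = rng.normal(size=n)                        # working-memory score
time_prec = 0.5 * wm + rng.normal(size=n)      # time-discrimination precision
num_prec = 0.4 * wm + rng.normal(size=n)       # number-discrimination precision
math = 0.6 * num_prec + 0.3 * wm + rng.normal(size=n)

# Number precision still predicts math with time precision and WM controlled
r = partial_corr(math, num_prec, [time_prec, wm])
```

Here the simulated math score has a direct dependence on number precision, so the partial correlation stays well above zero even after the covariates are removed.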
Precise terrestrial time: A means for improved ballistic missile guidance analysis
NASA Technical Reports Server (NTRS)
Ehrsam, E. E.; Cresswell, S. A.; Mckelvey, G. R.; Matthews, F. L.
1978-01-01
An approach developed to improve ground instrumentation time-tagging accuracy and adapted to support the Minuteman ICBM program is described. The Timing Insertion Unit (TIU) technique produces a telemetry data time-tagging resolution of one tenth of a microsecond. After corrections, metric position and velocity data (range, azimuth, elevation, and range rate), also used in missile guidance system analysis, can be correlated to within ten microseconds of the telemetry guidance data. This requires precise timing synchronization between the metric and telemetry instrumentation sites, which can be achieved by using radar automatic phasing system time correlation methods. Other time correlation techniques, such as Television (TV) Line-10 and Geostationary Operational Environmental Satellite (GOES) terrestrial timing receivers, are also considered.
Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli
Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.
2010-01-01
Natural visual stimuli have highly structured spatial and temporal properties, which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise, on a time scale of 10–25 ms, both within single cells and across cells within a population. This time scale, established by non-stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrast and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike-history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
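The GLM described above combines a stimulus filter with a spike-history filter inside an exponential nonlinearity. A toy simulation of that model class (the filters, time constants, and white-noise stimulus standing in for a natural scene are all made-up assumptions, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000                      # number of 1 ms time bins (illustrative)
stim = rng.normal(size=T)     # white-noise stand-in for a natural stimulus

k = np.exp(-np.arange(20) / 5.0)         # stimulus filter, ~20 ms decay
h = -5.0 * np.exp(-np.arange(10) / 2.0)  # spike-history filter (refractory)

drive = np.convolve(stim, k)[:T]         # stimulus-driven input
spikes = np.zeros(T)
for t in range(T):
    recent = spikes[max(0, t - 10):t][::-1]          # most recent bin first
    eta = -3.0 + drive[t] + recent @ h[:len(recent)]  # log conditional rate
    spikes[t] = float(rng.poisson(np.exp(eta)) > 0)   # at most one spike/bin
```

The negative history filter suppresses firing just after a spike, which is the mechanism the model uses to sharpen spike timing beyond what the stimulus alone dictates.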
Millisecond-timescale local network coding in the rat primary somatosensory cortex.
Eldawlatly, Seif; Oweiss, Karim G
2011-01-01
Correlation among neocortical neurons is thought to play an indispensable role in mediating sensory processing of external stimuli. The role of temporal precision in this correlation has been hypothesized to enhance information flow along sensory pathways. Its role in mediating the integration of information at the output of these pathways, however, remains poorly understood. Here, we examined spike timing correlation between simultaneously recorded layer V neurons within and across columns of the primary somatosensory cortex of anesthetized rats during unilateral whisker stimulation. We used Bayesian statistics and information theory to quantify the causal influence between the recorded cells with millisecond precision. For each stimulated whisker, we inferred stable, whisker-specific, dynamic Bayesian networks over many repeated trials, with network similarity of 83.3±6% within whisker, compared to only 50.3±18% across whiskers. These networks further provided information about whisker identity that was approximately 6 times higher than what was provided by the latency to first spike and 13 times higher than what was provided by the spike count of individual neurons examined separately. Furthermore, prediction of individual neurons' precise firing conditioned on knowledge of putative pre-synaptic cell firing was 3 times higher than predictions conditioned on stimulus onset alone. Taken together, these results suggest the presence of a temporally precise network coding mechanism that integrates information across neighboring columns within layer V about vibrissa position and whisking kinetics to mediate whisker movement by motor areas innervated by layer V.
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381
Development and test of photon counting lidar
NASA Astrophysics Data System (ADS)
Wang, Chun-hui; Wang, Ao-you; Tao, Yu-liang; Li, Xu; Peng, Huan; Meng, Pei-bei
2018-02-01
In order to satisfy the application requirements of spaceborne three-dimensional imaging lidar, a prototype of a non-scanning multi-channel lidar based on receiver field-of-view segmentation was designed and developed. A high-repetition-frequency micro-pulse laser, an optical fiber array, and Geiger-mode APDs, in combination with time-correlated single-photon counting technology, were adopted to achieve multi-channel detection. Ranging experiments were carried out outdoors. Under low-echo-photon conditions, target photon counts were time-correlated while noise photon counts were random. Detection probability and range precision versus threshold were characterized; range precision improved from 0.44 to 0.11 as the threshold increased from 4 to 8.
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ≈70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
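The core measurement in this approach, a relative arrival time with subsample precision from waveform cross correlation, can be sketched as follows. The synthetic wavelet and integer shift are illustrative; real implementations add windowing, filtering, and the quality thresholds the article discusses:

```python
import numpy as np

def subsample_delay(a, b):
    """Lag of b relative to a, in samples, from the full cross-correlation
    with three-point parabolic interpolation around the peak."""
    xc = np.correlate(b, a, mode="full")
    i = int(np.argmax(xc))
    if 0 < i < len(xc) - 1:                      # parabolic refinement
        y0, y1, y2 = xc[i - 1], xc[i], xc[i + 1]
        i = i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return i - (len(a) - 1)

t = np.arange(256)
wavelet = np.exp(-((t - 100) / 12.0) ** 2) * np.sin(0.4 * t)
shifted = np.roll(wavelet, 5)        # second "station" record, delayed 5 samples
lag = subsample_delay(wavelet, shifted)
```

The parabolic fit is what pushes precision below one sample; with noisy, dissimilar waveforms the peak correlation coefficient would also be checked against a threshold before the pick is accepted.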
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, C.G.
1993-03-01
An abrupt lithofacies change between calcareous shale and non-calcareous shale occurs in strata deposited in the mid-Cretaceous Greenhorn Seaway in the extreme southeastern corner of Montana, U.S.A. These strata, north of the Black Hills, have previously been miscorrelated due to the extreme difficulty in locating unique continuous marker beds. The Supplemental Graphic Correlation techniques of Lucy Edwards, which expand on those of Shaw, were employed in the difficult task of correlating across the Little Missouri River Valley. Precise correlation was necessary in order to interpret the cause of the lithofacies change. Edwards's use of non-unique event marker beds and the side-by-side graph method proved to be invaluable tools. Precise correlation across the facies change was accomplished using a combination of bentonite beds, calcarenite beds, ammonite species, and foraminiferal and calcareous nannofossil assemblages. Supplemental Graphic Correlation techniques allowed the definition of twenty-five time slices and permitted the identification of an ocean front during each of these time slices.
Femtosecond Photon-Counting Receiver
NASA Technical Reports Server (NTRS)
Krainak, Michael A.; Rambo, Timothy M.; Yang, Guangning; Lu, Wei; Numata, Kenji
2016-01-01
An optical correlation receiver is described that provides ultra-precise distance and/or time/pulse-width measurements even for weak (single-photon) and short (femtosecond) optical signals. A new type of optical correlation receiver uses a fourth-order (intensity) interferometer to provide micron-level distance measurements even for weak and short optical signals. The optical correlator uses a low-noise integrating detector that can resolve photon number. The correlation (range as a function of path delay) is calculated from the variance of the photon-number difference of the optical signals on the two detectors. Our preliminary proof-of-principle data (using a short-pulse diode laser transmitter) demonstrate a precision of tens of microns.
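The correlation-from-variance step described above follows from the identity Var(n1 - n2) = Var(n1) + Var(n2) - 2 Cov(n1, n2), so the covariance between the two detectors can be recovered from three variances. A sketch with simulated Poisson photon counts (the detector model and rates are illustrative assumptions, not the instrument's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
shots = 100_000
common = rng.poisson(4.0, shots)       # correlated (overlapping-pulse) photons
n1 = common + rng.poisson(1.0, shots)  # detector 1: shared part + its own noise
n2 = common + rng.poisson(1.0, shots)  # detector 2

# Cov(n1, n2) from the variance of the photon-number difference:
# Var(n1 - n2) = Var(n1) + Var(n2) - 2 Cov(n1, n2)
cov_est = 0.5 * (n1.var() + n2.var() - (n1 - n2).var())
```

In this toy model the true covariance equals the variance of the shared Poisson component (4.0), and the estimate converges to it as the number of shots grows.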
Single neuron firing properties impact correlation-based population coding
Hong, Sungho; Ratté, Stéphanie; Prescott, Steven A.; De Schutter, Erik
2012-01-01
Correlated spiking has been widely observed but its impact on neural coding remains controversial. Correlation arising from co-modulation of rates across neurons has been shown to vary with the firing rates of individual neurons. This translates into rate and correlation being equivalently tuned to the stimulus; under those conditions, correlated spiking does not provide information beyond that already available from individual neuron firing rates. Such correlations are irrelevant and can reduce coding efficiency by introducing redundancy. Using simulations and experiments in rat hippocampal neurons, we show here that pairs of neurons receiving correlated input also exhibit correlations arising from precise spike-time synchronization. Contrary to rate co-modulation, spike-time synchronization is unaffected by firing rate, thus enabling synchrony- and rate-based coding to operate independently. The type of output correlation depends on whether intrinsic neuron properties promote integration or coincidence detection: “ideal” integrators (with spike generation sensitive to stimulus mean) exhibit rate co-modulation whereas “ideal” coincidence detectors (with spike generation sensitive to stimulus variance) exhibit precise spike-time synchronization. Pyramidal neurons are sensitive to both stimulus mean and variance, and thus exhibit both types of output correlation proportioned according to which operating mode is dominant. Our results explain how different types of correlations arise based on how individual neurons generate spikes, and why spike-time synchronization and rate co-modulation can encode different stimulus properties. Our results also highlight the importance of neuronal properties for population-level coding insofar as neural networks can employ different coding schemes depending on the dominant operating mode of their constituent neurons. PMID:22279226
Aulenbach, Brent T.
2013-01-01
A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season, and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals caused the observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing calibration period length.
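AMLE and the composite method are beyond a short sketch, but the underlying regression-model approach, a log-log concentration-discharge rating curve calibrated on sparse samples and applied to the full daily discharge record with a retransformation bias correction, can be illustrated on synthetic data (all coefficients and the 15-day sampling interval below are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
days = 365
logQ = rng.normal(4.0, 0.8, days)                    # log daily discharge
logC = 1.2 + 0.6 * logQ + rng.normal(0, 0.3, days)   # log daily concentration

idx = np.arange(0, days, 15)                         # sparse "sampled" days
A = np.column_stack([np.ones(idx.size), logQ[idx]])
b0, b1 = np.linalg.lstsq(A, logC[idx], rcond=None)[0]  # fit rating curve

resid = logC[idx] - (b0 + b1 * logQ[idx])
smear = np.mean(np.exp(resid))       # Duan smearing retransformation correction

est_conc = np.exp(b0 + b1 * logQ) * smear            # predicted daily conc.
load = np.sum(est_conc * np.exp(logQ))               # annual load, sum of C*Q
```

The smearing factor corrects the bias introduced by exponentiating log-space predictions; serial correlation in `resid`, the article's focus, would make the naive standard errors of such a fit too optimistic.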
Metallic scattering lifetime measurements with terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Lea, Graham Bryce
The momentum scattering lifetime is a fundamental parameter of metallic conduction that can be measured with terahertz time-domain spectroscopy. This technique has an important strength over optical reflectance spectroscopy: it is capable of measuring both the phase and the amplitude of the probing radiation. This allows simultaneous, independent measurements of the scattering lifetime and resistivity. Broadly, it is the precision of the phase measurement that determines the precision of scattering lifetime measurements. This thesis describes milliradian-level phase measurement refinements in the experimental technique and measures the conductivity anisotropy in the correlated electron system CaRuO3. These phase measurement refinements translate to femtosecond-level refinements in scattering lifetime measurements of thin metallic films. Keywords: terahertz time-domain spectroscopy, calcium ruthenate, ruthenium oxides, correlated electrons, experimental technique.
van Heeswijk, Miriam M; Lambregts, Doenja M J; Maas, Monique; Lahaye, Max J; Ayas, Z; Slenter, Jos M G M; Beets, Geerard L; Bakers, Frans C H; Beets-Tan, Regina G H
2017-06-01
The apparent diffusion coefficient (ADC) is a potential prognostic imaging marker in rectal cancer. Typically, mean ADC values are used, derived from precise manual whole-volume tumor delineations by experts. The aim was first to explore whether non-precise circular delineation combined with histogram analysis can be a less cumbersome alternative for acquiring similar ADC measurements, and second to explore whether histogram analyses provide additional prognostic information. Thirty-seven patients who underwent a primary staging MRI including diffusion-weighted imaging (DWI; b0, 25, 50, 100, 500, 1000; 1.5 T) were included. Volumes of interest (VOIs) were drawn on b1000-DWI: (a) precise delineation, manually tracing tumor boundaries (2 expert readers), and (b) non-precise delineation, drawing circular VOIs with a wide margin around the tumor (2 non-experts). Mean ADC and histogram metrics (mean, min, max, median, SD, skewness, kurtosis, 5th-95th percentiles) were derived from the VOIs, and delineation time was recorded. Measurements were compared between the two methods and correlated with prognostic outcome parameters. Median delineation time reduced from 47-165 s (precise) to 21-43 s (non-precise). The 45th percentile of the non-precise delineation showed the best correlation with the mean ADC from the precise delineation as the reference standard (ICC 0.71-0.75). None of the mean ADC or histogram parameters showed significant prognostic value; only the total tumor volume (VOI) was significantly larger in patients with positive clinical N stage and mesorectal fascia involvement. When performing non-precise tumor delineation, histogram analysis (specifically the 45th ADC percentile) may be used as an alternative to obtain ADC values similar to those from precise whole-tumor delineation. Histogram analyses are not beneficial for obtaining additional prognostic information.
Precise orbit determination for NASA's earth observing system using GPS (Global Positioning System)
NASA Technical Reports Server (NTRS)
Williams, B. G.
1988-01-01
An application of a precision orbit determination technique for NASA's Earth Observing System (EOS) using the Global Positioning System (GPS) is described. This technique allows the geometric information from measurements of GPS carrier phase and P-code pseudo-range to be exploited while minimizing requirements for precision dynamical modeling. The method combines geometric and dynamic information to determine the spacecraft trajectory; the weight on the dynamic information is controlled by adjusting fictitious spacecraft accelerations in three dimensions, which are treated as first-order, exponentially time-correlated stochastic processes. By varying the time correlation and uncertainty of the stochastic accelerations, the technique can range from purely geometric to purely dynamic. Performance estimates for this technique as applied to the orbit geometry planned for the EOS platforms indicate that decimeter accuracies for EOS orbit position may be obtainable. The sensitivity of the predicted orbit uncertainties to model errors for station locations, nongravitational platform accelerations, and Earth gravity is also presented.
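A first-order, exponentially time-correlated stochastic process (a first-order Gauss-Markov process), as used here for the fictitious accelerations, can be propagated in discrete time as a[k+1] = phi * a[k] + w[k] with phi = exp(-dt/tau). A sketch (the tau, sigma, and dt values are illustrative, not EOS parameters):

```python
import numpy as np

def gauss_markov(tau, sigma, dt, n, rng):
    """First-order exponentially time-correlated (Gauss-Markov) process:
    a[k+1] = phi * a[k] + w[k], phi = exp(-dt/tau), steady-state std sigma."""
    phi = np.exp(-dt / tau)
    w_std = sigma * np.sqrt(1.0 - phi**2)   # keeps the variance stationary
    a = np.zeros(n)
    for k in range(n - 1):
        a[k + 1] = phi * a[k] + w_std * rng.normal()
    return a

rng = np.random.default_rng(4)
# e.g. a 600 s correlation time sampled every 10 s, 1e-7 (arbitrary units) std
acc = gauss_markov(tau=600.0, sigma=1e-7, dt=10.0, n=5000, rng=rng)
```

Shrinking tau toward zero makes the accelerations white noise (the purely geometric limit); growing tau and shrinking sigma recovers the purely dynamic limit, which is exactly the tuning knob the abstract describes.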
Precision measurements with LPCTrap at GANIL
NASA Astrophysics Data System (ADS)
Liénard, E.; Ban, G.; Couratin, C.; Delahaye, P.; Durand, D.; Fabian, X.; Fabre, B.; Fléchard, X.; Finlay, P.; Mauger, F.; Méry, A.; Naviliat-Cuncic, O.; Pons, B.; Porobic, T.; Quéméner, G.; Severijns, N.; Thomas, J. C.; Velten, Ph.
2015-11-01
The experimental achievements and the results obtained so far with the LPCTrap device installed at GANIL are presented. The apparatus is dedicated to the study of the weak interaction at low energy by means of precise measurements of the β-ν angular correlation parameter in nuclear β decays. So far, the data collected with three isotopes have enabled the determination, for the first time, of the charge-state distributions of the recoiling ions induced by the shakeoff process. The analysis is presently being refined to deduce the correlation parameters, with the potential of improving both the constraint deduced at low energy on exotic tensor currents (6He1+) and the precision on the V_ud element of the quark-mixing matrix (35Ar1+ and 19Ne1+) deduced from the mirror transitions dataset.
J-GFT NMR for precise measurement of mutually correlated nuclear spin-spin couplings.
Atreya, Hanudatta S; Garcia, Erwin; Shen, Yang; Szyperski, Thomas
2007-01-24
G-matrix Fourier transform (GFT) NMR spectroscopy is presented for accurate and precise measurement of chemical shifts and nuclear spin-spin couplings correlated according to spin system. The new approach, named "J-GFT NMR", is based on a largely extended GFT NMR formalism and promises to have a broad impact on projection NMR spectroscopy. Specifically, constant-time J-GFT (6,2)D (HA-CA-CO)-N-HN was implemented for simultaneous measurement of five mutually correlated NMR parameters, that is, 15N backbone chemical shifts and the four one-bond spin-spin couplings 13Calpha-1Halpha, 13Calpha-13C', 15N-13C', and 15N-1HN. The experiment was applied for measuring residual dipolar couplings (RDCs) in an 8 kDa protein Z-domain aligned with Pf1 phages. Comparison with RDC values extracted from conventional NMR experiments reveals that RDCs are measured with high precision and accuracy, which is attributable to the facts that (i) the use of constant-time evolution ensures that signals do not broaden whenever multiple RDCs are jointly measured in a single dimension and (ii) RDCs are multiply encoded in the multiplets arising from the joint sampling. This corresponds to measuring the couplings multiple times in a statistically independent manner. A key feature of J-GFT NMR, i.e., the correlation of couplings according to spin systems without reference to sequential resonance assignments, promises to be particularly valuable for rapid identification of backbone conformation and classification of protein fold families on the basis of statistical analysis of dipolar couplings.
The MTV experiment: a test of time reversal symmetry using polarized 8Li
NASA Astrophysics Data System (ADS)
Murata, J.; Baba, H.; Behr, J. A.; Hirayama, Y.; Iguri, T.; Ikeda, M.; Kato, T.; Kawamura, H.; Kishi, R.; Levy, C. D. P.; Nakaya, Y.; Ninomiya, K.; Ogawa, N.; Onishi, J.; Openshaw, R.; Pearson, M.; Seitaibashi, E.; Tanaka, S.; Tanuma, R.; Totsuka, Y.; Toyoda, T.
2014-01-01
The MTV (Mott Polarimetry for T-Violation) experiment at TRIUMF-ISAC (Isotope Separator and ACcelerator), which aims to achieve the highest-precision test of time reversal symmetry in polarized nuclear beta decay by measuring a triple correlation (R-correlation), is motivated by the search for new physics beyond the Standard Model. In this experiment, the existence of non-zero transverse electron polarization is examined utilizing the analyzing power of Mott scattering from a thin metal foil. Backward-scattering electron tracks are measured using a multi-wire drift chamber for the first time. The MTV experiment was commissioned at ISAC in 2009 using an 80% polarized 8Li beam at 10^7 pps, resulting in 0.1% statistical precision on the R-parameter in the first physics run performed in 2010. A next-generation cylindrical drift chamber (CDC) is now being installed for future runs.
Single photon ranging system using two wavelengths laser and analysis of precision
NASA Astrophysics Data System (ADS)
Chen, Yunfei; He, Weiji; Miao, Zhuang; Gu, Guohua; Chen, Qian
2013-09-01
Laser ranging systems based on time-correlated single-photon counting technology and single-photon detectors feature high precision and low emitted energy. In this paper, we established a single-photon laser ranging system that uses a supercontinuum laser as the light source and two wavelengths (532 nm and 830 nm) of the echo signal as the stop signal. We propose a new method capable of improving the performance of the single-photon ranging system. The method is implemented by using two single-photon detectors to receive the two different wavelength signals simultaneously. We extracted the firings of the two detectors triggered by the same laser pulse and took the mean time of the two firings as the combined detection time-of-flight. Detection by two channels using two wavelengths effectively improves the detection precision and decreases the false-alarm probability. Finally, an experimental single-photon ranging system was established. Through extensive experiments, we measured the system precision using both single and dual wavelengths and verified the effectiveness of the method.
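Averaging two statistically independent per-pulse time-of-flight measurements reduces the timing jitter by roughly a factor of sqrt(2), which is the intuition behind the two-wavelength combination. A sketch with Gaussian jitter (the range, jitter value, and shot count are illustrative assumptions, not the experiment's numbers):

```python
import numpy as np

rng = np.random.default_rng(5)
c = 3e8                       # speed of light, m/s
true_range = 1500.0           # hypothetical target distance, m
tof = 2 * true_range / c      # round-trip time of flight, s
jitter = 300e-12              # assumed per-channel timing jitter, s

shots = 20_000
t532 = tof + rng.normal(0, jitter, shots)   # channel 1 (532 nm) arrival times
t830 = tof + rng.normal(0, jitter, shots)   # channel 2 (830 nm) arrival times
t_comb = 0.5 * (t532 + t830)                # combined per-pulse estimate

sigma_single = t532.std() * c / 2           # range precision, one channel, m
sigma_comb = t_comb.std() * c / 2           # range precision, two channels, m
```

The combined standard deviation comes out near 1/sqrt(2) of the single-channel value; in the real system the two channels also cross-check each other, which is what lowers the false-alarm probability.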
Renne, Walter; Ludlow, Mark; Fryml, John; Schurch, Zach; Mennito, Anthony; Kessler, Ray; Lauer, Abigail
2017-07-01
As digital impressions become more common and more digital impression systems are released onto the market, it is essential to systematically and objectively evaluate their accuracy. The purpose of this in vitro study was to evaluate and compare the trueness and precision of 6 intraoral scanners and 1 laboratory scanner in both sextant and complete-arch scenarios. Furthermore, time of scanning was evaluated and correlated with trueness and precision. A custom complete-arch model was fabricated with a refractive index similar to that of tooth structure. Seven digital impression systems were used to scan the custom model for both posterior sextant and complete arch scenarios. Analysis was performed using 3-dimensional metrology software to measure discrepancies between the master model and experimental casts. Of the intraoral scanners, the Planscan was found to have the best trueness and precision while the 3Shape Trios was found to have the poorest for sextant scanning (P<.001). The order of trueness for complete arch scanning was as follows: 3Shape D800 >iTero >3Shape TRIOS 3 >Carestream 3500 >Planscan >CEREC Omnicam >CEREC Bluecam. The order of precision for complete-arch scanning was as follows: CS3500 >iTero >3Shape D800 >3Shape TRIOS 3 >CEREC Omnicam >Planscan >CEREC Bluecam. For the secondary outcome evaluating the effect time has on trueness and precision, the complete- arch scan time was highly correlated with both trueness (r=0.771) and precision (r=0.771). For sextant scanning, the Planscan was found to be the most precise and true scanner. For complete-arch scanning, the 3Shape Trios was found to have the best balance of speed and accuracy. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Multichannel heterodyning for wideband interferometry, correlation and signal processing
Erskine, David J.
1999-01-01
A method of signal processing a high bandwidth signal by coherently subdividing it into many narrow bandwidth channels which are individually processed at lower frequencies in a parallel manner. Autocorrelation and correlations can be performed using reference frequencies which may drift slowly with time, reducing cost of device. Coordinated adjustment of channel phases alters temporal and spectral behavior of net signal process more precisely than a channel used individually. This is a method of implementing precision long coherent delays, interferometers, and filters for high bandwidth optical or microwave signals using low bandwidth electronics. High bandwidth signals can be recorded, mathematically manipulated, and synthesized.
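The channelization idea, mixing a wideband signal against a local oscillator and low-pass filtering so each narrow channel can be processed at low bandwidth, can be sketched digitally. The sample rate and tone frequencies below are chosen bin-centered for a clean demo and are assumptions, not values from the patent:

```python
import numpy as np

fs = 1_024_000                        # sample rate, Hz (illustrative)
t = np.arange(4096) / fs
# Wideband input: tones at 210 kHz (amplitude 1) and 340 kHz (amplitude 0.5)
wideband = np.cos(2 * np.pi * 210_000 * t) + 0.5 * np.cos(2 * np.pi * 340_000 * t)

def channelize(sig, f_lo, bw, fs, t):
    """Heterodyne one narrow channel to baseband: mix with a complex local
    oscillator at f_lo, then low-pass by zeroing FFT bins beyond bw."""
    mixed = sig * np.exp(-2j * np.pi * f_lo * t)
    S = np.fft.fft(mixed)
    f = np.fft.fftfreq(len(sig), 1 / fs)
    S[np.abs(f) > bw] = 0.0
    return np.fft.ifft(S)

ch1 = channelize(wideband, 200_000, 20_000, fs, t)  # 210 kHz tone -> 10 kHz
ch2 = channelize(wideband, 340_000, 20_000, fs, t)  # 340 kHz tone -> DC
```

After mixing, each complex-exponential component lands at half its real amplitude, so |ch1| sits at 0.5 and |ch2| at 0.25; phase adjustments applied per channel before recombination are what let the patent shape the net wideband response.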
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require a considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC proved to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
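As a loose illustration of correlation-based Bragg-peak demodulation (not the authors' FPC algorithm; the Gaussian spectrum shape, wavelength grid, and shift are invented for the example), a wavelength shift can be recovered from the lag of the cross-correlation peak between a reference and a shifted spectrum:

```python
import numpy as np

def gaussian_spectrum(wl, center, fwhm=0.2):
    """Idealized FBG reflection spectrum (Gaussian approximation)."""
    sigma = fwhm / 2.355
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def cross_correlation_shift(ref, meas, d_wl):
    """Estimate the wavelength shift from the cross-correlation peak lag."""
    corr = np.correlate(meas, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag * d_wl

wl = np.arange(1549.0, 1551.0, 0.002)   # nm grid, 2 pm sampling (assumed)
ref = gaussian_spectrum(wl, 1550.0)
meas = gaussian_spectrum(wl, 1550.1)    # simulate a 100 pm Bragg shift
shift = cross_correlation_shift(ref, meas, d_wl=0.002)
print(shift)  # ≈ 0.1 nm
```

The maximum-detection alternative would simply take `wl[np.argmax(meas)]`, which is far more sensitive to noise on individual samples than the correlation peak.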
Luo, Yiyang; Xia, Li; Xu, Zhilin; Yu, Can; Sun, Qizhen; Li, Wei; Huang, Di; Liu, Deming
2015-02-09
An optical chaos and hybrid wavelength division multiplexing/time division multiplexing (WDM/TDM) based large capacity quasi-distributed sensing network with real-time fiber fault monitoring is proposed. Chirped fiber Bragg grating (CFBG) intensity demodulation is adopted to improve the dynamic range of the measurements. Compared with the traditional sensing interrogation methods in time, radio frequency and optical wavelength domains, the measurand sensing and the precise locating of the proposed sensing network can be simultaneously interrogated by the relative amplitude change (RAC) and the time delay of the correlation peak in the cross-correlation spectrum. Assisted with the WDM/TDM technology, hundreds of sensing units could be potentially multiplexed in the multiple sensing fiber lines. Based on the proof-of-concept experiment for axial strain measurement with three sensing fiber lines, the strain sensitivity up to 0.14% RAC/με and the precise locating of the sensors are achieved. Significantly, real-time fiber fault monitoring in the three sensing fiber lines is also implemented with a spatial resolution of 2.8 cm.
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been examined previously, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Whether timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at the individual and the group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
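The scalar property described above amounts to a constant coefficient of variation (CV) across target durations. A minimal simulation (targets, CV, and sample size invented for illustration) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
targets = [10.0, 20.0, 40.0]   # seconds; hypothetical peak-interval targets
cv = 0.15                      # scalar timing: sd grows linearly with duration

cvs = []
for t in targets:
    # simulated peak response times with sd proportional to the target
    peaks = rng.normal(loc=t, scale=cv * t, size=10_000)
    cvs.append(peaks.std() / peaks.mean())
print(cvs)   # all ≈ 0.15, independent of the target duration
```

A violation of scalar timing would show up here as a CV that drifts systematically with the target duration.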
Spike timing precision of neuronal circuits.
Kilinc, Deniz; Demir, Alper
2018-06-01
Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
An improved correlation method for determining the period of a torsion pendulum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo Jie; Wang Dianhong
Considering variation of the environment temperature and inhomogeneity of the background gravitational field, an improved correlation method is proposed to determine the varying period of a torsion pendulum with high precision. Processing of experimental data shows that the uncertainty in the period determined with this method is improved about twofold over the traditional correlation method, which is significant for determining the gravitational constant with the time-of-swing method.
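The basic correlation approach to period determination (a generic sketch, not the improved method of the abstract; sampling rate, period, and noise level are invented) locates the first non-zero peak of the record's autocorrelation:

```python
import numpy as np

# Synthetic torsion-pendulum angle record: slow sinusoid plus noise.
dt = 0.5                                   # s, sampling interval (assumed)
t = np.arange(0.0, 2000.0, dt)
true_period = 432.0                        # s, a typical time-of-swing scale
rng = np.random.default_rng(1)
theta = np.sin(2 * np.pi * t / true_period) + 0.05 * rng.standard_normal(t.size)

# Autocorrelation of the record; its first peak beyond zero lag sits at
# one full oscillation period.
ac = np.correlate(theta, theta, mode="full")[t.size - 1:]
start = int(0.5 * true_period / dt)        # skip the zero-lag peak
period_est = (start + int(np.argmax(ac[start:]))) * dt
print(period_est)   # close to 432.0 s
```

Temperature drift and background-field inhomogeneity would slowly modulate `true_period`, which is exactly the regime where the abstract's improved variant pays off.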
Approximate number sense correlates with math performance in gifted adolescents.
Wang, Jinjing Jenny; Halberda, Justin; Feigenson, Lisa
2017-05-01
Nonhuman animals, human infants, and human adults all share an Approximate Number System (ANS) that allows them to imprecisely represent number without counting. Among humans, people differ in the precision of their ANS representations, and these individual differences have been shown to correlate with symbolic mathematics performance in both children and adults. For example, children with specific math impairment (dyscalculia) have notably poor ANS precision. However, it remains unknown whether ANS precision contributes to individual differences only in populations of people with lower or average mathematical abilities, or whether this link also is present in people who excel in math. Here we tested non-symbolic numerical approximation in 13- to 16-year old gifted children enrolled in a program for talented adolescents (the Center for Talented Youth). We found that in this high achieving population, ANS precision significantly correlated with performance on the symbolic math portion of two common standardized tests (SAT and ACT) that typically are administered to much older students. This relationship was robust even when controlling for age, verbal performance, and reaction times in the approximate number task. These results suggest that the Approximate Number System is linked to symbolic math performance even at the top levels of math performance.
Thermodynamic Bounds on Precision in Ballistic Multiterminal Transport
NASA Astrophysics Data System (ADS)
Brandner, Kay; Hanazato, Taro; Saito, Keiji
2018-03-01
For classical ballistic transport in a multiterminal geometry, we derive a universal trade-off relation between total dissipation and the precision, at which particles are extracted from individual reservoirs. Remarkably, this bound becomes significantly weaker in the presence of a magnetic field breaking time-reversal symmetry. By working out an explicit model for chiral transport enforced by a strong magnetic field, we show that our bounds are tight. Beyond the classical regime, we find that, in quantum systems far from equilibrium, the correlated exchange of particles makes it possible to exponentially reduce the thermodynamic cost of precision.
Maldacena, Juan; Shenker, Stephen H.; Stanford, Douglas
2016-08-17
We conjecture a sharp bound on the rate of growth of chaos in thermal quantum systems with a large number of degrees of freedom. Chaos can be diagnosed using an out-of-time-order correlation function closely related to the commutator of operators separated in time. We conjecture that the influence of chaos on this correlator can develop no faster than exponentially, with Lyapunov exponent λ L ≤ 2πk B T/ℏ. We give a precise mathematical argument, based on plausible physical assumptions, establishing this conjecture.
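Schematically (following the abstract; the commutator-squared form is the standard out-of-time-order diagnostic), the conjecture bounds the exponential growth of the correlator:

```latex
C(t) \;=\; -\,\big\langle\, [\,W(t),\,V\,]^{2} \,\big\rangle_{\beta}
\;\propto\; e^{\lambda_L t},
\qquad
\lambda_L \;\le\; \frac{2\pi k_B T}{\hbar},
```

where $W(t)$ and $V$ are generic operators separated in time and the expectation is taken in the thermal state at temperature $T$.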
Temporal precision and the capacity of auditory-verbal short-term memory.
Gilbert, Rebecca A; Hitch, Graham J; Hartley, Tom
2017-12-01
The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel "rehearsal-probe" task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval. After an unpredictable delay, a tone prompted report of the item being rehearsed at that moment. An initial experiment showed cyclic distributions of item responses over time, with peaks preserving serial order and broad, overlapping tails. The spread of the response distributions increased with additional memory load and correlated negatively with participants' auditory digit spans. A second study replicated the negative correlation and demonstrated its specificity to AVSTM by controlling for differences in visuo-spatial STM and nonverbal IQ. The results are consistent with the idea that a common resource underpins both the temporal precision and capacity of AVSTM. The rehearsal-probe task may provide a valuable tool for investigating links between temporal processing and AVSTM capacity in the context of speech and language abilities.
Rasmuson, James O; Roggli, Victor L; Boelter, Fred W; Rasmuson, Eric J; Redinger, Charles F
2014-01-01
A detailed evaluation of the correlation and linearity of industrial hygiene retrospective exposure assessment (REA) for cumulative asbestos exposure with asbestos lung burden analysis (LBA) has not been previously performed, but both methods are utilized for case-control and cohort studies and other applications such as setting occupational exposure limits. (a) To correlate REA with asbestos LBA for a large number of cases from varied industries and exposure scenarios; (b) to evaluate the linearity, precision, and applicability of both industrial hygiene exposure reconstruction and LBA; and (c) to demonstrate validation methods for REA. A panel of four experienced industrial hygiene raters independently estimated the cumulative asbestos exposure for 363 cases with limited exposure details in which asbestos LBA had been independently determined. LBA for asbestos bodies was performed by a pathologist by both light microscopy and scanning electron microscopy (SEM) and free asbestos fibers by SEM. Precision, reliability, correlation and linearity were evaluated via intraclass correlation, regression analysis and analysis of covariance. Plaintiff's answers to interrogatories, work history sheets, work summaries or plaintiff's discovery depositions that were obtained in court cases involving asbestos were utilized by the pathologist to provide a summarized brief asbestos exposure and work history for each of the 363 cases. Linear relationships between REA and LBA were found when adjustment was made for asbestos fiber-type exposure differences. Significant correlation between REA and LBA was found with amphibole asbestos lung burden and mixed fiber-types, but not with chrysotile. The intraclass correlation coefficients (ICC) for the precision of the industrial hygiene rater cumulative asbestos exposure estimates and the precision of repeated laboratory analysis were found to be in the excellent range. 
The ICC estimates were performed independently of specific asbestos fiber-type. Both REA and pathology assessment are reliable and complementary predictive methods for characterizing asbestos exposures. Correlation analysis between the two methods effectively validates both the REA methodology and the LBA procedures within the determined precision, particularly for cumulative amphibole asbestos exposures, since chrysotile fibers are, for the most part, not retained in the lung for an extended period of time.
High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.
Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo
2013-11-26
In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
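The immobilized-particle calibration described above can be sketched in a few lines: for a static particle, the mean square displacement (MSD) of its apparent positions is flat at twice the localization variance, so the plateau recovers the precision directly (the 13 nm figure from the abstract is reused here as an assumed noise level, and the trajectory is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 13.0                              # nm, assumed localization noise
# Apparent 1-D positions of an immobilized particle: fixed true position
# plus independent localization errors at each frame.
x = 1000.0 + sigma * rng.standard_normal(5000)

def msd(x, lag):
    """Mean square displacement at a given frame lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

# For pure localization noise the MSD equals 2*sigma^2 at every lag.
plateau = np.mean([msd(x, lag) for lag in range(1, 50)])
precision_est = np.sqrt(plateau / 2)
print(precision_est)                      # ≈ 13 nm
```

The zero correlation between displacements in neighboring intervals, noted in the abstract, is what licenses merging trajectories from different videos: independent segments can be concatenated without biasing such averages.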
NASA Astrophysics Data System (ADS)
Tamborini, D.; Portaluppi, D.; Villa, F.; Tisa, S.; Tosi, A.
2014-11-01
We present a Time-to-Digital Converter (TDC) card with a compact form factor, suitable for multichannel timing instruments or for integration into more complex systems. The TDC Card provides 10 ps timing resolution over the whole measurement range, which is selectable from 160 ns up to 10 μs, reaching 21 ps rms precision, 1.25% LSB rms differential nonlinearity, up to 3 Mconversion/s with 400 mW power consumption. The I/O edge card connector provides timing data readout through either a parallel bus or a 100 MHz serial interface and further measurement information like input signal rate and valid conversion rate (typically useful for time-correlated single-photon counting application) through an independent serial link.
Berman, E S F; Melanson, E L; Swibas, T; Snaith, S P; Speakman, J R
2015-10-01
The method of choice for measuring total energy expenditure in free-living individuals is the doubly labeled water (DLW) method. This experiment examined the behavior of natural background isotope abundance fluctuations within and between individuals over time to assess possible methods of accounting for variations in the background isotope abundances to potentially improve the precision of the DLW measurement. In this work, we measured natural background variations in ²H, ¹⁸O and ¹⁷O in water from urine samples collected from 40 human subjects who resided in the same geographical area. Each subject provided a urine sample for 30 consecutive days. Isotopic abundances in the samples were measured using Off-Axis Integrated Cavity Output Spectroscopy. Autocorrelation analyses demonstrated that the background isotopes in a given individual were not temporally correlated over the time scales of typical DLW studies. Using samples obtained from different individuals on the same calendar day, cross-correlation analyses demonstrated that the background variations of different individuals were not correlated in time. However, the measured ratios of the three isotopes ²H, ¹⁸O and ¹⁷O were highly correlated (R²=0.89-0.96). Although neither specific timing of DLW water studies nor intraindividual comparisons were found to be avenues for reducing the impact of background isotope abundance fluctuations on DLW studies, strong inter-isotope correlations within an individual confirm that use of a dosing ratio of 8‰:1‰ (0.6 p.p.m.: p.p.m.) optimizes DLW precision. Theoretical implications for the possible use of ¹⁷O measurements within a DLW study require further study.
NASA Astrophysics Data System (ADS)
Borycki, Dawid; Kholiqov, Oybek; Zhou, Wenjun; Srinivasan, Vivek J.
2017-03-01
Sensing and imaging methods based on the dynamic scattering of coherent light, including laser speckle, laser Doppler, and diffuse correlation spectroscopy quantify scatterer motion using light intensity (speckle) fluctuations. The underlying optical field autocorrelation (OFA), rather than being measured directly, is typically inferred from the intensity autocorrelation (IA) through the Siegert relationship, by assuming that the scattered field obeys Gaussian statistics. In this work, we demonstrate interferometric near-infrared spectroscopy (iNIRS) for measurement of time-of-flight (TOF) resolved field and intensity autocorrelations in fluid tissue phantoms and in vivo. In phantoms, we find a breakdown of the Siegert relationship for short times-of-flight due to a contribution from static paths whose optical field does not decorrelate over experimental time scales, and demonstrate that eliminating such paths by polarization gating restores the validity of the Siegert relationship. Inspired by these results, we developed a method, called correlation gating, for separating the OFA into static and dynamic components. Correlation gating enables more precise quantification of tissue dynamics. To prove this, we show that iNIRS and correlation gating can be applied to measure cerebral hemodynamics of the nude mouse in vivo using dynamically scattered (ergodic) paths and not static (non-ergodic) paths, which may not be impacted by blood. More generally, correlation gating, in conjunction with TOF resolution, enables more precise separation of diffuse and non-diffusive contributions to OFA than is possible with TOF resolution alone. Finally, we show that direct measurements of OFA are statistically more efficient than indirect measurements based on IA.
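The Siegert relationship invoked above connects the intensity autocorrelation to the field autocorrelation under Gaussian field statistics, and a static field component is what breaks it:

```latex
% Siegert relationship (fully dynamic, Gaussian-statistics field):
g_2(\tau) \;=\; 1 + \beta\,\lvert g_1(\tau)\rvert^{2}

% With a static (non-ergodic) field contribution, E = E_d + E_s, the
% normalized field autocorrelation acquires a non-decaying offset:
g_1(\tau) \;=\; \frac{I_d\, g_{1,d}(\tau) \;+\; I_s}{\,I_d + I_s\,}
```

Here $\beta$ is the coherence factor, $I_d$ and $I_s$ are the dynamic and static intensities, and $g_{1,d}$ is the autocorrelation of the dynamically scattered field alone; the constant $I_s/(I_d+I_s)$ term is what correlation gating separates out.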
What can neuromorphic event-driven precise timing add to spike-based pattern recognition?
Akolkar, Himanshu; Meyer, Cedric; Clady, Zavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad
2015-03-01
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time, pixel-individually and precisely timed only when new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision.
Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
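The order-of-magnitude discrepancy described above can be reproduced with a toy calculation (the shared-error model and all magnitudes are invented): when two measured quantities share a common precision error, propagating their individual standard deviations as if independent grossly overstates the scatter of a difference-type result, while the direct sample statistics of the result account for the covariance automatically.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
# Two measured quantities sharing a common (correlated) precision error,
# e.g. two pressures read through the same sensing line.
common = rng.standard_normal(n)
p1 = 100.0 + 1.0 * common + 0.1 * rng.standard_normal(n)
p2 = 90.0 + 1.0 * common + 0.1 * rng.standard_normal(n)

r = p1 - p2                      # result: a pressure difference

# Propagation assuming independent errors (covariance term ignored):
u_independent = np.sqrt(p1.std() ** 2 + p2.std() ** 2)
# Direct precision of the result (covariance included automatically):
u_direct = r.std()
print(u_independent, u_direct)   # roughly an order of magnitude apart
```

With the signs reversed (a sum instead of a difference, or anticorrelated errors), neglecting the covariance can just as easily understate the uncertainty.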
Modelling nematode movement using time-fractional dynamics.
Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M
2007-09-07
We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
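A minimal correlated random walk in two dimensions can be sketched as follows (the von Mises turning-angle and exponential speed distributions are generic placeholders, not the distributions fitted to the nematode trails):

```python
import numpy as np

rng = np.random.default_rng(4)

def correlated_random_walk(n_steps, kappa=4.0, mean_speed=1.0):
    """2-D correlated random walk: turning angles concentrated near zero
    make successive headings persist, unlike a simple random walk in
    which each step's direction is independent."""
    heading = 0.0
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.vonmises(0.0, kappa)      # small turning angle
        speed = rng.exponential(mean_speed)
        step = speed * np.array([np.cos(heading), np.sin(heading)])
        pos[i + 1] = pos[i] + step
    return pos

path = correlated_random_walk(500)
net_displacement = np.linalg.norm(path[-1] - path[0])
```

The abstract's further point is that even this is insufficient for P. hermaphrodita: temporal correlations between turning angle and speed push the trails into anomalous (sub-diffusive) territory that a memoryless step model cannot capture.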
[Value of the space perception test for evaluation of the aptitude for precision work in geodesy].
Remlein-Mozolewska, G
1982-01-01
The visual spatial localization ability of workers in geodesy and cartography, and of pupils training for that profession, was examined. The examination assessed both the work performed and the time taken to perform it. A correlation between localization ability and the precision of the hand movements required in everyday work was demonstrated: the better the movement precision, the more efficient the visual spatial localization. Length of employment was not a significant factor. The test proved highly useful in geodesy for qualifying workers for posts requiring good manual dexterity.
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
Optical Links and RF Distribution for Antenna Arrays
NASA Technical Reports Server (NTRS)
Huang, Shouhua; Calhoun, Malcolm; Tjoelker, Robert
2006-01-01
An array of three antennas has recently been developed at the NASA Jet Propulsion Laboratory capable of detecting signals at X and Ka band. The array requires a common frequency reference and high precision phase alignment to correlate received signals. Frequency and timing references are presently provided from a remotely located hydrogen maser and clock through a combination of commercially and custom developed optical links. The selected laser, photodetector, and fiber components have been tested under anticipated thermal and simulated antenna rotation conditions. The resulting stability limitations due to thermal perturbations or induced stress on the optical fiber have been characterized. Distribution of the X band local oscillator includes a loop back and precision phase monitor to enable correlation of signals received from each antenna.
Voronoi Tessellation for reducing the processing time of correlation functions
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio
2018-01-01
The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with large processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions, whose processing time has grown critically with the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can delimit the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof-of-concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi Tessellation.
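The pruning idea, delimiting the pairs that can actually contribute, can be illustrated with a uniform grid standing in for the Voronoi cells (all sizes and point counts invented): a pair separated by less than the largest bin of interest must lie in the same or an adjacent cell, so distant pairs are never examined.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 10.0, size=(1500, 2))   # toy galaxy positions
r_max = 0.5                                    # largest separation of interest

# Partition space into cells of side r_max (stand-in for Voronoi cells).
cells = defaultdict(list)
for i, (x, y) in enumerate(pts):
    cells[(int(x // r_max), int(y // r_max))].append(i)

pairs = 0
for (cx, cy), members in cells.items():
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), []):
                for i in members:
                    if i < j and np.hypot(*(pts[i] - pts[j])) < r_max:
                        pairs += 1
print(pairs)   # same count as the naive O(N^2) loop, at a fraction of the cost
```

A full two-point correlation estimator would bin these pair separations and compare against random catalogs; the partition only changes which pairs are visited, not the counts.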
A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.
Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang
2015-11-13
Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a way to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning and ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results are presented for the International Telecommunications Union (ITU) vehicular multipath channel, and show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
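The ingredients, cross-correlation with the known preamble and a skewness-informed dynamic threshold, can be sketched as follows. This is a loose illustration under invented parameters, not the paper's algorithm: the threshold rule (relax the threshold when the correlation-magnitude profile is highly skewed, i.e. dominated by a rare tall peak) is a hypothetical stand-in.

```python
import numpy as np

def skewness(x):
    """Sample skewness of a 1-D array."""
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def detect_delay(corr, base_factor=0.5):
    """Accept the global correlation peak only if it clears a
    skewness-scaled dynamic threshold (illustrative rule)."""
    mag = np.abs(corr)
    thresh = base_factor * mag.max() / max(skewness(mag), 1.0)
    peak = int(np.argmax(mag))
    return peak if mag[peak] > thresh else None

# Toy 802.11p-like setup: a known short preamble received with a
# 40-sample propagation delay in noise (all parameters invented).
rng = np.random.default_rng(6)
preamble = rng.choice([-1.0, 1.0], size=32)
signal = 0.3 * rng.standard_normal(300)
signal[40:72] += preamble
corr = np.correlate(signal, preamble, mode="valid")
print(detect_delay(corr))   # → 40
```

With a ranging estimate in hand, the sample delay converts to distance via the propagation speed (delay × sample period × c).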
Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eramo, R.; Bellini, M.; European Laboratory for Non-linear Spectroscopy
2011-04-15
Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.
Characterizing and estimating noise in InSAR and InSAR time series with MODIS
Barnhart, William D.; Lohman, Rowena B.
2013-01-01
InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; in practice, however, the temporal distribution of InSAR acquisitions over much of the world exhibits seasonal biases, long temporal gaps, and too few acquisitions to confidently obtain the precision desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series, using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisition times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates and find that we are capable of resolving velocities as low as 1.5 mm/yr, with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.
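The stochastic error-bounding idea can be illustrated with a toy version: fit a linear displacement rate to each of many noise-only time series sampled at the acquisition epochs, and take the scatter of the fitted rates as the rate uncertainty. The function name and setup are our assumptions, not the paper's implementation:

```python
import numpy as np

def rate_uncertainty(times, noise_series):
    """Monte Carlo bound on a displacement-rate error: fit a linear rate
    to each independent noise realization (e.g., MODIS-derived delay
    series sampled at the SAR acquisition times) and return the standard
    deviation of the fitted slopes."""
    rates = [np.polyfit(times, series, 1)[0] for series in noise_series]
    return float(np.std(rates))
```

Sparse or seasonally biased sampling of `times` inflates the returned uncertainty, mirroring the acquisition-distribution problem the abstract describes.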
Precise measurement of the angular correlation parameter aβν in the β decay of 35Ar with LPCTrap
NASA Astrophysics Data System (ADS)
Fabian, X.; Ban, G.; Boussaïd, R.; Breitenfeldt, M.; Couratin, C.; Delahaye, P.; Durand, D.; Finlay, P.; Fléchard, X.; Guillon, B.; Lemière, Y.; Leredde, A.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pierre, E.; Porobic, T.; Quéméner, G.; Rodríguez, D.; Severijns, N.; Thomas, J. C.; Van Gorp, S.
2014-03-01
Precise measurements of the β decay of the 35Ar nucleus enable searches for deviations from the Standard Model (SM) in the weak sector. These measurements make it possible either to check the unitarity of the CKM matrix or to constrain the existence of exotic currents rejected by the V-A theory of the SM. For this purpose, the β-ν angular correlation parameter, aβν, is inferred from a comparison between experimental and simulated recoil-ion time-of-flight distributions following the quasi-pure Fermi transition of 35Ar1+ ions confined in the transparent Paul trap of the LPCTrap device at GANIL. During the last experiment, 1.5×106 good events were collected, corresponding to an expected precision of better than 0.5% on the aβν value. The required simulation is divided between massive GPU parallelization and the GEANT4 toolkit for the source-cloud kinematics and the tracking of the decay products.
Benefits of Time Correlation Measurements for Passive Screening
NASA Astrophysics Data System (ADS)
Murer, David; Blackie, Douglas; Peerani, Paolo
2014-02-01
The “FLASH Portals Project” is a collaboration between Arktis Radiation Detectors Ltd (CH), the Atomic Weapons Establishment (UK), and the Joint Research Centre (European Commission), supported by the Technical Support Working Group (TSWG). The program's goal was to develop and demonstrate a technology that detects shielded special nuclear materials (SNM) more efficiently and less ambiguously by exploiting time correlation. This study presents experimental results from a two-sided portal monitor equipped with a total of 16 4He fast neutron detectors and four polyvinyltoluene (PVT) plastic scintillators. All detectors were synchronized to nanosecond precision, allowing the resolution of time correlations from timescales of tens of microseconds (such as (n, γ) reactions) down to prompt fission correlations. Our results demonstrate that such correlations can be detected in a typical radiation portal monitor (RPM) geometry within operationally acceptable time scales, and that exploiting these signatures significantly improves the performance of the RPM compared to neutron counting. Furthermore, the results show that some time structure remains even in the presence of heavy shielding, significantly improving the sensitivity of the detection system to shielded SNM.
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard
NASA Astrophysics Data System (ADS)
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than gfree, with g = 2.002644 = gfree · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high-precision g factor standard for general purposes. The double resonance experiment used for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with a very short correlation time.
Correlational signatures of time-reversal symmetry breaking in two-dimensional flow
NASA Astrophysics Data System (ADS)
Hogg, Charlie; Ouellette, Nicholas
2015-11-01
Classical turbulence theories posit that broken spatial symmetries should be (statistically) restored at small scales. But since turbulent flows are inherently dissipative, time reversal symmetry is expected to remain broken throughout the cascade. However, the precise dynamical signature of this broken symmetry is not well understood. Recent work has shed new light on this fundamental question by considering the Lagrangian structure functions of power. Here, we take a somewhat different approach by studying the Lagrangian correlation functions of velocity and acceleration. We measured these correlations using particle tracking velocimetry in a quasi-two-dimensional electromagnetically driven flow that displayed net inverse energy transfer. We show that the correlation functions of the velocity and acceleration magnitudes are not symmetric in time, and that the degree of asymmetry can be related to the flux of energy between scales, suggesting that the asymmetry has a dynamical origin.
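The diagnostic described here — comparing Lagrangian correlation functions at positive and negative lags — can be sketched generically. The function names and the roll-based example below are illustrative, not from the paper:

```python
import numpy as np

def lagged_corr(a, b, lag):
    """Pearson correlation of a(t) with b(t + lag)."""
    if lag > 0:
        x, y = a[:-lag], b[lag:]
    elif lag < 0:
        x, y = a[-lag:], b[:lag]
    else:
        x, y = a, b
    return float(np.corrcoef(x, y)[0, 1])

def time_asymmetry(a, b, lag):
    """C(+lag) - C(-lag): zero for statistics that are symmetric under
    time reversal, nonzero when the reversal symmetry is broken."""
    return lagged_corr(a, b, lag) - lagged_corr(a, b, -lag)
```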
Interferometric constraints on quantum geometrical shear noise correlations
Chou, Aaron; Glass, Henry; Gustafson, H. Richard; ...
2017-07-20
Final measurements and analysis are reported from the first-generation Holometer, the first instrument capable of measuring correlated variations in space-time position at strain noise power spectral densities smaller than a Planck time. The apparatus consists of two co-located, but independent and isolated, 40 m power-recycled Michelson interferometers, whose outputs are cross-correlated to 25 MHz. The data are sensitive to correlations of differential position across the apparatus over a broad band of frequencies up to and exceeding the inverse light crossing time, 7.6 MHz. By measuring with Planck precision the correlation of position variations at spacelike separations, the Holometer searches for faint, irreducible correlated position noise backgrounds predicted by some models of quantum space-time geometry. The first-generation optical layout is sensitive to quantum geometrical noise correlations with shear symmetry, those that can be interpreted as a fundamental noncommutativity of space-time position in orthogonal directions. General experimental constraints are placed on parameters of a set of models of spatial shear noise correlations, with a sensitivity that exceeds the Planck-scale holographic information bound on position states by a large factor. This result significantly extends the upper limits placed on models of directional noncommutativity by currently operating gravitational wave observatories.
Real-Time Maps of Fluid Flow Fields in Porous Biomaterials
Mack, Julia J.; Youssef, Khalid; Noel, Onika D.V.; Lake, Michael P.; Wu, Ashley; Iruela-Arispe, M. Luisa; Bouchard, Louis-S.
2013-01-01
Mechanical forces such as fluid shear have been shown to enhance cell growth and differentiation, but knowledge of their mechanistic effect on cells is limited because the local flow patterns and associated metrics are not precisely known. Here we present real-time, noninvasive measures of local hydrodynamics in 3D biomaterials based on nuclear magnetic resonance. Microflow maps were further used to derive pressure, shear and fluid permeability fields. Finally, remodeling of collagen gels in response to precise fluid flow parameters was correlated with structural changes. It is anticipated that accurate flow maps within 3D matrices will be a critical step towards understanding cell behavior in response to controlled flow dynamics. PMID:23245922
Temporal Precision of Neuronal Information in a Rapid Perceptual Judgment
Ghose, Geoffrey M.; Harrison, Ian T.
2009-01-01
In many situations, such as pedestrians crossing a busy street or prey evading predators, rapid decisions based on limited perceptual information are critical for survival. The brevity of these perceptual judgments constrains how neuronal signals are integrated or pooled over time because the underlying sequence of processes, from sensation to perceptual evaluation to motor planning and execution, all occur within several hundred milliseconds. Because most previous physiological studies of these processes have relied on tasks requiring considerably longer temporal integration, the neuronal basis of such rapid decisions remains largely unexplored. In this study, we examine the temporal precision of neuronal activity associated with a rapid perceptual judgment. We find that the activity of individual neurons over tens of milliseconds can reliably convey information about sensory events and was well correlated with the animals' judgments. There was a strong correlation between sensory reliability and the correlation with behavioral choice, suggesting that rapid decisions were preferentially based on the most reliable sensory signals. We also find that a simple model in which the responses of a small number of individual neurons (<5) are summed can completely explain behavioral performance. These results suggest that neuronal circuits are sufficiently precise to allow for cognitive decisions to be based on small numbers of action potentials from highly reliable neurons. PMID:19109454
Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.
Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young
2016-01-01
Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measured the linear distances between points on different skull models using Vernier calipers (real values). We used 10 differently tilted CT scans for 3D CT reconstruction of the models and measured the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement was performed three times by three doctors, yielding nine measurements. The real values were compared with the PACS values; each PACS measurement was then revised based on the display field of view (DFOV) values and compared with the real values again. Results. The real values showed no significant correlation with the PACS measurements as the tilt value changed (p > 0.05). However, significant correlations appeared between the real values and the DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtained a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups were also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.
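In its simplest reading, the "correlation expression" relating DFOV-adjusted PACS measurements to physical lengths is a least-squares linear map. A minimal sketch (the function name and sample values are ours, not the paper's):

```python
import numpy as np

def dfov_correction(pacs_mm, real_mm):
    """Fit real (caliper) lengths as a linear function of DFOV-adjusted
    PACS measurements and return a callable correction."""
    a, b = np.polyfit(pacs_mm, real_mm, 1)
    return lambda m: a * m + b
```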
Urinalysis: The Automated Versus Manual Techniques; Is It Time To Change?.
Ahmed, Asmaa Ismail; Baz, Heba; Lotfy, Sarah
2016-01-01
Urinalysis is the third major test in the clinical laboratory. The imprecision of the manual technique urges the need for a rapid, reliable automated test. We evaluated the H800-FUSIOO automatic urine sediment analyzer and compared it to the manual urinalysis technique to determine whether it may be a competitive substitute in the laboratories of central hospitals. A total of 1000 urine samples were examined by the two methods in parallel. Agreement, precision, carryover, drift, sensitivity, specificity, and practicability criteria were tested. Agreement ranged from excellent to good for all semi-quantitative urine components (K > 0.4, p = 0.000), except for granular casts (K = 0.317, p = 0.000). Specific gravity results correlated well between the two methods (r = 0.884, p = 0.000). RBCs and WBCs showed moderate correlation (r = 0.42, p = 0.000 and r = 0.44, p = 0.000, respectively). The auto-analyzer's within-run precision was > 75% for all semi-quantitative components except proteins (50% precision). This finding, in addition to the poor agreement for granular casts, indicates the necessity of operator interference at the critical cutoff values. Regarding quantitative contents, RBCs showed a mean of 69.8 +/- 3.95 (C.V. = 5.7) and WBCs a mean of 38.9 +/- 1.9 (C.V. = 4.9). Specific gravity, pH, microalbumin, and creatinine also showed good precision, with C.V.s of 0.000, 2.6, 9.1, and 0.00, respectively. In the between-run precision, the positive control showed good precision (C.V. = 2.9), while the negative control's C.V. was strikingly high (C.V. = 127). Carryover and drift studies were satisfactory. Manual examination showed major inter-observer discrepancies (< 60% similar readings), while intra-observer results correlated well with each other (r = 0.99, p = 0.000). Automation of urinalysis decreases observer-associated variation and offers prompt, competitive results when standardized for screening away from the borderline cutoffs.
Precision Positional Data of General Aviation Air Traffic in Terminal Air Space
NASA Technical Reports Server (NTRS)
Melson, W. E., Jr.; Parker, L. C.; Northam, A. M.; Singh, R. P.
1978-01-01
Three dimensional radar tracks of general aviation air traffic at three uncontrolled airports are considered. Contained are data which describe the position-time histories, other derived parameters, and reference data for the approximately 1200 tracks. All information was correlated such that the date, time, flight number, and runway number match the pattern type, aircraft type, wind, visibility, and cloud conditions.
Correlation between Identification Accuracy and Response Confidence for Common Environmental Sounds
set of environmental sounds with stimulus control and precision. The present study is one in a series of efforts to provide a baseline evaluation of a...sounds from six broad categories: household items, alarms, animals, human generated, mechanical, and vehicle sounds. Each sound was presented five times
High-performance time-resolved fluorescence by direct waveform recording.
Muretta, Joseph M; Kyrychenko, Alexander; Ladokhin, Alexey S; Kast, David J; Gillispie, Gregory D; Thomas, David D
2010-10-01
We describe a high-performance time-resolved fluorescence (HPTRF) spectrometer that dramatically increases the rate at which precise and accurate subnanosecond-resolved fluorescence emission waveforms can be acquired in response to pulsed excitation. The key features of this instrument are an intense (1 μJ/pulse), high-repetition rate (10 kHz), and short (1 ns full width at half maximum) laser excitation source and a transient digitizer (0.125 ns per time point) that records a complete and accurate fluorescence decay curve for every laser pulse. For a typical fluorescent sample containing a few nanomoles of dye, a waveform with a signal/noise of about 100 can be acquired in response to a single laser pulse every 0.1 ms, at least 10^5 times faster than the conventional method of time-correlated single photon counting, with equal accuracy and precision in lifetime determination for lifetimes as short as 100 ps. Using standard single-lifetime samples, the detected signals are extremely reproducible, with waveform precision and linearity to within 1% error for single-pulse experiments. Waveforms acquired in 0.1 s (1000 pulses) with the HPTRF instrument were of sufficient precision to analyze two samples having different lifetimes, resolving minor components with high accuracy with respect to both lifetime and mole fraction. The instrument makes possible a new class of high-throughput time-resolved fluorescence experiments that should be especially powerful for biological applications, including transient kinetics, multidimensional fluorescence, and microplate formats.
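For a single-exponential decay, lifetime determination from a directly recorded waveform reduces to a fit of log-intensity versus time. A hedged minimal sketch using the paper's 0.125 ns sampling; a real waveform would additionally need background subtraction and weighting:

```python
import numpy as np

def fit_lifetime(t_ns, y):
    """Estimate a mono-exponential lifetime tau from y(t) = A*exp(-t/tau)
    by linear least squares on log(y)."""
    mask = y > 0
    slope, _ = np.polyfit(t_ns[mask], np.log(y[mask]), 1)
    return -1.0 / slope

t = np.arange(0.0, 10.0, 0.125)        # 0.125 ns per point, as in the digitizer
waveform = 100.0 * np.exp(-t / 2.5)    # synthetic decay with a 2.5 ns lifetime
```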
Electro-optic modulation for high-speed characterization of entangled photon pairs
Lukens, Joseph M.; Odele, Ogaga D.; Leaird, Daniel E.; ...
2015-11-10
In this study, we demonstrate a new biphoton manipulation and characterization technique based on electro-optic intensity modulation and time shifting. By applying fast modulation signals with a sharply peaked cross-correlation to each photon from an entangled pair, it is possible to measure temporal correlations with significantly higher precision than that attainable using standard single-photon detection. Low-duty-cycle pulses and maximal-length sequences are considered as modulation functions, reducing the time spread in our correlation measurement by a factor of five compared to our detector jitter. With state-of-the-art electro-optic components, we expect the potential to surpass the speed of any single-photon detectors currently available.
Sponberg, S; Daniel, T L
2012-10-07
Muscles driving rhythmic locomotion typically show strong dependence of power on the timing or phase of activation. This is particularly true in insects' main flight muscles, canonical examples of muscles thought to have a dedicated power function. However, in the moth (Manduca sexta), these muscles normally activate at a phase where the instantaneous slope of the power-phase curve is steep and well below maximum power. We provide four lines of evidence demonstrating that, contrary to the current paradigm, the moth's nervous system establishes significant control authority in these muscles through precise timing modulation: (i) left-right pairs of flight muscles normally fire precisely, within 0.5-0.6 ms of each other; (ii) during a yawing optomotor response, left-right muscle timing differences shift throughout a wider 8 ms timing window, enabling at least a 50 per cent left-right power differential; (iii) timing differences correlate with turning torque; and (iv) the downstroke power muscles alone causally account for 47 per cent of turning torque. To establish (iv), we altered muscle activation during intact behaviour by stimulating individual muscle potentials to impose left-right timing differences. Because many organisms also have muscles operating with high power-phase gains (Δ(power)/Δ(phase)), this motor control strategy may be ubiquitous in locomotor systems.
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image and the computational cost is low, image correlation matching is a basic method of target tracking. This paper studies a gray-scale image matching algorithm whose precision reaches the sub-pixel level. The matching algorithm used is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, along with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems; since target tracking often requires high real-time performance, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. Its results are compared with those of a surface fitting algorithm through image-matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on matching precision, a camera-rotation experiment was carried out. The camera's CMOS detector was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm described above. The results show that the matching error grows approximately linearly with the target rotation angle. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image, the image was processed with a mean filter and a median filter, respectively, and image matching was then performed. The results show that when the noise level is low, mean and median filtering achieve good results; but when the density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result will be wrong.
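A minimal sketch of an SAD matcher with the parabolic (1-D "paraboloidal") sub-pixel refinement the abstract advocates; the implementation details below are our assumptions, not the authors' code:

```python
import numpy as np

def sad_match(image, template):
    """Integer-pixel match: position minimizing the Sum of Absolute
    Differences, plus the full SAD map for sub-pixel refinement."""
    ih, iw = image.shape
    th, tw = template.shape
    sad = np.empty((ih - th + 1, iw - tw + 1))
    for r in range(sad.shape[0]):
        for c in range(sad.shape[1]):
            sad[r, c] = np.abs(image[r:r+th, c:c+tw] - template).sum()
    r, c = np.unravel_index(np.argmin(sad), sad.shape)
    return (int(r), int(c)), sad

def parabola_offset(f_m1, f_0, f_p1):
    """Sub-pixel offset of the minimum of a parabola fitted through
    three neighboring SAD values; the result lies in (-0.5, 0.5)."""
    denom = f_m1 - 2.0 * f_0 + f_p1
    return 0.0 if denom == 0 else 0.5 * (f_m1 - f_p1) / denom
```

Applying `parabola_offset` along the row and column of the SAD map around the integer minimum yields the sub-pixel match position.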
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.
2007-02-02
First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak's retention time in all of the chromatograms, divided by the peak-width-at-base. The raw simulated data sets that were studied contained peaks with average relative shifts ranging between 0.3 and 3.0. Second, a “real” data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8 fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported.
Piecewise alignment either did not change the peak height or caused it to decrease slightly. The average relative difference in peak height after piecewise alignment was –0.20%. Piecewise alignment caused the peak areas to either stay the same, slightly increase, or slightly decrease. The average absolute relative difference in area after piecewise alignment was 0.15%.
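The two metrics this record defines — the unsupervised correlation objective and the relative shift metric — can be sketched directly; the function names are ours:

```python
import numpy as np

def mean_correlation(chromatograms):
    """Average Pearson correlation of every chromatogram against the
    first one: the unsupervised objective maximized when optimizing
    the piecewise-alignment parameters."""
    ref = chromatograms[0]
    r = [np.corrcoef(ref, c)[0, 1] for c in chromatograms[1:]]
    return float(np.mean(r))

def relative_shift(retention_times, peak_width_at_base):
    """Four times the standard deviation of a peak's retention time
    across runs, divided by the peak width at base."""
    return 4.0 * float(np.std(retention_times)) / peak_width_at_base
```

Perfectly aligned runs give a mean correlation of 1 and a relative shift of 0; shifting a peak lowers the former and raises the latter.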
NASA Astrophysics Data System (ADS)
Jing, Chao; Liu, Zhongling; Zhou, Ge; Zhang, Yimo
2011-11-01
The nanometer-level precise phase-shift system is designed to realize phase-shift interferometry in electronic speckle shearography pattern interferometry. A PZT is used as the driving component of the phase-shift system, and a flexure-hinge translation component is developed to realize friction-free, clearance-free micro-displacement. A closed-loop control system is designed for high-precision micro-displacement, in which an embedded digital control system implements the control algorithm and a capacitive sensor serves as the feedback element, measuring the micro-displacement in real time. The dynamic model and control model of the nanometer-level precise phase-shift system are analyzed, and on this basis high-precision micro-displacement is realized with a digital PID control algorithm. Experiments prove that the positioning precision of the phase-shift system is less than 2 nm for a step displacement signal and less than 5 nm for a continuous displacement signal, which satisfies the requirements of electronic speckle shearography and phase-shift pattern interferometry. The fringe images of four-step phase-shift interferometry and the final phase-distribution image, correlated with the distortion of the objects, are presented to prove the validity of the nanometer-level precise phase-shift system.
High precision laser ranging by time-of-flight measurement of femtosecond pulses
NASA Astrophysics Data System (ADS)
Lee, Joohyung; Lee, Keunwoo; Lee, Sanghyun; Kim, Seung-Woo; Kim, Young-Jin
2012-06-01
Time-of-flight (TOF) measurement of femtosecond light pulses was investigated for laser ranging of long distances with sub-micrometer precision in the air. The bandwidth limitation of the photo-detection electronics used in timing femtosecond pulses was overcome by adopting a type-II nonlinear second-harmonic crystal that permits the production of a balanced optical cross-correlation signal between two overlapping light pulses. This method offered a sub-femtosecond timing resolution in determining the temporal offset between two pulses through lock-in control of the pulse repetition rate with reference to the atomic clock. The exceptional ranging capability was verified by measuring various distances of 1.5, 60 and 700 m. This method is found well suited for future space missions based on formation-flying satellites as well as large-scale industrial applications for land surveying, aircraft manufacturing and shipbuilding.
Measuring Sizes & Shapes of Galaxies
NASA Astrophysics Data System (ADS)
Kusmic, Samir; Holwerda, Benne Willem
2018-01-01
Software is used to calculate galaxy morphometrics, cutting down the time needed to categorize galaxies. However, new surveys coming in the next decade are expected to count upwards of a thousand times more galaxies than current surveys, which would greatly lengthen data processing. In this research, we examined how to reduce the time it takes to obtain morphometric parameters for classifying galaxies, and also how precise other findings can be. The software of choice is Source Extractor, known for its short run times and recently updated to compute morphometric parameters. The test runs CANDELS data, five fields in the J and H filters, through Source Extractor, then cross-correlates the resulting catalog with one created with GALFIT, obtained from van der Wel et al. 2014, and then with spectroscopic redshift data. With Source Extractor, we examine how many galaxies are counted, how precise the computation is, how to classify morphometry, and how the results compare with other findings. The run time was approximately 10 hours when cross-correlated with GALFIT and approximately 8 hours with the spectroscopic redshifts; these were expected times, as Source Extractor is already faster than GALFIT by a large factor. Source Extractor's recovery was also large: 79.24% of GALFIT's count. However, the precision is highly variable. We created two thresholds to combat this and selected an unbiased isophotal-area threshold as the better choice. Even with such a threshold, the spread was relatively wide. Comparing the parameters with redshift showed agreement in trend, though not necessarily in numerical value. From these results, we see Source Extractor as a good first-look tool, to be followed up with other software.
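The catalog comparison described above reduces to a positional cross-match on the sky. A minimal sketch of such a nearest-neighbour match; the flat-sky approximation, tolerance, and coordinate values are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def cross_match(ra1, dec1, ra2, dec2, tol_arcsec=0.5):
    """For each source in catalog 1, find the nearest source in catalog 2
    (small-angle, flat-sky approximation). Returns an index into catalog 2
    per catalog-1 source, or -1 if no counterpart lies within tol_arcsec."""
    tol_deg = tol_arcsec / 3600.0
    matches = np.full(len(ra1), -1, dtype=int)
    for i, (r, d) in enumerate(zip(ra1, dec1)):
        # Scale the RA offset by cos(dec) so separations are true angles.
        dra = (np.asarray(ra2) - r) * np.cos(np.radians(d))
        sep = np.hypot(dra, np.asarray(dec2) - d)
        j = int(np.argmin(sep))
        if sep[j] <= tol_deg:
            matches[i] = j
    return matches

# Illustrative coordinates (degrees).
ra1, dec1 = np.array([10.0, 10.001]), np.array([0.0, 0.0])
ra2, dec2 = np.array([10.001, 50.0, 10.0]), np.array([0.0, 0.0, 0.0])
print(cross_match(ra1, dec1, ra2, dec2, tol_arcsec=0.5))
```

For the catalog sizes quoted in the abstract, a k-d tree (e.g. astropy's `match_to_catalog_sky`) would replace the brute-force loop.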
NASA Astrophysics Data System (ADS)
Ellsworth, W. L.; Shelly, D. R.; Hardebeck, J.; Hill, D. P.
2017-12-01
Microseismicity often conveys the most direct information about active processes in the earth's subsurface. However, routine network processing typically leaves most earthquakes uncharacterized. These "sub-catalog" events can provide critical clues to ongoing processes in the source region. To address this issue, we have developed waveform-based processing that leverages the existing routine catalog of earthquakes to detect and characterize "sub-catalog" events (those absent in routine catalogs). By correlating waveforms of cataloged events with the continuous data stream, we 1) identify events with similar waveform signatures in the continuous data across multiple stations, 2) precisely measure relative time lags across these stations for both P- and S-wave time windows, and 3) estimate the relative polarity between events by the sign of the peak absolute value correlations and its height above the secondary peak. When combined, these inter-event comparisons yield robust measurements, which enable sensitive event detection, relative relocation, and relative magnitude estimation. The most recent addition, focal mechanisms derived from correlation-based relative polarities, addresses a significant shortcoming in microseismicity analyses (see Shelly et al., JGR, 2016). Depending on the application, we can characterize 2-10 times as many events as included in the initial catalog. This technique is particularly well suited for compact zones of active seismicity such as seismic swarms. Application to a 2014 swarm in Long Valley Caldera, California, illuminates complex patterns of faulting that would have otherwise remained obscured. The prevalence of such features in other environments remains an important, as yet unresolved, question.
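The core correlation measurements named above, a relative time lag and a relative polarity from comparing a cataloged-event template against continuous data, can be sketched as follows. The synthetic waveform and windowing choices are assumptions for illustration, not the authors' processing:

```python
import numpy as np

def lag_and_polarity(template, trace):
    """Slide a normalized template along a continuous-data window.
    Returns the best-match lag in samples, the relative polarity
    (sign of the peak absolute correlation), and the ratio of the
    peak to the best secondary peak outside the main lobe."""
    t = template - template.mean()
    t /= np.linalg.norm(t)
    x = trace - trace.mean()
    cc = np.correlate(x, t, mode="valid")
    k = int(np.argmax(np.abs(cc)))
    polarity = int(np.sign(cc[k]))
    # Secondary peak: best |cc| at least one template length away.
    mask = np.ones(len(cc), dtype=bool)
    mask[max(0, k - len(t)):min(len(cc), k + len(t))] = False
    secondary = float(np.abs(cc[mask]).max()) if mask.any() else 0.0
    quality = abs(float(cc[k])) / secondary if secondary > 0 else float("inf")
    return k, polarity, quality

# Illustrative data: a sub-catalog event equal to the template but inverted.
tmpl = np.sin(np.linspace(0, 6 * np.pi, 40))
trace = np.zeros(200)
trace[30:70] -= tmpl
print(lag_and_polarity(tmpl, trace)[:2])  # lag 30, polarity -1
```

In practice the same lag measurement is repeated per station and per P/S window, and the lags feed a double-difference relocation.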
A temporal basis for Weber's law in value perception.
Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G
2014-01-01
Weber's law-the observation that the ability to perceive changes in magnitudes of stimuli is proportional to the magnitude-is a widely observed psychophysical phenomenon. It is also believed to underlie the perception of reward magnitudes and the passage of time. Since many ecological theories state that animals attempt to maximize reward rates, errors in the perception of reward magnitudes and delays must affect decision-making. Using an ecological theory of decision-making (TIMERR), we analyze the effect of multiple sources of noise (sensory noise, time estimation noise, and integration noise) on reward magnitude and subjective value perception. We show that the precision of reward magnitude perception is correlated with the precision of time perception and that Weber's law in time estimation can lead to Weber's law in value perception. The strength of this correlation is predicted to depend on the reward history of the animal. Subsequently, we show that sensory integration noise (either alone or in combination with time estimation noise) also leads to Weber's law in reward magnitude perception in an accumulator model, if it has balanced Poisson feedback. We then demonstrate that the noise in subjective value of a delayed reward, due to the combined effect of noise in both the perception of reward magnitude and delay, also abides by Weber's law. Thus, in our theory we prove analytically that the perception of reward magnitude, time, and subjective value change all approximately obey Weber's law.
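A minimal simulation of the central idea, that perceptual noise scaling linearly with magnitude (Weber's law) makes discrimination depend only on the ratio of the two magnitudes, might look like this (the Weber fraction and magnitudes are illustrative, not values from the paper):

```python
import numpy as np

def discrimination_accuracy(m1, m2, weber_fraction, n_trials=100_000, seed=0):
    """Simulate noisy percepts whose standard deviation grows linearly
    with the true magnitude, and report how often the larger magnitude
    is correctly judged larger."""
    rng = np.random.default_rng(seed)
    p1 = rng.normal(m1, weber_fraction * m1, n_trials)
    p2 = rng.normal(m2, weber_fraction * m2, n_trials)
    return float(np.mean(p2 > p1))

# Same 3:2 ratio at very different absolute magnitudes gives
# (statistically) the same accuracy -- the signature of Weber's law.
print(discrimination_accuracy(10.0, 15.0, 0.2))
print(discrimination_accuracy(100.0, 150.0, 0.2, seed=1))
```

If the noise were instead constant (independent of magnitude), the second comparison would be far easier than it is here, breaking the ratio dependence.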
Elliott, Mark A; du Bois, Naomi
2017-01-01
From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. 
Finally, a brief review of some experimental data measuring sensitivity to visual information presented to the visual blind field (blindsight), as well as from studies of temporal processing in autism and schizophrenia, indicates that an understanding of a precise and metrical dynamic structure may be very important for an operational understanding of perception as well as more general cognitive function in psychopathology.
Estimated correlation matrices and portfolio optimization
NASA Astrophysics Data System (ADS)
Pafka, Szilárd; Kondor, Imre
2004-11-01
Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Since in our artificial world the only source of error is the finite length of the time series, and since the “true” model, hence also the “true” correlation matrix, are precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques.
One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
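The paper's central manipulation, measuring the estimation noise that comes purely from the finite length of a time series drawn from a known "true" correlation matrix, can be sketched as follows (the matrix size, correlation level, and sample lengths are illustrative assumptions):

```python
import numpy as np

def estimation_error(true_corr, n_samples, seed=0):
    """Draw a finite multivariate-normal time series from a known 'true'
    correlation matrix, re-estimate the correlations empirically, and
    return the largest absolute entry-wise error -- noise due purely
    to the finite series length."""
    rng = np.random.default_rng(seed)
    n = true_corr.shape[0]
    x = rng.multivariate_normal(np.zeros(n), true_corr, size=n_samples)
    est = np.corrcoef(x, rowvar=False)
    return float(np.max(np.abs(est - true_corr)))

# Illustrative 'one-factor-like' true matrix: uniform 0.3 off-diagonal.
c = np.full((5, 5), 0.3)
np.fill_diagonal(c, 1.0)
print(estimation_error(c, 100))     # short series: noisy estimate
print(estimation_error(c, 10_000))  # long series: much smaller error
```

Any filtering or dimension-reduction technique can then be scored by how much it shrinks this error relative to the known truth, which is exactly the comparison empirical data cannot provide.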
Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results
NASA Astrophysics Data System (ADS)
Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono
2017-04-01
The Sulawesi area is located in a complex tectonic setting. High seismicity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method. We investigate the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked P- and S-wave arrival times of the PKF events to determine initial hypocenter locations with the Hypoellipse method, using an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events cluster around the PKF and have smaller residual times than the initial locations. We will enhance the hypocenter locations by updating the arrival times with the waveform cross-correlation method as input for double-difference relocation.
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Rui; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun
2018-07-01
This study proposes an approach to facilitate real-time fast point positioning of the BeiDou Navigation Satellite System (BDS) based on regional augmentation information. We term this the precise positioning based on augmentation information (BPP) approach. The coordinates of the reference stations were tightly constrained to extract the augmentation information, which contained not only the satellite orbit and clock errors correlated with the satellite running state, but also the atmospheric and unmodeled errors, which are correlated with the spatial and temporal states. Based on these mixed augmentation corrections, a precise point positioning (PPP) model could be used to estimate the coordinates of the user stations, and the float ambiguity could be easily fixed for the single-difference between satellites. This technique thus provides a quick, high-precision positioning service. Three datasets with small, medium, and large baselines (0.6 km, 30 km, and 136 km) were used to validate the feasibility and effectiveness of the proposed BPP method. The validations showed that, using the BPP model, a 1–2 cm positioning service can be provided in a 100 km wide area after just 2 s of initialization. As the proposed approach capitalizes on the strengths of both PPP and RTK while providing consistent application, it can be used for area augmentation positioning.
Statistical measures of Planck scale signal correlations in interferometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.; Kwon, Ohkyung
2015-06-22
A model-independent statistical framework is presented to interpret data from systems where the mean time derivative of positional cross correlation between world lines, a measure of spreading in a quantum geometrical wave function, is measured with a precision smaller than the Planck time. The framework provides a general way to constrain possible departures from perfect independence of classical world lines, associated with Planck scale bounds on positional information. A parametrized candidate set of possible correlation functions is shown to be consistent with the known causal structure of the classical geometry measured by an apparatus, and the holographic scaling of information suggested by gravity. Frequency-domain power spectra are derived that can be compared with interferometer data. As a result, simple projections of sensitivity for specific experimental set-ups suggest that measurements will directly yield constraints on a universal time derivative of the correlation function, and thereby confirm or rule out a class of Planck scale departures from classical geometry.
VLBI observations to the APOD satellite
NASA Astrophysics Data System (ADS)
Sun, Jing; Tang, Geshi; Shu, Fengchun; Li, Xie; Liu, Shushi; Cao, Jianfeng; Hellerschmied, Andreas; Böhm, Johannes; McCallum, Lucia; McCallum, Jamie; Lovell, Jim; Haas, Rüdiger; Neidhardt, Alexander; Lu, Weitao; Han, Songtao; Ren, Tianpeng; Chen, Lue; Wang, Mei; Ping, Jinsong
2018-02-01
The APOD (Atmospheric density detection and Precise Orbit Determination) is the first LEO (Low Earth Orbit) satellite in orbit co-located with a dual-frequency GNSS (GPS/BD) receiver, an SLR reflector, and a VLBI X/S dual band beacon. From the overlap statistics between consecutive solution arcs and the independent validation by SLR measurements, the orbit position deviation was below 10 cm before the on-board GNSS receiver got partially operational. In this paper, the focus is on the VLBI observations to the LEO satellite from multiple geodetic VLBI radio telescopes, since this is the first implementation of a dedicated VLBI transmitter in low Earth orbit. The practical problems of tracking a fast moving spacecraft with current VLBI ground infrastructure were solved and strong interferometric fringes were obtained by cross-correlation of APOD carrier and DOR (Differential One-way Ranging) signals. The precision in X-band time delay derived from 0.1 s integration time of the correlator output is on the level of 0.1 ns. The APOD observations demonstrate encouraging prospects of co-location of multiple space geodetic techniques in space, as a first prototype.
Measures for the Dynamics in a Few-Body Quantum System with Harmonic Interactions
NASA Astrophysics Data System (ADS)
Nagy, I.; Pipek, J.; Glasser, M. L.
2018-01-01
We determine the exact time-dependent non-idempotent one-particle reduced density matrix and its spectral decomposition for a harmonically confined two-particle correlated one-dimensional system when the interaction terms in the Schrödinger Hamiltonian are changed abruptly. Based on this matrix in coordinate space we derive a precise condition for the equivalence of the purity and the overlap-square of the correlated and non-correlated wave functions as the model system with harmonic interactions evolves in time. This equivalence holds only if the interparticle interactions are affected, while the confinement terms are unaffected within the stability range of the system. Under this condition we analyze various time-dependent measures of entanglement and demonstrate that, depending on the magnitude of the changes made in the Hamiltonian, periodic, logarithmically increasing or constant value behavior of the von Neumann entropy can occur.
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard.
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify the fibers as a high-precision g factor standard for general purposes. The double resonance experiment for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with a very short correlation time. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
The Holometer: An instrument to probe Planckian quantum geometry
Chou, Aaron; Glass, Henry; Gustafson, H. Richard; ...
2017-02-28
This paper describes the Fermilab Holometer, an instrument for measuring correlations of position variations over a four-dimensional volume of space-time. The apparatus consists of two co-located, but independent and isolated, 40 m power-recycled Michelson interferometers, whose outputs are cross-correlated to 25 MHz. The data are sensitive to correlations of differential position across the apparatus over a broad band of frequencies up to and exceeding the inverse light crossing time, 7.6 MHz. A noise model constrained by diagnostic and environmental data distinguishes among physical origins of measured correlations, and is used to verify shot-noise-limited performance. These features allow searches for exotic quantum correlations that depart from classical trajectories at spacelike separations, with a strain noise power spectral density sensitivity smaller than the Planck time. As a result, the Holometer in current and future configurations is projected to provide precision tests of a wide class of models of quantum geometry at the Planck scale, beyond those already constrained by currently operating gravitational wave observatories.
Clark, C W; Ellison, W T
2000-06-01
Between 1984 and 1993, visual and acoustic methods were combined to census the Bering-Chukchi-Beaufort bowhead whale, Balaena mysticetus, population. Passive acoustic location was based on arrival-time differences of transient bowhead sounds detected on sparse arrays of three to five hydrophones distributed over distances of 1.5-4.5 km along the ice edge. Arrival-time differences were calculated from either digital cross correlation of spectrograms (old method), or digital cross correlation of time waveforms (new method). Acoustic calibration was conducted in situ in 1985 at five sites with visual site position determined by triangulation using two theodolites. The discrepancy between visual and acoustic locations was <1%-5% of visual range and less than 0.7 degrees of visual bearing for either method. Comparison of calibration results indicates that the new method yielded slightly more precise and accurate positions than the old method. Comparison of 217 bowhead whale call locations from both acoustic methods showed that the new method was more precise, with location errors 3-4 times smaller than the old method. Overall, low-frequency bowhead transients were reliably located out to ranges of 3-4 times array size. At these ranges in shallow water, signal propagation appears to be dominated by the fundamental mode and is not corrupted by multipath.
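Arrival-time differences from digital cross-correlation of time waveforms (the "new method" above) can be sketched as follows; the sampling rate, pulse shape, and sign convention are assumptions for illustration:

```python
import numpy as np

def arrival_time_difference(sig_a, sig_b, fs):
    """Estimate the arrival-time difference (seconds) of the same transient
    recorded on two hydrophones by cross-correlating the time waveforms.
    Negative result: the transient reaches hydrophone A before B."""
    cc = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
    lag_samples = int(np.argmax(cc)) - (len(sig_b) - 1)
    return lag_samples / fs

# Illustrative data: the same pulse arriving 30 samples apart.
fs = 1000.0  # Hz (assumed)
pulse = np.hanning(20)
a = np.zeros(500); a[100:120] = pulse   # arrives first at A
b = np.zeros(500); b[130:150] = pulse   # 30 ms later at B
print(arrival_time_difference(a, b, fs))  # -0.03 s
```

With three or more hydrophones, pairs of such time differences define hyperbolae whose intersection locates the calling whale.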
NASA Astrophysics Data System (ADS)
Gundacker, S.; Turtos, R. M.; Auffray, E.; Lecoq, P.
2018-05-01
The emergence of new solid-state avalanche photodetectors, e.g. SiPMs, with unprecedented timing capabilities opens new ways to profit from ultrafast and prompt photon emission in scintillators. In time-of-flight positron emission tomography (TOF-PET) and in high-energy timing detectors based on scintillators, the ultimate coincidence time resolution (CTR) achievable is proportional to the square root of the scintillation rise time and decay time and the reciprocal of the light yield, CTR ∝ √(τ_r τ_d / LY). Hence, precise study of the light emission in the very first tens of picoseconds is indispensable for understanding the time resolution limits imposed by the scintillator. We developed a time-correlated single photon counting setup with a Gaussian impulse response function (IRF) of 63 ps sigma, allowing us to precisely measure the scintillation rise time of various materials under 511 keV excitation. In L(Y)SO:Ce we found two rise time components, the first below the resolution of our setup (<10 ps) and a second component of ∼380 ps. Co-doping with Ca2+ completely suppresses the slow rise component, leading to a very fast initial scintillation emission with a rise time of <10 ps. A very similar behavior is observed in LGSO:Ce crystals. The results are further confirmed by complementary measurements using a streak-camera system with pulsed X-ray excitation and by additional 511 keV excited measurements of Mg2+ co-doped LuAG:Ce, YAG:Ce and GAGG:Ce samples.
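The quoted scaling CTR ∝ √(τ_r τ_d / LY) implies that suppressing a slow rise component directly improves the achievable time resolution. A sketch of that scaling with illustrative (not measured) parameter values:

```python
import math

def relative_ctr(rise_time_ps, decay_time_ps, light_yield):
    """Relative coincidence-time-resolution figure of merit following
    CTR ~ sqrt(tau_r * tau_d / LY). Unitless; for comparisons only."""
    return math.sqrt(rise_time_ps * decay_time_ps / light_yield)

# Illustrative values: a ~380 ps rise component suppressed to 10 ps by
# co-doping, at fixed decay time and light yield, improves the CTR
# figure of merit by sqrt(380/10) ~ 6.2x.
slow = relative_ctr(380.0, 40_000.0, 30_000.0)
fast = relative_ctr(10.0, 40_000.0, 30_000.0)
print(slow / fast)
```

The real gain is smaller in practice because detector and electronics contributions add in quadrature, but the square-root dependence on the rise time is the point of the formula.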
Külling, Fabrice Alexander; Ebneter, Lukas; Rempfler, Georg Stefan; Zdravkovic, Vilijam
2018-05-01
To prove that a modified closing mechanism of the rongeur gives better precision compared to the old Kerrison rongeur. Forty persons from the departments of orthopaedic surgery, urology and neurosurgery (35 orthopaedic, 2 urology and 3 neurosurgery) took part in the study. All participants were asked to punch ten times in a first step with either the old Kerrison rongeur with the scissors-like handle or the modified punch with a new parallel closing mechanism. In a second step, they punched 10 times with the other instrument. Shaft movement in three dimensions was measured with a stereoscopic, contactless, full-field digital image correlation system. The new rongeur is significantly more precise with less movement in all three dimensions. The mechanical model of the new rongeur shows that the momentum needed to keep the tip at the initial position changes only minimally during the closing act on the new model. The new rongeur is more precise compared to the old Kerrison model. It is more robust against changes in the direction of the finger forces and may reduce soreness, fatigue and CTS in spine surgeons. Not applicable: technical study.
Martin-Duverneuil, N; Guillevin, R; Chiras, J
2008-11-01
The imaging of gliomas, whether diffuse infiltrative gliomas or more recently individualized entities, has been profoundly modified in recent years. Alongside classic morphological MRI, numerous new sequences have appeared that allow a more metabolic approach to the tumors, such as diffusion, perfusion (related to angiogenesis) and spectroscopy (reflecting metabolic data). Their use in daily practice helps refine the diagnosis and delineate the most active areas (correlated with the hyperperfused or metabolically more active areas, in relation to the Ki67 index), thereby optimizing biopsy targeting and/or evaluating the evolution of the lesion. When combined, they also, and perhaps especially, help refine the diagnosis, particularly with respect to other tumoral masses such as lymphomas or metastases, which can present misleading patterns, but also with respect to more benign lesions such as abscesses. Always critically analyzed, and re-evaluated over time if necessary, these sequences can sometimes support the histological diagnosis, but can never be used in place of it.
Satellite orbit determination using quantum correlation technology
NASA Astrophysics Data System (ADS)
Zhang, Bo; Sun, Fuping; Zhu, Xinhui; Jia, Xiaolin
2018-03-01
After presenting second-order correlation ranging principles based on quantum entanglement, the concept of quantum measurement is introduced to dynamic satellite precise orbit determination. Building on traditional orbit determination models for correcting the systematic errors within the satellite, corresponding models for quantum orbit determination (QOD) are established. This paper experiments with QOD for the BeiDou Navigation Satellite System (BDS) by first simulating quantum observations over a 1 day arc length. The satellite orbits are then resolved and compared with the reference precise ephemerides, and some related factors influencing the accuracy of QOD are discussed. The accuracy for GEO, IGSO and MEO satellites increases about 20, 30 and 10 times, respectively, compared with the results resolved from measured data. It can therefore be expected that quantum technology may bring to satellite orbit determination the same delightful surprises that have already emerged in other fields.
NASA Astrophysics Data System (ADS)
André, Matthieu A.; Burns, Ross A.; Danehy, Paul M.; Cadell, Seth R.; Woods, Brian G.; Bardet, Philippe M.
2018-01-01
A molecular tagging velocity (MTV) technique is developed to non-intrusively measure velocity in an integral effect test (IET) facility simulating a high-temperature helium-cooled nuclear reactor in accident scenarios. In these scenarios, the velocities are expected to be low, on the order of 1 m/s or less, which forces special requirements on the MTV tracer selection. Nitrous oxide (N_2O) is identified as a suitable seed gas to generate NO tracers capable of probing the flow over a large range of pressure, temperature, and flow velocity. The performance of N_2O-MTV is assessed in the laboratory at temperature and pressure ranging from 295 to 781 K and 1 to 3 atm. MTV signal improves with a temperature increase, but decreases with a pressure increase. Velocity precision down to 0.004 m/s is achieved with a probe time of 40 ms at ambient pressure and temperature. Measurement precision is limited by tracer diffusion, and absorption of the tag laser beam by the seed gas. Processing by cross-correlation of single-shot images with high signal-to-noise ratio reference images improves the precision by about 10% compared to traditional single-shot image correlations. The instrument is then deployed to the IET facility. Challenges associated with heat, vibrations, safety, beam delivery, and imaging are addressed in order to successfully operate this sensitive instrument in-situ. Data are presented for an isothermal depressurized conduction cooldown. Velocity profiles from MTV reveal a complex flow transient driven by buoyancy, diffusion, and instability taking place over short (<1 s) and long (>30 min) time scales at sub-meter per second speed. The precision of the in-situ results is estimated at 0.027, 0.0095, and 0.006 m/s for a probe time of 5, 15, and 35 ms, respectively.
Using GLONASS signal for clock synchronization
NASA Technical Reports Server (NTRS)
Gouzhva, Yuri G.; Gevorkyan, Arvid G.; Bogdanov, Pyotr P.; Ovchinnikov, Vitaly V.
1994-01-01
Although GLONASS is comparable with GPS in its accuracy parameters, the use of GLONASS signals for high-precision clock synchronization was, until recently, of limited utility due to the lack of specialized time receivers. To improve this situation, in late 1992 the Russian Institute of Radionavigation and Time (RMT) began to develop a GLONASS time receiver based on the airborne ASN-16 receiver. This paper presents results of estimating user clock synchronization accuracy via GLONASS signals using the ASN-16 receiver in the direct-synchronization and common-view modes.
Dating Tips for Divergence-Time Estimation.
O'Reilly, Joseph E; Dos Reis, Mario; Donoghue, Philip C J
2015-11-01
The molecular clock is the only viable means of establishing an accurate timescale for Life on Earth, but it remains reliant on a capricious fossil record for calibration. 'Tip-dating' promises a conceptual advance, integrating fossil species among their living relatives using molecular/morphological datasets and evolutionary models. Fossil species of known age establish calibration directly, and their phylogenetic uncertainty is accommodated through the co-estimation of time and topology. However, challenges remain, including a dearth of effective models of morphological evolution, rate correlation, the non-random nature of missing characters in fossil data, and, most importantly, accommodating uncertainty in fossil age. We show uncertainty in fossil-dating propagates to divergence-time estimates, yielding estimates that are older and less precise than those based on traditional node calibration. Ultimately, node and tip calibrations are not mutually incompatible and may be integrated to achieve more accurate and precise evolutionary timescales. Copyright © 2015 Elsevier Ltd. All rights reserved.
BONNSAI: correlated stellar observables in Bayesian methods
NASA Astrophysics Data System (ADS)
Schneider, F. R. N.; Castro, N.; Fossati, L.; Langer, N.; de Koter, A.
2017-02-01
In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful methods because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated, which is generally not the case. Here, we include correlations in the Bayesian code Bonnsai by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounted for even if information on them is not available in specific cases but is known in general. Because the new likelihood model is a better approximation of the data, the reliability and robustness of the inferred parameters are improved. We find that neglecting correlations biases the most likely values of inferred stellar parameters and affects the precision with which these parameters can be determined. The importance of these biases depends on the strength of the correlations and the uncertainties. As an example, we apply our technique to massive OB stars, but emphasise that it is valid for any type of star. For effective temperatures and surface gravities determined from atmosphere modelling, we find that masses can be underestimated on average by 0.5σ and mass uncertainties overestimated by a factor of about 2 when neglecting correlations. At the same time, the age precisions are underestimated over a wide range of stellar parameters.
We conclude that accounting for correlations is essential in order to derive reliable stellar parameters including robust uncertainties and will be vital when entering an era of precision stellar astrophysics thanks to the Gaia satellite.
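The likelihood change described above can be sketched as a multivariate Gaussian whose covariance matrix combines the classical uncertainties with a single correlation parameter. This is a minimal illustration of the idea under assumed interfaces (the function name and the uniform-correlation form are hypothetical, not the actual Bonnsai parametrisation):

```python
import numpy as np

def correlated_log_likelihood(obs, model, sigma, rho):
    """Gaussian log-likelihood with one correlation parameter rho shared by
    all pairs of observables: diagonal = sigma_i^2, off-diagonal =
    rho * sigma_i * sigma_j. With rho = 0 this reduces to the usual
    uncorrelated likelihood."""
    d = np.asarray(obs, float) - np.asarray(model, float)
    sigma = np.asarray(sigma, float)
    n = d.size
    C = rho * np.outer(sigma, sigma)       # off-diagonal covariances
    np.fill_diagonal(C, sigma ** 2)        # classical variances on the diagonal
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (d @ np.linalg.solve(C, d) + logdet + n * np.log(2 * np.pi))

# Example: Teff and log g with and without an assumed correlation of 0.6
ll_uncorr = correlated_log_likelihood([5800, 4.4], [5750, 4.35], [100, 0.1], 0.0)
ll_corr = correlated_log_likelihood([5800, 4.4], [5750, 4.35], [100, 0.1], 0.6)
```

As the abstract notes, the off-diagonal terms reshape the likelihood surface, which is what shifts the most likely parameters and their uncertainties.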
NASA Astrophysics Data System (ADS)
Nice, David; NANOGrav
2018-01-01
The North American Observatory for Nanohertz Gravitational Waves (NANOGrav) collaboration is thirteen years into a program of long-term, high-precision millisecond pulsar timing, undertaken with the goal of detecting and characterizing nanohertz gravitational waves (i.e., gravitational waves with periods of many years) by measuring their effect on observed pulse arrival times. Our primary instruments are the Arecibo Observatory, used to observe 37 pulsars with declinations between 0 and 39 degrees; and the Green Bank Telescope, used for 24 pulsars, of which 22 are outside the Arecibo range, and 2 overlap with the Arecibo source list. Additional observations are made with the VLA and (soon) CHIME. Most pulsars in our program are observed at intervals of three to four weeks, and seven are observed weekly. Observations of each pulsar are made over a wide range of radio frequencies at each epoch in order to measure and mitigate effects of the ionized interstellar medium on the pulse arrival times. Our targets are pulsars for which we can achieve timing precision of 1 microsecond or better at each epoch; we achieve precision better than 100 nanoseconds in the best cases. Observing a large number of pulsars will allow for robust measurements of gravitational waves by analyzing correlations in the timing of pairs of pulsars as a function of their separation on the sky. Our data are pooled with data from telescopes worldwide via the International Pulsar Timing Array (IPTA) collaboration, further increasing our sensitivity to gravitational waves. We release data at regular intervals. We will describe the NANOGrav 5-, 9- and 11-year data sets and give a status report on the NANOGrav 12.5-year data set.
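The pairwise correlation signature mentioned above is the Hellings-Downs curve: for an isotropic gravitational-wave background, the expected correlation between the timing residuals of two pulsars depends only on their angular separation. A short sketch of the standard formula (normalized so the value at zero separation is 0.5):

```python
import numpy as np

def hellings_downs(theta_rad):
    """Expected timing-residual correlation for a pulsar pair separated by
    angle theta under an isotropic GW background, using the standard
    Hellings-Downs expression with x = (1 - cos theta) / 2."""
    x = (1.0 - np.cos(theta_rad)) / 2.0
    x = np.where(x == 0, 1e-300, x)  # x*log(x) -> 0 as x -> 0
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5
```

Fitting measured pair correlations against this curve, rather than looking for a single common signal, is what makes a pulsar-timing-array detection robust against clock and ephemeris systematics.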
Influence of the time scale on the construction of financial networks.
Emmert-Streib, Frank; Dehmer, Matthias
2010-09-30
In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That is, we include an edge in the network only if the corresponding correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks will be studied in this paper. Numerical analysis of four different measures as a function of the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
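The construction step can be sketched as follows: compute pairwise log-return correlations over an interval and keep only the statistically significant ones as edges. This is an illustrative sketch (the Fisher z-test at the 5% level is an assumption; the paper's exact significance test may differ):

```python
import numpy as np
from math import atanh, sqrt
from itertools import combinations

def correlation_network(prices):
    """Build an unweighted, undirected graph from a (T x N) array of daily
    closing prices: nodes are stocks; edge (i, j) is included only if the
    correlation of log-returns differs significantly from zero."""
    returns = np.diff(np.log(prices), axis=0)
    T, N = returns.shape
    R = np.corrcoef(returns, rowvar=False)
    crit = 1.959963985  # two-sided 5% standard-normal quantile
    edges = set()
    for i, j in combinations(range(N), 2):
        r = min(max(R[i, j], -0.999999), 0.999999)  # keep atanh finite
        z = atanh(r) * sqrt(T - 3)  # ~ N(0,1) under H0: rho = 0
        if abs(z) > crit:
            edges.add((i, j))
    return edges
```

Running this per non-overlapping interval, with the interval length as the time scale, yields the one-network-per-interval sequence analyzed in the paper.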
NASA Astrophysics Data System (ADS)
Lin, G.
2012-12-01
We investigate the seismic and magmatic activity during an 11-month-long seismic swarm between 1989 and 1990 beneath Mammoth Mountain (MM) at the southwest rim of Long Valley caldera in eastern California. This swarm is believed to be the result of a shallow intrusion of magma beneath MM. It was followed by the emission of carbon dioxide (CO2) gas, which caused tree kills in 1990 and posed a significant human health risk around MM. In this study, we develop a new three-dimensional (3-D) P-wave velocity model using first-arrival picks by applying the simul2000 tomographic algorithm. The resulting 3-D model is correlated with the surface geological features at shallow depths and is used to constrain absolute earthquake locations for all local events in our study. We compute both P- and S-wave differential times using a time-domain waveform cross-correlation method. We then apply similar-event cluster analysis and a differential-time location approach to further improve relative event location accuracy. A dramatic sharpening of the seismicity pattern is obtained after these processes. The estimated uncertainties are a few meters in relative location and ~100 meters in absolute location. We also apply a high-resolution approach to estimate in situ near-source Vp/Vs ratios using differential times from waveform cross-correlation. This method provides highly precise results because cross-correlation can measure differential times to within a few milliseconds and can achieve a precision of 0.001 in estimated Vp/Vs ratio. Our results show a circular ring-like seismicity pattern with a diameter of 2 km between 3 and 8 km depth. These events are distributed in an anomalous body with low Vp and high Vp/Vs, which may be caused by over-pressured magmatically derived fluids. At shallower depths, we observe very low Vp/Vs anomalies beneath MM from the surface to 1 km below sea level whose locations agree with the proposed CO2 reservoir in previous studies.
The systematic spatial and temporal migration of seismicity suggests fluid involvement in the seismic swarm. Our results will provide more robust constraints on the crustal structure and volcanic processes beneath Mammoth Mountain.
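The differential-time measurement underlying both the relocation and the Vp/Vs estimation can be sketched generically: cross-correlate two similar waveforms and refine the discrete peak with a parabolic fit to reach sub-sample precision. This is an illustrative stand-in, not the specific implementation used in the study:

```python
import numpy as np

def differential_time(w1, w2, dt):
    """Estimate the differential arrival time (seconds) between two similar
    waveforms sampled at interval dt, via the cross-correlation peak with
    parabolic sub-sample interpolation. Positive result: w1 lags w2."""
    c = np.correlate(w1, w2, mode='full')
    k = int(np.argmax(c))
    if 0 < k < len(c) - 1:
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        denom = y0 - 2 * y1 + y2
        shift = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        shift = 0.0
    lag = (k - (len(w2) - 1)) + shift
    return lag * dt
```

The sub-sample refinement is what allows millisecond-level differential times from data sampled far more coarsely, which in turn supports the 0.001-level Vp/Vs precision quoted above.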
Mutation Rates across Budding Yeast Chromosome VI Are Correlated with Replication Timing
Lang, Gregory I.; Murray, Andrew W.
2011-01-01
Previous experimental studies suggest that the mutation rate is nonuniform across the yeast genome. To characterize this variation across the genome more precisely, we measured the mutation rate of the URA3 gene integrated at 43 different locations tiled across Chromosome VI. We show that mutation rate varies 6-fold across a single chromosome, that this variation is correlated with replication timing, and we propose a model to explain this variation that relies on the temporal separation of two processes for replicating past damaged DNA: error-free DNA damage tolerance and translesion synthesis. This model is supported by the observation that eliminating translesion synthesis decreases this variation.
Time-to-digital converter card for multichannel time-resolved single-photon counting applications
NASA Astrophysics Data System (ADS)
Tamborini, Davide; Portaluppi, Davide; Tisa, Simone; Tosi, Alberto
2015-03-01
We present a high performance Time-to-Digital Converter (TDC) card that provides 10 ps timing resolution and 20 ps (rms) timing precision with a programmable full-scale range from 160 ns to 10 μs. Differential Non-Linearity (DNL) is better than 1.3% LSB (rms) and Integral Non-Linearity (INL) is 5 ps rms. Thanks to the low power consumption (400 mW) and the compact size (78 mm x 28 mm x 10 mm), this card is the building block for developing compact multichannel time-resolved instrumentation for Time-Correlated Single-Photon Counting (TCSPC). The TDC card outputs the time measurement results together with the rates of START and STOP signals and the number of valid TDC conversions. This additional information is needed by many TCSPC-based applications, such as Fluorescence Lifetime Imaging (FLIM), Time-of-Flight (TOF) ranging measurements, time-resolved Positron Emission Tomography (PET), single-molecule spectroscopy, Fluorescence Correlation Spectroscopy (FCS), Diffuse Optical Tomography (DOT), Optical Time-Domain Reflectometry (OTDR), quantum optics, etc.
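DNL and INL figures like those quoted above are typically measured with a code-density test: drive the converter with events uniformly distributed in time, so each code's hit count is proportional to its bin width. A sketch of the standard computation (not the calibration procedure of this particular card):

```python
import numpy as np

def dnl_inl(code_counts):
    """Code-density estimate of non-linearity: DNL of each code is its
    hit-count deviation from the mean in LSB; INL is the running sum
    of DNL, also in LSB."""
    counts = np.asarray(code_counts, float)
    dnl = counts / counts.mean() - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl
```

An ideal converter gives a flat histogram (DNL and INL identically zero); a code that is 10% wide shows up as DNL = +0.1 LSB.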
Precision measurement of electric organ discharge timing from freely moving weakly electric fish.
Jun, James J; Longtin, André; Maler, Leonard
2012-04-01
Physiological measurements from an unrestrained, untethered, and freely moving animal permit analyses of neural states correlated to naturalistic behaviors of interest. Precise and reliable remote measurements remain technically challenging due to animal movement, which perturbs the relative geometries between the animal and sensors. Pulse-type electric fish generate a train of discrete and stereotyped electric organ discharges (EOD) to sense their surroundings actively, and rapid modulation of the discharge rate occurs while free swimming in Gymnotus sp. The modulation of EOD rates is a useful indicator of the fish's central state such as resting, alertness, and learning associated with exploration. However, the EOD pulse waveforms remotely observed at a pair of dipole electrodes continuously vary as the fish swims relative to the electrodes, which biases the judgment of the actual pulse timing. To measure the EOD pulse timing more accurately, reliably, and noninvasively from a free-swimming fish, we propose a novel method based on the principles of waveform reshaping and spatial averaging. Our method is implemented using envelope extraction and multichannel summation, which is more precise and reliable compared with other widely used threshold- or peak-based methods according to the tests performed under various source-detector geometries. Using the same method, we constructed a real-time electronic pulse detector performing an additional online pulse discrimination routine to further enhance the detection reliability. Our stand-alone pulse detector performed with high temporal precision (<10 μs) and reliability (error < 1 per 10^6 pulses) and permits longer recording duration by storing only event time stamps (4 bytes/pulse).
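The waveform-reshaping and spatial-averaging idea can be sketched in a few lines: rectify each dipole channel into a polarity-free envelope and sum across channels before taking the peak, so that geometry-dependent polarity flips and amplitude changes cancel. This is a simplified sketch (the absolute value stands in for proper envelope extraction, and the function name is hypothetical):

```python
import numpy as np

def eod_pulse_time(channels, dt):
    """Estimate a pulse time from several dipole channels sampled at
    interval dt: rectify each channel (crude envelope), sum across
    channels (spatial averaging), and return the time of the peak."""
    env = np.sum(np.abs(np.asarray(channels, float)), axis=0)
    return np.argmax(env) * dt
```

Because the summed envelope peaks at the same sample regardless of which electrode pair dominates or which polarity the pulse arrives with, the estimate is less biased by swimming geometry than a threshold crossing on a single raw channel.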
Optical joint correlator for real-time image tracking and retinal surgery
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor)
1991-01-01
A method for tracking an object in a sequence of images is described. Such a sequence of images may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. Such joint transform correlator apparatus forms the basis for laser eye surgical apparatus where an image of the fundus of an eyeball is stabilized and forms the basis for the correlator apparatus to track the position of the eyeball caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be precisely pointed toward a position on the retina.
Time Periods of Unusual Density Behavior Observed by GRACE and CHAMP
NASA Astrophysics Data System (ADS)
McLaughlin, C. A.; Fattig, E.; Mysore Krishna, D.; Locke, T.; Mehta, P. M.
2011-12-01
Time periods of low cross correlation between precision orbit ephemeris (POE) derived density and accelerometer density for CHAMP and GRACE are examined. In particular, the cross correlation for GRACE dropped from typical values near 0.9 to much lower values and then returned to typical over the time period of late October to late December of 2005. This time period includes a maneuver where GRACE-A and GRACE-B swapped positions. However, the drop in cross correlation begins and reaches its low point before the maneuvers begin. In addition, the densities were found using GRACE-A, but GRACE-B did most of the maneuvering. The time period is characterized by high frequency variations in accelerometer density of the same magnitude as the daylight to eclipse variations over the course of an orbit. However, the daylight to eclipse variations are particularly small during this time period because the orbit plane is near the terminator. Additionally, the difference between the accelerometer and POE derived densities is not unusually large during this time period. This implies the variations are not unusual, just more significant when the orbit plane is near the terminator. Cyclical variations in correlation of the POE derived densities with accelerometer derived densities are seen for both GRACE and CHAMP, but the magnitude of the variations is much larger for GRACE, possibly because of the higher altitude of GRACE. The cycles seem to be phased so that low correlations occur with low beta angle when the orbit plane is near the terminator. The low correlation is possibly caused by the lower amplitude of the daylight to eclipse signal making higher frequency variations relatively more important. However, another possible explanation is terminator waves in density that propagate to the thermosphere from lower in the atmosphere. These waves have been observed in CHAMP accelerometer data and global circulation model simulations.
Further investigation is needed to see if the variations correspond to terminator waves or if they represent typical high frequency signal from another source that is more apparent when the orbit plane is near the terminator.
1. C. A. McLaughlin, E. Fattig, D. Mysore Krishna, and P. M. Mehta, "Time Periods of Anomalous Density for GRACE and CHAMP," AAS/AIAA Astrodynamics Specialists Conference, AAS 11-613, Girdwood, AK, August 2011.
2. C. A. McLaughlin, A. Hiatt, and T. Lechtenberg, "Calibrating Precision Orbit Derived Total Density," Journal of Spacecraft and Rockets, Vol. 48, No. 1, January-February 2011, pp. 166-174.
Stability of MINERVA Spectrograph's Instrumental Profile
NASA Astrophysics Data System (ADS)
Wilson, Maurice; Eastman, Jason; Johnson, John Asher
2018-01-01
The physical properties of most Earth-like exoplanets cannot be determined without high-precision photometry and radial velocities. For this reason, the MINiature Exoplanet Radial Velocity Array (MINERVA) was designed to obtain photometric and radial velocity measurements with precision sufficient for finding, confirming, and characterizing rocky planets around our nearest stars. MINERVA is an array of four robotic telescopes located on Mt. Hopkins in Arizona. We aim to improve our radial velocity precision with MINERVA by analyzing the stability of our spectrograph’s instrumental profile. We have taken several spectra of the daytime sky each month and have checked for variability over a span of six months. We investigate the variation over time to see if it correlates with temperature and pressure changes in the spectrograph. We discuss the implications of our daytime sky spectra and how the instrumental profile’s stability may be improved.
An evaluation of study design for estimating a time-of-day noise weighting
NASA Technical Reports Server (NTRS)
Fields, J. M.
1986-01-01
The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.
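A concrete instance of such a time-of-day weighting is the day-night average sound level, in which nighttime levels receive a decibel penalty before energy-averaging; the size of that penalty is exactly the kind of weight these surveys attempt to estimate. A sketch using the conventional 10 dB value as the default:

```python
import math

def day_night_level(L_day, L_night, penalty_db=10.0):
    """Day-night average sound level: energy-average of 15 daytime hours
    and 9 nighttime hours, with nighttime levels raised by penalty_db
    (the time-of-day weight) before averaging."""
    e_day = 15.0 * 10.0 ** (L_day / 10.0)
    e_night = 9.0 * 10.0 ** ((L_night + penalty_db) / 10.0)
    return 10.0 * math.log10((e_day + e_night) / 24.0)
```

The estimation problem discussed above is that when daytime and nighttime levels are highly correlated across sites, many different penalty values fit the annoyance data about equally well.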
NASA Astrophysics Data System (ADS)
Johnson, David M.
2016-10-01
An exploratory assessment was undertaken to determine the correlation strength and optimal timing of several commonly used Moderate Resolution Imaging Spectroradiometer (MODIS) composited imagery products against crop yields for 10 globally significant agricultural commodities. The crops analyzed included barley, canola, corn, cotton, potatoes, rice, sorghum, soybeans, sugarbeets, and wheat. The MODIS data investigated included the Normalized Difference Vegetation Index (NDVI), Fraction of Photosynthetically Active Radiation (FPAR), Leaf Area Index (LAI), and Gross Primary Production (GPP), in addition to daytime Land Surface Temperature (DLST) and nighttime LST (NLST). The imagery utilized all had 8-day time intervals, but NDVI had a 250 m spatial resolution while the other products were 1000 m. These MODIS datasets were also assessed from both the Terra and Aqua satellites, with their differing overpass times, to document any differences. A follow-on analysis, using the Terra 250 m NDVI data as a benchmark, looked at the yield prediction utility of NDVI at two spatial scales (250 m vs. 1000 m), two time precisions (8-day vs. 16-day), and also assessed the Enhanced Vegetation Index (EVI, at 250 m, 16-day). The analyses spanned the major farming areas of the United States (US) from the summers of 2008-2013 and used annual county-level average crop yield data from the US Department of Agriculture as a basis. All crops, except rice, showed at least some positive correlations to each of the vegetation related indices in the middle of the growing season, with NDVI performing slightly better than FPAR. LAI was somewhat less strongly correlated and GPP weak overall. Conversely, some of the crops, particularly canola, corn, and soybeans, also showed negative correlations to DLST mid-summer. NLST, however, was never correlated to crop yield, regardless of the crop or seasonal timing. Differences between the Terra and Aqua results were found to be minimal. 
The 1000 m resolution NDVI showed somewhat poorer performance than the 250 m data, suggesting finer spatial resolution is helpful but not a necessity. The 8-day and 16-day NDVI relationships to yields were very similar aside from the difference in temporal precision. Finally, the EVI often showed the very best performance of all the variables, all things considered.
Temporal processing dysfunction in schizophrenia.
Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P
2008-07-01
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.
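The temporal bisection task described above yields, for each participant, the proportion of "long" responses at each probe duration; the bisection point is the duration judged "long" half the time, and the steepness of the curve indexes temporal precision. A minimal sketch using linear interpolation of the 0.5 crossing rather than the study's quantitative model:

```python
import numpy as np

def bisection_point(durations, p_long):
    """Duration (ms) at which the proportion of 'long' responses crosses
    0.5, by linear interpolation; p_long must be increasing. The
    0.25-0.75 span can be read off the same way as a precision index."""
    d = np.asarray(durations, float)
    p = np.asarray(p_long, float)
    return float(np.interp(0.5, p, d))
```

A flatter psychometric function (larger 0.25-0.75 span) corresponds to the reduced temporal precision the study reports for auditory durations in schizophrenia.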
Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Meier, Thomas
2016-04-01
Induced events can be a risk to local infrastructure and need to be understood and evaluated. They also represent a chance to learn more about the reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection and a phase onset time determination algorithm are applied to induced seismic events in geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on cross-correlation of continuous data taken from the local seismic network with master events. It distinguishes between events from different reservoirs and within the individual reservoirs. Furthermore, it provides a location and magnitude estimation. Data from 2007 to 2014 are processed and compared with other detections using the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed for spatial and temporal clustering. Furthermore, the number of events is compared to the existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times which can be used for localization and tomography studies. Some 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show a high precision, with mean residuals to manual phase picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate the influence on location precision.
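The STA/LTA detector used as a comparison baseline is the classic short-term over long-term average ratio: a trigger is declared where the ratio of a short running average of signal energy to a long one exceeds a threshold. A sketch using simple moving averages of the squared trace:

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """Return the STA/LTA ratio of the squared signal, with the short
    (n_sta) and long (n_lta) windows ending at the same sample, computed
    via cumulative sums for efficiency."""
    x2 = np.asarray(signal, float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(x2)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    m = min(len(sta), len(lta))  # align the two series at their ends
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)
```

Unlike the master-event cross-correlation detector described in the abstract, STA/LTA needs no template, but it is less selective between reservoirs and more sensitive to noise bursts.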
Resolving discrete pulsar spin-down states with current and future instrumentation
NASA Astrophysics Data System (ADS)
Shaw, B.; Stappers, B. W.; Weltevrede, P.
2018-04-01
An understanding of pulsar timing noise offers the potential to improve the timing precision of a large number of pulsars as well as facilitating our understanding of pulsar magnetospheres. For some sources, timing noise is attributable to a pulsar switching between two different spin-down rates (ν̇). Such transitions may be common but difficult to resolve using current techniques. In this work, we use simulations of ν̇-variable pulsars to investigate the likelihood of resolving individual ν̇ transitions. We inject step changes in the value of ν̇ with a wide range of amplitudes and switching time-scales. We then attempt to redetect these transitions using standard pulsar timing techniques. The pulse arrival-time precision and the observing cadence are varied. Limits on ν̇ detectability based on the effects such transitions have on the timing residuals are derived. With the typical cadences and timing precision of current timing programmes, we find that we are insensitive to a large region of Δν̇ parameter space that encompasses small, short time-scale switches. We find, where the rotation and emission states are correlated, that using changes to the pulse shape to estimate ν̇ transition epochs can improve detectability in certain scenarios. The effects of cadence on Δν̇ detectability are discussed, and we make comparisons with a known population of intermittent and mode-switching pulsars. We conclude that for short time-scale, small switches, cadence should not be compromised when new generations of ultra-sensitive radio telescopes are online.
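The injected signal in such simulations is easy to picture: a step change Δν̇ at time t_s adds an extra phase 0.5·Δν̇·(t − t_s)² relative to the pre-switch timing model, which appears in the residuals as a growing quadratic after the switch. A toy version of that injection (function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def residuals_from_nudot_step(t, nu, dnudot, t_switch):
    """Timing residuals (seconds) induced by a step change dnudot (Hz/s)
    in the spin-down rate at t_switch, for a pulsar of spin frequency nu:
    the extra phase 0.5*dnudot*(t - t_switch)^2 converted to time."""
    t = np.asarray(t, float)
    dt = np.where(t > t_switch, t - t_switch, 0.0)
    return 0.5 * dnudot * dt ** 2 / nu
```

Whether such a quadratic is detectable depends on how quickly it grows above the arrival-time uncertainty between observing epochs, which is why both cadence and timing precision set the resolvable region of Δν̇ parameter space.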
Study on digital closed-loop system of silicon resonant micro-sensor
NASA Astrophysics Data System (ADS)
Xu, Yefeng; He, Mengke
2008-10-01
Designing a compact, highly reliable weak-signal extraction system is a critical problem in the application of silicon resonant micro-sensors. The closed-loop testing system based on an FPGA uses software in place of a hardware circuit, which dramatically decreases the system's mass and power consumption and makes the system more compact. Both correlation theory and a frequency-scanning scheme are used to extract the weak signal, and an adaptive frequency-scanning algorithm keeps the system real-time. An error model was analyzed to show how to enhance the system's measurement precision. The experimental results show that the FPGA-based closed-loop testing system offers low power consumption, high precision, high speed, and real-time operation, and that it is suitable for different kinds of silicon resonant micro-sensors.
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to significantly contribute to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement of precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, a better understanding of the dispersion patterns in real samples could not be obtained from model systems. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years.
Evaluation of the precision of contrast sensitivity function assessment on a tablet device
Dorr, Michael; Lesmes, Luis A.; Elze, Tobias; Wang, Hui; Lu, Zhong-Lin; Bex, Peter J.
2017-01-01
The contrast sensitivity function (CSF) relates the visibility of a spatial pattern to both its size and contrast, and is therefore a more comprehensive assessment of visual function than acuity, which only determines the smallest resolvable pattern size. Because of the additional dimension of contrast, estimating the CSF can be more time-consuming. Here, we compare two methods for rapid assessment of the CSF that were implemented on a tablet device. For a single-trial assessment, we asked 63 myopes and 38 emmetropes to tap the peak of a “sweep grating” on the tablet’s touch screen. For a more precise assessment, subjects performed 50 trials of the quick CSF method in a 10-AFC letter recognition task. Tests were performed with and without optical correction, and in monocular and binocular conditions; one condition was measured twice to assess repeatability. Results show that both methods are highly correlated; using both common and novel measures for test-retest repeatability, however, the quick CSF delivers more precision with testing times of under three minutes. Further analyses show how a population prior can improve convergence rate of the quick CSF, and how the multi-dimensional output of the quick CSF can provide greater precision than scalar outcome measures.
Intuitive Sense of Number Correlates With Math Scores on College-Entrance Examination
Libertus, Melissa E.; Odic, Darko; Halberda, Justin
2012-01-01
Many educated adults possess exact mathematical abilities in addition to an approximate, intuitive sense of number, often referred to as the Approximate Number System (ANS). Here we investigate the link between ANS precision and mathematics performance in adults by testing participants on an ANS-precision test and collecting their scores on the Scholastic Aptitude Test (SAT), a standardized college-entrance exam in the USA. In two correlational studies, we found that ANS precision correlated with SAT-Quantitative (i.e., mathematics) scores. This relationship remained robust even when controlling for SAT-Verbal scores, suggesting a small but specific relationship between our primitive sense for number and formal mathematical abilities.
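The control analysis mentioned above (the ANS-SAT link surviving when SAT-Verbal is held constant) is a partial correlation, which can be sketched by residualizing both variables on the covariate and correlating the residuals. A minimal sketch with hypothetical variable names:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y controlling for covariate z:
    regress each on z (with intercept), then correlate the residuals."""
    z1 = np.column_stack([np.ones(len(z)), np.asarray(z, float)])
    rx = x - z1 @ np.linalg.lstsq(z1, np.asarray(x, float), rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, np.asarray(y, float), rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

Here x might be ANS precision (e.g., a Weber fraction), y the SAT-Quantitative score, and z the SAT-Verbal score; a nonzero partial correlation is what supports a specific number-math link.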
Learmonth, Yvonne C; Dlugonski, Deirdre D; Pilutti, Lara A; Sandroff, Brian M; Motl, Robert W
2013-11-01
Assessing walking impairment in those with multiple sclerosis (MS) is common; however, little is known about the reliability, precision and clinically important change of walking outcomes. The purpose of this study was to determine the reliability, precision and clinically important change of the Timed 25-Foot Walk (T25FW), Six-Minute Walk (6MW), Multiple Sclerosis Walking Scale-12 (MSWS-12) and accelerometry. Data were collected from 82 persons with MS at two time points, six months apart. Analyses were undertaken for the whole sample and stratified based on disability level and usage of walking aids. Intraclass correlation coefficient (ICC) analyses established reliability; standard error of measurement (SEM) and coefficient of variation (CV) determined precision; and minimal detectable change (MDC) defined clinically important change. All outcome measures were reliable, with precision and MDC varying between measures in the whole sample: T25FW: ICC=0.991; SEM=1 s; CV=6.2%; MDC=2.7 s (36%), 6MW: ICC=0.959; SEM=32 m; CV=6.2%; MDC=88 m (20%), MSWS-12: ICC=0.927; SEM=8; CV=27%; MDC=22 (53%), accelerometry counts/day: ICC=0.883; SEM=28450; CV=17%; MDC=78860 (52%), accelerometry steps/day: ICC=0.907; SEM=726; CV=16%; MDC=2011 (45%). Variation in these estimates was seen based on disability level and walking aid. The reliability of these outcomes is good and falls within acceptable ranges. Precision and clinically important change estimates provide guidelines for interpreting these outcomes in clinical and research settings.
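The precision and clinically-important-change statistics reported above follow standard formulas: SEM = SD·√(1 − ICC) and, at the 95% level, MDC = 1.96·√2·SEM. A short sketch:

```python
import math

def sem_and_mdc(sd, icc):
    """Standard error of measurement and 95% minimal detectable change
    from a sample standard deviation and an intraclass correlation."""
    sem = sd * math.sqrt(1.0 - icc)
    mdc = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc
```

As a check against the values quoted above, an SEM of 32 m gives MDC = 1.96·√2·32 ≈ 89 m, consistent with the 88 m reported for the 6MW.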
Effect of inhibitory feedback on correlated firing of spiking neural network.
Xie, Jinli; Wang, Zhijie
2013-08-01
Understanding the properties and mechanisms that generate different forms of correlation is critical for determining their role in cortical processing. Research on the retina, visual cortex, sensory cortex, and computational models suggests that fast correlation with high temporal precision appears consistent with common input, and correlation on a slow time scale likely involves feedback. Based on a feedback spiking neural network model, we investigate the role of inhibitory feedback in shaping correlations on a time scale of 100 ms. Notably, the relationship between the correlation coefficient and inhibitory feedback strength is non-monotonic. Further, computational simulations show how firing rate and oscillatory activity form the basis of the mechanisms underlying this relationship. When the mean firing rate is held fixed, the correlation coefficient increases monotonically with inhibitory feedback, but the correlation coefficient keeps decreasing when the network has no oscillatory activity. Our findings reveal that two opposing effects of the inhibitory feedback on the firing activity of the network contribute to the non-monotonic relationship between the correlation coefficient and the strength of the inhibitory feedback. The inhibitory feedback affects the correlated firing activity by modulating the intensity and regularity of the spike trains. Finally, the non-monotonic relationship is replicated with varying transmission delay and different spatial network structure, demonstrating the universality of the results.
Measuring Time-of-Flight in an Ultrasonic LPS System Using Generalized Cross-Correlation
Villladangos, José Manuel; Ureña, Jesús; García, Juan Jesús; Mazo, Manuel; Hernández, Álvaro; Jiménez, Ana; Ruíz, Daniel; De Marziani, Carlos
2011-01-01
In this article, a time-of-flight detection technique in the frequency domain is described for an ultrasonic Local Positioning System (LPS) based on encoded beacons. Beacon transmissions have been synchronized and become simultaneous by means of the DS-CDMA (Direct-Sequence Code Division Multiple Access) technique. Every beacon has been associated to a 255-bit Kasami code. The detection of signal arrival instant at the receiver, from which the distance to each beacon can be obtained, is based on the application of the Generalized Cross-Correlation (GCC), by using the cross-spectral density between the received signal and the sequence to be detected. Prior filtering to enhance the frequency components around the carrier frequency (40 kHz) has improved estimations when obtaining the correlation function maximum, which implies an improvement in distance measurement precision. Positioning has been achieved by using hyperbolic trilateration, based on the Time Differences of Arrival (TDOA) between a reference beacon and the others.
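The GCC step can be sketched as follows: form the cross-spectral density between the received signal and the known beacon sequence, optionally keep only components in a band around the carrier (a crude brick-wall version of the pre-filtering described above), and locate the correlation peak. An illustrative sketch, with a random binary sequence standing in for the 255-bit Kasami code:

```python
import numpy as np

def gcc_time_of_flight(received, code, fs, band=None):
    """Estimate the arrival time (s) of a known code within a received
    signal sampled at fs, via frequency-domain cross-correlation.
    band=(f_lo, f_hi) zeroes the cross-spectrum outside that band."""
    n = len(received) + len(code) - 1
    cross = np.fft.rfft(received, n) * np.conj(np.fft.rfft(code, n))
    if band is not None:
        f = np.fft.rfftfreq(n, 1.0 / fs)
        cross = np.where((f >= band[0]) & (f <= band[1]), cross, 0.0)
    corr = np.fft.irfft(cross, n)
    return int(np.argmax(corr)) / fs
```

Repeating this per beacon code on the same received buffer gives the arrival instants whose differences feed the TDOA hyperbolic trilateration.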
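The GCC detection step described above can be sketched as a frequency-domain cross-correlation whose peak gives the arrival lag. This is a generic sketch using the common PHAT weighting on synthetic data, not the authors' implementation (the carrier modulation and bandpass pre-filtering are omitted, and all names and numbers are invented):

```python
import numpy as np

def gcc_phat_tof(received, reference, fs):
    """Estimate time of flight from the peak of the generalized
    cross-correlation (PHAT weighting) between the received signal
    and a known reference sequence (e.g. a Kasami code)."""
    n = len(received) + len(reference)
    # cross-spectral density between received signal and reference
    R = np.fft.rfft(received, n) * np.conj(np.fft.rfft(reference, n))
    R /= np.abs(R) + 1e-12          # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n)
    lag = int(np.argmax(cc))        # sample index of correlation peak
    return lag / fs                 # time of flight in seconds

# Toy check: a copy of a random +/-1 code delayed by 120 samples
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=255)
fs = 400_000
rx = np.concatenate([np.zeros(120), code])
tof = gcc_phat_tof(rx, code, fs)    # peak at the 120-sample delay
```

The PHAT weighting whitens the spectrum so the correlation collapses to a sharp peak at the true delay, which is why it is popular for time-of-flight estimation.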
Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc
2015-09-21
In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
Varnes, D.J.; Bufe, C.G.
1996-01-01
Seismic activity in the 10 months preceding the 1980 February 14, mb 4.8 earthquake in the Virgin Islands, reported on by Frankel in 1982, consisted of four principal cycles. Each cycle began with a relatively large event or series of closely spaced events, and the duration of the cycles progressively shortened by a factor of about 3/4. Had this regular shortening of the cycles been recognized prior to the earthquake, the time of the next episode of seismicity (the main shock) might have been closely estimated 41 days in advance. That this event could be much larger than the previous events is indicated by time-to-failure analysis of the accelerating rise in released seismic energy, using a non-linear time- and slip-predictable foreshock model. Examination of the timing of all events in the sequence shows an even higher degree of order. Rates of seismicity, measured by consecutive interevent times, when plotted on an iteration diagram of a rate versus the succeeding rate, form a triangular circulating trajectory. The trajectory becomes an ascending helix if extended in a third dimension, time. This construction reveals additional and precise relations among the time intervals between times of relatively high or relatively low rates of seismic activity, including period halving and doubling. The set of 666 time intervals between all possible pairs of the 37 recorded events appears to be a fractal; the set of time points that define the intervals has a finite, non-integer correlation dimension of 0.70. In contrast, the average correlation dimension of 50 random sequences of 37 events is significantly higher, close to 1.0. In a similar analysis, the set of distances between pairs of epicentres has a fractal correlation dimension of 1.52. Well-defined cycles, numerous precise ratios among time intervals, and a non-random temporal fractal dimension suggest that the seismic series is not a random process, but rather the product of a deterministic dynamic system.
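The correlation dimension used above is conventionally estimated with the Grassberger-Procaccia approach: compute the correlation integral C(r), the fraction of point pairs closer than r, and fit the slope of log C(r) against log r. A minimal sketch on synthetic data (illustrative only, not the event catalog from the study):

```python
import numpy as np

def correlation_dimension(points, radii):
    """Grassberger-Procaccia estimate of the correlation dimension of
    a 1-D point set from the scaling of the correlation integral."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # distances between all distinct pairs (n*(n-1)/2 of them)
    d = np.abs(pts[:, None] - pts[None, :])[np.triu_indices(n, k=1)]
    # correlation integral: fraction of pairs closer than r
    C = np.array([(d < r).mean() for r in radii])
    # dimension = slope of log C(r) vs log r in the scaling region
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# 37 uniformly random "event times": expect a dimension near 1,
# whereas a clustered (fractal) set gives a lower value (e.g. 0.70)
rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0.0, 300.0, 37))
radii = np.logspace(0.7, 1.5, 8)   # r from ~5 to ~32 time units
dim = correlation_dimension(times, radii)
```

Choosing the radii inside a genuine scaling region matters in practice; too-small radii leave empty counts and too-large ones saturate C(r).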
Detector Development for the abBA Experiment.
Seo, P-N; Bowman, J D; Mitchell, G S; Penttila, S I; Wilburn, W S
2005-01-01
We have developed a new type of field-expansion spectrometer to measure the neutron beta decay correlations (a, b, B, and A). A precision measurement of these correlations places stringent requirements on charged particle detectors. The design employs large area segmented silicon detectors to detect both protons and electrons in coincidence. Other requirements include good energy resolution (< 5 keV), a thin dead layer to allow observation of 30-keV protons, fast timing resolution (~1 ns) to reconstruct electron-backscattering events, and nearly unity efficiency. We report results of testing commercially available surface-barrier silicon detectors for energy resolution and timing performance, and measurement of the dead-layer thickness of ion-implanted silicon detectors with a 3.2 MeV alpha source.
Quantum interference and complex photon statistics in waveguide QED
NASA Astrophysics Data System (ADS)
Zhang, Xin H. H.; Baranger, Harold U.
2018-02-01
We obtain photon statistics by using a quantum jump approach tailored to a system in which one or two qubits are coupled to a one-dimensional waveguide. Photons confined in the waveguide have strong interference effects, which are shown to play a vital role in quantum jumps and photon statistics. For a single qubit, for instance, the bunching of transmitted photons is heralded by a jump that increases the qubit population. We show that the distribution and correlations of waiting times offer a clearer and more precise characterization of photon bunching and antibunching. Further, the waiting times can be used to characterize complex correlations of photons which are hidden in g(2)(τ), such as a mixture of bunching and antibunching.
Minakata, Hisakazu; Parke, Stephen J.
2013-06-04
Precision measurement of the leptonic CP-violating phase δ will suffer from the then-surviving large uncertainty of sin²θ23 of 10–20% in the experimentally interesting region near maximal mixing of θ23. We advocate a new method for determining both θ23 and δ at the same time using only the νe and ν̄e appearance channels, and show that sin²θ23 can be determined automatically with much higher accuracy, approximately a factor of six, than sinδ. In this method, we identify a new degeneracy for the simultaneous determination of θ23 and δ, the θ23 intrinsic degeneracy, which must be resolved in order to achieve precision measurement of these two parameters. Spectral information around the vacuum oscillation maxima is shown to be the best way to resolve this degeneracy.
Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses
Das, Jayajit
2016-01-01
Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results.
Quantitative analysis of the correlations in the Boltzmann-Grad limit for hard spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulvirenti, M.
2014-12-09
In this contribution I consider the problem of the validity of the Boltzmann equation for a system of hard spheres in the Boltzmann-Grad limit. I briefly review the results available nowadays, with a particular emphasis on the celebrated Lanford validity theorem. Finally, I present some recent results, obtained in collaboration with S. Simonella, concerning a quantitative analysis of the propagation of chaos. More precisely, we introduce a quantity (the correlation error) measuring how far a j-particle rescaled correlation function at time t (sufficiently small) is from full statistical independence. Roughly speaking, a correlation error of order k measures (in the context of the BBGKY hierarchy) the event in which k tagged particles form a recolliding group.
Precision half-life measurement of 17F
NASA Astrophysics Data System (ADS)
Brodeur, M.; Nicoloff, C.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Gupta, Y. K.; Hall, M. R.; Hall, O.; Hu, J.; Kelly, J. M.; Kolata, J. J.; Long, J.; O'Malley, P.; Schultz, B. E.
2016-02-01
Background: The precise determination of ft values for superallowed mixed transitions between mirror nuclides is gaining attention, as they could provide an avenue to test the theoretical corrections used to extract the Vud matrix element from superallowed pure Fermi transitions. The 17F decay is particularly interesting as it proceeds completely to the ground state of 17O, removing the need for branching-ratio measurements. The dominant uncertainty on the ft value of the 17F mirror transition stems from a number of conflicting half-life measurements. Purpose: A precision half-life measurement of 17F was performed and compared to previous results. Methods: The half-life was determined from β counting of 17F implanted on a Ta foil that was removed from the beam for counting. The 17F beam was produced by a transfer reaction and separated by the TwinSol facility of the Nuclear Science Laboratory of the University of Notre Dame. Results: The measured value of t1/2 new=64.402 (42) s is in agreement with several past measurements and represents one of the most precise measurements to date. In anticipation of future measurements of the correlation parameters for the decay, and using the new world average t1/2 world=64.398 (61) s, we present a new estimate of the mixing ratio ρ for the mixed transition, as well as the correlation parameters obtained by assuming Standard Model validity. Conclusions: The relative uncertainty on the new world average for the half-life is dominated by the large χ2=31 of the existing measurements. More precision measurements with different systematics are needed to remedy the situation.
On information loss in AdS3/CFT2
Fitzpatrick, A. Liam; Kaplan, Jared; Li, Daliang; ...
2016-05-18
We discuss information loss from black hole physics in AdS3, focusing on two sharp signatures infecting CFT2 correlators at large central charge c: ‘forbidden singularities’ arising from Euclidean-time periodicity due to the effective Hawking temperature, and late-time exponential decay in the Lorentzian region. We study an infinite class of examples where forbidden singularities can be resolved by non-perturbative effects at finite c, and we show that the resolution has certain universal features that also apply in the general case. Analytically continuing to the Lorentzian regime, we find that the non-perturbative effects that resolve forbidden singularities qualitatively change the behavior of correlators at times t ~ SBH, the black hole entropy. This may resolve the exponential decay of correlators at late times in black hole backgrounds. By Borel resumming the 1/c expansion of exact examples, we explicitly identify ‘information-restoring’ effects from heavy states that should correspond to classical solutions in AdS3. Lastly, our results suggest a line of inquiry towards a more precise formulation of the gravitational path integral in AdS3.
Influence of the Time Scale on the Construction of Financial Networks
Emmert-Streib, Frank; Dehmer, Matthias
2010-01-01
Background In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. Methodology/Principal Findings For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That means we include an edge in the network only if a correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is studied in this paper. Conclusions/Significance Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights into the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
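The edge rule described above (include an edge only when a correlation coefficient differs significantly from zero) can be sketched with a Fisher z-test on the correlation matrix. This is an illustrative reconstruction with toy data, not the authors' code, and the significance test they used may differ:

```python
import numpy as np

def correlation_network(returns):
    """Unweighted, undirected network from a (T x N) price/return
    matrix: edge i-j iff the correlation between series i and j is
    significantly different from zero (two-sided 5% Fisher z-test)."""
    T = returns.shape[0]
    C = np.corrcoef(returns, rowvar=False)
    # Fisher z-transform; z*sqrt(T-3) is ~N(0,1) under rho = 0
    z = np.arctanh(np.clip(C, -0.999999, 0.999999)) * np.sqrt(T - 3)
    A = (np.abs(z) > 1.96).astype(int)   # 1.96 = 5% two-sided cutoff
    np.fill_diagonal(A, 0)               # no self-loops
    return A

# Toy data: two stocks driven by a common factor, one independent
rng = np.random.default_rng(2)
T = 250                                  # roughly one trading year
common = rng.normal(size=T)
returns = np.column_stack([
    common + 0.1 * rng.normal(size=T),   # stock A
    common + 0.1 * rng.normal(size=T),   # stock B, tied to A
    rng.normal(size=T),                  # stock C, independent
])
A = correlation_network(returns)         # A[0, 1] == 1 (strong link)
```

Shortening the interval length T widens the null distribution of the correlation estimate, so fewer edges pass the test, which is one way the time scale shapes the resulting network.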
Navigators for motion detection during real-time MRI-guided radiotherapy
NASA Astrophysics Data System (ADS)
Stam, Mette K.; Crijns, Sjoerd P. M.; Zonnenberg, Bernard A.; Barendrecht, Maurits M.; van Vulpen, Marco; Lagendijk, Jan J. W.; Raaymakers, Bas W.
2012-11-01
An MRI-linac system provides direct MRI feedback and with that the possibility of adapting radiation treatments to the actual tumour position. This paper addresses the use of fast 1D MRI, pencil-beam navigators, for this feedback. The accuracy of using navigators was determined on a moving phantom. The possibility of organ tracking and breath-hold monitoring based on navigator guidance was shown for the kidney. Navigators are accurate within 0.5 mm and the analysis has a minimal time lag smaller than 30 ms as shown for the phantom measurements. The correlation of 2D kidney images and navigators shows the possibility of complete organ tracking. Furthermore the breath-hold monitoring of the kidney is accurate within 1.5 mm, allowing gated radiotherapy based on navigator feedback. Navigators are a fast and precise method for monitoring and real-time tracking of anatomical landmarks. As such, they provide direct MRI feedback on anatomical changes for more precise radiation delivery.
Absence of Quantum Time Crystals.
Watanabe, Haruki; Oshikawa, Masaki
2015-06-26
In analogy with crystalline solids around us, Wilczek recently proposed the idea of "time crystals" as phases that spontaneously break the continuous time translation into a discrete subgroup. The proposal stimulated further studies and vigorous debates whether it can be realized in a physical system. However, a precise definition of the time crystal is needed to resolve the issue. Here we first present a definition of time crystals based on the time-dependent correlation functions of the order parameter. We then prove a no-go theorem that rules out the possibility of time crystals defined as such, in the ground state or in the canonical ensemble of a general Hamiltonian, which consists of not-too-long-range interactions.
High resolution distributed time-to-digital converter (TDC) in a White Rabbit network
NASA Astrophysics Data System (ADS)
Pan, Weibin; Gong, Guanghua; Du, Qiang; Li, Hongming; Li, Jianmin
2014-02-01
The Large High Altitude Air Shower Observatory (LHAASO) project consists of a complex detector array with over 6000 detector nodes spread over a 1.2 km² area. The arrival times of shower particles are captured by time-to-digital converters (TDCs) in the detectors' frontend electronics; the arrival directions of high-energy cosmic rays are then reconstructed from the space-time information of all detector nodes. To guarantee the angular resolution of 0.5°, a time synchronization of 500 ps (RMS) accuracy and 100 ps precision must be achieved among all TDC nodes. White Rabbit (WR), a technology that enhances Gigabit Ethernet, has shown the capability of delivering sub-nanosecond accuracy and picosecond precision of synchronization over standard data packet transfer. In this paper we demonstrate a distributed TDC prototype system combining an FPGA-based TDC and the WR technology. With the time synchronization and data transfer services from a compact WR node, separate FPGA-TDC nodes can be combined to provide uniform time measurement information for correlated events. The design details and test performance are described in the paper.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
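First-order propagation of measurement uncertainties through a defining functional expression, including correlated precision errors, follows u_f² = gᵀCg, with g the vector of partial derivatives and C the covariance matrix. A generic sketch under that textbook formula (not the paper's calibration code; the example values are invented):

```python
import numpy as np

def propagated_uncertainty(grad, u, corr):
    """First-order (Taylor) propagation: u_f = sqrt(g^T C g), where g
    holds the partial derivatives of f at the operating point, and C
    is the covariance matrix built from standard uncertainties u_i
    and their correlation matrix."""
    g = np.asarray(grad, float)
    u = np.asarray(u, float)
    C = np.outer(u, u) * np.asarray(corr, float)
    return float(np.sqrt(g @ C @ g))

# Example: f = x*y at x=2, y=3 with u_x=0.1, u_y=0.2, uncorrelated.
# grad = (df/dx, df/dy) = (y, x) = (3, 2)
u_f = propagated_uncertainty([3.0, 2.0], [0.1, 0.2], np.eye(2))
# sqrt((3*0.1)^2 + (2*0.2)^2) = sqrt(0.25) ≈ 0.5
```

Passing an off-diagonal correlation matrix instead of `np.eye` handles the correlated-error case discussed in the abstract.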
NASA Astrophysics Data System (ADS)
Dall'Ara, Enrico; Peña-Fernández, Marta; Palanca, Marco; Giorgi, Mario; Cristofolini, Luca; Tozzi, Gianluca
2017-11-01
Accurate measurement of local strain in heterogeneous and anisotropic bone tissue is fundamental to understand the pathophysiology of musculoskeletal diseases, to evaluate the effect of interventions from preclinical studies, and to optimize the design and delivery of biomaterials. Digital volume correlation (DVC) can be used to measure the three-dimensional displacement and strain fields from micro-Computed Tomography (µCT) images of loaded specimens. However, this approach is affected by the quality of the input images, by the morphology and density of the tissue under investigation, by the correlation scheme, and by the operational parameters used in the computation. Therefore, for each application the precision of the method should be evaluated. In this paper we present the results collected from datasets analyzed in previous studies as well as new data from a recent experimental campaign for characterizing the relationship between the precision of two different DVC approaches and the spatial resolution of the outputs. Different bone structures scanned with laboratory source µCT or Synchrotron light µCT (SRµCT) were processed in zero-strain tests to evaluate the precision of the DVC methods as a function of the subvolume size, which ranged from 8 to 2500 micrometers. The results confirmed that for every microstructure the precision of DVC improves for larger subvolume size, following power laws. However, for the first time large differences in the precision of both local and global DVC approaches have been highlighted when SRµCT or in vivo µCT images were used instead of conventional ex vivo µCT. These findings suggest that in situ mechanical testing protocols applied in SRµCT facilities should be optimized in order to allow DVC analyses of localized strain measurements. Moreover, for in vivo µCT applications DVC analyses should be performed only with relatively coarse spatial resolution for achieving a reasonable precision of the method.
In conclusion, we have extensively shown that the precision of both tested DVC approaches is affected by different bone structures, different input image resolution and different subvolume sizes. Before each specific application DVC users should always apply a similar approach to find the best compromise between precision and spatial resolution of the measurements.
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
NASA Astrophysics Data System (ADS)
Trabant, Chad; Thurber, Clifford; Leith, William
2002-07-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high-precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations.
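Waveform cross-correlation picks of the kind used above are typically obtained by locating the peak of the cross-correlation between two similar waveforms, refined to subsample precision with a three-point parabolic fit. A generic sketch with a synthetic wavelet (not the study's processing chain; names and numbers are illustrative):

```python
import numpy as np

def relative_pick(w1, w2, dt):
    """Relative arrival-time shift of w1 with respect to w2 from the
    cross-correlation peak, refined by parabolic interpolation."""
    cc = np.correlate(w1, w2, mode="full")
    k = int(np.argmax(cc))
    # three-point parabolic fit around the discrete peak
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (k - (len(w2) - 1)) * dt   # lag in seconds

# A Gaussian wavelet and a copy delayed by 7 samples (dt = 10 ms)
t = np.arange(256) * 0.01
wavelet = np.exp(-(((t - 1.0) / 0.05) ** 2))
delayed = np.roll(wavelet, 7)
shift = relative_pick(delayed, wavelet, 0.01)   # ≈ 0.07 s
```

The parabolic refinement is what lets relative picks reach a small fraction of the sample interval, which is the basis for the high-precision relative locations in JHD-style relocation.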
Delo, Caroline; Leclercq, Pol; Martins, Dimitri; Pirson, Magali
2015-08-01
The objectives of this study were to analyze the variation of surgical time and of disposable material (DM) costs per surgical procedure, and to analyze the association between DM costs and surgical time. Data were recorded in an operating room of a 419-bed general hospital over a period of three months (n = 1556 surgical procedures). Disposable material used per procedure was recorded through a barcode-scanning method. The average cost (standard deviation) of disposable material was €183.66 (€183.44). The mean surgical time (standard deviation) was 96 min (63). Results showed that the homogeneity of operating time and DM costs per surgical procedure was quite good. The correlation between surgical time and DM costs is not high (r = 0.65). In a context of Diagnosis Related Group (DRG) based hospital payment, it is important that cost information systems are able to calculate costs per case precisely. Our results show that the correlation between surgical time and the costs of disposable materials is not good; therefore, empirical data or itemized lists should be used instead of surgical time as a cost driver for allocating disposable material costs to patients.
Evaluation of AUC(0-4) predictive methods for cyclosporine in kidney transplant patients.
Aoyama, Takahiko; Matsumoto, Yoshiaki; Shimizu, Makiko; Fukuoka, Masamichi; Kimura, Toshimi; Kokubun, Hideya; Yoshida, Kazunari; Yago, Kazuo
2005-05-01
Cyclosporine (CyA) is the most commonly used immunosuppressive agent in patients who undergo kidney transplantation. Dosage adjustment of CyA is usually based on trough levels. Recently, the area under the concentration-time curve during the first 4 h after CyA administration (AUC(0-4)) has been replacing trough levels. The aim of this study was to compare the predictive values obtained using three different methods of AUC(0-4) monitoring. AUC(0-4) was calculated from 0 to 4 h in early and stable renal transplant patients using the trapezoidal rule. The predicted AUC(0-4) was calculated using three different methods: the multiple regression equation reported by Uchida et al.; Bayesian estimation for modified population pharmacokinetic parameters reported by Yoshida et al.; and modified population pharmacokinetic parameters reported by Cremers et al. The predicted AUC(0-4) was assessed on the basis of predictive bias, precision, and correlation coefficient. The predicted AUC(0-4) values obtained using the three methods through measurement of three blood samples showed small differences in predictive bias, precision, and correlation coefficient. In the prediction of AUC(0-4) from one blood sample in stable renal transplant patients, the performance of the regression equation reported by Uchida depended on sampling time. On the other hand, the performance of Bayesian estimation with modified pharmacokinetic parameters reported by Yoshida through measurement of one blood sample, which is not dependent on sampling time, showed a small difference in the correlation coefficient. The prediction of AUC(0-4) using a regression equation required accurate sampling time. In this study, the prediction of AUC(0-4) using Bayesian estimation did not require accurate sampling time in the AUC(0-4) monitoring of CyA. Thus Bayesian estimation is assumed to be clinically useful in the dosage adjustment of CyA.
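The reference AUC(0-4) above is computed with the linear trapezoidal rule from the measured concentration-time points. A minimal sketch (the concentrations below are hypothetical, purely to illustrate the arithmetic):

```python
import numpy as np

def auc_trapezoidal(times_h, conc):
    """Area under the concentration-time curve by the linear
    trapezoidal rule (sum of trapezoid areas between samples)."""
    t = np.asarray(times_h, float)
    c = np.asarray(conc, float)
    return float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

# Hypothetical blood samples at 0, 1, 2, 3 and 4 h post-dose (ng/mL)
t_h = [0.0, 1.0, 2.0, 3.0, 4.0]
c_ng = [150.0, 900.0, 1100.0, 700.0, 450.0]
auc04 = auc_trapezoidal(t_h, c_ng)   # 3000.0 ng*h/mL
```

The regression and Bayesian methods in the study then try to predict this quantity from fewer samples.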
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods to complete the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria including accuracy, robustness, precision, and efficiency for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type neural network and the multiple imputation strategy using Markov chain Monte Carlo based on expectation-maximization (EM-MCMC) are computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account for evaluating imputation performances. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.
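Of the simple methods named above, the normal ratio (NR) method fills a missing value at a target station with neighbour observations scaled by the ratios of long-term station means. A minimal sketch with invented numbers (not the study's data or exact weighting variant):

```python
import numpy as np

def normal_ratio_impute(target_mean, neighbor_means, neighbor_values):
    """Normal ratio (NR) imputation: average of neighbour observations,
    each scaled by (target long-term mean / neighbour long-term mean)."""
    ratios = target_mean / np.asarray(neighbor_means, float)
    return float(np.mean(ratios * np.asarray(neighbor_values, float)))

# Hypothetical monthly precipitation (mm): target station mean 80 mm,
# three neighbour stations with long-term means 100, 64 and 80 mm
estimate = normal_ratio_impute(80.0, [100.0, 64.0, 80.0],
                               [55.0, 40.0, 42.0])
# scaled values: 44, 50, 42 -> estimate ≈ 45.33 mm
```

The correlation-weighted variant mentioned in the abstract replaces the simple average with weights derived from the target-neighbour correlations.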
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
NASA Astrophysics Data System (ADS)
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of mobile stations (MS) has now gained considerable attention due to its wide applications in military, environmental, health and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. In order to reflect the actual situation, we should consider the condition in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross-correlation algorithm applied over the whole interval.
Reliability of the Brazilian version of the Physical Activity Checklist Interview in children.
Adami, Fernando; Cruciani, Fernanda; Douek, Michelle; Sewell, Carolina Dumit; Mariath, Aline Brandão; Hinnig, Patrícia de Fragas; Freaza, Silvia Rafaela Mascarenhas; Bergamaschi, Denise Pimentel
2011-04-01
To assess the reliability of the Lista de Atividades Físicas (Brazilian version of the Physical Activity Checklist Interview) in children. The study is part of a cross-cultural adaptation of the Physical Activity Checklist Interview, conducted with 83 school children aged between seven and ten years, enrolled between the 2nd and 5th grades of primary education in the city of São Paulo, Southeastern Brazil, in 2008. The questionnaire was answered by children through individual interviews. It comprises a list of 21 moderate to vigorous physical activities performed on the previous day, is divided into periods (before, during and after school), and has a section for interview assessment. This questionnaire enables the quantification of time spent in physical and sedentary activities and the total and weighted metabolic costs. Reliability was assessed by comparing two interviews conducted with a mean interval of three hours. For the interview assessment, data from the first interview and those from an external evaluator were compared. Bland-Altman's proposal, the intraclass correlation coefficient and Lin's concordance correlation coefficient were used to assess reliability. The intraclass correlation coefficient lower limits for the outcomes analyzed varied from 0.84 to 0.96. Precision and agreement varied between 0.83 and 0.97 and between 0.99 and 1, respectively. The line estimated from the pairs of values obtained in both interviews indicates high data precision. The interview item showing the poorest result was the ability to estimate time (fair in 27.7% of interviews). Interview assessment items showed intraclass correlation coefficients between 0.60 and 0.70, except for level of cooperation (0.46). The Brazilian version of the Physical Activity Checklist Interview shows high reliability to assess physical and sedentary activity on the previous day in children.
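Lin's concordance correlation coefficient used above combines precision (Pearson correlation) with accuracy (closeness to the identity line) into a single agreement measure. A generic sketch (toy numbers, not study data):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    ccc = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx = ((x - mx) ** 2).mean()          # population variances
    vy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

# Two repeated measurements of activity time (minutes) that agree well
first = np.array([30.0, 45.0, 60.0, 90.0, 120.0])
second = np.array([32.0, 44.0, 61.0, 88.0, 123.0])
ccc = lins_ccc(first, second)            # close to 1
```

Unlike the plain Pearson coefficient, the denominator term (mean_x - mean_y)² penalizes a systematic offset between the two interviews, which is exactly the property a test-retest reliability measure needs.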
Effect of the Level of Coordinated Motor Abilities on Performance in Junior Judokas
Lech, Grzegorz; Jaworski, Janusz; Lyakh, Vladimir; Krawczyk, Robert
2011-01-01
The main focus of this study was to identify coordinated motor abilities that affect fighting methods and performance in junior judokas. Subjects were selected for the study in consideration of their age, competition experience, body mass and prior sports level. Subjects’ competition history was taken into consideration when analysing the effectiveness of current fight actions, and individual sports level was determined with consideration to rank in the analysed competitions. The study sought to determine the level of coordinated motor abilities of competitors. The scope of this analysis covered the following aspects: kinaesthetic differentiation, movement frequency, simple and selective reaction time (evoked by a visual or auditory stimulus), spatial orientation, visual-motor coordination, rhythmization, speed, accuracy and precision of movements and the ability to adapt movements and balance. A set of computer tests was employed for the analysis of all of the coordination abilities, while balance examinations were based on the Flamingo Balance Test. Finally, all relationships were determined based on the Spearman’s rank correlation coefficient. It was observed that the activity of the contestants during the fight correlated with the ability to differentiate movements and speed, accuracy and precision of movement, whereas the achievement level during competition was connected with reaction time. PMID:23486723
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. 
We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals.
Hedayatifar, L; Vahabi, M; Jafari, G R
2011-08-01
When many variables are coupled to each other, a single case study cannot provide thorough and precise information. When such time series are stationary, various methods from random matrix analysis and complex networks can be used. In nonstationary cases, however, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we extend MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case in which more than two series are correlated with each other. We calculate the multifractal properties of the coupled time series and, by comparing the CDFA results of the original series with those of shuffled and surrogate series, estimate the source of the multifractality and the extent to which the series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.
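The methods discussed in this abstract (MF-DXA and its CDFA extension) generalize ordinary detrended fluctuation analysis to several coupled series and to q-th order moments. As a point of reference, here is a minimal single-series DFA sketch showing only the core steps (integrate, split into windows, detrend, measure fluctuations); it is not the authors' CDFA algorithm:

```python
import numpy as np

def dfa(x, scales):
    """Monofractal detrended fluctuation analysis of one series.

    Returns the fluctuation function F(s); the scaling exponent is
    the slope of log F versus log s.
    """
    y = np.cumsum(x - np.mean(x))           # profile (integrated series)
    F = []
    for s in scales:
        n = len(y) // s                      # number of full windows
        rms = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)     # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)            # uncorrelated (white) noise
scales = [16, 32, 64, 128]
F = dfa(noise, scales)
# Scaling exponent: ~0.5 for white noise, >0.5 for persistent series
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Comparing such exponents for original, shuffled and surrogate series is the logic the abstract uses to locate the source of multifractality.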
NASA Astrophysics Data System (ADS)
Anggraeni, Novia Antika
2015-04-01
The test of eruption time prediction is an effort to support volcanic disaster mitigation, especially on inhabited volcano slopes such as those of Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM). FFM is a predictive method for determining the time of a volcanic eruption, introduced by Voight (1988). It requires an increase in the rate of change, or an acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the predicted time. From the graph analysis, the precision of the predicted time relative to the actual eruption time varies between -2.86 and 5.49 days.
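The graphical FFM described above fits a straight line to the inverse rate of an accelerating precursor; the predicted eruption time is where the fitted inverse rate reaches zero, and the correlation coefficient serves as the quality-control measure. A sketch with synthetic data, not the Merapi seismic-energy record:

```python
import numpy as np

def ffm_eruption_time(t, rate):
    """Materials Failure Forecast Method, graphical (linear) form.

    Fits a line to the inverse of the observed rate versus time and
    returns the zero crossing (predicted failure/eruption time) plus
    the correlation coefficient used as quality control.
    """
    inv = 1.0 / np.asarray(rate, float)
    slope, intercept = np.polyfit(t, inv, 1)
    t_pred = -intercept / slope              # where inverse rate hits zero
    r = np.corrcoef(t, inv)[0, 1]
    return t_pred, r

# Synthetic accelerating precursor: rate ~ 1/(t_f - t) with t_f = 100 days
t = np.arange(0.0, 90.0, 5.0)
rate = 1.0 / (100.0 - t)
t_pred, r = ffm_eruption_time(t, rate)
```

With this idealized hyperbolic acceleration the inverse rate is exactly linear, so t_pred recovers day 100 and |r| is close to 1; real seismic-energy data scatter around the line, which is why the study reports a prediction precision of a few days.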
High-resolution stratigraphy with strontium isotopes.
Depaolo, D J; Ingram, B L
1985-02-22
The isotopic ratio of strontium-87 to strontium-86 shows no detectable variation in present-day ocean water but changes slowly over millions of years. The strontium contained in the carbonate shells of marine organisms records the strontium-87 to strontium-86 ratio of the oceans at the time the shells form. Sedimentary rocks composed of accumulated fossil carbonate shells can therefore be dated and correlated by high-precision measurements of the strontium-87 to strontium-86 ratio, with a resolution similar to that of other age-correlation techniques. This method may prove valuable for many geological, paleontological, paleoceanographic, and geochemical problems.
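The dating principle above amounts to inverting a calibrated seawater 87Sr/86Sr curve: over an interval where the curve is monotonic, a measured shell ratio maps to a unique age. A sketch with hypothetical curve values, not a calibrated reference data set:

```python
import numpy as np

# Hypothetical segment of the seawater 87Sr/86Sr curve (age in Ma).
# Real work uses calibrated reference curves with stated uncertainties.
ages_ma = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
sr_ratio = np.array([0.70917, 0.70900, 0.70885, 0.70860, 0.70830])

def age_from_sr(measured):
    """Invert a monotonically decreasing ratio-vs-age segment.

    np.interp requires increasing x, so interpolate on the reversed
    arrays (ratio increasing, age decreasing).
    """
    return np.interp(measured, sr_ratio[::-1], ages_ma[::-1])

age = age_from_sr(0.70885)
```

The achievable time resolution is set by the slope of the curve and the analytical precision of the ratio measurement: the steeper the curve and the more precise the measurement, the finer the age resolution.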
Active transport improves the precision of linear long distance molecular signalling
NASA Astrophysics Data System (ADS)
Godec, Aljaž; Metzler, Ralf
2016-09-01
Molecular signalling in living cells occurs at low copy numbers and is thereby inherently limited by the noise imposed by thermal diffusion. The precision at which biochemical receptors can count signalling molecules is intimately related to the noise correlation time. In addition to passive thermal diffusion, messenger RNA and vesicle-engulfed signalling molecules can transiently bind to molecular motors and are actively transported across biological cells. Active transport is most beneficial when trafficking occurs over large distances, for instance up to the order of 1 metre in neurons. Here we explain how intermittent active transport allows for faster equilibration upon a change in concentration triggered by biochemical stimuli. Moreover, we show how intermittent active excursions induce qualitative changes in the noise in effectively one-dimensional systems such as dendrites. Thereby they allow for significantly improved signalling precision in the sense of a smaller relative deviation in the concentration read-out by the receptor. On the basis of linear response theory we derive the exact mean field precision limit for counting actively transported molecules. We explain how intermittent active excursions disrupt the recurrence in the molecular motion, thereby facilitating improved signalling accuracy. Our results provide a deeper understanding of how recurrence affects molecular signalling precision in biological cells and novel medical-diagnostic devices.
Investigating MAI's Precision: Single Interferogram and Time Series Filtering
NASA Astrophysics Data System (ADS)
Bechor Ben Dov, N.; Herring, T.
2010-12-01
Multiple aperture InSAR (MAI) is a technique for obtaining along-track displacements from InSAR phase data. Because InSAR measurements are insensitive to along-track displacements, these can only be retrieved with non-interferometric approaches: either pixel-offset tracking, or combining data from different orbital configurations under a continuity/displacement model. These approaches are limited by precision and by data-acquisition conflicts, respectively. MAI is promising in this respect: its precision is better than that of pixel-offset tracking, and its data are available whether or not additional acquisitions exist. Here we study the MAI noise and develop a filter to reduce it. We test the filtering with empirical noise and simulated signal data. Below we describe the filtered single-interferogram precision and a Kalman filter approach for MAI time series. We use 14 interferograms taken over the greater Los Angeles/San Gabriel Mountains area in California. The interferograms include a variety of decorrelation sources, both terrain-related (topographic variations, vegetation and agriculture) and imaging-related (spatial and temporal baselines of 200-500 m and 1-12 months, respectively). Most of the pixels are in the low to average coherence range (below 0.7). The data were collected by ESA and made available by the WInSAR consortium. We assume the data contain "zero" along-track signal (less than the theoretical 4 cm for our coherence range), and use the images as 14 dependent realizations of the MAI noise. We find a wide distribution of phase values, σ = 2-3 radians (wrapped). We superimpose a signal on our MAI noise interferograms using the along-track displacement (-88 to 143 cm) calculated for the 1812 Wrightwood earthquake. To analyze single MAI interferograms, we design an iterative quantile-based filter and test it on the noise+signal MAI interferograms.
The residuals reveal the following MAI noise characteristics: (1) a constant noise term, up to 90 cm; (2) a displacement gradient term, up to 0.75 cm/km; and (3) a coherence-dependent root residual sum of squares (RRSS), down to 5 cm at 0.8 coherence. In the figure we present two measures of the MAI rmse: prior to phase-gradient correction the RRSS follows the circled line; with the correction, it follows the solid line. We next evaluate MAI's precision given a time series. We use a Kalman filter to estimate the spatially and temporally correlated components of the MAI data. We reference the displacements to a given area in the interferograms, weight the data by coherence, and model the remainder of the spatially correlated noise as a quadratic phase gradient across the image. The results (not displayed) again vary with coherence.
Precise timing correlation in telemetry recording and processing systems
NASA Technical Reports Server (NTRS)
Pickett, R. B.; Matthews, F. L.
1973-01-01
Independent PCM telemetry data signals received from missiles must be correlated to within +/- 100 microseconds for comparison with radar data. Tests were conducted to determine RF antenna receiving-system delays; delays associated with wideband analog tape recorders used in the recording, dubbing and reproducing processes; and uncertainties associated with computer-processed time-tag data. Several methods used in the recording of timing are evaluated. Through the application of a special time-tagging technique, the cumulative timing bias from all sources is determined and removed from the final data. Conclusions show that relative time differences in the receiving, recording, playback and processing of two telemetry links can be determined with an accuracy of +/- 4 microseconds. In addition, the absolute time-tag error (with respect to UTC) can be reduced to less than 15 microseconds. This investigation is believed to be the first attempt to identify the individual error contributions within the telemetry system and to describe methods for their reduction and correction.
Precise orbit determination and rapid orbit recovery supported by time synchronization
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhou, JianHua; Hu, XiaoGong; Liu, Li; Tang, Bo; Li, XiaoJie; Wu, Shan
2015-06-01
In order to maintain optimal signal coverage, GNSS satellites must occasionally perform orbital maneuvers. For China's COMPASS system, precise orbit determination (POD) as well as rapid orbit recovery after maneuvers contribute to the overall Positioning, Navigation and Timing (PNT) service performance in terms of accuracy and availability. However, strong statistical correlations between clock offsets and the radial component of a satellite's position require long data arcs for POD to converge. We propose here a new strategy that relies on time synchronization between ground tracking stations and in-orbit satellites. By fixing the satellite clock offsets measured by the satellite-station two-way synchronization (SSTS) system and the receiver clock offsets, POD and orbit recovery performance can be improved significantly. Using Satellite Laser Ranging (SLR) for orbit accuracy evaluation, we find that the 4-hr recovered orbit achieves a residual root mean square (RMS) error of about 0.71 m against SLR data, and the recovery time is improved from 24 hr to 4 hr compared with conventional POD without time synchronization support. In addition, SLR evaluation shows that about 1.47 m accuracy is achieved for 1-hr prediction with the newly proposed POD strategy.
Pedraza, Lizeth K; Sierra, Rodrigo O; Boos, Flávia Z; Haubrich, Josué; Quillfeldt, Jorge A; Alvares, Lucas de Oliveira
2016-03-01
Memory fades over time, becoming more schematic or abstract. The loss of contextual detail in memory may reflect a time-dependent change in the brain structures supporting memory. It has been well established that contextual fear memory relies on the hippocampus for expression shortly after learning, but it becomes hippocampus-independent at a later time point, a process called systems consolidation. This time-dependent process correlates with the loss of memory precision. Here, we investigated whether training intensity predicts the gradual decay of hippocampal dependency to retrieve memory, and the quality of the contextual memory representation over time. We have found that training intensity modulates the progressive decay of hippocampal dependency and memory precision. Strong training intensity accelerates systems consolidation and memory generalization in a remarkable timeframe match. The mechanisms underpinning such process are triggered by glucocorticoid and noradrenaline released during training. These results suggest that the stress levels during emotional learning act as a switch, determining the fate of memory quality. Moderate stress will create a detailed memory, whereas a highly stressful training will develop a generic gist-like memory. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hensley, Winston; Giovanetti, Kevin
2008-10-01
A 1 ppm precision measurement of the muon lifetime is being conducted by the MULAN collaboration. The motivation for this new measurement lies in recent theoretical advances that have reduced the uncertainty in calculating the Fermi coupling constant from the measured lifetime to a few tenths of a ppm; the largest remaining uncertainty is now experimental. To achieve a 1 ppm level of precision it is necessary to control all sources of systematic error and to understand their influence on the lifetime measurement. James Madison University is contributing by examining the response of the timing system to uncorrelated events (randoms). A radioactive source was placed in front of a pair of detectors similar to those in the main experiment. These detectors were integrated in an identical fashion into the data acquisition and measurement system, and data from them were recorded during the entire experiment. The pair was placed in a shielded enclosure away from the main experiment to minimize interference. The data from these detectors should have a flat time spectrum, as the decay of a radioactive source is a random event with no time correlation. The spectrum can thus be used as an important diagnostic in studying the method of determining event times and the timing system's performance.
Relative velocity change measurement based on seismic noise analysis in exploration geophysics
NASA Astrophysics Data System (ADS)
Corciulo, M.; Roux, P.; Campillo, M.; Dubuq, D.
2011-12-01
Passive monitoring techniques based on noise cross-correlation analysis are still debated in exploration geophysics, even though recent studies have shown impressive performance in seismology at larger scales. Tracking the time evolution of complex geological structures with noise data involves localizing noise sources and measuring relative velocity variations. Monitoring relative velocity variations only requires measuring the phase shifts of seismic noise cross-correlation functions computed for successive time recordings. Existing algorithms, such as the Stretching and Doublet methods, classically demand great effort in terms of computation time, making them impractical when continuous datasets are acquired on dense arrays. We present here an innovative technique for passive monitoring based on measuring the instantaneous phase of noise-correlated signals. The Instantaneous Phase Variation (IPV) technique aims to combine the advantages of the Stretching and Doublet methods while providing a faster measurement of the relative velocity change. The IPV takes advantage of the Hilbert transform to compute, in the time domain, the phase difference between two noise correlation functions. The relative velocity variation is measured from the slope of the linear regression of the phase-difference curve as a function of correlation time. The large number of noise correlation functions classically available at exploration scale on dense arrays allows for a statistical analysis that further improves the precision of the velocity-change estimate. In this work, numerical tests first compare the IPV's performance to the Stretching and Doublet techniques in terms of accuracy, robustness and computation time. Experimental results are then presented using a seismic noise dataset with five days of continuous recording on 397 geophones spread over a ~1 km² area.
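The IPV idea can be sketched directly: a homogeneous velocity change stretches the lapse-time axis of the correlation function, so the instantaneous phase difference between the reference and current functions grows linearly with time, and the regression slope (divided by the dominant angular frequency) gives dv/v. A synthetic sketch of this phase-slope step, assuming a known dominant frequency f0; it is not the authors' full statistical implementation:

```python
import numpy as np
from scipy.signal import hilbert

def ipv_dv_over_v(ref, cur, dt, f0):
    """Instantaneous-phase estimate of a relative velocity change.

    A velocity increase of dv/v compresses lapse times by the same
    fraction, so phase(cur) - phase(ref) ~ -2*pi*f0*(dv/v)*t; the
    regression slope over lapse time t recovers dv/v.
    """
    phi_r = np.unwrap(np.angle(hilbert(ref)))
    phi_c = np.unwrap(np.angle(hilbert(cur)))
    n = len(ref)
    t = np.arange(n) * dt
    keep = slice(n // 10, -n // 10)          # trim Hilbert edge effects
    slope = np.polyfit(t[keep], (phi_c - phi_r)[keep], 1)[0]
    return -slope / (2 * np.pi * f0)

# Synthetic test: a 2% velocity increase compresses lapse times by 2%
dt, f0, eps = 0.01, 5.0, 0.02
t = np.arange(0.0, 10.0, dt)
ref = np.sin(2 * np.pi * f0 * t)
cur = np.sin(2 * np.pi * f0 * t * (1 - eps))
est = ipv_dv_over_v(ref, cur, dt, f0)
```

Because only one Hilbert transform and one linear regression are needed per pair, this is much cheaper than grid-searching stretch factors, which is the speed advantage the abstract claims.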
Cryptotephras: the revolution in correlation and precision dating
DAVIES, SIWAN M
2015-01-01
From its Icelandic origins in the study of visible tephra horizons, tephrochronology took a remarkable step in the late 1980s with the discovery of a ca. 4300-year-old microscopic ash layer in a Scottish peat bog. Since then, the search for these cryptotephra deposits in distal areas has gone from strength to strength. Indeed, a recent discovery demonstrates how a few fine-grained glass shards from an Alaskan eruption have been dispersed more than 7000 km to northern Europe. Instantaneous deposition of geochemically distinct volcanic ash over such large geographical areas gives rise to a powerful correlation tool with considerable potential for addressing a range of scientific questions. A prerequisite of this work is the establishment of regional tephrochronological frameworks that include well-constrained age estimates and robust geochemical signatures for each deposit. With distal sites revealing a complex record of previously unknown volcanic events, frameworks are regularly revised, and it has become apparent that some closely timed eruptions have similar geochemical signatures. The search for unique and robust geochemical fingerprints thus hinges on rigorous analysis by electron microprobe and laser ablation-inductively coupled plasma-mass spectrometry. Historical developments and significant breakthroughs are presented to chart the revolution in correlation and precision dating over the last 50 years using tephrochronology and cryptotephrochronology. PMID:27512240
Data-Driven Significance Estimation for Precise Spike Correlation
Grün, Sonja
2009-01-01
The mechanisms underlying neuronal coding and, in particular, the role of temporal spike coordination are hotly debated. However, this debate is often confounded by an implicit discussion about the use of appropriate analysis methods. To avoid incorrect interpretation of data, the analysis of simultaneous spike trains for precise spike correlation needs to be properly adjusted to the features of the experimental spike trains. In particular, nonstationarity of the firing of individual neurons in time or across trials, a spike train structure deviating from Poisson, or a co-occurrence of such features in parallel spike trains are potent generators of false positives. Problems can be avoided by including these features in the null hypothesis of the significance test. In this context, the use of surrogate data becomes increasingly important, because the complexity of the data typically prevents analytical solutions. This review provides an overview of the potential obstacles in the correlation analysis of parallel spike data and possible routes to overcome them. The discussion is illustrated at every stage of the argument by referring to a specific analysis tool (the Unitary Events method). The conclusions, however, are of a general nature and hold for other analysis techniques. Thorough testing and calibration of analysis tools and the impact of potentially erroneous preprocessing stages are emphasized. PMID:19129298
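The surrogate-data logic described above can be sketched with spike dithering: jittered surrogates preserve the slow rate profile of a spike train but destroy precise coincidences, giving a data-driven null distribution against which the observed coincidence count is tested. A toy illustration with synthetic spike trains, not the Unitary Events method itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def coincidences(a, b, tol):
    # Count spikes in train a with a partner in train b within +/- tol
    return sum(bool(np.any(np.abs(b - t) <= tol)) for t in a)

def dither(train, width, rng):
    # Surrogate: jitter each spike uniformly within +/- width, destroying
    # precise spike coordination while preserving slow rate structure
    return np.sort(train + rng.uniform(-width, width, len(train)))

# Two hypothetical 10 s spike trains; synchrony is injected by copying
# (with sub-ms jitter) the first 20 spikes of train a into train b
a = np.sort(rng.uniform(0.0, 10.0, 80))
b = np.sort(np.concatenate([a[:20] + rng.normal(0.0, 0.001, 20),
                            rng.uniform(0.0, 10.0, 60)]))

obs = coincidences(a, b, tol=0.005)
null = [coincidences(a, dither(b, 0.1, rng), tol=0.005) for _ in range(200)]
# One-sided surrogate p-value for excess precise synchrony
p = (1 + sum(n >= obs for n in null)) / (1 + len(null))
```

As the review stresses, the validity of such a test depends on the surrogates preserving the relevant features of the data (rate profiles, non-Poisson structure); a null that ignores those features generates false positives.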
New instrumentation for precise (n,γ) measurements at ILL Grenoble
NASA Astrophysics Data System (ADS)
Urban, W.; Jentschel, M.; Märkisch, B.; Materna, Th; Bernards, Ch; Drescher, C.; Fransen, Ch; Jolie, J.; Köster, U.; Mutti, P.; Rzaca-Urban, T.; Simpson, G. S.
2013-03-01
An array of eight Ge detectors for coincidence measurements of γ rays from neutron-capture reactions has been constructed at the PF1B cold-neutron facility of the Institut Laue-Langevin. The detectors, arranged in one plane every 45°, can be used for angular correlation measurements. The neutron collimation line of the setup provides a neutron beam 12 mm in diameter, a capture flux of about 10^8/(s × cm^2) at the target position, and a negligible neutron halo. With this setup, up to 10^9 γγ and up to 10^8 triple-γ coincidence events have been collected in one day of measurement. Precise energy and efficiency calibrations up to 10 MeV are easily performed with the 27Al(n,γ)28Al and 35Cl(n,γ)36Cl reactions. Test measurements have shown that neutron binding energies can be determined with an accuracy down to a few eV, and angular correlation coefficients can be measured with a precision down to the percent level. The triggerless data collected with digital electronics and acquisition allow half-lives of excited levels in the nano- to microsecond range to be determined. The high resolving power of double- and triple-γ time coincidences allows significant improvements of excitation schemes reported in previous (n,γ) works and complements high-resolution γ-energy measurements at the GAMS double-crystal Bragg spectrometer of the ILL.
Rabin, Ely; DiZio, Paul; Ventura, Joel; Lackner, James R
2008-02-01
Lightly touching a stable surface with one fingertip strongly stabilizes standing posture. The three main features of this phenomenon are fingertip contact forces maintained at levels too low to provide mechanical support, attenuation of postural sway relative to conditions without fingertip touch, and center of pressure (CP) changes that lag changes in fingertip shear forces by approximately 250 ms. In the experiments presented here, we tested whether accurate arm proprioception, and whether the precision fingertip contact afforded by the arm's many degrees of freedom, are necessary for postural stabilization by finger contact. In our first experiment, we perturbed arm proprioception and control with biceps brachii vibration (120 Hz, 2-mm amplitude). This degraded postural control, resulting in greater postural sway amplitudes. In a second study, we immobilized the touching arm with a splint. This prevented precision fingertip contact but had no effect on postural sway amplitude. In both experiments, the correlation and latency of fingertip contact forces to postural sway were unaffected. We conclude that postural control is executed based on information about arm orientation as well as tactile feedback from light touch, although precision fingertip contact is not essential. The consistent correlation and timing of CP movement and fingertip forces across conditions in which postural sway amplitude and fingertip contact were differentially disrupted suggest that posture and the fingertip are controlled in parallel, with feedback from the fingertip, in this task.
Understanding the amplitudes of noise correlation measurements
Tsai, Victor C.
2011-01-01
Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work.
Shear wave arrival time estimates correlate with local speckle pattern.
Mcaleavey, Stephen A; Osapoetra, Laurentius O; Langdon, Jonathan
2015-12-01
We present simulation and phantom studies demonstrating a strong correlation between errors in shear wave arrival time estimates and the lateral position of the local speckle pattern in targets with fully developed speckle. We hypothesize that the observed arrival time variations are largely due to the underlying speckle pattern, and call the effect speckle bias. Arrival time estimation is a key step in quantitative shear wave elastography, performed by tracking tissue motion via cross-correlation of RF ultrasound echoes or similar methods. Variations in scatterer strength and interference of echoes from scatterers within the tracking beam result in an echo that does not necessarily describe the average motion within the beam, but one favoring areas of constructive interference and strong scattering. A swept-receive image, formed by fixing the transmit beam and sweeping the receive aperture over the region of interest, is used to estimate the local speckle pattern. Metrics for the lateral position of the speckle are found to correlate strongly (r > 0.7) with the estimated shear wave arrival times, both in simulations and in phantoms. Lateral weighting of the swept-receive pattern improved the correlation between arrival time estimates and speckle position. The simulations indicate that high RF echo correlation does not equate to an accurate shear wave arrival time estimate: a high correlation coefficient indicates that motion is being tracked with high precision, but the location tracked is uncertain within the tracking beam width. The presence of a strong on-axis speckle is seen to imply high RF correlation and low bias. The converse does not appear to be true: highly correlated RF echoes can still produce biased arrival time estimates. The shear wave arrival time bias is relatively stable with variations in shear wave amplitude and sign (-20 μm to 20 μm simulated) compared with the variation across different speckle realizations obtained along a given tracking vector. We show that the arrival time bias is weakly dependent on shear wave amplitude compared with its variation with axial position and local speckle pattern. Apertures of f/3 to f/8 on transmit and f/2 to f/4 on receive were simulated. Arrival time error and its correlation with the speckle pattern are most strongly determined by the receive aperture.
Shear Wave Arrival Time Estimates Correlate with Local Speckle Pattern
McAleavey, Stephen A.; Osapoetra, Laurentius O.; Langdon, Jonathan
2016-01-01
We present simulation and phantom studies demonstrating a strong correlation between errors in shear wave arrival time estimates and the lateral position of the local speckle pattern in targets with fully developed speckle. We hypothesize that the observed arrival time variations are largely due to the underlying speckle pattern, and call the effect speckle bias. Arrival time estimation is a key step in quantitative shear wave elastography, performed by tracking tissue motion via cross correlation of RF ultrasound echoes or similar methods. Variations in scatterer strength and interference of echoes from scatterers within the tracking beam result in an echo that does not necessarily describe the average motion within the beam, but one favoring areas of constructive interference and strong scattering. A swept-receive image, formed by fixing the transmit beam and sweeping the receive aperture over the region of interest, is used to estimate the local speckle pattern. Metrics for the lateral position of the speckle are found to correlate strongly (r>0.7) with the estimated shear wave arrival times both in simulations and in phantoms. Lateral weighting of the swept-receive pattern improved the correlation between arrival time estimates and speckle position. The simulations indicate that high RF echo correlation does not equate to an accurate shear wave arrival time estimate – a high correlation coefficient indicates that motion is being tracked with high precision, but the location tracked is uncertain within the tracking beam width. The presence of a strong on-axis speckle is seen to imply high RF correlation and low bias. The converse does not appear to be true – highly correlated RF echoes can still produce biased arrival time estimates. The shear wave arrival time bias is relatively stable with variations in shear wave amplitude and sign (−20 μm to 20 μm simulated) compared to the variation with different speckle realizations obtained along a given tracking vector. 
We show that the arrival time bias is weakly dependent on shear wave amplitude compared to the variation with axial position/local speckle pattern. Apertures of f/3 to f/8 on transmit and f/2 and f/4 on receive were simulated. Arrival time error and correlation with speckle pattern are most strongly determined by the receive aperture. PMID:26670847
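The arrival-time step described above — tracking motion by cross-correlating echoes and locating the correlation peak — can be sketched minimally as follows. This is an illustration of peak-finding with parabolic sub-sample refinement, not the authors' tracking code; the Gaussian pulse shape, sample rate, and delay are assumed values.

```python
import numpy as np

def arrival_time_shift(ref, sig, fs):
    """Delay of `sig` relative to `ref` in seconds, from the peak of their
    cross-correlation, refined by parabolic sub-sample interpolation."""
    c = np.correlate(sig, ref, mode="full")
    lags = np.arange(-len(ref) + 1, len(sig))
    k = int(np.argmax(c))
    delta = 0.0
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
        if denom != 0.0:
            delta = 0.5 * (c[k - 1] - c[k + 1]) / denom
    return (lags[k] + delta) / fs

fs = 10e6                                  # assumed tracking sample rate, Hz
t = np.arange(0, 2.0e-4, 1.0 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 1.0e-5) ** 2)   # idealized wave pulse

ref = pulse(0.8e-4)                        # displacement trace at position 1
sig = pulse(1.3e-4)                        # same pulse arriving 50 us later
delay = arrival_time_shift(ref, sig, fs)   # ≈ 5.0e-5 s
```

A high correlation peak here only means the delay of the *tracked* pattern is found precisely; as the abstract notes, it says nothing about where within the beam that pattern sits.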
Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses.
Das, Jayajit
2016-03-08
Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results.
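The flavor of the stochastic kinetics involved can be conveyed with a minimal Gillespie-style simulation of a generic proofreading chain. This is not the author's model: the chain length, rates, and full-reset discard rule are illustrative assumptions, chosen only to show how discard (dissipative reset) steps slow the first-passage kinetics.

```python
import numpy as np

rng = np.random.default_rng(6)

def proofread_time(k_fwd=1.0, k_discard=0.5, n_steps=3):
    """Gillespie simulation of the time for one complex to traverse
    n_steps proofreading steps; a discard resets progress to step 0."""
    state, t = 0, 0.0
    while state < n_steps:
        total = k_fwd + (k_discard if state > 0 else 0.0)
        t += rng.exponential(1.0 / total)      # waiting time to next event
        if rng.random() < k_fwd / total:
            state += 1                         # forward (productive) step
        else:
            state = 0                          # discard: dissipative reset
    return t

times = np.array([proofread_time() for _ in range(5000)])
mean_passage = times.mean()   # analytic value for these rates is 4.75
```

Without discards the mean passage time would be n_steps/k_fwd = 3; the resets both slow the response and broaden its cell-to-cell distribution, the qualitative effect the abstract describes.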
McPherson, Sue; Watson, Todd; Pate, Lindsey
2016-08-01
This study examined the reliability of sonographic measurements of the transversus abdominis of adults without low back pain during upright loaded functional tasks in real time, without relying on delayed recorded images. A single-group repeated-measures reliability study was conducted on 12 healthy participants without low back pain. Six of these adults reported a prior history of abdominal drawing-in maneuver training without sonographic measurement. The participants performed 3 trials of neutral standing, a loaded forward reach, and a loaded box lift under rest and with abdominal drawing-in maneuver instructions; task order was randomized. Transversus abdominis thickness measurements were obtained in real time by an experienced rater using B-mode sonography, with electronic calipers applied twice to the same static image during all trials. The rater was masked to group assignment and on-screen measurement output and required to respond to trivia questions between repeated measurements. The participants included 6 male and 6 female adults with a mean age ± SD of 26.3 ± 3.7 years. Intra-rater intraclass correlation coefficients (2,3) were high and precise for the rater's first and second measurements for all tasks and instruction conditions for mean transversus abdominis thickness and percent change in thickness measurements (eg, ranges were 0.968-0.997 for intraclass correlation coefficients, 0.01-0.21 mm for standard errors of the measurement, and 0.01-0.58 mm for minimal detectable changes). Calipers cleared by the rater or a research assistant produced similar findings of excellent reliability and precision. High intra-rater reliability and precision of transversus abdominis thickness measurements were obtained by a physical therapist in real time from asymptomatic adults performing upright loaded functional tasks under rest and with abdominal drawing-in maneuver instructions.
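The ICC(2,k) statistic reported above (two-way random effects, average of k measurements, Shrout & Fleiss convention) can be computed directly from the two-way ANOVA mean squares. The sketch below uses invented caliper readings, not the study's data.

```python
import numpy as np

def icc_2k(Y):
    """ICC(2,k): two-way random effects, average of k measurements.
    Y is an (n_subjects, k_measurements) array."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # mean squares for rows (subjects), columns (measurements), and error
    msr = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
    msc = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)
    sse = np.sum((Y - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# hypothetical repeated thickness readings (mm), two per participant
Y = np.array([[3.1, 3.2], [2.8, 2.8], [3.5, 3.4],
              [2.9, 3.0], [3.3, 3.3], [3.0, 2.9]])
icc = icc_2k(Y)   # small within-subject spread → ICC near 1
```

The standard error of measurement and minimal detectable change quoted in the abstract follow from the same ANOVA components (SEM = sd·sqrt(1 − ICC)).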
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging information on properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least-squares error (LSE) parameter estimation. The strain-versus-time response of tissues undergoing creep compression is known to be non-linear, and in non-linear cases devising a measure of estimate reliability can be challenging. In this article, we develop and test a method, which we call Resimulation of Noise (RoN), that provides a measure of reliability for non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
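The resimulation-of-noise idea can be sketched in a few lines: fit once, estimate the noise level from the residuals, then refit many synthetic realizations of the fitted curve plus that noise, and report the spread of the refitted parameter. Everything below (the mono-exponential creep model, grid-search fitter, and noise level) is an assumed toy setup, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def creep(t, amp, tau):
    # illustrative mono-exponential creep response
    return amp * (1.0 - np.exp(-t / tau))

def fit_creep(t, y, tau_grid):
    """Grid-search LSE fit: for each candidate tau the amplitude has a
    closed-form least-squares solution, so only tau is searched."""
    best = (np.inf, None, None)
    for tau in tau_grid:
        f = 1.0 - np.exp(-t / tau)
        amp = (f @ y) / (f @ f)
        sse = np.sum((y - amp * f) ** 2)
        if sse < best[0]:
            best = (sse, amp, tau)
    return best[1], best[2]

tau_grid = np.linspace(0.5, 5.0, 451)
t = np.linspace(0.05, 10.0, 100)
data = creep(t, 1.0, 2.0) + rng.normal(0, 0.02, t.size)  # one noisy "experiment"
amp0, tau0 = fit_creep(t, data, tau_grid)

# Resimulation of Noise: refit synthetic realizations around the fitted curve
sigma = np.std(data - creep(t, amp0, tau0))
taus = [fit_creep(t, creep(t, amp0, tau0) + rng.normal(0, sigma, t.size),
                  tau_grid)[1] for _ in range(200)]
tau_spread = np.std(taus)    # the reliability measure for tau
```

The point of the construction is that `tau_spread` is available from a single acquisition, unlike the true estimator precision, which would require repeating the experiment.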
Analysis of Active Lava Flows on Kilauea Volcano, Hawaii, Using SIR-C Radar Correlation Measurements
NASA Technical Reports Server (NTRS)
Zebker, H. A.; Rosen, P.; Hensley, S.; Mouginis-Mark, P. J.
1995-01-01
Precise eruption rates of active pahoehoe lava flows on Kilauea volcano, Hawaii, have been determined using spaceborne radar data acquired by the Space Shuttle Imaging Radar-C (SIR-C). Measurement of the rate of lava flow advance, and the determination of the volume of new material erupted in a given period of time, are among the most important observations that can be made when studying a volcano.
Immunoturbidimetric quantification of serum immunoglobulin G concentration in foals.
Bauer, J E; Brooks, T P
1990-08-01
Immunoturbidimetric determination of serum IgG concentration in foals was compared with the reference methods of single radial immunodiffusion and serum protein electrophoresis. High positive correlations were discovered when the technique was compared with either of these reference methods. The zinc sulfate turbidity test for serum IgG estimation was also evaluated. Although a positive correlation was discovered when the latter method was compared with reference methods, it was not as strong as the correlation between reference methods and the immunoturbidimetric method. The immunoturbidimetric method used in this study is specific and precise for equine serum IgG determination. It is rapid and, thus, is advantageous when timely evaluation of critically ill foals is necessary. The technique should be adaptable to various spectrophotometers and microcomputers for widespread application in veterinary medicine.
Shu, Bao; Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong
2018-04-14
For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively.
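The RZTD constraint amounts to appending a weighted pseudo-observation on the troposphere parameter to an otherwise ill-conditioned adjustment. The toy model below is an assumption for illustration (a single-baseline design matrix with the classic near-collinearity between the up partial, sin E, and the troposphere mapping function, 1/sin E); it shows only that the constraint shrinks the formal height standard deviation, the effect the abstract quantifies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy positioning model with unknowns x = [east, north, up, rztd] (m).
n_obs = 40
elev = np.radians(rng.uniform(10, 60, n_obs))
azim = rng.uniform(0.0, 2.0 * np.pi, n_obs)
A = np.column_stack([
    np.cos(elev) * np.sin(azim),   # east partial
    np.cos(elev) * np.cos(azim),   # north partial
    np.sin(elev),                  # up partial
    1.0 / np.sin(elev),            # troposphere mapping function
])
sigma_obs = 0.005                  # assumed phase-observation noise, m
sigma_model = 0.01                 # assumed accuracy of the empirical RZTD

# Normal matrix, unconstrained vs. augmented with the pseudo-observation
# rztd ~ rztd_model weighted by 1/sigma_model.
N_free = (A / sigma_obs).T @ (A / sigma_obs)
v = np.array([0.0, 0.0, 0.0, 1.0 / sigma_model])
N_con = N_free + np.outer(v, v)

std_up_free = np.sqrt(np.linalg.inv(N_free)[2, 2])
std_up_con = np.sqrt(np.linalg.inv(N_con)[2, 2])
```

Because the added information acts only on the troposphere parameter, the height variance can fall only through its correlation with that parameter, which is exactly why the gain is largest when the model is ill-conditioned.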
NASA Astrophysics Data System (ADS)
Verbeke, Jérôme M.; Petit, Odile; Chebboubi, Abdelhazize; Litaize, Olivier
2018-01-01
Fission modeling in general-purpose Monte Carlo transport codes often relies on average nuclear data provided by international evaluation libraries. As such, only average fission multiplicities are available and correlations between fission neutrons and photons are missing. Whereas uncorrelated fission physics is usually sufficient for standard reactor core and radiation shielding calculations, correlated fission secondaries are required for specialized nuclear instrumentation and detector modeling. For coincidence counting detector optimization for instance, precise simulation of fission neutrons and photons that remain correlated in time from birth to detection is essential. New developments were recently integrated into the Monte Carlo transport code TRIPOLI-4 to model fission physics more precisely, the purpose being to access event-by-event fission events from two different fission models: FREYA and FIFRELIN. TRIPOLI-4 simulations can now be performed, either by connecting via an API to the LLNL fission library including FREYA, or by reading external fission event data files produced by FIFRELIN beforehand. These new capabilities enable us to easily compare results from Monte Carlo transport calculations using the two fission models in a nuclear instrumentation application. In the first part of this paper, broad underlying principles of the two fission models are recalled. We then present experimental measurements of neutron angular correlations for 252Cf(sf) and 240Pu(sf). The correlations were measured for several neutron kinetic energy thresholds. In the latter part of the paper, simulation results are compared to experimental data. Spontaneous fissions in 252Cf and 240Pu are modeled by FREYA or FIFRELIN. Emitted neutrons and photons are subsequently transported to an array of scintillators by TRIPOLI-4 in analog mode to preserve their correlations. 
Angular correlations between fission neutrons obtained independently from these TRIPOLI-4 simulations, using either FREYA or FIFRELIN, are compared to experimental results. For 240Pu(sf), the measured correlations were used to tune the model parameters.
NASA Astrophysics Data System (ADS)
Wahl, Michael; Rahn, Hans-Jürgen; Gregor, Ingo; Erdmann, Rainer; Enderlein, Jörg
2007-03-01
Time-correlated single photon counting is a powerful method for sensitive time-resolved fluorescence measurements down to the single molecule level. The method is based on the precisely timed registration of single photons of a fluorescence signal. Historically, its primary goal was the determination of fluorescence lifetimes upon optical excitation by a short light pulse. This goal is still important today and therefore has a strong influence on instrument design. However, modifications and extensions of the early designs allow for the recovery of much more information from the detected photons and enable entirely new applications. Here, we present a new instrument that captures single photon events on multiple synchronized channels with picosecond resolution and over virtually unlimited time spans. This is achieved by means of crystal-locked time digitizers with high resolution and very short dead time. Subsequent event processing in programmable logic permits classical histogramming as well as time tagging of individual photons and their streaming to the host computer. Through the latter, any algorithms and methods for the analysis of fluorescence dynamics can be implemented either in real time or offline. Instrument test results from single molecule applications will be presented.
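The relation between time tagging and classical histogramming can be sketched as follows: given a stream of absolute photon time tags and a known sync period, the classical TCSPC histogram is recovered by folding the tags back onto one excitation period. The laser rate, bin width, and lifetime below are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(2)

sync_period = 25_000          # ps between excitation pulses (40 MHz laser)
bin_width = 16                # ps histogrammer bin
lifetime = 2_500              # ps fluorescence lifetime to recover

# Simulated time tags: each photon arrives in some sync cycle after an
# exponentially distributed fluorescence delay.
n = 200_000
cycles = rng.integers(0, 1_000_000, n)
delays = rng.exponential(lifetime, n)
tags = cycles * sync_period + delays          # absolute picosecond tags

# Classical histogramming from the tag stream: fold onto one period
micro = tags % sync_period
hist, _ = np.histogram(micro,
                       bins=np.arange(0, sync_period + bin_width, bin_width))
tau_est = micro.mean()        # ≈ lifetime; wrap-around is negligible here
```

Keeping the raw tags rather than only the histogram is what enables the "entirely new applications" the abstract mentions, such as correlation analysis between channels, since any statistic of the photon stream can be computed offline.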
Pseudorange error analysis for precise indoor positioning system
NASA Astrophysics Data System (ADS)
Pola, Marek; Bezoušek, Pavel
2017-05-01
A system for indoor localization of a transmitter, intended for firefighters and members of rescue corps, is currently under development. In this system, the position of the transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
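Once the per-receiver arrival times are in hand, the time-difference-of-arrival step reduces to solving a set of hyperbolic equations. A minimal Gauss-Newton sketch with an assumed 2-D anchor geometry (not the developed system's solver):

```python
import numpy as np

C = 3e8  # propagation speed, m/s

def tdoa_residuals(p, rx, tdoa):
    d = np.linalg.norm(rx - p, axis=1)
    return (d[1:] - d[0]) / C - tdoa

def locate(rx, tdoa, p0, iters=20):
    """Gauss-Newton solution of the TDOA equations (numerical Jacobian)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = tdoa_residuals(p, rx, tdoa)
        J = np.empty((len(r), 2))
        for j in range(2):
            dp = np.zeros(2)
            dp[j] = 1e-3
            J[:, j] = (tdoa_residuals(p + dp, rx, tdoa) - r) / 1e-3
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p

rx = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])  # anchors, m
tx = np.array([12.0, 7.0])                 # true transmitter position
d = np.linalg.norm(rx - tx, axis=1)
tdoa = (d[1:] - d[0]) / C                  # ideal arrival-time differences
p_hat = locate(rx, tdoa, p0=[15.0, 15.0])
```

Any multipath-induced bias in a single arrival-time estimate propagates through this solver directly, which is why the article's error analysis targets the time-of-arrival stage.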
The 400 microsphere per piece "rule" does not apply to all blood flow studies.
Polissar, N L; Stanford, D C; Glenny, R W
2000-01-01
Microsphere experiments are useful in measuring regional organ perfusion as well as heterogeneity of blood flow within organs and correlation of perfusion between organ pieces at different time points. A 400 microspheres/piece "rule" is often used in planning experiments or to determine whether experiments are valid. This rule is based on the statement that 400 microspheres must lodge in a region for 95% confidence that the observed flow in the region is within 10% of the true flow. The 400 microspheres precision rule, however, only applies to measurements of perfusion to a single region or organ piece. Examples, simulations, and an animal experiment were carried out to show that good precision for measurements of heterogeneity and correlation can be obtained from many experiments with <400 microspheres/piece. Furthermore, methods were developed and tested for correcting the observed heterogeneity and correlation to remove the Poisson "noise" due to discrete microsphere measurements. The animal experiment shows adjusted values of heterogeneity and correlation that are in close agreement for measurements made with many or few microspheres/piece. Simulations demonstrate that the adjusted values are accurate for a variety of experiments with far fewer than 400 microspheres/piece. Thus the 400 microspheres rule does not apply to many experiments. A "rule of thumb" is that experiments with a total of at least 15,000 microspheres, for all pieces combined, are very likely to yield accurate estimates of heterogeneity. Experiments with a total of at least 25,000 microspheres are very likely to yield accurate estimates of correlation coefficients.
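The correction removing Poisson "noise" from an observed heterogeneity rests on a simple variance identity: if counts are Poisson around a mean proportional to flow, the observed squared coefficient of variation is the true flow CV² plus 1/mean count. The simulation below (invented flow distribution and sphere counts well under 400 per piece) illustrates the adjustment; it is a sketch of the idea, not the paper's method in detail.

```python
import numpy as np

rng = np.random.default_rng(3)

n_pieces = 600
true_rd = 0.30                      # true relative dispersion (CV) of flow
# gamma flow with mean 1 and CV true_rd
flow = rng.gamma(1.0 / true_rd**2, true_rd**2, n_pieces)
mean_spheres = 50                   # far below the "400 per piece" rule
counts = rng.poisson(mean_spheres * flow)

cv_obs = counts.std(ddof=1) / counts.mean()
# remove the Poisson component: CV_obs^2 = CV_true^2 + 1/mean(counts)
cv_adj = np.sqrt(cv_obs**2 - 1.0 / counts.mean())
```

With only ~50 spheres per piece the raw CV overstates heterogeneity, while the adjusted value recovers the true 0.30, matching the abstract's claim that far fewer than 400 spheres per piece can suffice when the total count is large.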
Zhang, X Y; Li, H; Zhao, Y J; Wang, Y; Sun, Y C
2016-07-01
To quantitatively evaluate the quality and accuracy of three-dimensional (3D) data acquired by using two kinds of structure intra-oral scanner to scan the typical teeth crown preparations. Eight typical teeth crown preparations model were scanned 3 times with two kinds of structured light intra-oral scanner(A, B), as test group. A high precision model scanner were used to scan the model as true value group. The data above the cervical margin was extracted. The indexes of quality including non-manifold edges, the self-intersections, highly-creased edges, spikes, small components, small tunnels, small holes and the anount of triangles were measured with the tool of mesh doctor in Geomagic studio 2012. The scanned data of test group were aligned to the data of true value group. 3D deviations of the test group compared with true value group were measured for each scanned point, each preparation and each group. Independent-samples Mann-Whitney U test was applied to analyze 3D deviations for each scanned point of A and B group. Correlation analysis was applied to index values and 3D deviation values. The total number of spikes in A group was 96, and that in B group and true value group were 5 and 0 respectively. Trueness: A group 8.0 (8.3) μm, B group 9.5 (11.5) μm(P>0.05). Correlation analysis of the number of spikes with data precision of A group was r=0.46. In the study, the qulity of the scanner B is better than scanner A, the difference of accuracy is not statistically significant. There is correlation between quality and data precision of the data scanned with scanner A.
NASA Astrophysics Data System (ADS)
Klokočník, J.; Kostelecký, J.; Böhm, V.; Böhm, B.; Vondrák, J.; Vítek, F.
2008-05-01
The Maya used their own very precise calendar. When transforming data from the Mayan calendar to ours, or vice versa, a surprisingly large uncertainty is found. The relationship between the two calendars has been investigated by many researchers during the last century and about 50 different values of the transformation coefficient, known as the correlation, have been deduced. They can differ by centuries, potentially yielding an incredibly large error in the relation of Mayan history to the history of other civilizations. The most frequently used correlation is the GMT one (of Goodman-Martínez-Thompson), based largely on historical evidence from colonial times. Astronomy (celestial mechanics) may resolve the problem of the correlation, provided that historians have correctly decoded the records of various astronomical phenomena discovered, namely, in one extremely important and rare Mayan book, the Dresden Codex (DC). This describes (among other matters) observations of various astronomical phenomena (eclipses, conjunctions, maximum elongations, heliacal aspects, etc), made by the Maya. Modern celestial mechanics enables us to compute exactly when the phenomena occurred in the sky for the given place on the Earth, even far back in time. Here we check, by a completely independent method, the value of the correlation obtained by Böhm & Böhm (1996, 1999), and confirm it. In view of these tests, we advocate rejecting the GMT correlation and replacing it by Böhm's correlation. We also comment on the criticism of GMT by some investigators. The replacement of GMT by another correlation seems, however, unacceptable to many Mayanists, as they would need to rewrite the whole history of Mesoamerica. With Böhm's correlation, for example, the history of the Maya would be closer to our time by 104 years.
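The role of the correlation constant is purely additive: a Long Count date converts to a Julian Day Number by counting days from the calendar's zero point and adding the correlation, so adopting a different correlation shifts every converted date by the same number of days. A short worked conversion under the common GMT value (584283; the constant under dispute in the abstract):

```python
def long_count_to_jdn(baktun, katun, tun, uinal, kin, correlation=584283):
    """Julian Day Number for a Maya Long Count date; `correlation` is the
    disputed constant (584283 is the widely used GMT value)."""
    days = kin + 20 * uinal + 360 * tun + 7200 * katun + 144000 * baktun
    return correlation + days

def jdn_to_gregorian(jdn):
    # Fliegel & Van Flandern integer algorithm (proleptic Gregorian)
    l = jdn + 68569
    n = 4 * l // 146097
    l = l - (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l = l - 1461 * i // 4 + 31
    j = 80 * l // 2447
    d = l - 2447 * j // 80
    l = j // 11
    m = j + 2 - 12 * l
    y = 100 * (n - 49) + i + l
    return y, m, d

# 13.0.0.0.0 under the GMT correlation falls on 21 December 2012
print(jdn_to_gregorian(long_count_to_jdn(13, 0, 0, 0, 0)))   # (2012, 12, 21)
```

Passing a different value for `correlation` reproduces the century-scale disagreements the abstract describes: the entire chronology slides as one block.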
NASA Astrophysics Data System (ADS)
Gang, Zhang; Fansong, Meng; Jianzhong, Wang; Mingtao, Ding
2018-02-01
Determining magnetotelluric impedance precisely and accurately is fundamental to valid inversion and geological interpretation. This study aims to determine the minimum signal-to-noise ratio (SNR) at which the remote reference technique remains effective. Standard time series were simulated, different Gaussian noises were added to obtain time series with different SNRs, and intermediate data such as the polarization direction, correlation coefficient, and impedance tensor were analyzed. The results show that when the SNR is larger than 23.5743, the polarization direction is disordered in morphology and a smooth and accurate sounding curve can be obtained. Under this condition, the correlation coefficient of nearly all segments between the base and remote stations is larger than 0.9, and the impedance tensor Zxy presents only one aggregation, which meets the characteristics of the natural magnetotelluric signal.
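The noise-injection step described above (adding Gaussian noise to a standard time series so that it has a prescribed SNR) is a small, self-contained operation. The synthetic two-tone series below is a stand-in for a simulated magnetotelluric record, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

def add_noise_at_snr(signal, snr_db):
    """Return signal plus white Gaussian noise scaled to the given SNR (dB)."""
    p_signal = np.mean(signal**2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

def snr_db(signal, noisy):
    """Measured SNR in dB of a noisy copy against the clean signal."""
    noise = noisy - signal
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

t = np.linspace(0.0, 1.0, 50_000)
series = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 35 * t)
noisy = add_noise_at_snr(series, 23.6)   # near the threshold found above
```

Sweeping `snr_db` over a range and recomputing the downstream statistics is exactly the kind of experiment the study uses to locate the 23.5743 threshold.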
NASA Astrophysics Data System (ADS)
Chow, Yu Ting; Chen, Shuxun; Wang, Ran; Liu, Chichi; Kong, Chi-Wing; Li, Ronald A.; Cheng, Shuk Han; Sun, Dong
2016-04-01
Cell transfection is a technique wherein foreign genetic molecules are delivered into cells. To elucidate distinct responses during cell genetic modification, methods to achieve transfection at the single-cell level are of great value. Herein, we developed an automated micropipette-based quantitative microinjection technology that can deliver precise amounts of materials into cells. The developed microinjection system achieved precise single-cell microinjection by pre-patterning cells in an array and controlling the amount of substance delivered based on injection pressure and time. The precision of the proposed injection technique was examined by comparing the fluorescence intensities of fluorescent dye droplets with a standard concentration and water droplets with a known injection amount of the dye in oil. Injection of synthetic modified mRNA (modRNA) encoding green fluorescence proteins or a cocktail of plasmids encoding green and red fluorescence proteins into human foreskin fibroblast cells demonstrated that the resulting green fluorescence intensity or green/red fluorescence intensity ratio were well correlated with the amount of genetic material injected into the cells. Single-cell transfection via the developed microinjection technique will be of particular use in cases where cell transfection is challenging and genetic modification of selected cells is desired.
Nikolaus, Stephanie; Bode, Christina; Taal, Erik; Vonkeman, Harald E.; Glas, Cees A. W.; van de Laar, Mart A. F. J.
2015-01-01
Objective: Multidimensional computerized adaptive testing enables precise measurements of patient-reported outcomes at an individual level across different dimensions. This study examined the construct validity of a multidimensional computerized adaptive test (CAT) for fatigue in rheumatoid arthritis (RA). Methods: The ‘CAT Fatigue RA’ was constructed based on a previously calibrated item bank. It contains 196 items and three dimensions: ‘severity’, ‘impact’ and ‘variability’ of fatigue. The CAT was administered to 166 patients with RA. They also completed a traditional, multidimensional fatigue questionnaire (BRAF-MDQ) and the SF-36 in order to examine the CAT’s construct validity. A priori criterion for construct validity was that 75% of the correlations between the CAT dimensions and the subscales of the other questionnaires were as expected. Furthermore, comprehensive use of the item bank, measurement precision and score distribution were investigated. Results: The a priori criterion for construct validity was supported for two of the three CAT dimensions (severity and impact but not for variability). For severity and impact, 87% of the correlations with the subscales of the well-established questionnaires were as expected but for variability, 53% of the hypothesised relations were found. Eighty-nine percent of the items were selected between one and 137 times for CAT administrations. Measurement precision was excellent for the severity and impact dimensions, with more than 90% of the CAT administrations reaching a standard error below 0.32. The variability dimension showed good measurement precision with 90% of the CAT administrations reaching a standard error below 0.44. No floor- or ceiling-effects were found for the three dimensions. Conclusion: The CAT Fatigue RA showed good construct validity and excellent measurement precision on the dimensions severity and impact.
The dimension variability had less ideal measurement characteristics, pointing to the need to recalibrate the CAT item bank with a two-dimensional model, solely consisting of severity and impact. PMID:26710104
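The standard-error cut-offs quoted above (0.32 and 0.44) map onto familiar reliability levels through the usual IRT approximation, reliability ≈ 1 − SE²/var(θ), with the latent trait scaled to unit variance. A one-line sketch of that mapping (the scaling assumption is ours, though it is the conventional one):

```python
def reliability_from_se(se, theta_var=1.0):
    """IRT approximation: reliability = 1 - SE^2 / var(theta)."""
    return 1.0 - se**2 / theta_var

print(reliability_from_se(0.32))   # 0.8976, i.e. ~0.90 ("excellent")
print(reliability_from_se(0.44))   # 0.8064, i.e. ~0.81 ("good")
```

This is why SE < 0.32 is conventionally read as reliability above 0.90 and SE < 0.44 as reliability above 0.80.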
Decorrelation of the true and estimated classifier errors in high-dimensional settings.
Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes.
We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
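The variance decomposition at the heart of the argument is Var(est − true) = Var(est) + Var(true) − 2ρ·sd(est)·sd(true): even with a well-behaved estimator variance, a falling correlation ρ inflates the deviation variance. A small numerical check with synthetic correlated errors (the jointly Gaussian construction is our illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(7)

def deviation_variance(rho, var_true=1.0, var_est=1.0, n=200_000):
    """Sample Var(estimated - true) for error pairs with correlation rho."""
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    true_err = np.sqrt(var_true) * z1
    est_err = np.sqrt(var_est) * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return np.var(est_err - true_err)

# identity: Var(est - true) = Var(est) + Var(true) - 2*rho*sd_est*sd_true
for rho in (0.9, 0.5, 0.1):
    print(rho, deviation_variance(rho), 2.0 - 2.0 * rho)
```

Holding both marginal variances fixed while dropping ρ from 0.9 to 0.1 multiplies the deviation variance ninefold, which is the decorrelation effect the abstract identifies as dominant in high-dimensional settings.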
NASA Technical Reports Server (NTRS)
Lary, David J.; Mussa, Yussuf
2004-01-01
In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFN) is used. With the EKF approach, the training of the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performances provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9997. The neural network Fortran code used is available for download.
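Viewing training as state estimation for a stationary nonlinear process means treating the weights as the state and each training pair as one scalar measurement. The sketch below is a deliberately tiny illustration of that EKF update (1-2-1 network, numerical Jacobian, assumed noise settings), not the paper's algorithm or network.

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny 1-2-1 feed-forward net with weights packed in a flat state vector.
def ffn(w, x):
    h = np.tanh(w[0:2] * x + w[2:4])
    return float(w[4:6] @ h + w[6])

def jac(w, x, eps=1e-6):
    # numerical gradient of the scalar output w.r.t. the weights
    g = np.zeros_like(w)
    f0 = ffn(w, x)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        g[i] = (ffn(wp, x) - f0) / eps
    return g

# A "teacher" network of the same architecture generates a realizable target.
w_true = np.array([1.0, -1.5, 0.5, 0.2, 1.2, 0.8, 0.1])
xs = rng.uniform(-2.0, 2.0, 2000)
ys = np.array([ffn(w_true, x) for x in xs])

w = 0.1 * rng.standard_normal(7)   # state: the network weights
P = 10.0 * np.eye(7)               # state covariance
R = 0.01                           # assumed measurement-noise variance

grid = np.linspace(-2.0, 2.0, 101)
rmse = lambda wv: np.sqrt(np.mean([(ffn(wv, x) - ffn(w_true, x)) ** 2
                                   for x in grid]))
rmse_before = rmse(w)

for x, y in zip(xs, ys):
    h = jac(w, x)
    s = h @ P @ h + R              # innovation variance (scalar measurement)
    k = P @ h / s                  # Kalman gain
    w = w + k * (y - ffn(w, x))    # state (weight) update
    P = P - np.outer(k, h) @ P     # covariance update

rmse_after = rmse(w)
```

As the abstract notes, this style of training is sensitive to machine precision: the covariance update involves small differences of large quantities, which is one reason high-precision arithmetic matters in practice.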
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods; sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount. © The Authors 2015.
Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Ramezani, Jahandar; Clyde, William; Wang, Tiantian; Johnson, Kirk; Bowring, Samuel
2016-04-01
Reversals in the Earth's magnetic polarity are geologically abrupt events of global magnitude, which makes them ideal timelines for stratigraphic correlation across a variety of depositional environments, especially where diagnostic marine fossils are absent. Accurate and precise calibration of the Geomagnetic Polarity Timescale (GPTS) is thus essential to the reconstruction of Earth history and to resolving the mode and tempo of biotic and environmental change in deep time. The Late Cretaceous - Paleocene GPTS is of particular interest as it encompasses a critical period of Earth history marked by the Cretaceous greenhouse climate, the peak of dinosaur diversity, the end-Cretaceous mass extinction, and its paleoecological aftermath. Absolute calibration of the GPTS has traditionally been based on sea-floor spreading magnetic anomaly profiles combined with local magnetostratigraphic sequences for which a numerical age model could be established by interpolation between an often limited number of 40Ar/39Ar dates from intercalated volcanic ash deposits. Although the Neogene part of the GPTS has been adequately calibrated using cyclostratigraphy-based, astrochronological schemes, the application of these approaches to pre-Neogene parts of the timescale has been complicated by the uncertainties of the orbital models and the chaotic behavior of the solar system this far back in time. Here we present refined chronostratigraphic frameworks based on high-precision U-Pb geochronology of ash beds from the Western Interior Basin of North America and the Songliao Basin of Northeast China that place tight temporal constraints on the Late Cretaceous to Paleocene GPTS, either directly or by testing their astrochronological underpinnings. Further application of high-precision radioisotope geochronology and calibrated astrochronology promises a complete and robust Cretaceous-Paleogene GPTS, entirely independent of sea-floor magnetic anomaly profiles.
Lemieux, Genevieve; Carey, Jason P; Flores-Mir, Carlos; Secanell, Marc; Hart, Adam; Lagravère, Manuel O
2016-01-01
Our objective was to identify and evaluate the accuracy and precision (intrarater and interrater reliabilities) of various anatomic landmarks for use in 3-dimensional maxillary and mandibular regional superimpositions. We used cone-beam computed tomography reconstructions of 10 human dried skulls to locate 10 landmarks in the maxilla and the mandible. Precision and accuracy were assessed with intrarater and interrater readings. Three examiners located these landmarks in the cone-beam computed tomography images 3 times with readings scheduled at 1-week intervals. Three-dimensional coordinates were determined (x, y, and z coordinates), and the intraclass correlation coefficient was computed to determine intrarater and interrater reliabilities, as well as the mean error difference and confidence intervals for each measurement. Bilateral mental foramina, bilateral infraorbital foramina, anterior nasal spine, incisive canal, and nasion showed the highest precision and accuracy in both intrarater and interrater reliabilities. Subspinale and bilateral lingulae had the lowest precision and accuracy in both intrarater and interrater reliabilities. When choosing the most accurate and precise landmarks for 3-dimensional cephalometric analysis or plane-derived maxillary and mandibular superimpositions, bilateral mental and infraorbital foramina, landmarks in the anterior region of the maxilla, and nasion appeared to be the best options of the analyzed landmarks. Caution is needed when using subspinale and bilateral lingulae because of their higher mean errors in location. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
A new measurement of electron transverse polarization in polarized nuclear β-decay
NASA Astrophysics Data System (ADS)
Kawamura, H.; Akiyama, T.; Hata, M.; Hirayama, Y.; Ikeda, M.; Ikeda, Y.; Ishii, T.; Kameda, D.; Mitsuoka, S.; Miyatake, H.; Nagae, D.; Nakaya, Y.; Ninomiya, K.; Nitta, M.; Ogawa, N.; Onishi, J.; Seitaibashi, E.; Tanaka, S.; Tanuma, R.; Totsuka, Y.; Toyoda, T.; Watanabe, Y. X.; Murata, J.
2017-03-01
The Mott polarimetry for T-violation (MTV) experiment tests time-reversal symmetry in polarized nuclear β-decay by measuring an electron’s transverse polarization as a form of angular asymmetry in Mott scattering using a thin metal foil. A Mott scattering analyzer system developed using a tracking detector to measure scattering angles offers better event selectivity than conventional counter experiments. In this paper, we describe a pilot experiment conducted at KEK-TRIAC using a prototype system with a polarized 8Li beam. The experiment confirmed the sound performance of our Mott analyzer system to measure T-violating triple correlation (R correlation), and therefore recommends its use in higher-precision experiments at the TRIUMF-ISAC.
Atomically precise edge chlorination of nanographenes and its application in graphene nanoribbons
Tan, Yuan-Zhi; Yang, Bo; Parvez, Khaled; Narita, Akimitsu; Osella, Silvio; Beljonne, David; Feng, Xinliang; Müllen, Klaus
2013-01-01
Chemical functionalization is one of the most powerful and widely used strategies to control the properties of nanomaterials, particularly in the field of graphene. However, the ill-defined structure of the present functionalized graphene inhibits atomically precise structural characterization and structure-correlated property modulation. Here we present a general edge chlorination protocol for atomically precise functionalization of nanographenes at different scales from 1.2 to 3.4 nm and its application in graphene nanoribbons. The well-defined edge chlorination is unambiguously confirmed by X-ray single-crystal analysis, which also discloses the characteristic non-planar molecular shape and detailed bond lengths of chlorinated nanographenes. Chlorinated nanographenes and graphene nanoribbons manifest enhanced solution processability associated with decreases in the optical band gap and frontier molecular orbital energy levels, exemplifying the structure-correlated property modulation by precise edge chlorination. PMID:24212200
Ionospheric corrections to precise time transfer using GPS
NASA Technical Reports Server (NTRS)
Snow, Robert W.; Osborne, Allen W., III; Klobuchar, John A.; Doherty, Patricia H.
1994-01-01
The free electrons in the earth's ionosphere can retard the time of reception of GPS signals received at a ground station, compared to their time in free space, by many tens of nanoseconds, thus limiting the accuracy of time transfer by GPS. The amount of the ionospheric time delay is proportional to the total number of electrons encountered by the wave on its path from each GPS satellite to a receiver. This integrated number of electrons is called Total Electron Content, or TEC. Dual frequency GPS receivers designed by Allen Osborne Associates, Inc. (AOA) directly measure both the ionospheric differential group delay and the differential carrier phase advance for the two GPS frequencies and derive from this the TEC between the receiver and each GPS satellite in track. The group delay information is mainly used to provide an absolute calibration to the relative differential carrier phase, which is an extremely precise measure of relative TEC. The AOA Mini-Rogue ICS-4Z and the AOA TurboRogue ICS-4000Z receivers normally operate using the GPS P code, when available, and switch to cross-correlation signal processing when the GPS satellites are in the Anti-Spoofing (A-S) mode and the P code is encrypted. An AOA ICS-4Z receiver has been operated continuously for over a year at Hanscom AFB, MA to determine the statistics of the variability of the TEC parameter using signals from up to four different directions simultaneously. The 4-channel ICS-4Z and the 8-channel ICS-4000Z have proven capabilities to make precise, well calibrated measurements of the ionosphere in several directions simultaneously. In addition to providing ionospheric corrections for precise time transfer via satellite, this dual frequency design allows full code and automatic codeless operation of both the differential group delay and differential carrier phase for numerous ionospheric experiments being conducted.
Statistical results of the data collected from the ICS-4Z during the initial year of ionospheric time-delay measurements in the northeastern U.S., and initial results with the ICS-4000Z, will be presented.
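The stated proportionality between delay and TEC is the standard dual-frequency group-delay relation; a sketch with the GPS L1/L2 carrier frequencies (the 50-TECU example value is illustrative, not taken from the paper):

```python
C = 299_792_458.0   # speed of light, m/s
F_L1 = 1575.42e6    # GPS L1 carrier, Hz
F_L2 = 1227.60e6    # GPS L2 carrier, Hz
K_ION = 40.3        # ionospheric refraction constant, m^3 s^-2

def l1_group_delay_ns(tec):
    """One-way ionospheric group delay at L1, in ns (tec in electrons/m^2)."""
    return K_ION * tec / (C * F_L1**2) * 1e9

def tec_from_differential_delay(dtau_s):
    """TEC from the measured L2-L1 differential group delay, in seconds."""
    return C * dtau_s / (K_ION * (1.0 / F_L2**2 - 1.0 / F_L1**2))

tec = 5e17  # 50 TECU, a moderate daytime mid-latitude column density
# Forward model of the differential delay such a receiver would observe:
dtau = K_ION * tec / C * (1.0 / F_L2**2 - 1.0 / F_L1**2)
```

At 50 TECU the one-way L1 delay comes out near 27 ns, consistent with the "many tens of nanoseconds" scale quoted in the abstract.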
NASA Astrophysics Data System (ADS)
Hofmann, Daniela I.; Fabian, Karl; Schmieder, Frank; Donner, Barbara; Bleil, Ulrich
2005-12-01
Computer-aided multi-parameter signal correlation is used to develop a common high-precision age model for eight gravity cores from the subtropical and subantarctic South Atlantic. Since correlations between all pairs of multi-parameter sequences are used, and correlation errors between core pairs (A, B) and (B, C) are controlled by comparison with (A, C), the resulting age model is called a stratigraphic network. Precise inter-core correlation is achieved using high-resolution records of magnetic susceptibility κ, wet bulk density ρ and X-ray fluorescence scans of elemental composition. Additional δ18O records are available for two cores. The data indicate nearly undisturbed sediment series and the absence of significant hiatuses or turbidites. After establishing a high-precision common depth scale by synchronously correlating four densely measured parameters (Fe, Ca, κ, ρ), the final age model is obtained by simultaneously fitting the aligned δ18O and κ records of the stratigraphic network to orbitally tuned oxygen isotope [J. Imbrie, J. D. Hays, D. G. Martinson, A. McIntyre, A. C. Mix, J. J. Morley, N. G. Pisias, W. L. Prell, N. J. Shackleton, The orbital theory of Pleistocene climate: support from a revised chronology of the marine δ18O record, in: A. Berger, J. Imbrie, J. Hays, G. Kukla, B. Saltzman (Eds.), Milankovitch and Climate: Understanding the Response to Orbital Forcing, Reidel Publishing, Dordrecht, 1984, pp. 269-305; D. Martinson, N. Pisias, J. Hays, J. Imbrie, T. C. Moore Jr., N. Shackleton, Age dating and the orbital theory of the Ice Ages: development of a high-resolution 0 to 300,000-year chronostratigraphy, Quat. Res. 27 (1987) 1-29.] or susceptibility stacks [T. von Dobeneck, F. Schmieder, Using rock magnetic proxy records for orbital tuning and extended time series analyses into the super- and sub-Milankovitch bands, in: G. Fischer, G. 
Wefer (Eds.), Use of proxies in paleoceanography: Examples from the South Atlantic, Springer-Verlag, Berlin (1999), pp. 601-633.]. Besides the detection and elimination of errors in single records, the stratigraphic network approach makes it possible to check the intrinsic consistency of the final result by comparing it to the outcome of more restricted alignment procedures. The final South Atlantic stratigraphic network covers the last 400 kyr south and the last 1200 kyr north of the Subtropical Front (STF) and provides a highly precise age model across the STF representing extremely different sedimentary regimes. This makes it possible to detect temporal shifts of the STF by mapping δMn/Fe. It turns out that the apparent STF movements by about 200 km are not directly related to marine oxygen isotope stages.
Johansen, Mette Dencker; Gjerløv, Irene; Christiansen, Jens Sandahl; Hejlesen, Ole K
2012-03-01
In glycemic control, postprandial glycemia may be important to monitor and optimize, as it reveals glycemic control quality, and postprandial hyperglycemia partly predicts late diabetic complications. Self-monitoring of blood glucose (SMBG) may be an appropriate technology to use, but recommendations on measurement time are crucial. We retrospectively analyzed interindividual and intraindividual variations in postprandial glycemic peak time. Continuous glucose monitoring (CGM) and carbohydrate intake were collected in 22 patients with type 1 diabetes mellitus. Meals were identified from carbohydrate intake data. For each meal, peak time was identified as time from meal to CGM zenith within 40-150 min after meal start. Interindividual (one-way ANOVA) and intraindividual (intraclass correlation coefficient) variation was calculated. Nineteen patients with sufficient meal data quality were included. Mean peak time was 87 ± 29 min. Mean peak time differed significantly between patients (p = 0.02). The intraclass correlation coefficient was 0.29. Significant interindividual and intraindividual variations exist in postprandial glycemia peak time, thus hindering simple and general advice regarding postprandial SMBG for detection of maximum values. © 2012 Diabetes Technology Society.
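The intraclass correlation coefficient used above can be computed from one-way ANOVA mean squares. A sketch on simulated peak times; the variance components are illustrative choices (loosely echoing the reported 87 ± 29 min), not the study's data:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1) for a balanced subjects-by-repeats array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
# Subject-specific mean peak times plus large meal-to-meal variation: a small
# between-subject sd relative to the within-subject sd yields a low ICC.
subject_means = 87 + 15 * rng.standard_normal((40, 1))
peak_times = subject_means + 25 * rng.standard_normal((40, 8))
icc = icc_oneway(peak_times)  # expected near 15**2 / (15**2 + 25**2) ~ 0.26
```

A low ICC of this kind is exactly what the abstract reports: most of the variation in peak time is within, not between, patients.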
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
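The FFT Map Matching step matches a template by correlation in the frequency domain. A minimal sketch using zero-mean correlation on synthetic texture (the algorithm's full normalization, feature selection, and sub-pixel refinement stages are omitted):

```python
import numpy as np

def fft_match(image, template):
    """Return the top-left (row, col) of the best template match, found by
    cross-correlating a zero-mean template with the image via the FFT."""
    t = template - template.mean()
    # corr[u, v] = sum over (x, y) of image[x + u, y + v] * t[x, y]
    corr = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.conj(np.fft.fft2(t, s=image.shape))))
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(2)
surface_map = rng.random((128, 128))         # stand-in for a textured terrain map
landmark = surface_map[40:56, 72:88]         # a 16x16 "mapped landmark" template
row, col = fft_match(surface_map, landmark)  # recovers the (40, 72) offset
```

Correlation in the frequency domain costs O(N log N) regardless of template size, which is why the single large template is matched there while the many small landmark templates are refined in the spatial domain.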
A Kepler study of starspot lifetimes with respect to light-curve amplitude and spectral type
NASA Astrophysics Data System (ADS)
Giles, Helen A. C.; Collier Cameron, Andrew; Haywood, Raphaëlle D.
2017-12-01
Wide-field high-precision photometric surveys such as Kepler have produced reams of data suitable for investigating stellar magnetic activity of cooler stars. Starspot activity produces quasi-sinusoidal light curves whose phase and amplitude vary as active regions grow and decay over time. Here we investigate, first, whether there is a correlation between the size of starspots - assumed to be related to the amplitude of the sinusoid - and their decay time-scale and, second, whether any such correlation depends on the stellar effective temperature. To determine this, we computed the auto-correlation functions of the light curves of samples of stars from Kepler and fitted them with apodised periodic functions. The light-curve amplitudes, representing spot size, were measured from the root-mean-squared scatter of the normalized light curves. We used a Markov Chain Monte Carlo method to measure the periods and decay time-scales of the light curves. The results show a correlation between the decay time of starspots and their inferred size. The decay time also depends strongly on the temperature of the star. Cooler stars have spots that last much longer, in particular for stars with longer rotational periods. This is consistent with current theories of diffusive mechanisms causing starspot decay. We also find that the Sun is not unusually quiet for its spectral type - stars with solar-type rotation periods and temperatures tend to have (comparatively) smaller starspots than stars with mid-G or later spectral types.
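The auto-correlation approach can be sketched on a synthetic spot-modulated light curve; the rotation period, decay time-scale, cadence, and noise level below are illustrative assumptions, not Kepler values:

```python
import numpy as np

def autocorr(flux):
    """Normalized auto-correlation function of a 1-D light curve."""
    x = flux - flux.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf / acf[0]

rng = np.random.default_rng(3)
dt, period, decay = 0.05, 12.0, 60.0        # days: cadence, rotation, spot decay
t = np.arange(0.0, 150.0, dt)
flux = np.exp(-t / decay) * np.sin(2 * np.pi * t / period)
flux += 0.05 * rng.standard_normal(t.size)  # photometric noise

# The ACF of such a curve is an apodised cosine: its first side peak sits at
# one rotation period, and its envelope decays on the spot-evolution time-scale.
acf = autocorr(flux)
lo, hi = int(0.5 * period / dt), int(1.5 * period / dt)
first_peak_lag = (lo + np.argmax(acf[lo:hi])) * dt  # close to 12 days
```

Fitting an apodised periodic function to the ACF, as the study does, then yields both the period (peak spacing) and the decay time-scale (envelope) in one step.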
Dubuc, Nicole; Haley, Stephen; Ni, Pengsheng; Kooyoomjian, Jill; Jette, Alan
2004-03-18
We evaluated the Late-Life Function and Disability Instrument's (LLFDI) concurrent validity, comprehensiveness, and precision by comparing it with the Short-Form-36 physical functioning scale (PF-10) and the London Handicap Scale (LHS). We administered the LLFDI, PF-10, and LHS to 75 community-dwelling adults (> 60 years of age). We used Pearson correlation coefficients to examine concurrent validity and Rasch analysis to compare the item hierarchies, content ranges and precision of the PF-10 and LLFDI function domains, and the LHS and the LLFDI disability domains. LLFDI Function (lower extremity scales) and PF-10 scores were highly correlated (r = 0.74 - 0.86, p < 0.001); moderate correlations were found between the LHS and the LLFDI Disability limitation (r = 0.66, p < 0.0001) and Disability frequency (r = 0.47, p < 0.001) scores. The LLFDI had a wider range of content coverage, fewer ceiling effects, and better relative precision across the spectrum of function and disability than the PF-10 and the LHS. The LHS had slightly more content range and precision at the lower end of the disability scale than the LLFDI. The LLFDI is a more comprehensive and precise instrument than the PF-10 and LHS for assessing function and disability in community-dwelling older adults.
Wang, Jinjing Jenny; Odic, Darko; Halberda, Justin; Feigenson, Lisa
2016-07-01
From early in life, humans have access to an approximate number system (ANS) that supports an intuitive sense of numerical quantity. Previous work in both children and adults suggests that individual differences in the precision of ANS representations correlate with symbolic math performance. However, this work has been almost entirely correlational in nature. Here we tested for a causal link between ANS precision and symbolic math performance by asking whether a temporary modulation of ANS precision changes symbolic math performance. First, we replicated a recent finding that 5-year-old children make more precise ANS discriminations when starting with easier trials and gradually progressing to harder ones, compared with the reverse. Next, we show that this brief modulation of ANS precision influenced children's performance on a subsequent symbolic math task but not a vocabulary task. In a supplemental experiment, we present evidence that children who performed ANS discriminations in a random trial order showed intermediate performance on both the ANS task and the symbolic math task, compared with children who made ordered discriminations. Thus, our results point to a specific causal link from the ANS to symbolic math performance. Copyright © 2016 Elsevier Inc. All rights reserved.
Layered compression for high-precision depth data.
Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen
2015-12-01
With the development of depth data acquisition technologies, access to high-precision depth data with bit depths greater than 8 b has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of the high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in 8-b format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated from this scheme can achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes because of the error control algorithm.
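The MSBs/LSBs partition at the core of the framework is a bit split. A minimal sketch of the lossless split and merge for 12-b depth (the paper's error-controllable MSBs coding and the 8-b codec itself are not reproduced here):

```python
import numpy as np

def split_layers(depth, lsb_bits=8):
    """Split high-precision depth into an MSBs layer (rough depth value
    distribution) and an 8-b LSBs layer (details of the depth variation)."""
    msb = (depth >> lsb_bits).astype(np.uint8)
    lsb = (depth & ((1 << lsb_bits) - 1)).astype(np.uint8)
    return msb, lsb

def merge_layers(msb, lsb, lsb_bits=8):
    """Reassemble the original high-precision depth from the two layers."""
    return (msb.astype(np.uint16) << lsb_bits) | lsb

rng = np.random.default_rng(4)
depth12 = rng.integers(0, 2**12, size=(4, 4)).astype(np.uint16)  # 12-b depth map
msb, lsb = split_layers(depth12)
restored = merge_layers(msb, lsb)  # bit-exact round trip
```

In the actual framework the MSBs layer is further coded with the error-controllable pixel-domain scheme, so that its quantization error can be folded into the LSBs layer without overflowing the 8-b format.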
Monte Carlo Study of the abBA Experiment: Detector Response and Physics Analysis.
Frlež, E
2005-01-01
The abBA collaboration proposes to conduct a comprehensive program of precise measurements of neutron β-decay coefficients a (the correlation between the neutrino momentum and the decay electron momentum), b (the electron energy spectral distortion term), A (the correlation between the neutron spin and the decay electron momentum), and B (the correlation between the neutron spin and the decay neutrino momentum) at a cold neutron beam facility. We have used a GEANT4-based code to simulate the propagation of decay electrons and protons in the electromagnetic spectrometer and study the energy and timing response of a pair of Silicon detectors. We used these results to examine systematic effects and find the uncertainties with which the physics parameters a, b, A, and B can be extracted from an over-determined experimental data set.
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment-effect estimate. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
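The qualitative bias pattern can be reproduced in a few lines. A sketch with a fixed (conditioned-on) baseline imbalance; the effect size, correlation, and imbalance values are illustrative, not the study's 126 scenarios:

```python
import numpy as np

rng = np.random.default_rng(5)

def one_trial(tau=0.5, rho=0.7, imbalance=0.4, n=500):
    """Simulate one two-arm trial with a fixed baseline imbalance and return
    the ANOVA, CSA, and ANCOVA estimates of the treatment effect tau."""
    pre_c = rng.standard_normal(n)
    pre_t = rng.standard_normal(n) + imbalance           # imbalanced baseline
    sd_e = np.sqrt(1.0 - rho**2)                         # keeps Var(post) = 1
    post_c = rho * pre_c + sd_e * rng.standard_normal(n)
    post_t = rho * pre_t + sd_e * rng.standard_normal(n) + tau
    anova = post_t.mean() - post_c.mean()
    csa = (post_t - pre_t).mean() - (post_c - pre_c).mean()
    # ANCOVA: regress post on intercept, group indicator, and baseline score.
    pre = np.concatenate([pre_c, pre_t])
    post = np.concatenate([post_c, post_t])
    grp = np.concatenate([np.zeros(n), np.ones(n)])
    X = np.column_stack([np.ones(2 * n), grp, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return anova, csa, beta[1]

# Average the three estimators over many trials to expose their bias.
est = np.array([one_trial() for _ in range(2000)]).mean(axis=0)
```

With tau = 0.5, rho = 0.7, and imbalance b = 0.4, ANOVA converges to about tau + rho*b = 0.78 and CSA to tau + (rho - 1)*b = 0.38, while the ANCOVA estimate stays near the true 0.5.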
NASA Astrophysics Data System (ADS)
Kim, J.; Park, M.; Baik, H. S.; Choi, Y.
2016-12-01
At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, the spatial translation of Martian dunes has been precisely measured only a very few times. Therefore, we developed a generic procedure to precisely measure the migration of dune fields with the recently introduced 25-cm resolution High Resolution Imaging Science Experiment (HIRISE), employing a high-accuracy photogrammetric processor and sub-pixel image correlator. The processor was designed to trace estimated dune migration, albeit slight, over the Martian surface by 1) the introduction of very high resolution ortho images and stereo analysis based on hierarchical geodetic control for better initial point settings; 2) positioning error removal throughout the sensor model refinement with a non-rigorous bundle block adjustment, which makes possible the co-alignment of all images in a time series; and 3) improved sub-pixel co-registration algorithms using optical flow with a refinement stage conducted on a pyramidal grid processor and a blunder classifier. Moreover, volumetric changes of Martian dunes were additionally traced by means of stereo analysis and photoclinometry. The established algorithms have been tested using high-resolution HIRISE images over a large number of Martian dune fields covering the whole Mars Global Dune Database. Migration over well-known crater dune fields appeared to be almost static over considerable time periods and was weakly correlated with wind directions estimated by the Mars Climate Database (Millour et al. 2015). Meaningful migration speeds (>1 m/year), compared to the photogrammetric error residual, were measured over only a few Martian dune fields, such as Kaiser crater. 
Currently, a technically improved processor that compensates for error residuals using time-series observations is under development and is expected to produce long-term migration speeds over Martian dune fields where regular HIRISE image acquisitions are available. ACKNOWLEDGEMENTS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement Nr. 607379.
Toupin, Solenn; Bour, Pierre; Lepetit-Coiffé, Matthieu; Ozenne, Valéry; Denis de Senneville, Baudouin; Schneider, Rainer; Vaussy, Alexis; Chaumeil, Arnaud; Cochet, Hubert; Sacher, Frédéric; Jaïs, Pierre; Quesson, Bruno
2017-01-25
Clinical treatment of cardiac arrhythmia by radiofrequency ablation (RFA) currently lacks quantitative and precise visualization of lesion formation in the myocardium during the procedure. This study aims to evaluate thermal dose (TD) imaging obtained from real-time magnetic resonance (MR) thermometry on the heart as a relevant indicator of thermal lesion extent. MR temperature mapping based on the Proton Resonance Frequency Shift (PRFS) method was performed at 1.5 T on the heart, with 4 to 5 slices acquired per heartbeat. Respiratory motion was compensated using navigator-based slice tracking. Residual in-plane motion and related magnetic susceptibility artifacts were corrected online. The standard deviation of temperature was measured on healthy volunteers (N = 5) in both ventricles. On animals, an MR-compatible catheter was positioned and visualized in the left ventricle (LV) using a bSSFP pulse sequence with active catheter tracking. Twelve MR-guided RF ablations were performed on three sheep in vivo at various locations in the LV. The dimensions of the thermal lesions measured on thermal dose images, on 3D T1-weighted (T1-w) images acquired immediately after the ablation, and at gross pathology were correlated. MR thermometry uncertainty was 1.5 °C on average over more than 96% of the pixels covering the left and right ventricles on each volunteer. On animals, catheter repositioning in the LV with active slice tracking was successfully performed, and each ablation could be monitored in real time by MR thermometry and thermal dosimetry. Thermal lesion dimensions on TD maps were found to be highly correlated with those observed on post-ablation T1-w images (R = 0.87), which also correlated (R = 0.89) with measurements at gross pathology. Quantitative TD mapping from real-time rapid CMR thermometry during catheter-based RFA is feasible. It provides a direct assessment of the lesion extent in the myocardium with precision in the range of one millimeter. 
Real-time MR thermometry and thermal dosimetry may improve safety and efficacy of the RFA procedure by offering a reliable indicator of therapy outcome during the procedure.
Validity of a smartphone protractor to measure sagittal parameters in adult spinal deformity.
Kunkle, William Aaron; Madden, Michael; Potts, Shannon; Fogelson, Jeremy; Hershman, Stuart
2017-10-01
Smartphones have become an integral tool in the daily life of health-care professionals (Franko 2011). Their ease of use and wide availability often make smartphones the first tool surgeons use to perform measurements. This technique has been validated for certain orthopedic pathologies (Shaw 2012; Quek 2014; Milanese 2014; Milani 2014), but never to assess sagittal parameters in adult spinal deformity (ASD). This study was designed to assess the validity, reproducibility, precision, and efficiency of using a smartphone protractor application to measure sagittal parameters commonly measured in ASD assessment and surgical planning. This study aimed to (1) determine the validity of smartphone protractor applications, (2) determine the intra- and interobserver reliability of smartphone protractor applications when used to measure sagittal parameters in ASD, (3) determine the efficiency of using a smartphone protractor application to measure sagittal parameters, and (4) elucidate whether a physician's level of experience impacts the reliability or validity of using a smartphone protractor application to measure sagittal parameters in ASD. An experimental validation study was carried out. Thirty standard 36″ standing lateral radiographs were examined. Three separate measurements were performed using a marker and protractor; then, at a separate time point, three separate measurements were performed using a smartphone protractor application for all 30 radiographs. The first 10 radiographs were then re-measured two more times, for a total of three measurements with both the smartphone protractor and the marker and protractor. The parameters included lumbar lordosis, pelvic incidence, and pelvic tilt. Three raters performed all measurements: a junior-level orthopedic resident, a senior-level orthopedic resident, and a fellowship-trained spinal deformity surgeon.
All data, including the time to perform the measurements, were recorded, and statistical analysis was performed to determine intra- and interobserver reliability, as well as accuracy, efficiency, and precision. Intra- and inter-class correlation coefficients were calculated using R (version 3.3.2, 2016) to determine the degree of intra- and interobserver reliability. High rates of intra- and interobserver reliability were observed between the junior resident, senior resident, and attending surgeon when using the smartphone protractor application, as demonstrated by inter- and intra-class correlation coefficients greater than 0.909 and 0.874, respectively. High rates of inter- and intraobserver reliability were also seen between the junior resident, senior resident, and attending surgeon when a marker and protractor were used, as demonstrated by inter- and intra-class correlation coefficients greater than 0.909 and 0.807, respectively. The lumbar lordosis, pelvic incidence, and pelvic tilt values were accurately measured by all three raters, with excellent inter- and intra-class correlation coefficient values. When the first 10 radiographs were re-measured at different time points, a high degree of precision was noted. Measurements performed using the smartphone application were consistently faster than those using a marker and protractor; this difference was statistically significant (p<.05). Adult spinal deformity radiographic parameters can be measured accurately, precisely, reliably, and more efficiently using a smartphone protractor application than with a standard protractor and wax pencil. A high degree of intra- and interobserver reliability was seen between the residents and attending surgeon, indicating that measurements made with a smartphone protractor are unaffected by an observer's level of experience. As a result, smartphone protractors may be used when planning ASD surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
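The reliability analysis above centers on the intraclass correlation coefficient. As a minimal sketch (in Python rather than the R used in the study), the ICC(2,1) form — two-way random effects, absolute agreement, single measurement — can be computed directly from the two-way ANOVA mean squares; the example readings below are invented for illustration, not the study's data:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC.

    ratings: array of shape (n_subjects, k_raters).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    mean_subj = ratings.mean(axis=1)   # per-subject means
    mean_rater = ratings.mean(axis=0)  # per-rater means
    grand = ratings.mean()
    # Two-way ANOVA mean squares
    ms_rows = k * np.sum((mean_subj - grand) ** 2) / (n - 1)   # between subjects
    ms_cols = n * np.sum((mean_rater - grand) ** 2) / (k - 1)  # between raters
    resid = ratings - mean_subj[:, None] - mean_rater[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical lumbar-lordosis readings (degrees) from two raters on 4 films
readings = np.array([[40.0, 41.0], [55.0, 54.0], [62.0, 63.0], [48.0, 47.0]])
icc = icc_2_1(readings)
```

With near-agreeing raters the statistic approaches 1, matching the "excellent reliability" interpretation used in the abstract.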
Modeling and prediction of relaxation of polar order in high-activity nonlinear optical polymers
NASA Astrophysics Data System (ADS)
Guenthner, Andrew J.; Lindsay, Geoffrey A.; Wright, Michael E.; Fallis, Stephen; Ashley, Paul R.; Sanghadasa, Mohan
2007-09-01
Mach-Zehnder optical modulators were fabricated using the CLD and FTC chromophores in polymer-on-silicon optical waveguides. Up to 17 months of oven-aging stability are reported for the poled polymer films. Modulators containing an FTC-polyimide had the best overall aging performance. To model and extrapolate the aging data, a relaxation correlation function attributed to A. K. Jonscher was compared to the well-established stretched exponential correlation function. Both models gave a good fit to the data. The Jonscher model predicted a slower relaxation rate in the out years. Analysis showed that collecting data for a longer period relative to the relaxation time was more important for generating useful predictions than the precision with which individual model parameters could be estimated. Thus, from a practical standpoint, time-temperature superposition must be assumed in order to generate meaningful predictions. For this purpose, Arrhenius-type expressions were found to relate the model time constants to the aging temperatures.
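The stretched exponential (Kohlrausch-Williams-Watts) correlation function mentioned above has the form φ(t) = exp[-(t/τ)^β]. A minimal Python sketch of fitting it to relaxation data follows; the synthetic parameters (τ = 60 months, β = 0.4) and the 17-point monthly sampling are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    # Kohlrausch-Williams-Watts stretched exponential relaxation function
    return np.exp(-(t / tau) ** beta)

# Synthetic noiseless "aging" data: 17 monthly points, tau = 60 months, beta = 0.4
t = np.linspace(1.0, 17.0, 17)
y = kww(t, 60.0, 0.4)

# Nonlinear least-squares fit from a deliberately wrong starting guess
popt, _ = curve_fit(kww, t, y, p0=(30.0, 0.5))
tau_fit, beta_fit = popt

# Extrapolate the fitted decay out to 5 years (60 months)
retained_5yr = kww(60.0, tau_fit, beta_fit)
```

Because the observation window (17 months) is much shorter than τ, the fitted parameters trade off strongly against each other on noisy data, which is the cross-correlation issue the abstract alludes to.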
Robot-Assisted Arm Assessments in Spinal Cord Injured Patients: A Consideration of Concept Study
Albisser, Urs; Rudhe, Claudia; Curt, Armin; Riener, Robert; Klamroth-Marganska, Verena
2015-01-01
Robotic assistance is increasingly used in neurological rehabilitation for enhanced training. Furthermore, therapy robots have the potential for accurate assessment of motor function in order to diagnose the patient status, to measure therapy progress or to feedback the movement performance to the patient and therapist in real time. We investigated whether a set of robot-based assessments that encompasses kinematic, kinetic and timing metrics is applicable, safe, reliable and comparable to clinical metrics for measurement of arm motor function. Twenty-four healthy subjects and five patients after spinal cord injury underwent robot-based assessments using the exoskeleton robot ARMin. Five different tasks were performed with aid of a visual display. Ten kinematic, kinetic and timing assessment parameters were extracted on joint- and end-effector level (active and passive range of motion, cubic reaching volume, movement time, distance-path ratio, precision, smoothness, reaction time, joint torques and joint stiffness). For cubic volume, joint torques and the range of motion for most joints, good inter- and intra-rater reliability were found whereas precision, movement time, distance-path ratio and smoothness showed weak to moderate reliability. A comparison with clinical scores revealed good correlations between robot-based joint torques and the Manual Muscle Test. Reaction time and distance-path ratio showed good correlation with the “Graded and Redefined Assessment of Strength, Sensibility and Prehension” (GRASSP) and the Van Lieshout Test (VLT) for movements towards a predefined position in the center of the frontal plane. In conclusion, the therapy robot ARMin provides a comprehensive set of assessments that are applicable and safe. The first results with spinal cord injured patients and healthy subjects suggest that the measurements are widely reliable and comparable to clinical scales for arm motor function. 
The methods applied and results can serve as a basis for the future development of end-effector and exoskeleton-based robotic assessments. PMID:25996374
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
Trabant, C.; Thurber, C.; Leith, W.
2002-01-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high-precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations. © 2002 Elsevier Science B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miall, A.D.
The basic premise of the recent Exxon cycle chart, that there exists a globally correlatable suite of third-order eustatic cycles, remains unproven. Many of the tests of this premise are based on circular reasoning. The implied precision of the Exxon global cycle chart is not supportable, because it is greater than that of the best available chronostratigraphic techniques, such as those used to construct the global standard time scale. Correlations of new stratigraphic sections with the Exxon chart will almost always succeed, because there are so many Exxon sequence-boundary events from which to choose. This is demonstrated by the use of four synthetic sections constructed from tables of random numbers. A minimum of 77% successful correlations of random events with the Exxon chart was achieved. The existing cycle chart represents an amalgam of regional and local tectonic events and probably also includes unrecognized miscorrelations. It is of questionable value as an independent standard of geologic time.
Stroke-Related Changes in Neuromuscular Fatigue of the Hip Flexors and Functional Implications
Hyngstrom, Allison S.; Onushko, Tanya; Heitz, Robert P.; Rutkowski, Anthony; Hunter, Sandra K.; Schmit, Brian D.
2014-01-01
Objective: To compare stroke-related changes in hip-flexor neuromuscular fatigue of the paretic leg during a sustained, isometric sub-maximal contraction with the non-paretic leg and controls, and to correlate fatigue with clinical measures of function. Design: Hip torques were measured during a fatiguing hip-flexion contraction at 20% of the hip flexion maximal voluntary contraction (MVC) in the paretic and non-paretic legs of 13 people with chronic stroke and 10 age-matched controls. In addition, participants with stroke performed a fatiguing contraction of the paretic leg at the absolute torque equivalent to 20% MVC of the non-paretic leg and were tested for self-selected walking speed (Ten-Meter Walk Test) and balance (Berg Balance Scale). Results: When matching the non-paretic target torque, the paretic hip flexors had a shorter time to task failure compared with the non-paretic leg and controls (p<0.05). Time to failure of the paretic leg was inversely correlated with the reduction of hip flexion MVC torque. Self-selected walking speed was correlated with declines in torque and steadiness. Berg Balance scores were inversely correlated with the force fluctuation amplitude. Conclusions: Fatigue and precision of contraction are correlated with walking function and balance post-stroke. PMID:22157434
Multipartite quantum correlations among atoms in QED cavities
NASA Astrophysics Data System (ADS)
Batle, J.; Farouk, A.; Tarawneh, O.; Abdalla, S.
2018-02-01
We study the nonlocality dynamics for two models of atoms in cavity quantum electrodynamics (QED); the first model contains atoms in a single cavity undergoing nearest-neighbor interactions with no initial correlation, and the second contains atoms confined in n different and noninteracting cavities, all of which were initially prepared in a maximally correlated state of n qubits corresponding to the atomic degrees of freedom. The nonlocality evolution of the states in the second model shows that the corresponding maximal violation of a multipartite Bell inequality exhibits revivals at precise times, defining nonlocality sudden deaths and nonlocality sudden rebirths, in analogy with entanglement. These quantum correlations are provided analytically for the second model to make the study more thorough. Differences in the first model regarding whether the array of atoms inside the cavity is arranged in a periodic or open fashion are crucial to the generation or redistribution of quantum correlations. This contribution paves the way to using the nonlocality multipartite correlation measure for describing the collective complex behavior displayed by slightly interacting cavity QED arrays.
Noise in two-color electronic distance meter measurements revisited
Langbein, J.
2004-01-01
Frequent, high-precision geodetic data have temporally correlated errors. Temporal correlations directly affect both the estimate of rate and its standard error; the rate of deformation is a key product from geodetic measurements made in tectonically active areas. Various models of temporally correlated errors are developed and these provide relations between the power spectral density and the data covariance matrix. These relations are applied to two-color electronic distance meter (EDM) measurements made frequently in California over the past 15-20 years. Previous analysis indicated that these data have significant random walk error. Analysis using the noise models developed here indicates that the random walk model is valid for about 30% of the data. A second 30% of the data can be better modeled with power law noise with a spectral index between 1 and 2, while another 30% of the data can be modeled with a combination of band-pass-filtered plus random walk noise. The remaining 10% of the data can be best modeled as a combination of band-pass-filtered plus power law noise. This band-pass-filtered noise is a product of an annual cycle that leaks into adjacent frequency bands. For time spans of more than 1 year these more complex noise models indicate that the precision in rate estimates is better than that inferred by just the simpler, random walk model of noise.
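The practical stakes of the noise models discussed above can be illustrated with a small simulation: fitting a linear rate to white noise versus random walk noise (spectral indices 0 and 2) of the same per-sample amplitude, and comparing the scatter of the fitted rates. This is a hedged sketch of the general phenomenon, not the author's analysis; sample counts and amplitudes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_trials = 1000, 200
t = np.arange(n_days) / 365.25  # observation times in years

def rate_scatter(noise_maker):
    """Std of least-squares rate estimates over repeated noise realizations."""
    rates = [np.polyfit(t, noise_maker(), 1)[0] for _ in range(n_trials)]
    return np.std(rates)

white = lambda: rng.normal(0.0, 1.0, n_days)             # spectral index 0
walk = lambda: np.cumsum(rng.normal(0.0, 1.0, n_days))   # spectral index 2 (random walk)

sd_white = rate_scatter(white)
sd_walk = rate_scatter(walk)
```

Temporally correlated (random walk) noise inflates the true rate uncertainty by orders of magnitude relative to what a white-noise assumption would suggest, which is why the choice among the more complex noise models matters for the quoted rate precision.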
Constrained Analysis of Fluorescence Anisotropy Decay: Application to Experimental Protein Dynamics
Feinstein, Efraim; Deikus, Gintaras; Rusinova, Elena; Rachofsky, Edward L.; Ross, J. B. Alexander; Laws, William R.
2003-01-01
Hydrodynamic properties as well as structural dynamics of proteins can be investigated by the well-established experimental method of fluorescence anisotropy decay. Successful use of this method depends on determination of the correct kinetic model, the extent of cross-correlation between parameters in the fitting function, and differences between the timescales of the depolarizing motions and the fluorophore's fluorescence lifetime. We have tested the utility of an independently measured steady-state anisotropy value as a constraint during data analysis to reduce parameter cross correlation and to increase the timescales over which anisotropy decay parameters can be recovered accurately for two calcium-binding proteins. Mutant rat F102W parvalbumin was used as a model system because its single tryptophan residue exhibits monoexponential fluorescence intensity and anisotropy decay kinetics. Cod parvalbumin, a protein with a single tryptophan residue that exhibits multiexponential fluorescence decay kinetics, was also examined as a more complex model. Anisotropy decays were measured for both proteins as a function of solution viscosity to vary hydrodynamic parameters. The use of the steady-state anisotropy as a constraint significantly improved the precision and accuracy of recovered parameters for both proteins, particularly for viscosities at which the protein's rotational correlation time was much longer than the fluorescence lifetime. Thus, basic hydrodynamic properties of larger biomolecules can now be determined with more precision and accuracy by fluorescence anisotropy decay. PMID:12524313
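For the simplest case treated above — monoexponential intensity decay and a single rotational correlation time — the steady-state constraint reduces to the Perrin equation, r_ss = r0 / (1 + τ_f/θ). A minimal sketch of how a measured steady-state anisotropy pins down the rotational correlation time; the numerical values are illustrative assumptions, not the paper's data:

```python
# Perrin relationship: for monoexponential intensity decay (lifetime tau_f)
# and anisotropy decay r(t) = r0 * exp(-t/theta), the steady-state
# anisotropy is r_ss = r0 / (1 + tau_f/theta).
r0 = 0.4          # fundamental anisotropy (assumed)
tau_f = 4.0       # fluorescence lifetime, ns (assumed)
theta_true = 10.0 # rotational correlation time, ns (assumed)

# "Measured" steady-state anisotropy implied by these parameters
r_ss = r0 / (1.0 + tau_f / theta_true)

# Constrained recovery: invert the Perrin equation for theta
theta_rec = tau_f / (r0 / r_ss - 1.0)
```

When θ is much longer than τ_f the time-resolved decay alone barely constrains θ, so anchoring the fit to an independently measured r_ss is what restores precision, as the abstract reports.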
Stratigraphic framework for Pliocene paleoclimate reconstruction: The correlation conundrum
Dowsett, H.J.; Robinson, M.M.
2006-01-01
Pre-Holocene paleoclimate reconstructions face a correlation conundrum because complications inherent in the stratigraphic record impede the development of synchronous reconstruction. The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstructions have carefully balanced temporal resolution and paleoclimate proxy data to achieve a useful and reliable product and are the most comprehensive pre-Pleistocene data sets available for analysis of warmer-than-present climate and for climate modeling experiments. This paper documents the stratigraphic framework for the mid-Pliocene sea surface temperature (SST) reconstruction of the North Atlantic and explores the relationship between stratigraphic/temporal resolution and various paleoceanographic estimates of SST. The magnetobiostratigraphic framework for the PRISM North Atlantic region is constructed from planktic foraminifer, calcareous nannofossil and paleomagnetic reversal events recorded in deep-sea cores and calibrated to age. Planktic foraminifer census data from multiple samples within the mid-Pliocene yield multiple SST estimates for each site. Extracting a single SST value at each site from multiple estimates, given the limitations of the material and stratigraphic resolution, is problematic but necessary for climate model experiments. The PRISM reconstruction, unprecedented in its integration of many different types of data at a focused stratigraphic interval, utilizes a time slab approach and is based on warm peak average temperatures. A greater understanding of the dynamics of the climate system and significant advances in models now mandate more precise, globally distributed yet temporally synchronous SST estimates than are available through averaging techniques. Regardless of the precision used to correlate between sequences within the mid-Pliocene, a truly synoptic reconstruction in the temporal sense is unlikely.
SST estimates from multiple proxies promise to further refine paleoclimate reconstructions but must consider the complications associated with each method, what each proxy actually records, and how these different proxies compare in time-averaged samples.
Johnson, Amy; Dawson, Jeffrey; Rizzo, Matthew
2012-01-01
Summary: Driving simulators provide precise information on vehicular position at high capture rates. To analyze such data, we have previously proposed a time series model that reduces lateral position data to several parameters for measuring lateral control, and have shown that these parameters can detect differences between neurologically impaired and healthy drivers (Dawson et al., 2010a). In this paper, we focus on the “re-centering” parameter of this model, and test whether the parameter estimates are associated with off-road neuropsychological tests and/or with on-road safety errors. We assessed such correlations in 127 neurologically healthy drivers, ages 40 to 89. We found that our re-centering parameter had significant correlations with five neuropsychological tests: Judgment of Line Orientation (r = 0.38), Block Design (r = 0.27), Contrast Sensitivity (r = 0.31), Near Visual Acuity (r = -0.26), and Grooved Pegboard (r = -0.25). We also found that our re-centering parameter was associated with on-road safety errors at stop signs (r = -0.34) and on-road safety errors during turns (r = -0.22). These results suggest that our re-centering parameter may be a useful tool for measuring and monitoring the ability to maintain vehicular lateral control. As GPS-based technology continues to improve in precision and reliability for measuring vehicular position, our time-series model may potentially be applied as an automated index of driver performance in real-world settings that is sensitive to cognitive decline. This work was supported by NIH/NIA awards AG17177, AG15071, and NS044930, and by a scholarship from Nissan Motor Company. PMID:24273756
NASA Astrophysics Data System (ADS)
Tada, Ryuji; Irino, Tomohisa; Ikehara, Ken; Karasuda, Akinori; Sugisaki, Saiko; Xuan, Chuang; Sagawa, Takuya; Itaki, Takuya; Kubota, Yoshimi; Lu, Song; Seki, Arisa; Murray, Richard W.; Alvarez-Zarikian, Carlos; Anderson, William T.; Bassetti, Maria-Angela; Brace, Bobbi J.; Clemens, Steven C.; da Costa Gurgel, Marcio H.; Dickens, Gerald R.; Dunlea, Ann G.; Gallagher, Stephen J.; Giosan, Liviu; Henderson, Andrew C. G.; Holbourn, Ann E.; Kinsley, Christopher W.; Lee, Gwang Soo; Lee, Kyung Eun; Lofi, Johanna; Lopes, Christina I. C. D.; Saavedra-Pellitero, Mariem; Peterson, Larry C.; Singh, Raj K.; Toucanne, Samuel; Wan, Shiming; Zheng, Hongbo; Ziegler, Martin
2018-12-01
The Quaternary hemipelagic sediments of the Japan Sea are characterized by centimeter- to decimeter-scale alternation of dark and light clay to silty clay, which are bio-siliceous and/or bio-calcareous to varying degrees. Each of the dark and light layers is considered to have been deposited synchronously throughout the deeper (> 500 m) part of the sea. However, attempts at correlation and age estimation of individual layers have been limited to the upper few tens of meters. In addition, the exact timing of the depositional onset of these dark and light layers and its synchronicity throughout the deeper part of the sea have not been explored previously, although the onset timing was roughly estimated as 1.5 Ma based on the results of Ocean Drilling Program Legs 127/128. Consequently, it is not certain exactly when their deposition started, whether deposition of the dark and light layers was synchronous, and whether they are also correlatable in the earlier part of their depositional history. The Quaternary hemipelagic sediments of the Japan Sea were drilled at seven sites during Integrated Ocean Drilling Program Expedition 346 in 2013. Alternation of dark and light layers was recovered at six sites whose water depths are > 900 m, and continuous composite columns were constructed at each site. Here, we report our effort to correlate individual dark layers and estimate their ages based on a newly constructed age model at Site U1424 using the best available paleomagnetic datums and marker tephras. The age model is further tuned to the LR04 δ18O curve using gamma ray attenuation (GRA) density, since it reflects diatom content, which is higher during interglacial high-stands. The constructed age model for Site U1424 is projected to the other sites using correlation of dark layers to form a high-resolution, high-precision paleo-observatory network that allows reconstruction of changes in material fluxes with high spatio-temporal resolution.
Wang, Jianming; Ke, Chunlei; Yu, Zhinuan; Fu, Lei; Dornseif, Bruce
2016-05-01
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, are also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression could change the course of the overall survival. Progression-free survival, related both to OS and TTP, will be handled separately, as it can be derived from OS and TTP. The authors seek to develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd.
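A bare-bones version of the event-accrual prediction problem — without the surrogate-information refinement the article proposes — assumes exponential survival with a conjugate Gamma prior on the hazard and simulates the calendar time to the target death count. All interim numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interim data: 60 deaths over 400 patient-years of follow-up,
# 140 patients still at risk, final analysis planned at 100 deaths.
deaths_obs, exposure, n_at_risk, target = 60, 400.0, 140, 100

# Conjugate Gamma(a0, b0) prior on the exponential hazard lambda;
# the posterior is Gamma(a0 + deaths, b0 + exposure).
a0, b0 = 0.001, 0.001
post_a, post_b = a0 + deaths_obs, b0 + exposure

# Posterior predictive simulation of time until (target - deaths_obs) more deaths:
# with exponential lifetimes, the gap before the i-th additional death has
# rate lambda * (number still at risk).
sims = []
for _ in range(2000):
    lam = rng.gamma(post_a, 1.0 / post_b)  # posterior draw of the hazard
    gaps = [rng.exponential(1.0 / (lam * (n_at_risk - i)))
            for i in range(target - deaths_obs)]
    sims.append(sum(gaps))
t_pred = np.median(sims)  # predicted years until the final analysis trigger
```

Borrowing strength from a correlated surrogate such as TTP, as the article proposes, would tighten the posterior on the hazard and hence narrow the predictive interval around `t_pred`.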
Geographically correlated errors observed from a laser-based short-arc technique
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Barlier, F.
1999-07-01
The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists in fitting short arcs (about 4000 km), issued from a global orbit, with satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits have then been compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii) called hereafter the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE) which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. The short-arc technique being very sensitive to such error sources, our analysis however demonstrates that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a time period of about 6 months. In addition, impact of time varying-error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of T/P POE.
GPS/GLONASS Combined Precise Point Positioning with Receiver Clock Modeling
Wang, Fuhong; Chen, Xinghan; Guo, Fei
2015-01-01
Research has demonstrated that receiver clock modeling can reduce the correlation coefficients among the parameters of receiver clock bias, station height and zenith tropospheric delay. This paper introduces receiver clock modeling to GPS/GLONASS combined precise point positioning (PPP), aiming to better separate the receiver clock bias and station coordinates and therefore improve positioning accuracy. Firstly, the basic mathematic models including the GPS/GLONASS observation equations, stochastic model, and receiver clock model are briefly introduced. Then datasets from several IGS stations equipped with high-stability atomic clocks are used for kinematic PPP tests. To investigate the performance of PPP, including the positioning accuracy and convergence time, a week (1–7 January 2014) of GPS/GLONASS data retrieved from these IGS stations are processed with different schemes. The results indicate that both the positioning accuracy and the convergence time benefit from receiver clock modeling. This is particularly pronounced for the vertical component. RMS statistics show that the average improvement of three-dimensional positioning accuracy reaches 30%–40%, and even exceeds 60% for specific stations. Compared to GPS-only PPP, solutions of the GPS/GLONASS combined PPP are much better whether or not the receiver clock offsets are modeled, indicating that positioning accuracy and reliability are significantly improved by the additional GLONASS satellites when the number of GPS satellites is insufficient or the geometry is poor. In addition to the receiver clock modeling, the impacts of different inter-system timing bias (ISB) models are investigated. For the case of a sufficient number of satellites with fairly good geometry, the PPP performance is not seriously affected by the ISB model, due to the low correlation between the ISB and the other parameters.
However, refining the ISB model weakens the correlation between the coordinates and the ISB estimates, and thus enhances PPP performance under poor observation conditions. PMID:26134106
Reuter, Markus; Piller, Werner E; Brandano, Marco; Harzhauser, Mathias
2013-12-01
Shallow-marine sediment records have the strong potential to display sensitive environmental changes in sedimentary geometries and skeletal content. However, the time resolution of most neritic carbonate records is not high enough to be compared with climatic events as recorded in the deep-sea sediment archives. In order to resolve the paleoceanographic and paleoclimatic changes during the Oligocene-Miocene transition in the Mediterranean shallow water carbonate systems with the best possible time resolution, we re-evaluated the Decontra section on the Maiella Platform (central Apennines, Italy), which acts as a reference for the correlation of Oligocene-Miocene shallow water deposits in the Mediterranean region. The 120-m-thick late Oligocene-late Miocene carbonate succession is composed of larger foraminiferal, bryozoan and corallinacean limestones interlayered with distinct planktonic foraminiferal carbonates representing a mostly outer neritic setting. Integrated multi-proxy and facies analyses indicate that CaCO3 and total organic carbon contents, as well as gamma-ray signals, record only local to regional processes on the carbonate platform and are not suited for stratigraphic correlation on a wider scale. In contrast, new biostratigraphic data correlate the Decontra stable carbon isotope record to the global deep-sea carbon isotope record. This links relative sea level fluctuations, which are reflected by facies and magnetic susceptibility changes, to third-order eustatic cycles. The new integrated bio-, chemo-, and sequence stratigraphic framework enables a more precise timing of environmental changes within the studied time interval and identifies Decontra as an important locality for correlating shallow and deep water sediments not only within the Mediterranean region but also on a global scale.
Comparison of in vivo 3D cone-beam computed tomography tooth volume measurement protocols.
Forst, Darren; Nijjar, Simrit; Flores-Mir, Carlos; Carey, Jason; Secanell, Marc; Lagravere, Manuel
2014-12-23
The objective of this study is to analyze a set of previously developed and proposed image segmentation protocols for intra- and inter-rater precision of in vivo tooth volume measurements using cone-beam computed tomography (CBCT) images. Six 3D volume segmentation procedures were proposed and tested for intra- and inter-rater reliability to quantify maxillary first molar volumes. Ten randomly selected maxillary first molars were measured in vivo, in random order, three times, with 10 days separating measurements. Intra- and inter-rater agreement for all segmentation procedures was assessed using the intra-class correlation coefficient (ICC). The highest precision was achieved with automated thresholding followed by manual refinements. A tooth volume measurement protocol for CBCT images employing automated segmentation with manual human refinement on a 2D slice-by-slice basis in all three planes of space possessed excellent intra- and inter-rater reliability. Three-dimensional volume measurements of the entire tooth structure are more precise than 3D volume measurements of only the dental roots apical to the cemento-enamel junction (CEJ).
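The "automated thresholding" step in a segmentation protocol of this kind is commonly an Otsu-style histogram threshold; the abstract does not name the specific algorithm, so the following is a hedged sketch on a synthetic two-class image rather than the study's method:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                  # pixel count below each candidate
    w1 = w0[-1] - w0                      # pixel count above
    m0 = np.cumsum(hist * centers)        # cumulative intensity sum
    mu0 = m0 / np.maximum(w0, 1)          # mean below threshold
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)  # mean above threshold
    between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
    return centers[np.argmax(between)]

# Synthetic two-class "CBCT slice": dark background, bright tooth-like region
rng = np.random.default_rng(2)
img = rng.normal(100.0, 10.0, (64, 64))
img[20:44, 20:44] = rng.normal(200.0, 10.0, (24, 24))

thr = otsu_threshold(img)
mask = img > thr  # initial automated segmentation, before manual refinement
```

In the protocol described above, a mask like this would then be refined manually slice by slice in all three planes before the voxel volume is summed.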
Liu, Dan; Wang, Qisong; Liu, Xin; Niu, Ruixin; Zhang, Yan; Sun, Jinwei
2016-01-01
Accurately measuring the oil content and salt content of crude oil is very important for both estimating oil reserves and predicting the lifetime of an oil well. Current methods suffer from high cost, low precision, and difficulties in operation. To solve these problems, we present a multifunctional sensor that applies a conductivity method and an ultrasound method, respectively, to measure the contents of oil, water, and salt. Based on cross-sensitivity theory, the two transducers are integrated to simplify the structure. A concentration test of ternary solutions is carried out to verify the sensor's effectiveness, and Canonical Correlation Analysis is then applied to evaluate the data. From a statistical perspective, the sensor inputs, for instance oil concentration, salt concentration, and temperature, are closely related to its outputs, including output voltage and ultrasonic time of flight, which further confirms the correctness of the sensing theory and the feasibility of the integrated design. Combined with reconstruction algorithms, the sensor can measure the contents of the solution precisely. The proposed sensor and method are of important reference and practical value for the online testing of crude oil.
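Canonical Correlation Analysis finds paired linear combinations of two variable sets that are maximally correlated. A minimal numpy sketch on synthetic stand-in data (the variable names and mixing matrix below are illustrative, not the paper's measurements):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column sets X and Y (rows = samples).
    Returned values lie in [0, 1]; values near 1 indicate strongly coupled
    linear combinations of the two variable sets."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)
    def inv_sqrt(S):                      # inverse square root via eigh
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

# hypothetical sensor data: inputs (oil conc., salt conc., temperature)
# and outputs (voltage, ultrasonic time of flight)
rng = np.random.default_rng(1)
inputs = rng.normal(size=(200, 3))
outputs = inputs[:, :2] @ np.array([[1.0, 0.3], [0.2, 0.8]]) \
          + 0.1 * rng.normal(size=(200, 2))
print(canonical_correlations(inputs, outputs))
```

Because the outputs here are (noisy) linear functions of the inputs, the leading canonical correlation is close to 1, mirroring the close input-output relation the paper reports.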
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
NASA Technical Reports Server (NTRS)
Guglielmi, G.; Selby, K.; Blunt, B. A.; Jergas, M.; Newitt, D. C.; Genant, H. K.; Majumdar, S.
1996-01-01
RATIONALE AND OBJECTIVES: Marrow transverse relaxation time (T2*) in magnetic resonance (MR) imaging may be related to the density and structure of the surrounding trabecular network. We investigated regional variations of T2* in the human calcaneus and compared the findings with bone mineral density (BMD), as measured by dual X-ray absorptiometry (DXA). Short- and long-term precisions were evaluated first to determine whether MR imaging would be useful for the clinical assessment of disease status and progression in osteoporosis. METHODS: Gradient-recalled echo MR images of the calcaneus were acquired at 1.5 T from six volunteers. Measurements of T2* were compared with BMD and (for one volunteer) conventional radiography. RESULTS: T2* values showed significant regional variation; they typically were shortest in the superior region of the calcaneus. There was a linear correlation between MR and DXA measurements (r = .66 for 1/T2* versus BMD). Differences in T2* attributable to variations in analysis region-of-interest placement were not significant for five of the six volunteers. Sagittal MR images had short- and long-term precision errors of 4.2% and 3.3%, respectively. For DXA, the precision was 1.3% (coefficient of variation). CONCLUSION: MR imaging may be useful for trabecular bone assessment in the calcaneus. However, given the large regional variations in bone density and structure, the choice of an ROI is likely to play a major role in the accuracy, precision, and overall clinical efficacy of T2* measurements.
Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio
2017-01-10
The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
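The calibration described above, fitting constants so that Tmax is a linear function of TD and MTT, is an ordinary least-squares problem. A sketch on invented numbers (the coefficients and noise level are illustrative, not the study's phantom values):

```python
import numpy as np

# Hypothetical example: fit constants (a, b, c) so that
# Tmax ≈ a*TD + b*MTT + c, calibrated against Tmax values
# from an SVD-based reference method.
rng = np.random.default_rng(2)
td = rng.uniform(0, 6, size=100)       # tracer arrival delay (s)
mtt = rng.uniform(2, 12, size=100)     # mean transit time (s)
tmax_svd = 1.0 * td + 0.5 * mtt + 0.2 + rng.normal(0, 0.05, 100)

A = np.column_stack([td, mtt, np.ones_like(td)])  # design matrix
(a, b, c), *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)
tmax_fit = a * td + b * mtt + c
r = np.corrcoef(tmax_fit, tmax_svd)[0, 1]         # Pearson agreement
print(a, b, c, r)
```

With small residual noise, the recovered coefficients match the generating ones and the Pearson correlation approaches 1, the same style of agreement the paper reports between Bayesian-derived and SVD-derived Tmax.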
Tone series and the nature of working memory capacity development.
Clark, Katherine M; Hardman, Kyle O; Schachtman, Todd R; Saults, J Scott; Glass, Bret A; Cowan, Nelson
2018-04-01
Recent advances in understanding visual working memory, the limited information held in mind for use in ongoing processing, are extended here to examine auditory working memory development. Research with arrays of visual objects has shown how to distinguish the capacity, in terms of the number of objects retained, from the precision of the object representations. We adapt the technique to sequences of nonmusical tones, in an investigation including children (6-13 years, N = 84) and adults (26-50 years, N = 31). For each series of 1 to 4 tones, the participant responded by using an 80-choice scale to try to reproduce the tone at a queried serial position. Despite the much longer-lasting usefulness of sensory memory for tones compared with visual objects, the observed tone capacity was similar to previous findings for visual capacity. The results also constrain theories of childhood working memory development, indicating increases with age in both the capacity and the precision of the tone representations, similar to the visual studies, rather than age differences in time-based memory decay. The findings, including patterns of correlations between capacity, precision, and some auxiliary tasks and questionnaires, establish capacity and precision as dissociable processes and place important constraints on various hypotheses of working memory development.
Semler, Joerg; Wellmann, Katharina; Wirth, Felicitas; Stein, Gregor; Angelova, Srebrina; Ashrafi, Mahak; Schempf, Greta; Ankerne, Janina; Ozsoy, Ozlem; Ozsoy, Umut; Schönau, Eckhard; Angelov, Doychin N; Irintchev, Andrey
2011-07-01
Precise assessment of motor deficits after traumatic spinal cord injury (SCI) in rodents is crucial for understanding the mechanisms of functional recovery and testing therapeutic approaches. Here we analyzed the applicability to a rat SCI model of an objective approach, single-frame motion analysis, created and used for functional analysis in mice. Adult female Wistar rats were subjected to graded compression of the spinal cord. Recovery of locomotion was analyzed using video recordings of beam walking and inclined ladder climbing. Three out of four parameters used in mice appeared suitable: the foot-stepping angle (FSA) and the rump-height index (RHI), measured during beam walking for estimating paw placement and body weight support, respectively, and the number of correct ladder steps (CLS), assessing skilled limb movements. These parameters, similar to the Basso, Beattie, and Bresnahan (BBB) locomotor rating scores, correlated with lesion volume and showed significant differences between moderately and severely injured rats at 1-9 weeks after SCI. The beam parameters, but not CLS, correlated well with the BBB scores within ranges of poor and good locomotor abilities. FSA co-varied with RHI only in the severely impaired rats, while RHI and CLS were barely correlated. Our findings suggest that the numerical parameters estimate, as intended by design, predominantly different aspects of locomotion. The use of these objective measures combined with BBB rating provides a time- and cost-efficient opportunity for versatile and reliable functional evaluations in both severely and moderately impaired rats, combining clinical assessment with precise numerical measures.
Differential porosimetry and permeametry for random porous media.
Hilfer, R; Lemmer, A
2015-07-01
Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
Super-Hubble de Sitter fluctuations and the dynamical RG
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Leblond, L.; Holman, R.; Shandera, S.
2010-03-01
Perturbative corrections to correlation functions for interacting theories in de Sitter spacetime often grow secularly with time, due to the properties of fluctuations on super-Hubble scales. This growth can lead to a breakdown of perturbation theory at late times. We argue that Dynamical Renormalization Group (DRG) techniques provide a convenient framework for interpreting and resumming these secularly growing terms. In the case of a massless scalar field in de Sitter with quartic self-interaction, the resummed result is also less singular in the infrared, in precisely the manner expected if a dynamical mass is generated. We compare this improved infrared behavior with large-N expansions when applicable.
NASA Astrophysics Data System (ADS)
Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.
2014-12-01
Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the surface, near-field, coseismic deformation pattern in high resolution using aerial photography. We apply our methodology using the program COSI-corr to successfully cross-correlate pairs of aerial, optical imagery before and after the 1992, Mw 7.3 Landers and 1999, Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and magnitude and width of off-fault deformation with sub-pixel precision. This technique can be applied in a cost effective manner for recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result due to use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained for the first time the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while at the same time accounting for both off-fault deformation and measurement biases that typically complicates such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation. 
We show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of displacement away from the main primary rupture as off-fault deformation, over mean deformation widths of 183 m and 133 m, respectively. We envisage that correlation results derived from our methodology will provide vital data on near-field deformation patterns and will be of significant use for constraining inversion solutions for fault slip at depth.
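Image cross-correlation of the kind used to measure these offsets is often implemented in the Fourier domain. Below is a minimal integer-pixel phase-correlation sketch on a synthetic image (COSI-Corr itself resolves sub-pixel offsets with more elaborate windowed correlation):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel translation taking image a to image b
    via the normalized cross-power spectrum (phase correlation)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak positions to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(img, shifted))  # recovers the (3, -5) shift
```

For an exact circular shift the correlation surface is a delta function, so the peak lands precisely on the applied offset; real before/after imagery adds decorrelation noise and the sliding-window bias the abstract describes.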
Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin
2017-03-01
Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series and intelligent models is put forward. First, through the process of photosynthesis, the main factors that affect the reproduction of the algae are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine, is put forward and combined with Kernel Principal Component Analysis to reduce the dimensionality of the factors influencing blooms. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method better compensates the multivariate time series prediction model and is an effective way to improve the description accuracy of algae growth and the prediction precision of water blooms.
Precise chronology of differentiation of developing human primary dentition.
Hu, Xuefeng; Xu, Shan; Lin, Chensheng; Zhang, Lishan; Chen, YiPing; Zhang, Yanding
2014-02-01
While the correlation of developmental stage with embryonic age in the human primary dentition has been well documented, the available information regarding the differentiation timing of the primary teeth was largely based on observation of initial mineralization and varies significantly. In this study, we aimed to document the precise differentiation timing of the developing human primary dentition. We systematically examined the expression of odontogenic differentiation markers along with the formation of mineralized tissue in each developing maxillary and mandibular tooth from human embryos of well-defined embryonic age. We show that, although all primary teeth initiate development at the same time, odontogenic differentiation begins in the maxillary incisors at the 15th week and in the mandibular incisors at the 16th week of gestation, followed sequentially at one-week intervals by the canine, the first primary premolar, and the second primary premolar. Although the mandibular primary incisors erupt earlier than the maxillary incisors, this distal-to-proximal sequential differentiation of the human primary dentition coincides in general with the sequence of tooth eruption. Our results provide an accurate chronology of odontogenic differentiation of the developing human primary dentition, which could be used as a reference for future studies of human tooth development.
First measurement of the cross-correlation of CMB lensing and galaxy lensing
Hand, Nick; Leauthaud, Alexie; Das, Sudeep; ...
2015-03-02
Here, we measure the cross-correlation of cosmic microwave background (CMB) lensing convergence maps derived from Atacama Cosmology Telescope data with galaxy lensing convergence maps as measured by the Canada-France-Hawaii Telescope Stripe 82 Survey. The CMB-galaxy lensing cross power spectrum is measured for the first time with a significance of 4.2 sigma, which corresponds to a 12% constraint on the amplitude of density fluctuations at redshifts ~0.9. With upcoming improved lensing data, this novel type of measurement will become a powerful cosmological probe, providing a precise measurement of the mass distribution at intermediate redshifts and serving as a calibrator for systematic biases in weak lensing measurements.
Multimodal correlation and intraoperative matching of virtual models in neurosurgery
NASA Technical Reports Server (NTRS)
Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo
1994-01-01
The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools, and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single underlying problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method gives the patient minimum discomfort, with errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET, and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
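Relating representations of the same object across reference frames ultimately means estimating a rigid transform between corresponding point sets. A standard building block for this is the Kabsch algorithm, sketched here on synthetic points (a generic method, not necessarily the authors' exact surface-matching implementation):

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch algorithm: rotation R and translation t minimizing
    the RMS of (R @ p + t) - q over paired point sets P, Q (n x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# synthetic test: rotate about z and translate, then recover the transform
rng = np.random.default_rng(4)
P = rng.normal(size=(30, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([10.0, -2.0, 5.0])
R, t = rigid_align(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [10.0, -2.0, 5.0]))
```

In practice the correspondences between a segmented image surface and laser-scanned skull points are not known in advance, so this step is typically iterated inside a matching loop (e.g., iterative closest point).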
Cramer, Bradley D.; Loydell, David K.; Samtleben, Christian; Munnecke, Axel; Kaljo, Dimitri; Mannik, Peep; Martma, Tonu; Jeppsson, Lennart; Kleffner, Mark A.; Barrick, James E.; Johnson, Craig A.; Emsbo, Poul; Joachimski, Michael M.; Bickert, Torsten; Saltzman, Matthew R.
2010-01-01
The resolution and fidelity of global chronostratigraphic correlation are direct functions of the time period under consideration. By virtue of deep-ocean cores and astrochronology, the Cenozoic and Mesozoic time scales carry error bars of a few thousand years (k.y.) to a few hundred k.y. In contrast, most of the Paleozoic time scale carries error bars of plus or minus a few million years (m.y.), and chronostratigraphic control better than ±1 m.y. is considered "high resolution." The general lack of Paleozoic abyssal sediments and paucity of orbitally tuned Paleozoic data series, combined with the relative incompleteness of the Paleozoic stratigraphic record, have proven historically to be such an obstacle to intercontinental chronostratigraphic correlation that resolving the Paleozoic time scale to the level achieved during the Mesozoic and Cenozoic was viewed as impractical, impossible, or both. Here, we utilize integrated graptolite, conodont, and carbonate carbon isotope (δ13Ccarb) data from three paleocontinents (Baltica, Avalonia, and Laurentia) to demonstrate chronostratigraphic control for upper Llandovery through middle Wenlock (Telychian-Sheinwoodian, ~436-426 Ma) strata with a resolution of a few hundred k.y. The interval surrounding the base of the Wenlock Series can now be correlated globally with precision approaching 100 k.y., but some intervals (e.g., uppermost Telychian and upper Sheinwoodian) are either yet to be studied in sufficient detail or do not show sufficient biologic speciation and/or extinction or carbon isotopic features to delineate such small time slices. Although producing such resolution during the Paleozoic presents an array of challenges unique to the era, we have begun to demonstrate that erecting a Paleozoic time scale comparable to that of younger eras is achievable. © 2010 Geological Society of America.
Summability of Connected Correlation Functions of Coupled Lattice Fields
NASA Astrophysics Data System (ADS)
Lukkarinen, Jani; Marcozzi, Matteo; Nota, Alessia
2018-04-01
We consider two nonindependent random fields ψ and φ defined on a countable set Z. For instance, Z = Z^d or Z = Z^d × I, where I denotes a finite set of possible "internal degrees of freedom" such as spin. We prove that, if the cumulants of ψ and φ enjoy a certain decay property, then all joint cumulants between ψ and φ are ℓ2-summable in the precise sense described in the text. The decay assumption for the cumulants of ψ and φ is a restricted ℓ1-summability condition called the ℓ1-clustering property. One immediate application of the results is given by a stochastic process ψ_t(x) whose state is ℓ1-clustering at any time t: then the above estimates can be applied with ψ = ψ_t and φ = ψ_0, and we obtain estimates, uniform in t, for the summability of the time-correlations of the field. The above clustering assumption is obviously satisfied by any ℓ1-clustering stationary state of the process, and our original motivation for the control of the summability of time-correlations comes from a quest for a rigorous control of the Green-Kubo correlation function in such a system. A key role in the proof is played by the properties of non-Gaussian Wick polynomials and their connection to cumulants.
X-ray scattering measurements of strong ion-ion correlations in shock-compressed aluminum.
Ma, T; Döppner, T; Falcone, R W; Fletcher, L; Fortmann, C; Gericke, D O; Landen, O L; Lee, H J; Pak, A; Vorberger, J; Wünsch, K; Glenzer, S H
2013-02-08
The strong ion-ion correlation peak characteristic of warm dense matter (WDM) is observed for the first time using simultaneous angularly, temporally, and spectrally resolved x-ray scattering measurements in laser-driven shock-compressed aluminum. Laser-produced molybdenum x-ray line emission at an energy of 17.9 keV is employed to probe aluminum compressed to a density of ρ > 8 g/cm³. We observe a well-pronounced peak in the static structure factor at a wave number of k = 4.0 Å⁻¹. The measurements of the magnitude and position of this correlation peak are precise enough to test different theoretical models for the ion structure and show that only models taking the complex interactions in WDM into account agree with the data. This also demonstrates a new highly accurate diagnostic to directly measure the state of compression of warm dense matter.
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
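A maximum likelihood position estimate from photon counts can be illustrated with a toy Poisson model: given per-channel expected-count curves as a function of depth z, pick the z on a grid that maximizes the log-likelihood of the observed counts. The two-channel response below is invented for illustration and is far simpler than TSUNAMI's actual multiplexed illumination:

```python
import numpy as np

def mle_z(counts, z_grid, response):
    """Grid-search MLE: the z maximizing the Poisson log-likelihood of
    the observed photon counts. response(z) -> expected counts per channel."""
    def loglik(z):
        lam = response(z)
        return np.sum(counts * np.log(lam) - lam)   # Poisson, up to a const
    return max(z_grid, key=loglik)

# toy two-channel response: expected counts rise/fall with depth z (um)
def response(z):
    return np.array([200.0 * np.exp(-0.5 * ((z - 1.0) / 2.0) ** 2) + 5.0,
                     200.0 * np.exp(-0.5 * ((z + 1.0) / 2.0) ** 2) + 5.0])

rng = np.random.default_rng(5)
true_z = 0.4
counts = rng.poisson(response(true_z))   # one simulated observation
z_grid = np.linspace(-5, 5, 1001)
print(mle_z(counts, z_grid, response))
```

The ratio of counts between the two depth-offset channels localizes the particle; with a few hundred photons the estimate lands within a fraction of the channel width of the true depth.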
Full-Counting Many-Particle Dynamics: Nonlocal and Chiral Propagation of Correlations
NASA Astrophysics Data System (ADS)
Ashida, Yuto; Ueda, Masahito
2018-05-01
The ability to measure single quanta allows the complete characterization of small quantum systems, known as full-counting statistics. Quantum gas microscopy enables one to observe many-body systems with single-atom precision. We extend the idea of full-counting statistics to nonequilibrium open many-particle dynamics and apply it to discuss quench dynamics. By way of illustration, we consider an exactly solvable model to demonstrate the emergence of unique phenomena such as nonlocal and chiral propagation of correlations, leading to a concomitant oscillatory entanglement growth. We find that correlations can propagate beyond the conventional maximal speed, known as the Lieb-Robinson bound, at the cost of the probabilistic nature of quantum measurement. These features become most prominent at the real-to-complex spectrum transition point of an underlying parity-time-symmetric effective non-Hermitian Hamiltonian. A possible experimental situation with quantum gas microscopy is discussed.
Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor
2014-06-01
We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pair of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking into account the autocorrelation. It uses Monte-Carlo methods, and focuses on permuting, and then smoothing and scaling one of the variables to destroy the correlation with the other, while maintaining at the same time the initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.
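The permute-then-smooth idea can be sketched in a few lines: permuting one field destroys both its correlation with the other field and its own autocorrelation, so each permuted series is re-smoothed (and rescaled) before its null correlation is recorded. A 1-D toy version with a boxcar kernel follows; the paper's method operates on spatial fields and matches the autocorrelation more carefully:

```python
import numpy as np

def smoothed_permutation_null(x, y, kernel, n_perm=500, rng=None):
    """Monte-Carlo null for corr(x, y) when both fields are autocorrelated:
    permute y, re-smooth with the same kernel to restore its spatial
    structure, rescale to unit variance, and record the correlation."""
    if rng is None:
        rng = np.random.default_rng()
    null = np.empty(n_perm)
    for i in range(n_perm):
        yp = rng.permutation(y)
        ys = np.convolve(yp, kernel, mode="same")   # restore autocorrelation
        ys = (ys - ys.mean()) / ys.std()
        null[i] = np.corrcoef(x, ys)[0, 1]
    return null

# two independent but autocorrelated 1-D fields (moving-average smoothing)
rng = np.random.default_rng(6)
kernel = np.ones(15) / 15.0
x = np.convolve(rng.normal(size=500), kernel, mode="same")
y = np.convolve(rng.normal(size=500), kernel, mode="same")
null = smoothed_permutation_null(x, y, kernel, rng=rng)
obs = np.corrcoef(x, y)[0, 1]
p = np.mean(np.abs(null) >= abs(obs))
print(round(p, 3))
```

The null distribution built this way is noticeably wider than the naive 1/sqrt(n) spread for independent observations, which is exactly why the standard test over-rejects on autocorrelated fields.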
The Delicate Analysis of Short-Term Load Forecasting
NASA Astrophysics Data System (ADS)
Song, Changwei; Zheng, Yuan
2017-05-01
This paper proposes a new method for short-term load forecasting based on the similar-day method, correlation coefficients, and the Fast Fourier Transform (FFT) to achieve a precise analysis of load variation from three aspects (typical day, correlation coefficient, spectral analysis) and three dimensions (time, industry, and the main factors influencing the load characteristic, such as national policies, regional economics, holidays, and electricity prices). First, the one-class SVM branch algorithm is adopted to select the typical day. Second, the correlation coefficient method is used to obtain the direction and strength of the linear relationship between two random variables, which can reflect the influence of customer macro policy and the scale of production on the electricity price. Third, a Fourier-transform residual-error correction model is proposed to extract the nature of the load from the residual error. Finally, simulation results indicate the validity and engineering practicability of the proposed method.
Fundamental limits of scintillation detector timing precision
NASA Astrophysics Data System (ADS)
Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.
2014-07-01
In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10,000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A^(-1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(-1/2). An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10,000 photoelectrons/ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10,000 photoelectrons/ns.
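The dominant A^(-1/2) dependence can be checked with a toy Monte Carlo: estimating the event time as the mean of A photoelectron arrival times drawn from an exponential decay gives a timing spread of exactly tau/sqrt(A), so a log-log fit of precision versus A recovers a slope of -1/2. This simple estimator stands in for the paper's leading-edge discriminator simulation:

```python
import numpy as np

rng = np.random.default_rng(7)
tau = 40.0                                   # scintillator decay time (ns)
A_values = np.array([10, 100, 1000, 10000])  # photoelectrons per event
R = []
for A in A_values:
    # toy estimator: event time = mean of the A photoelectron arrival times
    arrival_means = rng.exponential(tau, size=(500, A)).mean(axis=1)
    R.append(arrival_means.std())            # timing spread for this A
slope = np.polyfit(np.log(A_values), np.log(R), 1)[0]
print(round(slope, 2))
```

The fitted slope comes out close to -1/2, consistent with the scaling that motivates tabulating B in R = B·A^(-1/2).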
Precise GPS ephemerides from DMA and NGS tested by time transfer
NASA Technical Reports Server (NTRS)
Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine
1992-01-01
It was shown that the use of the Defense Mapping Agency's (DMA) precise ephemerides brings a significant improvement to the accuracy of GPS time transfer. At present a new set of precise ephemerides produced by the National Geodetic Survey (NGS) has been made available to the timing community. This study demonstrates that both types of precise ephemerides improve long-distance GPS time transfer and remove the effects of Selective Availability (SA) degradation of broadcast ephemerides. The issue of overcoming SA is also discussed in terms of the routine availability of precise ephemerides.
NASA Astrophysics Data System (ADS)
Börries, S.; Metz, O.; Pranzas, P. K.; Bellosta von Colbe, J. M.; Bücherl, T.; Dornheim, M.; Klassen, T.; Schreyer, A.
2016-10-01
For the storage of hydrogen, complex metal hydrides are considered highly promising with respect to capacity, reversibility and safety. The optimization of corresponding storage tanks demands a precise and time-resolved investigation of the hydrogen distribution in scaled-up metal hydride beds. In this study it is shown that in situ fission neutron radiography provides unique insights into the spatial distribution of hydrogen even for scaled-up compacts and thereby enables a direct study of hydrogen storage tanks. A technique is introduced for the precise quantification of both time-resolved data and the a priori material distribution, allowing, inter alia, for an optimization of the compact manufacturing process. For the first time, several macroscopic fields are combined, which elucidates the great potential of neutron imaging for investigations of metal hydrides by going further than solely 'imaging' the system: a combination of in situ neutron radiography, IR thermography and thermodynamic quantities can reveal the interdependency of different driving forces for a scaled-up sodium alanate pellet by means of a multi-correlation analysis. A decisive, time-resolved and complex influence of material packing density is derived. The results of this study enable a variety of new investigation possibilities that provide essential information on the optimization of future hydrogen storage tanks.
Time resolved optical system for an early detection of prostate tumor
NASA Astrophysics Data System (ADS)
Hervé, Lionel; Laidevant, Aurélie; Debourdeau, Mathieu; Boutet, Jérôme; Dinten, Jean-Marc
2011-02-01
We developed an endorectal time-resolved optical probe aiming at early detection of prostate tumors targeted by fluorescent markers. Optical fibers are embedded inside a clinically available ultrasound endorectal probe. Excitation light is driven sequentially from a femtosecond laser (775 nm) into 6 source fibers. 4 detection fibers collect the medium responses at the excitation and fluorescence wavelength (850 nm) by means of 4 photomultipliers associated with a 4-channel time-correlated single photon counting card. We also developed the method to process the experimental data. This involves the numerical computation of the forward model, the creation of robust features that are automatically corrected for numerous possible experimental biases, and the reconstruction of the inclusion by using the intensity and mean time of these features. To evaluate our system performance, we acquired measurements of a 40 μL ICG inclusion (10 μmol/L) at various lateral and depth locations in a phantom. Analysis of the results showed we correctly reconstructed the fluorophore for the lateral positions (16 mm range) and for a distance to the probe of up to 1.5 cm. Precision of localization was found to be around 1 mm, which complies well with the precision specifications needed for the clinical application.
Precise Relative Earthquake Magnitudes from Cross Correlation
Cleveland, K. Michael; Ammon, Charles J.
2015-04-21
We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
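A minimal numpy sketch of the underlying idea: each event pairing yields a log amplitude ratio, and a joint least-squares solve over all pairings recovers relative sizes without a reference event. The synthetic waveform model and the least-squares scale estimate (used here in place of the paper's cross-correlation measure) are simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "events": one shared wavelet, scaled per event, plus noise.
wavelet = np.sin(np.linspace(0.0, 6.0 * np.pi, 200)) * np.hanning(200)
true_amp = np.array([1.0, 2.0, 4.0])
waves = [a * wavelet + 0.01 * rng.standard_normal(200) for a in true_amp]

def pair_log_ratio(x, y):
    """Log amplitude ratio of two aligned waveforms via least squares
    (a simplified stand-in for the paper's cross-correlation measure)."""
    return np.log10(np.dot(x, y) / np.dot(y, y))

# Joint least squares over all event pairings, with a zero-mean
# constraint so only relative sizes are determined.
n = len(waves)
rows, obs = [], []
for i in range(n):
    for j in range(i + 1, n):
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        obs.append(pair_log_ratio(waves[i], waves[j]))
rows.append(np.ones(n))
obs.append(0.0)
m, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
print(np.round(m - m[0], 2))  # log10 sizes relative to event 0
```

Because every pairing contributes an observation, the solve is overdetermined and no single event has to serve as a reliable reference.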
Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics
2015-01-01
We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
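The indirect (van't Hoff) route mentioned above fits ln K against 1/T; the slope equals -ΔH/R. A minimal sketch with synthetic ΔG(T) values (the ΔH and ΔS used to generate them are arbitrary illustrative numbers, not the paper's results):

```python
import numpy as np

R_GAS = 0.0019872  # gas constant, kcal/(mol*K)

def vant_hoff_enthalpy(temps_K, dG_kcal):
    """Binding enthalpy from the temperature dependence of dG:
    ln K = -dG/(R*T), and the slope of ln K vs 1/T equals -dH/R."""
    T = np.asarray(temps_K, dtype=float)
    lnK = -np.asarray(dG_kcal, dtype=float) / (R_GAS * T)
    slope, _ = np.polyfit(1.0 / T, lnK, 1)
    return -slope * R_GAS

# Synthetic dG(T) generated from assumed dH, dS (illustrative only).
dH, dS = -10.0, -0.010  # kcal/mol, kcal/(mol*K)
T = np.array([280.0, 290.0, 300.0, 310.0])
dG = dH - T * dS
print(round(vant_hoff_enthalpy(T, dG), 3))
```

With noiseless synthetic data the fit recovers the generating ΔH exactly; the paper's self-consistency check compares this indirect estimate against direct end-point potential-energy calculations.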
Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.
Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark
2016-04-01
Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-¹⁸F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-¹⁸F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue [2TCM] compartment models and plasma-input Logan) and reference tissue models (simplified reference tissue model [SRTM], reference Logan, and SUV ratio [SUVr]). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to 90 min, initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R² = 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R² = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R² = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods. SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-¹⁸F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
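For reference, the SUVr-1 measure used above is just the ratio of late-frame target uptake to cerebellar gray-matter uptake, minus one. The activity values below are invented for illustration:

```python
import numpy as np

# Hypothetical late-frame (70-90 min) mean activities (kBq/mL);
# these numbers are invented, not the study's data.
target_vois = np.array([12.4, 15.1, 9.8])   # e.g. cortical regions
cerebellum_gm = 8.0                         # reference-region uptake

suvr_minus_1 = target_vois / cerebellum_gm - 1.0
print(np.round(suvr_minus_1, 2))
```

Unlike SRTM BP(ND) or reference Logan DVR-1, this ratio ignores tracer kinetics entirely, which is why the study finds it overestimates binding.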
Precision, time, and cost: a comparison of three sampling designs in an emergency setting.
Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles
2008-05-02
The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 x 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 x 30 cluster survey with two alternative sampling designs: a 33 x 6 cluster design (33 clusters, 6 observations per cluster) and a 67 x 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 x 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 x 6 and 67 x 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 x 6 and 67 x 3 designs provide wider confidence intervals than the 30 x 30 design for child anthropometric indicators, the 33 x 6 and 67 x 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 x 30 design does not. For the household-level indicators tested in this study, the 67 x 3 design provides the most precise results. However, our results show that neither the 33 x 6 nor the 67 x 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 x 6 and 67 x 3 designs required substantially less time and cost than that required for the 30 x 30 design. The findings of this study suggest the 33 x 6 and 67 x 3 designs can provide useful time- and resource-saving alternatives to the 30 x 30 method of data collection in emergency settings.
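The precision differences between the designs follow from the design effect, DEFF = 1 + (m - 1)·ICC for m observations per cluster: many small clusters dilute the intra-cluster correlation penalty. A hedged sketch (the prevalence and ICC values are hypothetical, not the study's):

```python
import math

def ci_halfwidth(p, clusters, per_cluster, icc):
    """Approximate 95% CI half-width for a prevalence p under cluster
    sampling, using the design effect DEFF = 1 + (m - 1) * ICC."""
    n = clusters * per_cluster
    deff = 1.0 + (per_cluster - 1.0) * icc
    return 1.96 * math.sqrt(deff * p * (1.0 - p) / n)

# Hypothetical prevalence and intra-cluster correlation.
p, icc = 0.15, 0.05
for c, m in ((30, 30), (33, 6), (67, 3)):
    print(f"{c} x {m}: +/- {ci_halfwidth(p, c, m, icc):.3f}")
```

The 30 x 30 design wins on half-width mainly through its much larger total sample, consistent with the study's finding that its confidence intervals are narrowest for most child-level indicators despite the larger design effect.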
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. 
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
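A minimal sketch of the paper's idea under simplifying assumptions (a synthetic clock series with one periodic term whose period happens to align with an FFT bin): fit a linear trend, locate the dominant periodic residual by FFT, refit its amplitude, and extrapolate both models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic clock offsets (ns): linear drift plus one 12 h periodic term.
dt = 30.0                                   # epoch spacing, seconds
t = np.arange(0.0, 30 * 3600.0, dt)         # 30 h of data
clock = (5.0 + 1e-4 * t
         + 0.8 * np.sin(2.0 * np.pi * t / 43200.0)
         + 0.02 * rng.standard_normal(t.size))

split = int(24 * 3600 / dt)                 # fit on 24 h, predict 6 h
tf, cf = t[:split], clock[:split]

# 1) Conventional linear model.
a, b = np.polyfit(tf, cf, 1)
resid = cf - (a * tf + b)

# 2) FFT of the linear-fit residuals -> dominant periodic term.
spec = np.fft.rfft(resid)
freqs = np.fft.rfftfreq(resid.size, d=dt)
k = 1 + np.argmax(np.abs(spec[1:]))         # skip the DC bin
f0 = freqs[k]

# 3) Refit sin/cos amplitudes at f0, then extrapolate both models.
M = np.column_stack([np.sin(2*np.pi*f0*tf), np.cos(2*np.pi*f0*tf)])
(s, c), *_ = np.linalg.lstsq(M, resid, rcond=None)
tp = t[split:]
pred_lin = a * tp + b
pred_per = pred_lin + s*np.sin(2*np.pi*f0*tp) + c*np.cos(2*np.pi*f0*tp)

err_lin = np.abs(pred_lin - clock[split:]).max()
err_per = np.abs(pred_per - clock[split:]).max()
print(err_per < err_lin)
```

Adding the FFT-identified periodic term reduces the extrapolation error relative to the plain linear model, which is the effect the paper exploits with sliding-window prediction on real clock streams.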
HÖHN, K.; FUCHS, J.; FRÖBER, A.; KIRMSE, R.; GLASS, B.; ANDERS‐ÖSSWEIN, M.; WALTHER, P.; KRÄUSSLICH, H.‐G.
2015-01-01
In this study, we present a correlative microscopy workflow to combine detailed 3D fluorescence light microscopy data with ultrastructural information gained by 3D focused ion beam assisted scanning electron microscopy. The workflow is based on an optimized high pressure freezing/freeze substitution protocol that preserves good ultrastructural detail along with retaining the fluorescence signal in the resin embedded specimens. Consequently, cellular structures of interest can readily be identified and imaged by state of the art 3D confocal fluorescence microscopy and are precisely referenced with respect to an imprinted coordinate system on the surface of the resin block. This allows precise guidance of the focused ion beam assisted scanning electron microscopy and limits the volume to be imaged to the structure of interest. This, in turn, minimizes the total acquisition time necessary to conduct the time consuming ultrastructural scanning electron microscope imaging while eliminating the risk of missing parts of the target structure. We illustrate the value of this workflow for targeting virus compartments, which are formed in HIV‐pulsed mature human dendritic cells. PMID:25786567
Actual and estimated costs of disposable materials used during surgical procedures.
Toyabe, Shin-Ichi; Cao, Pengyu; Kurashima, Sachiko; Nakayama, Yukiko; Ishii, Yuko; Hosoyama, Noriko; Akazawa, Kouhei
2005-07-01
It is difficult to estimate precisely the costs of disposable materials used during surgical operations. To evaluate the actual costs of disposable materials, we calculated the actual costs of disposable materials used in 59 operations by taking account of costs of all disposable materials used for each operation. The costs of the disposable materials varied significantly from operation to operation (US$ 38-4230 per operation), and the median [25-percentile and 75-percentile] of the sum total of disposable material costs of a single operation was found to be US$ 686 [205 and 993]. Multiple regression analysis with a stepwise regression method showed that costs of disposable materials significantly correlated only with operation time (p<0.001). Based on the results, we propose a simple method for estimating costs of disposable materials by measuring operation time, and we found that the method gives reliable results. Since costs of disposable materials used during surgical operations are considerable, precise estimation of the costs is essential for hospital cost accounting. Our method should be useful for planning hospital administration strategies.
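The proposed estimation method amounts to a one-variable least-squares fit of disposable-material cost against operation time. The (minutes, US$) pairs below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical (operation minutes, disposable cost US$) pairs standing
# in for the study's 59 operations; invented for illustration.
minutes = np.array([60.0, 95.0, 120.0, 180.0, 240.0, 300.0, 420.0])
cost = np.array([120.0, 260.0, 380.0, 640.0, 900.0, 1150.0, 1700.0])

# Simple least-squares line: cost ~ slope * minutes + intercept.
slope, intercept = np.polyfit(minutes, cost, 1)
estimate_150min = slope * 150.0 + intercept
print(round(slope, 2), round(estimate_150min))
```

Once the slope and intercept are fitted from billed cases, only the easily measured operation time is needed to predict the cost of a new case.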
Improving OCD time to solution using Signal Response Metrology
NASA Astrophysics Data System (ADS)
Fang, Fang; Zhang, Xiaoxiao; Vaid, Alok; Pandev, Stilian; Sanko, Dimitry; Ramanathan, Vidya; Venkataraman, Kartik; Haupt, Ronny
2016-03-01
In recent technology nodes, advanced processes and novel integration schemes have challenged the precision limits of conventional metrology, as critical dimensions (CD) of devices shrink toward the sub-nanometer region. Optical metrology has proved its capability to precisely detect intricate details of complex structures; however, conventional RCWA-based (rigorous coupled wave analysis) scatterometry suffers from long time-to-results and a lack of flexibility to adapt to wide process variations. Signal Response Metrology (SRM) is a new metrology technique targeted to reduce the consumption of engineering and computation resources by eliminating geometric/dispersion modeling and spectral simulation from the workflow. This is achieved by directly correlating the spectra acquired from a set of wafers with known process variations encoded. At SPIE 2015, we presented the results of an SRM application in lithography metrology and control [1], setting up a new focus/dose monitoring measurement recipe in hours. This work demonstrates our recent field exploration of SRM implementation in 20 nm technology and beyond, including focus metrology for scanner control, post-etch geometric profile measurement, and actual device profile metrology.
Understanding Zeeman EIT Noise Correlation Spectra in Buffered Rb Vapor
NASA Astrophysics Data System (ADS)
O'Leary, Shannon; Zheng, Aojie; Crescimanno, Michael
2014-05-01
Noise correlation spectroscopy on systems manifesting Electromagnetically Induced Transparency (EIT) holds promise as a simple, robust method for performing high-resolution spectroscopy used in applications such as EIT-based atomic magnetometry and clocks. During laser light's propagation through a resonant medium, interaction with the medium converts laser phase noise into intensity noise. While this noise conversion can diminish the precision of EIT applications, noise correlation techniques transform the noise into a useful spectroscopic tool that can improve the application's precision. Using a single diode laser with large phase noise, we examine laser intensity noise and noise correlations from Zeeman EIT in a buffered Rb vapor. Of particular interest is a narrow noise correlation feature, resonant with EIT, that has been shown in earlier work to be power-broadening resistant at low powers. We report here on our recent experimental work and complementary theoretical modeling on EIT noise spectra, including a study of power broadening of the narrow noise correlation feature. Understanding the nature of the noise correlation spectrum is essential for optimizing EIT-noise applications.
Carleton, W. Christopher; Campbell, David; Collard, Mark
2018-01-01
Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating, the most common chronometric technique in archaeological and palaeoenvironmental research, creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329
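Why chronological uncertainty matters can be seen in a toy simulation: jittering the dates of one of two correlated series attenuates the observed correlation. The rounded-normal jitter model below is a crude stand-in for calibrated-radiocarbon uncertainty, and white-noise series are the worst case for this attenuation:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_observed_corr(rho, date_sd, n=500, trials=200):
    """Average observed correlation between two series with true
    correlation rho when one series' dates carry random errors
    (rounded-normal jitter, a crude stand-in for calibrated
    radiocarbon uncertainty). White-noise series: worst case."""
    vals = []
    for _ in range(trials):
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
        jitter = rng.normal(0.0, date_sd, n).round().astype(int)
        idx = np.clip(np.arange(n) + jitter, 0, n - 1)
        vals.append(np.corrcoef(x, y[idx])[0, 1])
    return float(np.mean(vals))

print(round(mean_observed_corr(0.5, 0.0), 2))  # no dating error
print(round(mean_observed_corr(0.5, 5.0), 2))  # sd = 5 time steps
```

Autocorrelated series (the realistic case the study simulates) lose correlation more gracefully under jitter, which is part of why the PEWMA approach can still detect relationships.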
Chen, XiaFang; Du, XueLiang; Zhu, JianXing; Xie, LiJuan; Zhang, YongJun; He, ZhenJuan
2012-07-01
The objective was to elucidate the relationships between serum concentrations of the gut hormone peptide YY (PYY) and ghrelin and growth development in infants for potential application to the clinical observation index. Serum concentrations of PYY and ghrelin were measured using radioimmunoassay from samples collected at the clinic. For each patient, gestational age, birth weight, time required to return to birth weight, rate of weight gain, time required to achieve recommended daily intake (RDI) standards, time required for full-gastric feeding, duration of hospitalization, and time of administration of total parenteral nutrition were recorded. Serum PYY and ghrelin concentrations were significantly higher in the preterm group (N = 20) than in the full-term group (N = 20; P < 0.01). Within the preterm infant group, the serum concentrations of PYY and ghrelin on postnatal day (PND) 7 (ghrelin = 1485.38 ± 409.24; PYY = 812.37 ± 153.77 ng/L) were significantly higher than on PND 1 (ghrelin = 956.85 ± 223.09; PYY = 545.27 ± 204.51 ng/L) or PND 3 (ghrelin = 1108.44 ± 351.36; PYY = 628.96 ± 235.63 ng/L; P < 0.01). Both serum PYY and ghrelin concentrations were negatively correlated with body weight, and the degree of correlation varied with age. Serum ghrelin concentration correlated negatively with birth weight and positively with the time required to achieve RDI (P < 0.05). In conclusion, serum PYY and ghrelin concentrations reflect a negative energy balance, predict postnatal growth, and enable compensation. Further studies are required to elucidate the precise concentration and roles of PYY and ghrelin in newborns and to determine the usefulness of measuring these hormones in clinical practice.
Ham, Melissa R; Okada, Pamela; White, Perrin C
2004-03-01
Diabetic ketoacidosis (DKA) is a serious complication of diabetes mellitus marked by characteristic biochemical derangements. Diagnosis and management involve frequent evaluation of these biochemical parameters. Reliable bedside equivalents for these laboratory studies may help reduce the time to treatment and reduce costs. We evaluated the precision and bias of a bedside serum ketone meter in the acute care setting. Serum ketone results using the Precision Xtra glucometer/ketone meter (Abbott Laboratories, MediSense Products Inc., Bedford, MA, USA) correlated strongly with the Children's Medical Center of Dallas' laboratory values within the meter's value range. Meter ketone values steadily decreased during the treatment of DKA as pH and CO2 levels increased and acidosis resolved. Therefore, the meter may be useful in monitoring therapy for DKA. This meter may also prove useful in identifying patients at risk for DKA in physicians' offices or at home.
NASA Astrophysics Data System (ADS)
Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.
2017-08-01
The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomenon currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.
Precise Geolocation Of Persistent Scatterers Aided And Validated By Lidar DSM
NASA Astrophysics Data System (ADS)
Chang, Ling; Dheenathayalan, Prabu; Hanssen, Ramon
2013-12-01
Persistent Scatterer (PS) interferometry yields the deformation history of time-coherent scatterers. Although several applications focus on smooth, spatially correlated signals, we aim for the detection, identification, and analysis of single anomalies. These targets can be indicative of, e.g., strain in structures, potentially leading to the failure of such structures. For the identification and analysis it is of the greatest importance to know the exact position of the effective scattering center, to avoid an improper interpretation of the driving mechanism. Here we present an approach to optimize the geolocation of important scatterers, when necessary aided by an a priori Lidar-derived DSM (AHN-1 data) with 15 cm and 5 m resolution in the vertical and horizontal directions, respectively. The DSM is also used to validate the geocoding. We apply our approach to a near-collapse event of a shopping mall in Heerlen, the Netherlands, to generate precise geolocations of local PS points.
Akuffo, Kwadwo Owusu; Beatty, Stephen; Stack, Jim; Peto, Tunde; Leung, Irene; Corcoran, Laura; Power, Rebecca; Nolan, John M
2015-12-01
We compared macular pigment (MP) measurements using customized heterochromatic flicker photometry (Macular Metrics Densitometer) and dual-wavelength fundus autofluorescence (Heidelberg Spectralis HRA + OCT MultiColor) in subjects with early age-related macular degeneration (AMD). Macular pigment was measured in 117 subjects with early AMD (age, 44-88 years) using the Densitometer and Spectralis, as part of the Central Retinal Enrichment Supplementation Trial (CREST; ISRCTN13894787). Baseline and 6-month study visits data were used for the analyses. Agreement was investigated at four different retinal eccentricities, graphically and using indices of agreement, including Pearson correlation coefficient (precision), accuracy coefficient, and concordance correlation coefficient (ccc). Agreement was poor between the Densitometer and Spectralis at all eccentricities, at baseline (e.g., at 0.25° eccentricity, accuracy = 0.63, precision = 0.35, ccc = 0.22) and at 6 months (e.g., at 0.25° eccentricity, accuracy = 0.52, precision = 0.43, ccc = 0.22). Agreement between the two devices was significantly greater for males at 0.5° and 1.0° of eccentricity. At all eccentricities, agreement was unaffected by cataract grade. In subjects with early AMD, MP measurements obtained using the Densitometer and Spectralis are not statistically comparable and should not be used interchangeably in either the clinical or research setting. Despite this lack of agreement, statistically significant increases in MP, following 6 months of supplementation with macular carotenoids, were detected with each device, confirming that these devices are capable of measuring change in MP within subjects over time. (http://www.controlled-trials.com number, ISRCTN13894787.).
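The accuracy, precision, and concordance indices quoted above follow Lin's decomposition, in which the concordance correlation coefficient (ccc) is the product of Pearson precision and a bias-correction accuracy factor. A minimal sketch; the example device readings are invented for illustration:

```python
import numpy as np

def concordance_components(x, y):
    """Decompose agreement into precision (Pearson r) and accuracy (bias
    correction factor C_b), with Lin's CCC = r * C_b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx2, sy2 = x.var(), y.var()
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    r = sxy / np.sqrt(sx2 * sy2)
    ccc = 2 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)
    return r, ccc / r, ccc

# A constant offset between two devices lowers accuracy (and so CCC)
# while leaving precision (Pearson r) untouched.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # hypothetical MP optical densities
y = x + 0.2                                # second device reads 0.2 higher
r, cb, ccc = concordance_components(x, y)
print(r, cb, ccc)
```

This separation is why the abstract can report high precision alongside poor overall agreement: systematic offset between devices degrades C_b and CCC even when readings co-vary perfectly.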
Time in Science: Reversibility vs. Irreversibility
NASA Astrophysics Data System (ADS)
Pomeau, Yves
To discuss properly the question of irreversibility one needs to make a careful distinction between the reversibility of the equations of motion and the choice of the initial conditions. This is also relevant for the rather confused philosophy of wave packet reduction in quantum mechanics. The explanation of this reduction also requires precise assumptions about what initial data are accessible in our world. Finally, I discuss how a given (and long) time record can be shown, in an objective way, to record an irreversible or a reversible process. Or: can a direction of time be derived from its analysis? This leads quite naturally to examining whether there is a possible spontaneous breaking of time-reversal symmetry in many-body systems, a symmetry breaking that would be put in evidence objectively by looking at certain specific time correlations.
Method of high precision interval measurement in pulse laser ranging system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging offers high measuring precision, fast measuring speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. This paper introduces the principal structure of a laser ranging system and establishes a method of high-precision time interval measurement for pulsed laser ranging. Based on an analysis of the factors that affect the precision of range measurement, a pulse rising-edge discriminator is adopted to produce the timing mark for start-stop time discrimination, and a high-precision interval measurement system based on the TDC-GP2 and a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper can obtain higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
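For orientation, converting a measured round-trip time interval to range uses range = c·Δt/2. A sketch of the count-to-range conversion, assuming an illustrative TDC resolution (the ~65 ps LSB below is an assumption for the example, not a datasheet-verified figure):

```python
# Time-of-flight to range conversion for pulsed laser ranging.
# The LSB value is an illustrative assumption, not a datasheet figure.
C = 299_792_458.0          # speed of light, m/s
TDC_LSB_S = 65e-12         # assumed TDC resolution per count (~65 ps)

def counts_to_range_m(counts):
    """Round-trip time counts -> one-way distance in metres (range = c*dt/2)."""
    dt = counts * TDC_LSB_S
    return C * dt / 2

# A single ~65 ps count corresponds to roughly 1 cm of range resolution,
# which is why the interval-measurement precision dominates system accuracy.
print(counts_to_range_m(1) * 100, "cm")
```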
Bek, T; Prause, J U
1996-12-01
The histopathology of three eyes obtained post mortem from 2 patients with age-related macular degeneration was correlated with the pre mortem fluorescein angiographic morphology. A precise point-by-point correlation between histopathology and the corresponding angiographic appearance was ensured by using the cast retinal vascular system as a pattern of reference. The study showed that the photoreceptors, the pigment epithelium, and substances accumulated between the retinal and the choroidal vascular systems may all have a blocking effect on choroidal background fluorescence as seen on fluorescein angiograms. Furthermore, it is confirmed that fluorescein angiographic hyperfluorescence may be due to a lack of blocking of the choroidal fluorescence because of a window defect in the retinal photoreceptor layer and/or the pigment epithelium.
A High-Resolution View of Global Seismicity
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2014-12-01
We present high-precision earthquake relocation results from our global-scale re-analysis of the combined seismic archives of parametric data for the years 1964 to present from the International Seismological Centre (ISC), the USGS's Earthquake Data Report (EDR), and selected waveform data from IRIS. We employed iterative, multistep relocation procedures that initially correct for large location errors present in standard global earthquake catalogs, followed by a simultaneous inversion of delay times formed from regional and teleseismic arrival times of first and later arriving phases. An efficient multi-scale double-difference (DD) algorithm is used to solve for relative event locations to the precision of a few km or less, while incorporating information on absolute hypocenter locations from catalogs such as EHB and GEM. We run the computations on both a 40-core cluster geared towards HTC problems (data processing) and a 500-core HPC cluster for data inversion. Currently, we are incorporating waveform correlation delay time measurements available for events in selected regions, but are continuously building up a comprehensive, global correlation database for densely distributed events recorded at stations with a long history of high-quality waveforms. The current global DD catalog includes nearly one million earthquakes, equivalent to approximately 70% of the number of events in the ISC/EDR catalogs initially selected for relocation. The relocations sharpen the view of seismicity in most active regions around the world, in particular along subduction zones where event density is high, but also along mid-ocean ridges where existing hypocenters are especially poorly located. The new data offers the opportunity to investigate earthquake processes and fault structures along entire plate boundaries at the ~km scale, and provides a common framework that facilitates analysis and comparisons of findings across different plate boundary systems.
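A toy illustration of the double-difference idea described above: differential arrival times between two nearby events constrain their relative offset via least squares. This is a one-dimensional, constant-velocity sketch with invented geometry, far simpler than the multi-scale DD algorithm the abstract describes:

```python
import numpy as np

# Toy double-difference setup: two nearby events, stations on a line,
# uniform velocity. We solve for the relative offset dx that best fits
# the observed differential arrival times: ddt_k ~ (dt/dx at station k) * dx.
v = 5.0                                          # km/s, assumed velocity
stations = np.array([-50.0, -10.0, 20.0, 60.0])  # station positions, km
x1, x2_true = 0.0, 1.5                           # true event positions, km

def travel_time(x_event, x_station):
    return abs(x_station - x_event) / v

# "Observed" differential arrival times between the two events at each station
ddt_obs = np.array([travel_time(x2_true, s) - travel_time(x1, s)
                    for s in stations])

# Linearized kernel: derivative of travel time w.r.t. event position,
# evaluated at the reference event x1
G = np.array([[np.sign(x1 - s) / v] for s in stations])
dx, *_ = np.linalg.lstsq(G, ddt_obs, rcond=None)
print("recovered relative offset:", float(dx[0]), "km")
```

Because differential times cancel common path effects (here trivially, in real data the shared velocity structure), relative locations can be recovered far more precisely than absolute ones, which is the core of the DD catalog's km-scale sharpening.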
Schneider, George J; Kuper, Kevin G; Abravaya, Klara; Mullen, Carolyn R; Schmidt, Marion; Bunse-Grassmann, Astrid; Sprenger-Haussels, Markus
2009-04-01
Automated sample preparation systems must meet the demands of routine diagnostics laboratories with regard to performance characteristics and compatibility with downstream assays. In this study, the performance of the QIAGEN EZ1 DSP Virus Kit on the BioRobot EZ1 DSP was evaluated in combination with the Abbott RealTime HIV-1, HCV, and HBV assays, followed by thermal cycling and detection on the Abbott m2000rt platform. The following performance characteristics were evaluated: linear range and precision, sensitivity, cross-contamination, effects of interfering substances, and correlation. Linearity was observed within the tested ranges (for HIV-1: 2.0-6.0 log copies/ml, HCV: 1.3-6.9 log IU/ml, HBV: 1.6-7.6 log copies/ml). Excellent precision was obtained (inter-assay standard deviation for HIV-1: 0.06-0.17 log copies/ml (>2.17 log copies/ml), HCV: 0.05-0.11 log IU/ml (>2.09 log IU/ml), HBV: 0.03-0.07 log copies/ml (>2.55 log copies/ml)), with good sensitivity (95% hit rates for HIV-1: 50 copies/ml, HCV: 12.5 IU/ml, HBV: 10 IU/ml). No cross-contamination was observed, nor any negative impact of elevated levels of various interfering substances. In addition, HCV and HBV viral load measurements after BioRobot EZ1 DSP extraction correlated well with those obtained after Abbott m2000sp extraction. This evaluation demonstrates that the QIAGEN EZ1 DSP Virus Kit provides an attractive solution for fully automated, low-throughput sample preparation for use with the Abbott RealTime HIV-1, HCV, and HBV assays.
Reuter, Markus; Piller, Werner E.; Brandano, Marco; Harzhauser, Mathias
2013-01-01
Shallow-marine sediment records have the strong potential to display sensitive environmental changes in sedimentary geometries and skeletal content. However, the time resolution of most neritic carbonate records is not high enough to be compared with climatic events as recorded in the deep-sea sediment archives. In order to resolve the paleoceanographic and paleoclimatic changes during the Oligocene–Miocene transition in the Mediterranean shallow water carbonate systems with the best possible time resolution, we re-evaluated the Decontra section on the Maiella Platform (central Apennines, Italy), which acts as a reference for the correlation of Oligocene–Miocene shallow water deposits in the Mediterranean region. The 120-m-thick late Oligocene–late Miocene carbonate succession is composed of larger foraminiferal, bryozoan and corallinacean limestones interlayered with distinct planktonic foraminiferal carbonates representing a mostly outer neritic setting. Integrated multi-proxy and facies analyses indicate that CaCO3 and total organic carbon contents as well as gamma-ray display only local to regional processes on the carbonate platform and are not suited for stratigraphic correlation on a wider scale. In contrast, new biostratigraphic data correlate the Decontra stable carbon isotope record to the global deep-sea carbon isotope record. This links relative sea level fluctuations, which are reflected by facies and magnetic susceptibility changes, to third-order eustatic cycles. The new integrated bio-, chemo-, and sequence stratigraphic framework enables a more precise timing of environmental changes within the studied time interval and identifies Decontra as an important locality for correlating not only shallow and deep water sediments of the Mediterranean region but also on a global scale. PMID:25844021
A Catalog of Transit Timing Posterior Distributions for all Kepler Planet Candidate Events
NASA Astrophysics Data System (ADS)
Montet, Benjamin Tyler; Becker, Juliette C.; Johnson, John
2015-08-01
Kepler has ushered in a new era of planetary dynamics, enabling the detection of interactions between multiple planets in transiting systems for hundreds of systems. These interactions, observed as transit timing variations (TTVs), have been used to find non-transiting companions to transiting systems and to measure masses, eccentricities, and inclinations of transiting planets. Often, physical parameters are inferred by comparing the observed light curve to the result of a photodynamical model, a time-intensive process that often ignores the effects of correlated noise in the light curve. Catalogs of transit timing observations have previously neglected non-Gaussian uncertainties in the times of transit, uncertainties in the transit shape, and short cadence data. Here, we present a catalog of not only times of transit centers, but also posterior distributions on the time of transit for every planet candidate transit event in the Kepler data, developed through importance sampling of each transit. This catalog allows us to marginalize over uncertainties in the transit shape and incorporate short cadence data, the effects of correlated noise, and non-Gaussian posteriors. Our catalog will enable dynamical studies that reflect accurately the precision of Kepler and its limitations without requiring the computational power to model the light curve completely with every integration.
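A hedged sketch of estimating a transit-time posterior by importance sampling, as the catalog approach describes: candidate transit centres drawn from a broad proposal are weighted by their likelihood against a toy box-transit light curve. All parameters below are invented for illustration; the catalog's photodynamical treatment is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: box-shaped transit of known depth/duration, unknown centre.
t = np.linspace(-0.5, 0.5, 501)
depth, half_dur, t0_true, sigma = 0.01, 0.1, 0.02, 0.002

def model(t0):
    return 1.0 - depth * (np.abs(t - t0) < half_dur)

flux = model(t0_true) + rng.normal(0, sigma, t.size)

# Importance sampling: draw candidate transit times from a broad uniform
# proposal, weight each by its Gaussian likelihood, and treat the weighted
# samples as the posterior over the time of transit.
cand = rng.uniform(-0.2, 0.2, 4000)
loglike = np.array([-0.5 * np.sum((flux - model(c)) ** 2) / sigma**2
                    for c in cand])
w = np.exp(loglike - loglike.max())
w /= w.sum()
t0_mean = np.sum(w * cand)                           # posterior mean
t0_sd = np.sqrt(np.sum(w * (cand - t0_mean) ** 2))   # posterior width
print(t0_mean, t0_sd)
```

Keeping the whole weighted sample, rather than a single best-fit time and Gaussian error bar, is what lets downstream dynamical fits use non-Gaussian transit-time uncertainties.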
Polyakova, Maryna; Schlögl, Haiko; Sacher, Julia; Schmidt-Kassow, Maren; Kaiser, Jochen; Stumvoll, Michael; Kratzsch, Jürgen; Schroeter, Matthias L
2017-06-03
Brain-derived neurotrophic factor (BDNF), an important neural growth factor, has gained growing interest in neuroscience, but many influencing physiological and analytical aspects still remain unclear. In this study we assessed the impact of storage time at room temperature, repeated freeze/thaw cycles, and storage at -80 °C up to 6 months on serum and ethylenediaminetetraacetic acid (EDTA)-plasma BDNF. Furthermore, we assessed correlations of serum and plasma BDNF concentrations in two independent sets of samples. Coefficients of variation (CVs) for serum BDNF concentrations were significantly lower than CVs of plasma concentrations (n = 245, p = 0.006). Mean serum and plasma concentrations at all analyzed time points remained within the acceptable change limit of the inter-assay precision as declared by the manufacturer. Serum and plasma BDNF concentrations correlated positively in both sets of samples and at all analyzed time points of the stability assessment (r = 0.455 to r(s) = 0.596; p < 0.004). In summary, when considering the acceptable change limit, BDNF was stable in serum and in EDTA-plasma for up to 6 months. Due to a higher reliability, we suggest favoring serum over EDTA-plasma for future experiments assessing peripheral BDNF concentrations.
Brady, S L; Kaufman, R A
2012-06-01
The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy(-1) versus the CT scatter phantom 29.2 ± 1.0 mV cGy(-1) and FIA with x-ray 29.9 ± 1.1 mV cGy(-1) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. 
If the FIA with CT calibration methodology were used to create calibration coefficients for eventual use in phantom dosimetry, a measurement error of ~12% would be reflected in the dosimetry results. The calibration process must emulate the eventual CT dosimetry process by matching or excluding scatter when calibrating the MOSFETs. Finally, the authors recommend that the MOSFETs be energy calibrated approximately every 2500-3000 mV. © 2012 American Association of Physicists in Medicine.
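The ~12% figure follows directly from the mismatch between the quoted average calibration coefficients; a sketch using those values (the read-out signal below is hypothetical):

```python
# How a mismatched calibration coefficient propagates into dose error,
# using the average coefficients quoted in the abstract (mV per cGy).
CAL_FIA_CT = 26.7      # free in air, stationary CT x-ray tube
CAL_FIA_XRAY = 29.9    # free in air, radiographic x-ray tube

def dose_cgy(signal_mv, cal_mv_per_cgy):
    """Convert a MOSFET read-out signal to dose using a calibration factor."""
    return signal_mv / cal_mv_per_cgy

signal = 600.0  # hypothetical read-out, mV
d_ct = dose_cgy(signal, CAL_FIA_CT)
d_xr = dose_cgy(signal, CAL_FIA_XRAY)
error_pct = 100 * (d_ct - d_xr) / d_xr
print(f"relative dose error: {error_pct:.1f}%")  # ~12%, matching the abstract
```

Note the error is a pure ratio of calibration factors (29.9/26.7), so it is independent of the hypothetical signal level chosen.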
Gómez-Ordóñez, Eva; Jiménez-Escrig, Antonio; Rupérez, Pilar
2012-05-15
Biological properties of polysaccharides from seaweeds are related to their composition and structure. Many factors such as the kind of sugar, type of linkage or sulfate content of algal biopolymers exert an influence in the relationship between structure and function. Besides, the molecular weight (MW) also plays an important role. Thus, a simple, reliable and fast HPSEC method with refractive index detection was developed and optimized for the MW estimation of soluble algal polysaccharides. Chromatogram shape and repeatability of retention time were considerably improved when sodium nitrate was used instead of ultrapure water as the mobile phase. Pullulan and dextran standards of different MW were used for method calibration and validation. Also, main polysaccharide standards from brown (alginate, fucoidan, laminaran) and red seaweeds (kappa- and iota-carrageenan) were used for quantification and method precision and accuracy. Relative standard deviation (RSD) of repeatability for retention time, peak areas and inter-day precision was below 0.7%, 2.5% and 2.6%, respectively, which indicated good repeatability and precision. Recoveries (96.3-109.8%) also showed fairly good accuracy. Regarding linearity, main polysaccharide standards from brown or red seaweeds showed a highly satisfactory correlation coefficient (r>0.999). Moreover, a good sensitivity was shown, with corresponding limits of detection and quantitation in mg/mL of 0.05-0.21 and 0.16-0.31, respectively. The method was applied to the MW estimation of standard algal polysaccharides, as well as to the soluble polysaccharide fractions from the brown seaweed Saccharina latissima and the red Mastocarpus stellatus, respectively. Although the distribution of molecular weight was broad, the good repeatability for retention time provided a good precision in MW estimation of polysaccharides. Water- and alkali-soluble fractions from S. latissima ranged from very high (>2400 kDa) to low MW compounds (<6 kDa); this high heterogeneity could be attributable to the complex polysaccharide composition of brown algae. Regarding M. stellatus, sulfated galactans followed a descending order of MW (>1400 kDa to <10 kDa), related to the different solubility of carrageenans in red seaweeds. In summary, the method developed allows for the molecular weight analysis of seaweed polysaccharides with very good precision, accuracy, linearity and sensitivity within a short time. Copyright © 2012 Elsevier B.V. All rights reserved.
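Size-exclusion MW estimation, as used above, conventionally fits log(MW) of the standards against retention time and inverts the line for unknowns. A minimal sketch; the pullulan retention times and MWs below are invented for illustration:

```python
import numpy as np

# Hypothetical pullulan standards: retention time (min) vs known MW (kDa).
rt_std = np.array([10.5, 12.0, 13.8, 15.5, 17.1])
mw_std = np.array([800.0, 200.0, 50.0, 12.0, 3.0])

# SEC calibration is conventionally linear in log(MW) vs retention time:
# larger molecules elute earlier, so the slope is negative.
slope, intercept = np.polyfit(rt_std, np.log10(mw_std), 1)

def estimate_mw(rt):
    """Estimate MW (kDa) of an unknown peak from its retention time (min)."""
    return 10 ** (slope * rt + intercept)

print(estimate_mw(13.0))  # MW estimate for a hypothetical unknown peak
```

This is also why repeatable retention times matter so much in the abstract: any drift in retention time maps through the exponent into a multiplicative error in the MW estimate.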
NASA Astrophysics Data System (ADS)
Bowring, S. A.; Grotzinger, J. P.; Amthor, J.; Martin, M. E.
2001-05-01
The precise, global correlation of Precambrian and Paleozoic sedimentary rocks can be achieved using temporally calibrated chemostratigraphic records. This approach is essential for determining rates and causes of environmental and faunal change, including mass extinctions. For example, the Neoproterozoic is marked by major environmental change, including periods of global glaciation, large fluctuations in the sequestration of carbon, and major tectonic reorganization, followed by the explosive diversification of animals in the earliest Cambrian. The extreme climatic change associated with these glaciations has been implicated as a possible trigger for the Cambrian explosion. The recognition of thin zircon-bearing air-fall ashes in Neoproterozoic and Cambrian rocks has allowed the establishment of a high-precision temporal framework for animal evolution and is helping to untangle the history of the glaciations. In some cases analytical uncertainties translate to age uncertainties of less than 1 Ma, offering, when integrated with chemostratigraphy, the potential for global correlations at even higher resolution. Progress in the global correlation of Neoproterozoic strata has been achieved through the use of C and Sr isotope chemostratigraphy, although it has been hampered by a lack of precise geochronological and faunal control. For example, the period from ca. 800-580 Ma is characterized by at least two and perhaps as many as four glacial events that are interpreted by many to be global glaciations on a "Snowball Earth". A lack of precise chronological constraints on the number and duration of glaciations, multiple large excursions in the carbon isotopic record, and an absence of detailed biostratigraphy have complicated global correlation and hindered our understanding of this important period of Earth history. However, the ongoing integration of chemostratigraphic and geochronological data is improving temporal resolution and detailed correlations. 
These data are critical for understanding the causes and effects of Neoproterozoic glaciations. The Cambrian-Precambrian boundary is generally associated with a negative shift in carbon isotope values, although global isochroneity has not yet been demonstrated and unconformities mark the boundary in many places. New data suggest an age of 542 Ma for the excursion and boundary in Oman; results from Namibia, Oman, and Siberia are all consistent with this result. It has yet to be demonstrated that the paleontologically defined boundary coincides with the isotopic shift or is globally isochronous. The emerging geochronological framework, when combined with integrated paleontological, chemostratigraphic, and geological data, will allow detailed global correlation and evaluation of models that invoke both intrinsic and extrinsic triggers for evolution.
Validation of strain gauges as a method of measuring precision of fit of implant bars.
Hegde, Rashmi; Lemons, Jack E; Broome, James C; McCracken, Michael S
2009-04-01
Multiple articles in the literature have used strain gauges to estimate the precision of fit of implant bars. However, the accuracy of these measurements has not been fully documented. The purpose of this study was to evaluate the response of strain gauges to known amounts of misfit in an implant bar. This is an important step in the validation of this device. A steel block was manufactured with five 4.0-mm externally hexed implant platforms machined into the block 7 mm apart. A 1.4-cm-long gold alloy bar was cast to fit 2 of the platforms. Brass shims of varying thickness (150, 300, and 500 microm) were placed under one side of the bar to create misfit. A strain gauge was used to record strain readings on top of the bar, one reading at first contact of the bar and one at maximum screw torque. Microgaps between the bar and the steel platforms were measured using a high-precision optical measuring device at 4 points around the platform. The experiment was repeated 3 times. Two-way analysis of variance and linear regression were used for statistical analyses. Shim thickness had a significant effect on strain (P < 0.0001). There was a significant positive correlation between shim thickness and strain (R(2) = 0.93) for strain at maximum torque, and for strain measurements at first contact (R(2) = 0.91). Microgap measurements showed no correlation with increasing misfit. Strain in the bar increased significantly with increasing levels of misfit. Strain measurements induced at maximum torque are not necessarily indicative of the maximum strains experienced by the bar. The presence or absence of a microgap between the bar and the platform is not necessarily indicative of passivity. These data suggest that microgap may not be clinically reliable as a measure of precision of fit.
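The thickness-strain analysis reduces to an ordinary least-squares line and its R^2. A sketch with hypothetical readings mimicking the three-repeat design (the values are invented, not the study's data):

```python
import numpy as np

# Hypothetical strain readings (microstrain) at each shim thickness (um),
# three repeats per level, mimicking the study design.
shim = np.repeat([150.0, 300.0, 500.0], 3)
strain = np.array([120, 131, 125, 248, 260, 255, 415, 402, 420], float)

# Least-squares line and coefficient of determination R^2,
# as used to relate misfit (shim thickness) to measured strain.
slope, intercept = np.polyfit(shim, strain, 1)
pred = slope * shim + intercept
ss_res = np.sum((strain - pred) ** 2)
ss_tot = np.sum((strain - strain.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"slope = {slope:.2f} microstrain/um, R^2 = {r2:.3f}")
```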
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.
2004-01-01
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
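The precision penalty from within-cluster correlation is commonly summarized by the design effect, deff = 1 + (m - 1)·rho for clusters of size m with intracluster correlation rho. A minimal sketch; the cluster size and rho below are illustrative, not taken from the NLCD assessments:

```python
# Design effect of cluster sampling: how within-cluster correlation (rho)
# inflates the variance of an estimate relative to simple random sampling.
def design_effect(cluster_size, rho):
    return 1 + (cluster_size - 1) * rho

def effective_sample_size(n, cluster_size, rho):
    """Number of independent observations an n-element cluster sample is worth."""
    return n / design_effect(cluster_size, rho)

# With 25 pixels per cluster and rho = 0.2, 1000 sampled pixels carry the
# information of only about 172 independent ones.
print(round(effective_sample_size(1000, 25, 0.2)))
```

This is the trade-off the protocol above weighs: clustering cuts data-collection cost per element, but spatially correlated classification error shrinks the effective sample size, especially for rare classes.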
Neural noise and movement-related codes in the macaque supplementary motor area.
Averbeck, Bruno B; Lee, Daeyeol
2003-08-20
We analyzed the variability of spike counts and the coding capacity of simultaneously recorded pairs of neurons in the macaque supplementary motor area (SMA). We analyzed the mean-variance functions for single neurons, as well as signal and noise correlations between pairs of neurons. All three statistics showed a strong dependence on the bin width chosen for analysis. Changes in the correlation structure of single neuron spike trains over different bin sizes affected the mean-variance function, and signal and noise correlations between pairs of neurons were much smaller at small bin widths, increasing monotonically with the width of the bin. Analyses in the frequency domain showed that the noise between pairs of neurons, on average, was most strongly correlated at low frequencies, which explained the increase in noise correlation with increasing bin width. The coding performance was analyzed to determine whether the temporal precision of spike arrival times and the interactions within and between neurons could improve the prediction of the upcoming movement. We found that in approximately 62% of neuron pairs, the arrival times of spikes at a resolution between 66 and 40 msec carried more information than spike counts in a 200 msec bin. In addition, in 19% of neuron pairs, inclusion of within (11%)- or between-neuron (8%) correlations in spike trains improved decoding accuracy. These results suggest that in some SMA neurons elements of the spatiotemporal pattern of activity may be relevant for neural coding.
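The reported growth of noise correlation with bin width is exactly what a slow shared input produces. A simulation sketch (the rates, timescales, and duration are invented, not fit to the SMA data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two neurons sharing a slow common input: spike probability per 1 ms step,
# modulated by a fluctuation with a ~100 ms timescale.
T = 200_000                                           # 1 ms steps
common = np.repeat(rng.normal(0, 1, T // 100), 100)   # slow shared signal
p1 = np.clip(0.02 + 0.01 * common, 0, 1)
p2 = np.clip(0.02 + 0.01 * common, 0, 1)
s1 = rng.random(T) < p1
s2 = rng.random(T) < p2

def noise_corr(sa, sb, bin_ms):
    """Pearson correlation of the two neurons' spike counts at a bin width."""
    n = (len(sa) // bin_ms) * bin_ms
    ca = sa[:n].reshape(-1, bin_ms).sum(1)
    cb = sb[:n].reshape(-1, bin_ms).sum(1)
    return np.corrcoef(ca, cb)[0, 1]

# Correlation from a slow shared signal is weak at fine bins and grows
# with bin width, as reported for the SMA pairs.
print(noise_corr(s1, s2, 5), noise_corr(s1, s2, 200))
```

At 5 ms bins most of the count variance is private (near-Poisson) noise; at 200 ms bins the shared slow fluctuation dominates, so the measured correlation rises with bin width even though the underlying shared signal is unchanged.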
Precise Time - Naval Oceanography Portal
The U. S. Naval Observatory is charged with maintaining the Master Clock.
F. Mauro; Vicente J. Monleon; H. Temesgen; L.A. Ruiz
2017-01-01
Accounting for spatial correlation of LiDAR model errors can improve the precision of model-based estimators. To estimate spatial correlation, sample designs that provide close observations are needed, but their implementation might be prohibitively expensive. To quantify the gains obtained by accounting for the spatial correlation of model errors, we examined (
Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task
NASA Astrophysics Data System (ADS)
Laubach, Mark; Wessberg, Johan; Nicolelis, Miguel A. L.
2000-06-01
When an animal learns to make movements in response to different stimuli, changes in activity in the motor cortex seem to accompany and underlie this learning. The precise nature of modifications in cortical motor areas during the initial stages of motor learning, however, is largely unknown. Here we address this issue by chronically recording from neuronal ensembles located in the rat motor cortex, throughout the period required for rats to learn a reaction-time task. Motor learning was demonstrated by a decrease in the variance of the rats' reaction times and an increase in the time the animals were able to wait for a trigger stimulus. These behavioural changes were correlated with a significant increase in our ability to predict the correct or incorrect outcome of single trials based on three measures of neuronal ensemble activity: average firing rate, temporal patterns of firing, and correlated firing. This increase in prediction indicates that an association between sensory cues and movement emerged in the motor cortex as the task was learned. Such modifications in cortical ensemble activity may be critical for the initial learning of motor tasks.
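Outcome prediction from ensemble activity, as in the analysis above, can be sketched with a simple linear decoder on trial firing rates. This is a generic stand-in for the methods actually used; the data and decoder here are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic trials: each neuron's firing rate shifts by a fixed "tuning"
# amount on correct (outcome = 1) versus incorrect (outcome = 0) trials.
n_trials, n_neurons = 400, 12
outcome = rng.integers(0, 2, n_trials)
tuning = rng.normal(0, 1, n_neurons)
rates = rng.normal(0, 1, (n_trials, n_neurons)) + np.outer(outcome, tuning)

# Class-mean difference decoder: train on even trials, test on odd trials.
train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)
mu1 = rates[train][outcome[train] == 1].mean(0)
mu0 = rates[train][outcome[train] == 0].mean(0)
w = mu1 - mu0                           # decoding direction
threshold = (rates[train] @ w).mean()   # midpoint of training projections
pred = (rates[test] @ w > threshold).astype(int)
accuracy = (pred == outcome[test]).mean()
print(f"held-out prediction accuracy: {accuracy:.2f}")
```

In the study, the analogue of this held-out accuracy rose over days of training, which is the sense in which ensemble activity "increasingly predicts" behavioural outcome.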
Avila, Irene; Lin, Shih-Chieh
2014-03-01
The survival of animals depends critically on prioritizing responses to motivationally salient stimuli. While it is generally believed that motivational salience increases decision speed, the quantitative relationship between motivational salience and decision speed, measured by reaction time (RT), remains unclear. Here we show that the neural correlate of motivational salience in the basal forebrain (BF), defined independently of RT, is coupled with faster and also more precise decision speed. In rats performing a reward-biased simple RT task, motivational salience was encoded by BF bursting response that occurred before RT. We found that faster RTs were tightly coupled with stronger BF motivational salience signals. Furthermore, the fraction of RT variability reflecting the contribution of intrinsic noise in the decision-making process was actively suppressed in faster RT distributions with stronger BF motivational salience signals. Artificially augmenting the BF motivational salience signal via electrical stimulation led to faster and more precise RTs and supports a causal relationship. Together, these results not only describe for the first time, to our knowledge, the quantitative relationship between motivational salience and faster decision speed, they also reveal the quantitative coupling relationship between motivational salience and more precise RT. Our results further establish the existence of an early and previously unrecognized step in the decision-making process that determines both the RT speed and variability of the entire decision-making process and suggest that this novel decision step is dictated largely by the BF motivational salience signal. Finally, our study raises the hypothesis that the dysregulation of decision speed in conditions such as depression, schizophrenia, and cognitive aging may result from the functional impairment of the motivational salience signal encoded by the poorly understood noncholinergic BF neurons.
da Silva, Layzon Antonio Lemos; Pezzini, Bianca Ramos; Soares, Luciano
2015-01-01
Background: Chemical characterization is essential to validate the pharmaceutical use of vegetable raw materials. Ultraviolet spectroscopy is an important technique for determining flavonoids, which are important active compounds of Ocimum basilicum. Objective: The objective of this work was to optimize a spectrophotometric method, based on flavonoid-aluminum chloride (AlCl3) complexation, to determine the total flavonoid content (TFC) in leaves of O. basilicum (herbal material), using response surface methodology. Materials and Methods: The effects on the TFC of (1) the herbal material:solvent ratio (0.02, 0.03, 0.05, 0.07, and 0.08 g/mL), (2) the stock solution volume (0.8, 2.3, 4.4, 6.5, and 8.0 mL), and (3) the AlCl3 volume (0.8, 1.0, 1.2, 1.4, and 1.6 mL) were evaluated. The analytical performance parameters precision, linearity, and robustness of the method were tested. Results: The herbal material:solvent ratio and the stock solution volume showed an important influence on the method response. Under the optimized conditions, the method exhibited a repeatability (RSD%) lower than 6% and an intermediate precision (RSD%) lower than 8% (on the order of literature values for biotechnological methods), a correlation coefficient of 0.9984, and no important influence of variations in the time of complexation with AlCl3. However, the time and temperature of extraction were critical for the TFC method and must be carefully controlled during the analysis. Conclusion: This study thus allowed the optimization of a simple, fast, and precise method for the determination of the TFC in leaves of O. basilicum, which can be used to support the quality assessment of this herbal material. PMID:25709217
The formation of chondrules at high gas pressures in the solar nebula.
Galy, A; Young, E D; Ash, R D; O'Nions, R K
2000-12-01
High-precision magnesium isotope measurements of whole chondrules from the Allende carbonaceous chondrite meteorite show that some aluminum-rich Allende chondrules formed at or near the time of formation of calcium-aluminum-rich inclusions and that some others formed later and incorporated precursors previously enriched in magnesium-26. Chondrule magnesium-25/magnesium-24 correlates with [magnesium]/[aluminum] and size, the aluminum-rich, smaller chondrules being the most enriched in the heavy isotopes of magnesium. These relations imply that high gas pressures prevailed during chondrule formation in the solar nebula.
Study for elevator cage position during the braking period
NASA Astrophysics Data System (ADS)
Ungureanu, M.; Crăciun, I.; Bănică, M.; Dăscălescu, A.
2016-08-01
An important problem in studying the elevator cage position during the braking period is to establish a correlation between studies in the fields of mechanics and electrics. The classical approach establishes the elevator kinematic parameters position, velocity, and acceleration, but recent studies on positioning introduce a supplementary parameter, the jerk, which is the derivative of acceleration with respect to time. This yields a precise method of cage motion control based on third-order trajectory planning.
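The jerk is the time derivative of acceleration, so a third-order trajectory prescribes a piecewise-constant jerk and integrates it to obtain acceleration, velocity, and position. The following sketch shows this chain of integrations for a symmetric S-curve braking phase; all numerical values are illustrative, not taken from the study:

```python
# Integrate a piecewise-constant jerk profile (third-order trajectory)
# to obtain acceleration, velocity, and position of the cage during
# braking. All numerical values are illustrative, not from the study.
def integrate_jerk_profile(jerk_segments, v0=0.0, dt=0.001):
    """jerk_segments: list of (jerk [m/s^3], duration [s]) tuples."""
    a, v, x, t = 0.0, v0, 0.0, 0.0
    for jerk, duration in jerk_segments:
        for _ in range(int(round(duration / dt))):
            a += jerk * dt   # da/dt = jerk
            v += a * dt      # dv/dt = acceleration
            x += v * dt      # dx/dt = velocity
            t += dt
    return t, a, v, x

# Symmetric S-curve braking from 2 m/s: ramp the deceleration in,
# hold it constant, then ramp it back out.
segments = [(-2.0, 0.5), (0.0, 1.0), (+2.0, 0.5)]
t, a, v, x = integrate_jerk_profile(segments, v0=2.0)
print(f"t={t:.2f} s  a={a:.2f} m/s^2  v={v:.2f} m/s  x={x:.2f} m")
```

Because the acceleration varies continuously under a bounded jerk, the cage avoids the abrupt force steps of a second-order (acceleration-limited) profile, which is the motivation for jerk-based motion control.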
The NANOGrav Eleven-Year Data Set: High-precision timing of 48 Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Nice, David J.; NANOGrav
2017-01-01
Gravitational waves from sources such as supermassive black hole binary systems perturb the times-of-flight of signals traveling from pulsars to the Earth. The NANOGrav collaboration aims to measure these perturbations in high-precision millisecond pulsar timing data and thus to directly detect gravitational waves and characterize the gravitational wave sources. By observing pulsars over time spans of many years, we are most sensitive to gravitational waves at nanohertz frequencies. This work is complementary to ground-based detectors such as LIGO, which are sensitive to gravitational waves with frequencies 10 orders of magnitude higher. In this presentation we describe the NANOGrav eleven-year data set. This includes pulsar time-of-arrival measurements from 48 millisecond pulsars made with the Arecibo Observatory (for pulsars with declinations between -1 and 39 degrees) and the Green Bank Telescope (for other pulsars, with two pulsars overlapping with Arecibo). The data set consists of more than 300,000 pulse time-of-arrival measurements made in nearly 7000 unique observations (a given pulsar observed with a given telescope receiver on a given day). In the best cases, measurement precision is better than 100 nanoseconds, and in nearly all cases it is better than 1 microsecond. All pulsars in our program are observed at intervals of 3 to 4 weeks. Observations use wideband data acquisition systems and are made with two receivers at widely separated frequencies at each epoch, allowing for characterization and mitigation of the effects of the interstellar medium on signal propagation. Observation of a large number of pulsars allows for searches for correlated perturbations among the pulsar signals, which is crucial for achieving high-significance detection of gravitational waves in the face of uncorrelated noise (from gravitational waves and rotation noise) in the individual pulsars. In addition, seven pulsars are observed at weekly intervals.
This increases our sensitivity to individual gravitational wave sources.
Weingärtner, Sebastian; Meßner, Nadja M; Zöllner, Frank G; Akçakaya, Mehmet; Schad, Lothar R
2017-08-01
To study the feasibility of black-blood contrast in native T1 mapping for reduction of partial voluming at the blood-myocardium interface. A saturation pulse prepared heart-rate-independent inversion recovery (SAPPHIRE) T1 mapping sequence was combined with motion-sensitized driven-equilibrium (MSDE) blood suppression for black-blood T1 mapping at 3 Tesla. Phantom scans were performed to assess T1 time accuracy. In vivo black-blood and conventional SAPPHIRE T1 mapping was performed in eight healthy subjects and analyzed for T1 times, precision, and inter- and intraobserver variability. Furthermore, manually drawn regions of interest (ROIs) in all T1 maps were dilated and eroded to analyze the dependence of septal T1 times on the ROI thickness. Phantom results and in vivo myocardial T1 times show comparable accuracy for black-blood and conventional SAPPHIRE (in vivo: black-blood: 1562 ± 56 ms vs. conventional: 1583 ± 58 ms, P = 0.20). Using black-blood SAPPHIRE, precision was significantly lower (standard deviation: 133.9 ± 24.6 ms vs. 63.1 ± 6.4 ms, P < 0.0001), and blood T1 time measurement was not possible. A significantly increased interobserver intraclass correlation coefficient (ICC) (0.996 vs. 0.967, P = 0.011) and a similar intraobserver ICC (0.979 vs. 0.939, P = 0.11) were obtained with the black-blood sequence. Conventional SAPPHIRE showed strong dependence on the ROI thickness (R^2 = 0.99); no such trend was observed with the black-blood approach (R^2 = 0.29). Black-blood SAPPHIRE successfully eliminates partial voluming at the blood pool in native myocardial T1 mapping while providing accurate T1 times, albeit at a reduced precision. Magn Reson Med 78:484-493, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Monitoring the hatch time of individual chicken embryos.
Romanini, C E B; Exadaktylos, V; Tong, Q; McGonnel, I; Demmers, T G M; Bergoug, H; Eterradossi, N; Roulston, N; Garain, P; Bahr, C; Berckmans, D
2013-02-01
This study investigated variations in eggshell temperature (T(egg)) during the hatching process of broiler eggs. Temperature sensors monitored embryo temperature by registering T(egg) every minute. Measurements carried out on a sample of 40 focal eggs revealed temperature drops of between 2 and 6°C during the last 3 d of incubation. Video cameras recorded the hatching process and served as the gold-standard reference for manually labeling the hatch times of chicks. Comparison between T(egg) drops and the hatch times of individuals revealed a time synchronization with a 99% correlation coefficient and an absolute average time difference of up to 25 min. Our findings suggest that attaching temperature sensors to eggshells is a precise tool for monitoring the hatch time of individual chicks. Individual hatch monitoring registers the biological age of chicks and facilitates an accurate and reliable means to count hatching results and manage the hatch window.
Developments for the 6He beta - nu angular correlation experiment
NASA Astrophysics Data System (ADS)
Zumwalt, David W.
This thesis describes developments toward the measurement of the angular correlation between the beta and the antineutrino in the beta decay of 6He. This decay is a pure Gamow-Teller decay, which is described in the Standard Model as a purely axial-vector weak interaction. The angular correlation is characterized by the parameter a_βν = -1/3 in the Standard Model. Any deviation from this value would be evidence for tensor components in the weak interaction and would constitute new physics. A new method will be used to measure the parameter a_βν from 6He decays, featuring a magneto-optical trap that will measure the beta particle in coincidence with the recoiling 6Li daughter ion. This neutral-atom trapping scheme provides cold, tightly confined atoms, which will reduce systematic uncertainties related to the initial position of the decay. By knowing the initial position of the decay and measuring the time of flight of the recoiling 6Li daughter ion in coincidence with the beta, the angular correlation between the beta and the antineutrino can be deduced. We aim to measure a_βν first to the level of 1%, and eventually to the 0.1% level, which would represent an order of magnitude improvement in precision over past experiments. Towards this goal, we have designed, built, and successfully tested a liquid lithium target to provide >2×10^10 6He atoms/s to a low-background environment, which is the most intense source of 6He presently available. This allowed an additional measurement of the 6He half-life (806.89 ± 0.11(stat) +0.23/-0.19(syst) ms) to be made with unprecedented precision, resolving discrepancies in past measurements. We have also tested our trapping and detection apparatus and have begun to record preliminary coincidence events.
Improvements in the chronology, geochemistry and correlation techniques of tephra in Antarctic ice
NASA Astrophysics Data System (ADS)
Iverson, N. A.; Dunbar, N. W.; McIntosh, W. C.; Pearce, N. J.; Kyle, P. R.
2013-12-01
Visible and crypto tephra layers found in West Antarctic ice provide an excellent record of Antarctic volcanism over the past 100ka. Tephra layers are deposited almost instantaneously across wide areas, creating horizons that, if found in several locations, provide 'pinning points' to adjust ice time scales that may otherwise lack detailed chronology. Individual tephra layers can have distinct chemical fingerprints allowing them to be correlated over great distances. Advances in sample preparation, geochemical analyses (major and trace elements) of fine-grained tephra, and higher-precision 40Ar/39Ar dating of young (<100ka) proximal volcanic deposits are improving an already established tephra record in West Antarctica. Forty-three of the potential hundreds of silicate layers found in a recently drilled deep West Antarctic Ice Sheet Divide core (WDC06A) have been analyzed for major elements, and a subset for trace elements. Of these layers, at least 16 are homogeneous tephra that could be correlated to other ice cores (e.g. Siple Dome, SDMA) and/or to source volcanoes found throughout Antarctica and even extra-continental eruptions (e.g. Sub-Antarctic islands and South America). Combining ice core tephra with tephra exposed in blue ice areas provides more locations in which to correlate widespread eruptions. For example, a period of heightened eruptive activity at Mt. Berlin, West Antarctica between 24 and 28ka produced a set of tephra layers that are found in the WDC06A and SDMA ice cores, as well as at a nearby blue ice area at Mt. Moulton (BIT-151 and BIT-152). Possible correlative tephra layers are found at ice ages of 26.4, 26.9 and 28.8ka in WDC06A and 26.5, 27.0, and 28.7ka in SDMA cores. The geochemical similarities of major elements in these layers mean that ongoing trace element analyses will be vital to decipher the sequence of events during this phase of activity at Mt. Berlin.
Sample WDC06A-2767.117 (ice age of 28.6±1.0ka) appears to correlate to blue ice tephra BIT-152 and to tephra layer SDMA-5683 (ice age of 28.5ka). This tephra layer also appears to be present in blue ice at Mt. Terra Nova on Ross Island, 1400km away, suggesting that it may be possible to link ice cores in East Antarctica (e.g. Talos Dome and Law Dome). The amount of feldspar in ice core tephra is typically too small to be directly dated by the 40Ar/39Ar method, making it very important to geochemically correlate these layers to proximal deposits where more and larger feldspar can be sampled. The correlation of WDC06A-2767.117 to the coarse, proximal BIT-152 provides one such link. The New Mexico Geochronology Research Lab (NMGRL) has two new multi-collector ARGUS VI mass spectrometers that can provide single-crystal laser fusion ages that are approximately an order of magnitude more precise than previous determinations. With these advancements in analytical technology, we hope to improve precision on 'pinning points' in the deep ice cores where annual layer counting becomes less precise.
Load Weight Classification of The Quayside Container Crane Based On K-Means Clustering Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Bingqian; Hu, Xiong; Tang, Gang; Wang, Yide
2017-07-01
Precise knowledge of the load weight of each operation of the quayside container crane is important for accurately assessing the service life of the crane. The load weight is directly related to the vibration intensity. By studying the vibration of the crane's hoist motor in the radial and axial directions, we can classify the load using the K-means clustering algorithm and quantitative statistical analysis. Correlation analysis shows that vibration in the radial direction is significantly and positively correlated with that in the axial direction, which means that data from only one of the directions can be used, improving efficiency without degrading the accuracy of load classification. The proposed method can well represent the real-time working condition of the crane.
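As a rough illustration of the classification step, the sketch below runs a minimal one-dimensional K-means over synthetic per-operation vibration intensities; the cluster means and spreads are invented for illustration, not taken from the crane data:

```python
import random

def kmeans_1d(values, k, iters=100):
    """Minimal 1-D Lloyd's K-means; returns sorted cluster centers."""
    vals = sorted(values)
    # initialize centers at evenly spaced quantiles (robust for 1-D data)
    centers = [vals[(2 * j + 1) * len(vals) // (2 * k)] for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each intensity to its nearest center
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return sorted(centers)

# Synthetic radial-vibration intensities for light, medium, and heavy lifts.
random.seed(1)
data = ([random.gauss(1.0, 0.05) for _ in range(50)] +
        [random.gauss(3.0, 0.05) for _ in range(50)] +
        [random.gauss(5.0, 0.05) for _ in range(50)])
print(kmeans_1d(data, 3))   # three centers, near 1, 3, and 5
```

Each new operation would then be labeled by its nearest center, giving the load-weight class used in the service-life assessment.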
Detection system for neutron β decay correlations in the UCNB and Nab experiments
Broussard, L. J.; Oak Ridge National Lab.; Zeck, B. A.; ...
2016-12-19
Here, we describe a detection system designed to precisely measure multiple correlations in neutron β decay. The system is based on thick, large-area, highly segmented silicon detectors developed in collaboration with Micron Semiconductor, Ltd. The prototype system meets specifications of energy thresholds below 10 keV, energy resolution of ~3 keV FWHM, and rise time of ~50 ns with 19 of the 127 detector pixels instrumented. We have demonstrated the coincident detection of β particles and recoil protons from neutron β decay using ultracold neutrons at the Los Alamos Neutron Science Center. The fully instrumented detection system will be implemented in the UCNB and Nab experiments to determine the neutron β decay parameters B, a, and b.
Acetylcholinesterase and Nissl staining in the same histological section.
Shipley, M T; Ennis, M; Behbehani, M M
1989-12-18
Acetylcholinesterase (AChE) enzyme histochemistry and Nissl staining are commonly utilized in neural architectonic studies. However, the opaque reaction deposit produced by the most commonly used AChE histochemical methods is not compatible with satisfactory Nissl staining. As a result, precise correlation of AChE and Nissl staining necessitates time-consuming comparisons of adjacent sections which may have differential shrinkage. Here, we have modified the Koelle-Friedenwald histochemical reaction for AChE by omitting the final intensification steps. The modified reaction yields a non-opaque reaction product that is selectively visualized by darkfield illumination. This non-intensified darkfield AChE (NIDA) reaction allows clear visualization of Nissl staining in the same histological section. This combined AChE-Nissl method greatly facilitates detailed correlation of enzyme and cytoarchitectonic organization.
Saotome, Kousaku; Matsushita, Akira; Matsumoto, Koji; Kato, Yoshiaki; Nakai, Kei; Murata, Koichi; Yamamoto, Tetsuya; Sankai, Yoshiyuki; Matsumura, Akira
2017-02-01
A fast spin-echo sequence based on the Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction (PROPELLER) technique is a magnetic resonance (MR) imaging data acquisition and reconstruction method for correcting motion during scans. Previous studies attempted to verify the in vivo capabilities of motion-corrected PROPELLER in real clinical situations. However, such experiments are limited by repeated, stray head motion by research participants during the prescribed and precise head motion protocol of a PROPELLER acquisition. Therefore, our purpose was to develop a brain phantom set for motion-corrected PROPELLER. The profile curves of the signal intensities on the in vivo T2-weighted image (T2WI) and 3-D rapid prototyping technology were used to produce the phantom. In addition, we used a homemade driver system to achieve in-plane motion at the intended timing. We calculated the Pearson's correlation coefficient (R^2) between the signal intensities of the in vivo T2WI and the phantom T2WI and clarified the rotation precision of the driver system. In addition, we used the phantom set to perform initial experiments to show the rotational angle and frequency dependences of PROPELLER. The in vivo and phantom T2WIs were visually congruent, with a significant correlation (R^2) of 0.955 (p<.001). The rotational precision of the driver system was within 1 degree of tolerance. The experiment on the rotational angle dependency showed image discrepancies between the rotational angles. The experiment on the rotational frequency dependency showed that the reconstructed images became increasingly blurred by the corruption of the blades as the number of motions increased. In this study, we developed a phantom that showed image contrasts and construction similar to the in vivo T2WI. In addition, our homemade driver system achieved precise in-plane motion at the intended timing.
Our proposed phantom set could perform systematic experiments with a real clinical MR image, which to date has not been possible in in vivo studies. Further investigation should focus on the improvement of the motion-correction algorithm in PROPELLER using our phantom set for what would traditionally be considered problematic patients (children, emergency patients, elderly, those with dementia, and so on). Copyright © 2016 Elsevier Inc. All rights reserved.
92 Years of the Ising Model: A High Resolution Monte Carlo Study
NASA Astrophysics Data System (ADS)
Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.
2018-04-01
Using extensive Monte Carlo simulations that employ Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature Kc = 0.221 654 626(5) and the critical exponent of the correlation length ν = 0.629 912(86) with precision that improves upon previous Monte Carlo estimates.
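For readers unfamiliar with the method, a Wolff update grows a single cluster of aligned spins, adding each like-aligned neighbor bond with probability 1 - exp(-2βJ), and flips the whole cluster at once. A minimal sketch on a small 2-D lattice (with J = 1) follows; the paper treats the 3-D simple cubic model at far larger sizes, so this toy version only illustrates the update rule:

```python
import math
import random

# Wolff single-cluster update for the 2-D Ising model with J = 1.
# (The paper treats the 3-D simple cubic case; 2-D shown here for brevity.)
def wolff_step(spins, L, beta):
    p_add = 1.0 - math.exp(-2.0 * beta)      # bond-activation probability
    seed = (random.randrange(L), random.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        # periodic-boundary nearest neighbors
        for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L):
            if (nx, ny) not in cluster and spins[(nx, ny)] == s0 \
               and random.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for site in cluster:                     # flip the whole cluster at once
        spins[site] = -s0
    return len(cluster)

random.seed(2)
L, beta = 16, 0.5        # beta > beta_c ~ 0.4407 in 2-D: ordered phase
spins = {(x, y): 1 for x in range(L) for y in range(L)}
for _ in range(200):
    wolff_step(spins, L, beta)
m = abs(sum(spins.values())) / (L * L)       # magnetization per spin
print(f"|m| = {m:.3f}")
```

Because entire correlated clusters flip in one move, the algorithm largely avoids the critical slowing down that plagues single-spin-flip Metropolis updates near the transition, which is why it enables lattices as large as those quoted above.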
High-Precision Phenotyping of Grape Bunch Architecture Using Fast 3D Sensor and Automation.
Rist, Florian; Herzog, Katja; Mack, Jenny; Richter, Robert; Steinhage, Volker; Töpfer, Reinhard
2018-03-02
Wine growers prefer cultivars with looser bunch architecture because of the decreased risk of bunch rot. As a consequence, grapevine breeders have to select seedlings and new cultivars with regard to appropriate bunch traits. Bunch architecture is a mosaic of different single traits, which makes phenotyping labor-intensive and time-consuming. In the present study, a fast and high-precision phenotyping pipeline was developed. The optical sensor Artec Spider 3D scanner (Artec 3D, L-1466, Luxembourg) was used to generate dense 3D point clouds of grapevine bunches under lab conditions, and an automated analysis software called 3D-Bunch-Tool was developed to extract different single 3D bunch traits, i.e., the number of berries, berry diameter, single berry volume, total volume of berries, convex hull volume of grapes, bunch width and bunch length. The method was validated on whole bunches of different grapevine cultivars and phenotypically variable breeding material. Reliable phenotypic data were obtained, showing highly significant correlations (up to r² = 0.95 for berry number) with ground truth data. Moreover, it was shown that the Artec Spider can be used directly in the field, where the acquired data show precision comparable to the lab application. This non-invasive and non-contact field application facilitates the first high-precision phenotyping pipeline based on 3D bunch traits in large plant sets.
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
Reconstruction of Rayleigh-Lamb dispersion spectrum based on noise obtained from an air-jet forcing.
Larose, Eric; Roux, Philippe; Campillo, Michel
2007-12-01
The time-domain cross correlation of incoherent and random noise recorded by a series of passive sensors contains the impulse response of the medium between these sensors. By using noise generated by a can of compressed air sprayed on the surface of a plexiglass plate, we are able to reconstruct not only the time of flight but also the whole waveforms between the sensors. From the reconstruction of the direct A(0) and S(0) waves, we derive the dispersion curves of the flexural waves, thus estimating the mechanical properties of the material without a conventional electromechanical source. The dense array of receivers employed here allows a precise frequency-wavenumber study of flexural waves, along with a thorough evaluation of the rate of convergence of the correlation with respect to the record length, the frequency, and the distance between the receivers. The reconstruction of the actual amplitude and attenuation of the impulse response is also addressed in this paper.
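The principle can be shown with a toy one-dimensional version: when two sensors record the same random noise, one delayed by the travel time, the peak of their cross correlation sits at that delay. The sketch below assumes a single propagation path with no attenuation, and all values are invented:

```python
import random

# Idealized sketch: two sensors record the same random noise, the second
# delayed by `delay` samples; the peak of their cross correlation recovers
# the time of flight (single path, no attenuation, values hypothetical).
random.seed(0)
n, delay = 2000, 37
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
rec_a = noise
rec_b = [0.0] * delay + noise[:n - delay]     # delayed copy at sensor B

def xcorr_lag(a, b, max_lag):
    """Return the lag (in samples) where the cross correlation of a, b peaks."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(max_lag + 1):
        c = sum(a[i] * b[i + lag] for i in range(len(a) - lag))
        if c > best_val:
            best_lag, best_val = lag, c
    return best_lag

print(xcorr_lag(rec_a, rec_b, 100))   # -> 37
```

In the paper's setting the correlation converges toward the full impulse response, not just its arrival time, as the noise record length grows.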
Two-electron spin correlations in precision placed donors in silicon.
Broome, M A; Gorman, S K; House, M G; Hile, S J; Keizer, J G; Keith, D; Hill, C D; Watson, T F; Baker, W J; Hollenberg, L C L; Simmons, M Y
2018-03-07
Substitutional donor atoms in silicon are promising qubits for quantum computation with extremely long relaxation and dephasing times demonstrated. One of the critical challenges of scaling these systems is determining inter-donor distances to achieve controllable wavefunction overlap while at the same time performing high fidelity spin readout on each qubit. Here we achieve such a device by means of scanning tunnelling microscopy lithography. We measure anti-correlated spin states between two donor-based spin qubits in silicon separated by 16 ± 1 nm. By utilising an asymmetric system with two phosphorus donors at one qubit site and one on the other (2P-1P), we demonstrate that the exchange interaction can be turned on and off via electrical control of two in-plane phosphorus doped detuning gates. We determine the tunnel coupling between the 2P-1P system to be 200 MHz and provide a roadmap for the observation of two-electron coherent exchange oscillations.
NASA Astrophysics Data System (ADS)
Li, Ming-Lung; Wang, Yi-Chou; Liou, Tong-Miin; Lin, Chao-An
2014-10-01
Precise locations of the rupture region, identified from contrast agent leakage during computed tomography angiography, were successfully determined for five ruptured cerebral artery aneurysms among 101 patients, to our knowledge for the first time. These locations, together with numerical simulations based on the reconstructed aneurysmal models, were used to analyze hemodynamic parameters of the aneurysms under different cardiac cyclic flow rates. For side-wall-type aneurysms, different inlet flow rates have mild influences on the shear stress distributions. On the other hand, for branch-type aneurysms, the predicted wall shear stress (WSS) correlates strongly with the increase of inlet vessel velocity. The mean and time-averaged WSS values at the rupture regions are found to be lower than those over the surface of the aneurysms. Also, the levels of the oscillatory shear index (OSI) are higher than the reported threshold value, supporting the assertion that high OSI correlates with rupture of the aneurysm. However, the present results also indicate that the OSI level at the rupture region is relatively lower.
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
NASA Astrophysics Data System (ADS)
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect unknown masses in the Solar system and to measure their orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments to detect unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Roth, Don J.; Baaklini, George Y.
1987-01-01
Acoustic images of a silicon carbide ceramic disk were obtained using a precision scanning contact pulse-echo technique. Phase and cross-correlation velocity maps and attenuation maps were used to form color images of microstructural variations. These acoustic images reveal microstructural variations not observable with X-ray radiography.
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis; this holds for most methods, such as multiple hypothesis tracking (MHT). However, the merit functions of some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If the merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and diverges under propagation. Hence, in this paper, we propose a novel estimation-of-distribution method based on Copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation merely depends on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
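As an indicative sketch only (the abstract does not specify which copula family or marginals are used), a Gaussian copula couples two non-Gaussian marginals by correlating standard normals, mapping them to uniforms through the normal CDF, and then applying the target inverse CDF; exponential marginals are chosen here purely for illustration:

```python
import math
import random

# Gaussian-copula sketch: draw cross-correlated samples whose marginals
# are non-Gaussian (exponential here, purely illustrative -- the paper's
# merit-function marginals are algorithm-specific).
def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_pair(rho, rng):
    z1 = rng.gauss(0.0, 1.0)
    # correlate via the Cholesky factor of the 2x2 correlation matrix
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    u1, u2 = norm_cdf(z1), norm_cdf(z2)       # uniform marginals
    # inverse-CDF transform to exponential marginals (rate 1)
    return -math.log(1.0 - u1), -math.log(1.0 - u2)

rng = random.Random(3)
pairs = [sample_pair(0.8, rng) for _ in range(20000)]
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
vx = sum((x - mx) ** 2 for x in xs) / len(xs)
vy = sum((y - my) ** 2 for y in ys) / len(ys)
print(f"sample correlation ~ {cov / math.sqrt(vx * vy):.2f}")
```

The copula thus separates the dependence structure (the correlation of the latent normals) from the marginal distributions, which is what lets the joint distribution of cross-correlated, non-Gaussian merit functions be modeled without assuming independence.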
Zheng, Y.
2013-01-01
Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is a subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information-theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability is strongly affected by the sound modulation frequency. Both the information efficiency and the total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues. PMID:23636724
Proceedings of the Fourth Precise Time and Time Interval Planning Meeting
NASA Technical Reports Server (NTRS)
Acrivos, H. N. (Compiler); Wardrip, S. C. (Compiler)
1972-01-01
The proceedings of a conference on Precise Time and Time Interval Planning are presented. The subjects discussed include the following: (1) satellite timing techniques, precision frequency sources, and very long baseline interferometry, (2) frequency stabilities and communications, and (3) very low frequency and ultrahigh frequency propagation and use. Emphasis is placed on the accuracy of time discrimination obtained with time measuring equipment and specific applications of time measurement to military operations and civilian research projects.
Comparison of precision orbit derived density estimates for CHAMP and GRACE satellites
NASA Astrophysics Data System (ADS)
Fattig, Eric Dale
Current atmospheric density models cannot adequately represent the density variations observed by satellites in Low Earth Orbit (LEO). Using an optimal orbit determination process, precision orbit ephemerides (POE) are used as measurement data to generate corrections to density values obtained from existing atmospheric models. Densities obtained using these corrections are then compared to density data derived from the onboard accelerometers of satellites, specifically the CHAMP and GRACE satellites. This comparison takes two forms: cross-correlation analysis and root-mean-square analysis. The densities obtained from the POE method are nearly always superior to the empirical models, both in matching the trends observed by the accelerometer (cross correlation) and the magnitudes of the accelerometer-derived density (root mean square). In addition, this method consistently produces better results than those achieved by the High Accuracy Satellite Drag Model (HASDM). For satellites orbiting Earth that pass through Earth's upper atmosphere, drag is the primary source of uncertainty in orbit determination and prediction. Variations in density, which are often not modeled or are inaccurately modeled, cause difficulty in properly calculating the drag acting on a satellite. These density variations are the result of many factors; however, the Sun is the main driver of upper atmospheric density changes. The Sun influences the densities in Earth's atmosphere through solar heating of the atmosphere, as well as through geomagnetic heating resulting from the solar wind. Data are examined for fourteen-hour time spans between November 2004 and July 2009 for both the CHAMP and GRACE satellites. These data span all available levels of solar and geomagnetic activity, although the elevated and high solar-activity bins are unpopulated due to the nature of the solar cycle.
Density solutions are generated from corrections to five different baseline atmospheric models, as well as nine combinations of density and ballistic coefficient correlated half-lives. These half-lives are varied among values of 1.8, 18, and 180 minutes. A total of forty-five sets of results emerge from the orbit determination process for all combinations of baseline density model and half-lives. Each time period is examined for both CHAMP and GRACE-A, and the results are analyzed. Results are averaged from all solutions periods for 2004--2007. In addition, results are averaged after binning according to solar and geomagnetic activity levels. For any given day in this period, a ballistic coefficient correlated half-life of 1.8 minutes yields the best correlation and root mean square values for both CHAMP and GRACE. For CHAMP, a density correlated half-life of 18 minutes is best for higher levels of solar and geomagnetic activity, while for lower levels 180 minutes is usually superior. For GRACE, 180 minutes is nearly always best. The three Jacchia-based atmospheric models yield very similar results. The CIRA 1972 or Jacchia 1971 models as baseline consistently produce the best results for both satellites, though results obtained for Jacchia-Roberts are very similar to the other Jacchia-based models. Data are examined in a similar manner for the extended solar minimum period during 2008 and 2009, albeit with a much smaller sampling of data. With the exception of some atypical results, similar combinations of half-lives and baseline atmospheric model produce the best results. A greater sampling of data will aid in characterizing density in a period of especially low solar activity. In general, cross correlation values for CHAMP and GRACE revealed that the POE method matched trends observed by the accelerometers very well. However, one period of time deviated from this trend for the GRACE-A satellite. 
Between late October 2005 and January 2006, correlations for GRACE-A were very low. Special examination of the surrounding months revealed the extent of time this period covered. Half-life and baseline model combinations that produced the best results during this time were similar to those during normal periods. Plotting these periods revealed very short period density variations in the accelerometer that could not be reproduced by the empirical models, HASDM, or the POE method. Finally, densities produced using precision orbit data for the GRACE-B satellite were shown to be nearly indistinguishable from those produced by GRACE-A. Plots of the densities produced for both satellites during the same time periods revealed this fact. Multiple days were examined covering all possible ranges of solar and geomagnetic activity. In addition, the period in which GRACE-A correlations were low was studied. No significant differences existed between GRACE-A and GRACE-B for all of the days examined.
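The two comparison metrics used throughout the study above (cross correlation for trend agreement, root mean square for magnitude agreement) can be sketched as follows. The density series are synthetic stand-ins, not actual CHAMP/GRACE data products.

```python
import numpy as np

def compare_densities(rho_est, rho_acc):
    """Cross correlation and RMS error between an estimated density
    time series and an accelerometer-derived reference series."""
    rho_est = np.asarray(rho_est, float)
    rho_acc = np.asarray(rho_acc, float)
    cc = np.corrcoef(rho_est, rho_acc)[0, 1]          # trend agreement
    rms = np.sqrt(np.mean((rho_est - rho_acc) ** 2))  # magnitude agreement
    return cc, rms

# Synthetic 14-hour span at 10 s sampling: an orbital-period density
# oscillation stands in for accelerometer-derived data, and a slightly
# biased, noisier copy stands in for the POE-corrected model density.
t = np.arange(0, 14 * 3600, 10.0)                         # seconds
truth = 1e-12 * (1 + 0.3 * np.sin(2 * np.pi * t / 5400))  # kg/m^3
model = truth * 1.05 + 1e-14 * np.sin(2 * np.pi * t / 300)
cc, rms = compare_densities(model, truth)
```

A high cross correlation with a nonzero RMS, as here, is exactly the situation the abstract distinguishes: the model can track the trends well while still missing the absolute magnitudes.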
Observation of subfemtosecond fluctuations of the pulse separation in a soliton molecule.
Shi, Haosen; Song, Youjian; Wang, Chingyue; Zhao, Luming; Hu, Minglie
2018-04-01
In this work, we study the timing instability of a scalar twin-pulse soliton molecule generated by a passively mode-locked Er-fiber laser. Subfemtosecond precision relative timing jitter characterization between the two solitons composing the molecule is enabled by the balanced optical cross-correlation (BOC) method. Jitter spectral density reveals a short-term (on the microsecond to millisecond timescale) random fluctuation of the pulse separation even in the robust stationary soliton molecules. The root-mean-square (rms) timing jitter is on the order of femtoseconds depending on the pulse separation and the mode-locking regime. The lowest rms timing jitter is 0.83 fs, which is observed in the dispersion managed mode-locking regime. Moreover, the BOC method has proved to be capable of resolving the soliton interaction dynamics in various vibrating soliton molecules.
Parameter Estimation with Entangled Photons Produced by Parametric Down-Conversion
NASA Technical Reports Server (NTRS)
Cable, Hugo; Durkin, Gabriel A.
2010-01-01
We explore the advantages offered by twin light beams produced in parametric down-conversion for precision measurement. The symmetry of these bipartite quantum states, even under losses, suggests that monitoring correlations between the divergent beams permits a high-precision inference of any symmetry-breaking effect, e.g., fiber birefringence. We show that the quantity of entanglement is not the key feature for such an instrument. In a lossless setting, scaling of precision at the ultimate "Heisenberg" limit is possible with photon counting alone. Even as photon losses approach 100% the precision is shot-noise limited, and we identify the crossover point between quantum and classical precision as a function of detected flux. The predicted hypersensitivity is demonstrated with a Bayesian simulation.
NASA Technical Reports Server (NTRS)
1975-01-01
The Proceedings contain the papers presented at the Seventh Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting and the edited record of the discussion period following each paper. This meeting provided a forum to promote more effective, efficient, economical and skillful applications of PTTI technology to the many problem areas to which PTTI offers solutions. Specifically the purpose of the meeting is to: disseminate, coordinate, and exchange practical information associated with precise time and frequency; acquaint systems engineers, technicians and managers with precise time and frequency technology and its applications; and review present and future requirements for PTTI.
Precision measurement of the nuclear polarization in laser-cooled, optically pumped 37 K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenker, B.; Behr, J. A.; Melconian, D.
We report a measurement of the nuclear polarization of laser-cooled, optically pumped 37K atoms which will allow us to precisely measure angular correlation parameters in the β+ decay of the same atoms. These results will be used to test the V − A framework of the weak interaction at high precision. At the TRIUMF neutral atom trap (TRINAT), a magneto-optical trap confines and cools neutral 37K atoms and optical pumping spin-polarizes them. We monitor the nuclear polarization of the same atoms that are decaying in situ by photoionizing a small fraction of the partially polarized atoms and then use the standard optical Bloch equations to model their population distribution. We obtain an average nuclear polarization of P̄ = 0.9913 ± 0.0009, which is significantly more precise than previous measurements with this technique. Since our current measurement of the β-asymmetry has 0.2% statistical uncertainty, the polarization measurement reported here will not limit its overall uncertainty. This result also demonstrates the capability to measure the polarization to <0.1%, allowing for a measurement of angular correlation parameters to this level of precision, which would be competitive in searches for new physics.
Dynamic Communicability Predicts Infectiousness
NASA Astrophysics Data System (ADS)
Mantzaris, Alexander V.; Higham, Desmond J.
Using real, time-dependent social interaction data, we look at correlations between some recently proposed dynamic centrality measures and summaries from large-scale epidemic simulations. The evolving network arises from email exchanges. The centrality measures, which are relatively inexpensive to compute, assign rankings to individual nodes based on their ability to broadcast information over the dynamic topology. We compare these with node rankings based on infectiousness that arise when a full stochastic SI simulation is performed over the dynamic network. More precisely, we look at the proportion of the network that a node is able to infect over a fixed time period, and the length of time that it takes for a node to infect half the network. We find that the dynamic centrality measures are an excellent, and inexpensive, proxy for the full simulation-based measures.
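A broadcast ranking of the kind described above can be sketched with a Katz-style dynamic communicability: a resolvent product over time-ordered adjacency matrices, whose row sums score each node's ability to broadcast along time-respecting walks. This is a minimal sketch; the exact measure, parameter value, and network used in the study may differ.

```python
import numpy as np

def broadcast_centrality(adjacency_seq, alpha=0.1):
    """Dynamic (Katz-like) broadcast communicability for a time-ordered
    sequence of adjacency matrices: Q = prod_k (I - alpha * A_k)^(-1).
    Row sums of Q rank nodes by their ability to broadcast information
    over the dynamic topology; alpha must satisfy
    alpha < 1 / max_k rho(A_k) for the resolvents to exist.
    """
    n = adjacency_seq[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_seq:
        Q = Q @ np.linalg.inv(np.eye(n) - alpha * A)
    return Q.sum(axis=1)

# Three nodes, two snapshots: edge 0-1 exists first, then edge 1-2.
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], float)
b = broadcast_centrality([A1, A2])
# Node 0 can reach node 2 through the time-respecting walk 0->1->2,
# but node 2 has no such walk back to node 0, so b[0] > b[2].
```

The ordering of the matrix product is what makes the measure time-respecting: reversing the snapshot order changes the scores, unlike any static aggregate of the two graphs.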
The bright future of single-molecule fluorescence imaging
Juette, Manuel F.; Terry, Daniel S.; Wasserman, Michael R.; Zhou, Zhou; Altman, Roger B.; Zheng, Qinsi; Blanchard, Scott C.
2014-01-01
Single-molecule Förster resonance energy transfer (smFRET) is an essential and maturing tool to probe biomolecular interactions and conformational dynamics in vitro and, increasingly, in living cells. Multi-color smFRET enables the correlation of multiple such events and the precise dissection of their order and timing. However, the requirements for good spectral separation, high time resolution, and extended observation times place extraordinary demands on the fluorescent labels used in such experiments. Together with advanced experimental designs and data analysis, the development of long-lasting, non-fluctuating fluorophores is therefore proving key to progress in the field. Efforts to develop ultra-stable organic fluorophores spanning the visible spectrum are underway and will enable multi-color smFRET studies to deliver on their promise of previously unachievable biological insights. PMID:24956235
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple-cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
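A toy version of the Wolff cluster update is sketched below on a very small periodic simple-cubic lattice; the production-scale machinery quoted above (53-bit random numbers, histogram reweighting, quadruple-precision analysis, 1024³ lattices) is far beyond a sketch, and the lattice size and sweep count here are illustrative.

```python
import numpy as np

def wolff_flip(spins, K, rng):
    """One Wolff cluster flip on a periodic simple-cubic Ising lattice.
    K = J/(kT); an aligned neighbour joins the cluster with
    probability p_add = 1 - exp(-2K)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * K)
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = spins[seed]          # remember orientation before flipping
    stack = [seed]
    spins[seed] = -cluster_spin         # flip sites as they are visited
    while stack:
        x, y, z = stack.pop()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if spins[nb] == cluster_spin and rng.random() < p_add:
                spins[nb] = -cluster_spin
                stack.append(nb)
    return spins

rng = np.random.default_rng(42)
L = 4
spins = rng.choice([-1, 1], size=(L, L, L))
K = 0.2216546                  # near the critical coupling quoted above
for _ in range(200):
    wolff_flip(spins, K, rng)
m = abs(spins.mean())          # |magnetization| per spin, in [0, 1]
```

Because whole clusters are flipped at once, this update suppresses the critical slowing down that single-spin-flip Metropolis suffers near Kc, which is why it is the algorithm of choice for high-precision critical-point studies.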
Larbi, A; Pesquer, L; Reboul, G; Omoumi, P; Perozziello, A; Abadie, P; Loriaut, P; Copin, P; Ducouret, E; Dallaudière, B
2016-10-01
Recent studies have described MRI as a good examination for assessing damage in chronic athletic pubalgia (AP). However, to our knowledge, no study has focused on the systematic correlation of precise tendon or parietal lesions on MRI with surgical and histological assessment. We therefore performed a case-control study to determine whether MRI can precisely assess adductor longus (AL) tendinopathy and parietal lesions, compared with surgery and histology. MRI can determine whether AP arises from the pubic symphysis, musculotendinous structures, or the inguinal orifice. Eighteen consecutive patients with chronic AP were enrolled from November 2011 to April 2013. To constitute a control group, we also enrolled 18 asymptomatic men. All MRI examinations were reviewed in consensus by 2 skeletal radiologists for pubic symphysis, musculotendinous and abdominal wall assessment, and compared with surgical and histological findings. Regarding the pubic symphysis, we found 4 cases of symmetric bone marrow oedema (14%), 2 secondary clefts (7%) and 2 superior ligament lesions (7%). For the AL tendon, we mainly found 13 cases of asymmetric bone marrow oedema (46%) and 15 of hyperaemia (54%). Regarding the abdominal wall, the deep inguinal orifice size in the symptomatic and control groups was 27.3±6.4mm and 23.8±6.3mm, respectively. The correlation between MRI and surgery/histology was low: 20% for the AL tendon and 9% for the abdominal wall. When the criterion "affected versus unaffected" was used instead, the correlation became higher: 100% for the AL tendon and 73% for the abdominal wall. MRI findings in chronic athletic pubalgia preferentially involve AL tendinopathy and deep inguinal canal dehiscence, with high correlation to surgery/histology when considering only "affected versus unaffected", despite low correlation when attempting to precisely grade these lesions. III: case-control study. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.
NASA Astrophysics Data System (ADS)
Bauer, Daniel R.; Stevens, Benjamin; Taft, Jefferson; Chafin, David; Petre, Vinnie; Theiss, Abbey P.; Otter, Michael
2014-03-01
Recently, it has been demonstrated that the preservation of cancer biomarkers, such as phosphorylated protein epitopes, in formalin-fixed paraffin-embedded tissue is highly dependent on the localized concentration of the crosslinking agent. This study details a real-time diffusion monitoring system based on the acoustic time-of-flight (TOF) between pairs of 4 MHz focused transducers. Diffusion affects TOF because of the distinct acoustic velocities of formalin and interstitial fluid. Tissue is placed between the transducers and vertically translated to obtain TOF values at multiple locations with a spatial resolution of approximately 1 mm. Imaging is repeated for several hours until osmotic equilibrium is reached. A post-processing technique, analogous to digital acoustic interferometry, enables detection of subnanosecond TOF differences. Reference subtraction is used to compensate for environmental effects. Diffusion measurements with TOF monitoring of ex vivo human tonsil tissue are well correlated with a single exponential curve (R² > 0.98) with a magnitude of up to 50 ns, depending on the tissue size (2-6 mm). The average exponential decay constants of 2 and 6 mm diameter samples are 20 and 315 minutes, respectively, although times varied significantly throughout the tissue (σmax = 174 min). This technique can precisely monitor diffusion progression and could be used to mitigate effects from tissue heterogeneity and intersample variability, enabling improved preservation of cancer biomarkers distinctly sensitive to degradation during preanalytical tissue processing.
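Fitting the single-exponential TOF curve described above can be sketched as follows. The data are synthetic, with an asymptotic shift (~50 ns) and decay constant (~20 min) chosen to mirror the small-sample numbers quoted in the abstract; the actual instrument pipeline is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def tof_model(t, amplitude_ns, tau_min):
    """Single-exponential approach to osmotic equilibrium: the TOF shift
    grows toward amplitude_ns with decay constant tau_min (minutes)."""
    return amplitude_ns * (1.0 - np.exp(-t / tau_min))

# Synthetic monitoring run over two hours with sub-nanosecond noise,
# standing in for reference-subtracted TOF measurements at one location.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 120.0, 61)                    # minutes
tof = tof_model(t, 50.0, 20.0) + rng.normal(0.0, 0.5, t.size)
(amp_fit, tau_fit), _ = curve_fit(tof_model, t, tof, p0=(40.0, 10.0))
```

The fitted decay constant is the per-location diffusion summary that the study compares across sample diameters (20 min vs. 315 min for 2 mm vs. 6 mm tissue).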
An electrophysiological signal that precisely tracks the emergence of error awareness
Murphy, Peter R.; Robertson, Ian H.; Allen, Darren; Hester, Robert; O'Connell, Redmond G.
2012-01-01
Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focused on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400 ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. PMID:22470332
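The single-trial ROC analysis mentioned above can be sketched with the rank-sum (Mann-Whitney) identity for the area under the curve, using Pe amplitude as the score that predicts whether an error reaches awareness. The amplitudes below are synthetic and purely illustrative.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score_positive > score_negative), ties counted as 1/2."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic single-trial Pe amplitudes (arbitrary units): aware errors
# tend to show a larger positivity, but the distributions overlap.
rng = np.random.default_rng(7)
aware = rng.normal(8.0, 3.0, 120)
unaware = rng.normal(4.0, 3.0, 80)
scores = np.concatenate([aware, unaware])
labels = np.concatenate([np.ones(120, bool), np.zeros(80, bool)])
auc = roc_auc(scores, labels)   # well above the 0.5 chance level
```

An AUC of 0.5 corresponds to chance prediction, 1.0 to perfect separation; the study's claim is that the Pe supports above-chance prediction hundreds of milliseconds before the overt awareness response.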
Tang, Ning; Pahalawatta, Vihanga; Frank, Andrea; Bagley, Zowie; Viana, Raquel; Lampinen, John; Leckie, Gregor; Huang, Shihai; Abravaya, Klara; Wallis, Carole L
2017-07-01
HIV RNA suppression is a key indicator for monitoring success of antiretroviral therapy. From a logistical perspective, viral load (VL) testing using Dried Blood Spots (DBS) is a promising alternative to plasma based VL testing in resource-limited settings. To evaluate the analytical and clinical performance of the Abbott RealTime HIV-1 assay using a fully automated one-spot DBS sample protocol. Limit of detection (LOD), linearity, lower limit of quantitation (LLQ), upper limit of quantitation (ULQ), and precision were determined using serial dilutions of HIV-1 Virology Quality Assurance stock (VQA Rush University), or HIV-1-containing armored RNA, made in venous blood. To evaluate correlation, bias, and agreement, 497 HIV-1 positive adult clinical samples were collected from Ivory Coast, Uganda and South Africa. For each HIV-1 participant, DBS-fingerprick, DBS-venous and plasma sample results were compared. Correlation and bias values were obtained. The sensitivity and specificity were analyzed at a threshold of 1000 HIV-1 copies/mL generated using the standard plasma protocol. The Abbott HIV-1 DBS protocol had an LOD of 839 copies/mL, a linear range from 500 to 1×10^7 copies/mL, an LLQ of 839 copies/mL, a ULQ of 1×10^7 copies/mL, and an inter-assay SD of ≤0.30 log copies/mL for all tested levels within this range. With clinical samples, the correlation coefficient (r value) was 0.896 between DBS-fingerprick and plasma and 0.901 between DBS-venous and plasma, and the bias was -0.07 log copies/mL between DBS-fingerprick and plasma and -0.02 log copies/mL between DBS-venous and plasma. The sensitivity of DBS-fingerprick and DBS-venous was 93%, while the specificity of both DBS methods was 95%. The results demonstrated that the Abbott RealTime HIV-1 assay with DBS sample protocol is highly sensitive, specific and precise across a wide dynamic range and correlates well with plasma values. 
The Abbott RealTime HIV-1 assay with DBS sample protocol provides an alternative sample collection and transfer option in resource-limited settings and expands the utility of a viral load test to monitor HIV-1 ART treatment for infected patients. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
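The headline validation statistics (correlation, bias in log10 copies/mL, and sensitivity/specificity at the 1000 copies/mL threshold) can be sketched as follows. The paired viral loads are synthetic stand-ins; the clinical data are not reproduced here.

```python
import numpy as np

def dbs_vs_plasma_stats(dbs_log, plasma_log, threshold_log=3.0):
    """Correlation, mean bias (log10 copies/mL), and sensitivity/
    specificity of DBS results against plasma at a threshold
    (log10(1000) = 3.0, the 1000 copies/mL cutoff)."""
    dbs_log = np.asarray(dbs_log, float)
    plasma_log = np.asarray(plasma_log, float)
    r = np.corrcoef(dbs_log, plasma_log)[0, 1]
    bias = np.mean(dbs_log - plasma_log)
    plasma_pos = plasma_log >= threshold_log
    dbs_pos = dbs_log >= threshold_log
    sens = np.mean(dbs_pos[plasma_pos])    # detected / truly above cutoff
    spec = np.mean(~dbs_pos[~plasma_pos])  # negative / truly below cutoff
    return r, bias, sens, spec

# Synthetic paired log10 viral loads: DBS tracks plasma with a slight
# negative bias and assay noise, loosely mimicking the reported pattern.
rng = np.random.default_rng(3)
plasma = rng.uniform(2.0, 6.0, 500)
dbs = plasma - 0.05 + rng.normal(0.0, 0.25, 500)
r, bias, sens, spec = dbs_vs_plasma_stats(dbs, plasma)
```

Note that sensitivity and specificity at a fixed cutoff are driven almost entirely by samples whose true load falls near the threshold, which is why a small negative bias barely moves them.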
Kessler, Michael D.; Yerges-Armstrong, Laura; Taub, Margaret A.; Shetty, Amol C.; Maloney, Kristin; Jeng, Linda Jo Bone; Ruczinski, Ingo; Levin, Albert M.; Williams, L. Keoki; Beaty, Terri H.; Mathias, Rasika A.; Barnes, Kathleen C.; Boorgula, Meher Preethi; Campbell, Monica; Chavan, Sameer; Ford, Jean G.; Foster, Cassandra; Gao, Li; Hansel, Nadia N.; Horowitz, Edward; Huang, Lili; Ortiz, Romina; Potee, Joseph; Rafaels, Nicholas; Scott, Alan F.; Vergara, Candelaria; Gao, Jingjing; Hu, Yijuan; Johnston, Henry Richard; Qin, Zhaohui S.; Padhukasahasram, Badri; Dunston, Georgia M.; Faruque, Mezbah U.; Kenny, Eimear E.; Gietzen, Kimberly; Hansen, Mark; Genuario, Rob; Bullis, Dave; Lawley, Cindy; Deshpande, Aniket; Grus, Wendy E.; Locke, Devin P.; Foreman, Marilyn G.; Avila, Pedro C.; Grammer, Leslie; Kim, Kwang-YounA; Kumar, Rajesh; Schleimer, Robert; Bustamante, Carlos; De La Vega, Francisco M.; Gignoux, Chris R.; Shringarpure, Suyash S.; Musharoff, Shaila; Wojcik, Genevieve; Burchard, Esteban G.; Eng, Celeste; Gourraud, Pierre-Antoine; Hernandez, Ryan D.; Lizee, Antoine; Pino-Yanes, Maria; Torgerson, Dara G.; Szpiech, Zachary A.; Torres, Raul; Nicolae, Dan L.; Ober, Carole; Olopade, Christopher O.; Olopade, Olufunmilayo; Oluwole, Oluwafemi; Arinola, Ganiyu; Song, Wei; Abecasis, Goncalo; Correa, Adolfo; Musani, Solomon; Wilson, James G.; Lange, Leslie A.; Akey, Joshua; Bamshad, Michael; Chong, Jessica; Fu, Wenqing; Nickerson, Deborah; Reiner, Alexander; Hartert, Tina; Ware, Lorraine B.; Bleecker, Eugene; Meyers, Deborah; Ortega, Victor E.; Pissamai, Maul R. N.; Trevor, Maul R. 
N.; Watson, Harold; Araujo, Maria Ilma; Oliveira, Ricardo Riccio; Caraballo, Luis; Marrugo, Javier; Martinez, Beatriz; Meza, Catherine; Ayestas, Gerardo; Herrera-Paz, Edwin Francisco; Landaverde-Torres, Pamela; Erazo, Said Omar Leiva; Martinez, Rosella; Mayorga, Alvaro; Mayorga, Luis F.; Mejia-Mejia, Delmy-Aracely; Ramos, Hector; Saenz, Allan; Varela, Gloria; Vasquez, Olga Marina; Ferguson, Trevor; Knight-Madden, Jennifer; Samms-Vaughan, Maureen; Wilks, Rainford J.; Adegnika, Akim; Ateba-Ngoa, Ulysse; Yazdanbakhsh, Maria; O'Connor, Timothy D.
2016-01-01
To characterize the extent and impact of ancestry-related biases in precision genomic medicine, we use 642 whole-genome sequences from the Consortium on Asthma among African-ancestry Populations in the Americas (CAAPA) project to evaluate typical filters and databases. We find significant correlations between estimated African ancestry proportions and the number of variants per individual in all variant classification sets but one. The source of these correlations is highlighted in more detail by looking at the interaction between filtering criteria and the ClinVar and Human Gene Mutation databases. ClinVar's correlation, representing African ancestry-related bias, has changed over time amidst monthly updates, with the most extreme switch happening between March and April of 2014 (r=0.733 to r=−0.683). We identify 68 SNPs as the major drivers of this change in correlation. As long as ancestry-related bias when using these clinical databases is minimally recognized, the genetics community will face challenges with implementation, interpretation and cost-effectiveness when treating minority populations. PMID:27725664
A closer look at the concept of regional clocks for Precise Point Positioning
NASA Astrophysics Data System (ADS)
Weber, Robert; Karabatic, Ana; Thaler, Gottfried; Abart, Christoph; Huber, Katrin
2010-05-01
Under the precondition of at least two successfully tracked signals at different carrier frequencies, we may obtain their ionosphere-free linear combination. By introducing approximate values for geometric effects like orbits and tropospheric delay, as well as an initial bias parameter N per individual satellite, we can solve for the satellite clock with respect to the receiver clock. Noting that residual effects like orbit errors, remaining tropospheric delays and a residual bias parameter map into these parameters, this procedure leaves us with a kind of virtual clock differences. These clocks cover regional effects and are therefore clearly correlated with clocks at nearby stations. We therefore call these clock differences, which are clearly different from clock solutions provided for instance by the IGS, the "regional clocks". When introducing the regional clocks obtained from real-time data of a GNSS reference station network, we are able to process the coordinates of a nearby isolated station via PPP. In terms of PPP convergence time, which is reduced to 30 minutes or less, this procedure is clearly favorable. The accuracy is quite comparable with state-of-the-art PPP procedures. Nevertheless, this approach cannot compete in fixing times with double-difference approaches, but the correlation holds over hundreds of kilometers distance to our master station and the clock differences can easily be obtained, even in real time. This presentation provides preliminary results of the project RA-PPP. RA-PPP is a research project financed by the Federal Ministry for Transport, Innovation and Technology, managed by the Austrian Research Promotion Agency (FFG) in the course of the 6th call of the Austrian Space Application Program (ASAP). RA-PPP stands for Rapid Precise Point Positioning, which denotes the wish for faster and more accurate algorithms for PPP. 
The concept of regional clocks which will be demonstrated in detail in this presentation is one out of 4 concepts to be evaluated in this project.
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly mysterious excellent performance of high-rate PPP within a short period of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations over a short period of time. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by real-data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. 
The experiments with real data have also indicated that the precision of PPP solutions can degrade to the centimeter level in both the horizontal and vertical components if the satellite geometry is poor, i.e., with a large dilution of precision (DOP) value.
DSN Beowulf Cluster-Based VLBI Correlator
NASA Technical Reports Server (NTRS)
Rogstad, Stephen P.; Jongeling, Andre P.; Finley, Susan G.; White, Leslie A.; Lanyi, Gabor E.; Clark, John E.; Goodhart, Charles E.
2009-01-01
The NASA Deep Space Network (DSN) requires a broadband VLBI (very long baseline interferometry) correlator to process data routinely taken as part of the VLBI source Catalogue Maintenance and Enhancement task (CAT M&E) and the Time and Earth Motion Precision Observations task (TEMPO). The data provided by these measurements are a crucial ingredient in the formation of precision deep-space navigation models. In addition, a VLBI correlator is needed to support other VLBI-related activities for both internal and external customers. The JPL VLBI Correlator (JVC) was designed, developed, and delivered to the DSN as a successor to the legacy Block II Correlator. The JVC is a full-capability VLBI correlator that uses software processes running on multiple computers to cross-correlate two-antenna broadband noise data. Components of this new system (see Figure 1) consist of Linux PCs integrated into a Beowulf cluster, an existing Mark5 data storage system, a RAID array, an existing software correlator package (SoftC) originally developed for Delta DOR navigation processing, and various custom-developed software processes and scripts. Parallel processing on the JVC is achieved by assigning slave nodes of the Beowulf cluster to process separate scans in parallel until all scans have been processed. Due to the single-stream sequential playback of the Mark5 data, some ramp-up time is required before all nodes have access to the required scan data. Core functions of each processing step are accomplished using optimized C programs. The coordination and execution of these programs across the cluster is accomplished using Perl scripts, PostgreSQL commands, and a handful of miscellaneous system utilities. Mark5 data modules are loaded on Mark5 data system playback units, one per station.
Data processing is started when the operator scans the Mark5 systems and runs a script that reads various configuration files and then creates an experiment-dependent status database used to delegate parallel tasks between nodes and storage areas (see Figure 2). This script forks into three processes: extract, translate, and correlate. Each of these processes iterates over the available scan data and updates the status database as the work for each scan is completed. The extract process coordinates and monitors the transfer of data from each of the Mark5s to the Beowulf RAID storage systems. The translate process monitors and executes the data conversion processes on available scan files and writes the translated files to the slave nodes. The correlate process monitors the execution of SoftC correlation processes on the slave nodes for scans that have completed translation. A comparison of the JVC and legacy Block II correlator outputs shows that they agree to well within the formal error, and that the data are comparable with respect to their use in flight navigation. The processing speed of the JVC is improved over the Block II correlator by a factor of 4, largely due to the elimination of the reel-to-reel tape drives used in the Block II correlator.
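The scan-level parallelism and shared status table described above can be sketched in Python (a minimal, hypothetical model for illustration; the actual JVC uses forked Perl processes, a PostgreSQL status database, and optimized C programs for the stage work):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: each scan flows through extract -> translate -> correlate,
# and a shared status table records the completed stages per scan (the real
# system keeps this state in an experiment-dependent PostgreSQL database).
STAGES = ["extract", "translate", "correlate"]

def process_scan(scan_id, status):
    for stage in STAGES:
        # Real work (Mark5 data transfer, format conversion, SoftC correlation)
        # would happen here before the stage is marked complete.
        status[scan_id].append(stage)
    return scan_id

def run_pipeline(scan_ids, workers=4):
    # Slave nodes are modeled as worker threads handling separate scans in parallel.
    status = {s: [] for s in scan_ids}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda s: process_scan(s, status), scan_ids))
    return status
```

In the real correlator each stage is a separate long-running process that iterates over the status database, rather than an in-memory loop per scan; this sketch only models the delegation of independent scans to parallel workers.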
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with the number of concepts learned, and latent set-shifting ability was negatively correlated with the number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
The 26th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting
NASA Technical Reports Server (NTRS)
Sydnor, Richard (Editor)
1995-01-01
This document is a compilation of technical papers presented at the 26th Annual PTTI Applications and Planning Meeting. Papers are in the following categories: (1) Recent developments in rubidium, cesium, and hydrogen-based frequency standards, and in cryogenic and trapped-ion technology; (2) International and transnational applications of Precise Time and Time Interval technology with emphasis on satellite laser tracking, GLONASS timing, intercomparison of national time scales and international telecommunications; (3) Applications of Precise Time and Time Interval technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; (4) Applications of PTTI technology to evolving military communications and navigation systems; and (5) Dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communications satellites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zawisza, I; Yan, H; Yin, F
Purpose: To ensure that tumor motion remains within the radiation field during high-dose, high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods provide real-time tumor/surrogate motion but no future information. In order to anticipate future tumor/surrogate motion and track the target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component that best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow each subsequence and combining them with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance was assessed based on the correlation between the prediction and the known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence was the last 50 samples (~2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (~3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between the predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively.
Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
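The three-step matching-and-weighting scheme in the Methods can be sketched in pure Python. This is an illustrative reimplementation under stated assumptions (Euclidean distance as the matching criterion, inverse-distance "relative" weights), not the authors' code:

```python
def predict_ahead(signal, input_len, output_len, k=3, relative=True):
    """Sketch of multiple-step-ahead surrogate motion prediction.

    Step 1: find the k training subsequences best matching the most recent
    input_len samples (Euclidean distance, an assumed criterion).
    Step 2: compute equal or inverse-distance ("relative") weights.
    Step 3: combine the samples that followed each match into the output.
    """
    query = signal[-input_len:]
    train_end = len(signal) - input_len  # training component precedes the query
    candidates = []
    for start in range(train_end - input_len - output_len + 1):
        window = signal[start:start + input_len]
        dist = sum((a - b) ** 2 for a, b in zip(window, query)) ** 0.5
        candidates.append((dist, start))
    candidates.sort()
    best = candidates[:k]
    if relative:
        weights = [1.0 / (d + 1e-9) for d, _ in best]  # closer match, larger weight
    else:
        weights = [1.0] * len(best)
    total = sum(weights)
    weights = [w / total for w in weights]
    prediction = [0.0] * output_len
    for w, (_, start) in zip(weights, best):
        follow = signal[start + input_len:start + input_len + output_len]
        for i, v in enumerate(follow):
            prediction[i] += w * v
    return prediction
```

On a strictly periodic signal the best matches are exact and the weighted combination reproduces the true continuation; real respiratory traces are only quasi-periodic, which is why the correlation figures above fall short of 1.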
Merunka, Dalibor; Peric, Mirna; Peric, Miroslav
2015-02-19
The X-band electron paramagnetic resonance spectroscopy (EPR) of a stable, spherical nitroxide spin probe, perdeuterated 2,2,6,6-tetramethyl-4-oxopiperidine-1-oxyl (pDTO), has been used to study the nanostructural organization of a series of 1-alkyl-3-methylimidazolium tetrafluoroborate ionic liquids (ILs) with alkyl chain lengths from two to eight carbons. By employing nonlinear least-squares fitting of the EPR spectra, we have obtained values of the rotational correlation time and hyperfine coupling splitting of pDTO to high precision. The rotational correlation time of pDTO in ILs and squalane, a viscous alkane, can be fit very well to a power law with a singular temperature, a functional form that often describes physical quantities measured in supercooled liquids. The viscosity of the ILs and squalane, taken from the literature, can also be fit to the same power law expression, which means that the rotational correlation times and the ionic liquid viscosities have a similar functional dependence on temperature. The apparent activation energy of both the rotational correlation time of pDTO and the viscous flow of ILs and squalane increases with decreasing temperature; in other words, they exhibit strong non-Arrhenius behavior. The rotational correlation time of pDTO as a function of η/T, where η is the shear viscosity and T is the temperature, is well described by the Stokes-Einstein-Debye (SED) law, while the hydrodynamic probe radii are solvent dependent and are smaller than the geometric radius of the probe. The temperature dependence of the hyperfine coupling splitting is the same in all four ionic liquids. The value of the hyperfine coupling splitting starts decreasing with increasing alkyl chain length in the ionic liquids in which the number of carbons in the alkyl chain is greater than four. This decrease, together with the decrease in the hydrodynamic radius of the probe, indicates the possible existence of nonpolar nanodomains.
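A power law with a singular temperature, tau = A * (T - T0)**(-gamma), can be fitted by scanning candidate T0 values and solving a log-log linear regression at each. This is an illustrative sketch of that generic fitting idea, not the authors' procedure (they fitted the EPR spectra themselves by nonlinear least squares):

```python
import math

def fit_power_law(T, tau, T0_grid):
    """Fit tau = A * (T - T0)**(-gamma) by grid search over the singular
    temperature T0, with a log-log linear regression at each candidate.
    Returns (A, gamma, T0) minimizing the residual sum of squares."""
    best = None
    for T0 in T0_grid:
        if any(t <= T0 for t in T):
            continue  # model is undefined at or below the singular temperature
        x = [math.log(t - T0) for t in T]
        y = [math.log(v) for v in tau]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx            # slope = -gamma in log-log space
        intercept = my - slope * mx  # intercept = ln(A)
        rss = sum((yi - (slope * xi + intercept)) ** 2
                  for xi, yi in zip(x, y))
        if best is None or rss < best[0]:
            best = (rss, math.exp(intercept), -slope, T0)
    _, A, gamma, T0 = best
    return A, gamma, T0
```

Given synthetic data generated with known parameters, the fit recovers them exactly when the true T0 lies on the grid; in practice T0 would be refined iteratively.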
Resolving the Timing of Events Around the Cretaceous-Paleogene Boundary
NASA Astrophysics Data System (ADS)
Sprain, Courtney Jean
Despite decades of study, the exact cause of the Cretaceous-Paleogene boundary (KPB) mass extinction remains contentious. Hypothesized scenarios center around two main environmental perturbations: voluminous (>10^6 km^3) volcanic eruptions from the Deccan Traps in modern-day India, and the large impact recorded by the Chicxulub crater. The impact hypothesis has gained broad support, bolstered by the discoveries of iridium anomalies, shocked quartz, and spherules at the KPB worldwide, which are contemporaneous with the Chicxulub impact structure. However, evidence for protracted extinctions, particularly in non-marine settings, and paleoenvironmental change associated with climatic swings before the KPB, challenge the notion that the impact was the sole cause of the KPB mass extinction. Despite forty years of study, the relative importance of each of these events is unclear, and one key inhibitor is insufficient resolution of existing geochronology. In this dissertation, I present work developing a high-precision global chronologic framework for the KPB that outlines the temporal sequence of biotic changes (both within the terrestrial and marine realms), climatic changes, and proposed perturbations (i.e. impact, volcanic eruptions) using 40Ar/39Ar geochronology and paleomagnetism. This work is focused on two major areas of study: 1) refining the timing and tempo of terrestrial ecosystem change around the KPB, and 2) calibrating the geomagnetic polarity timescale, and particularly the timing and duration of magnetic polarity chron C29r (the KPB falls about halfway into C29r). First I develop a high-precision chronostratigraphic framework for fluvial sediments within the Hell Creek region, in NE Montana, which is one of the best-studied terrestrial KPB sections worldwide. For this work I dated 15 tephra deposits with ±30 ka precision using 40Ar/39Ar geochronology, ranging in time from 300 ka before the KPB to 1 Ma after.
By tying these results to paleontological records, this work constrains the timing of terrestrial faunal decline and recovery, in addition to calibrating late Cretaceous and early Paleocene North American Land Mammal Age biostratigraphy. To aid global correlation, I next sought to calibrate the timing and duration of C29r. However, the duration of C29r calculated from new dates collected as part of this dissertation, combined with previously published magnetostratigraphy for the Hell Creek region, disagreed with the duration given in the Geologic Time Scale 2012, making it clear that the reliability of Hell Creek sediments as paleomagnetic recorders was suspect. To test this claim, a complete characterization of the rock magnetic properties of sediments from the Hell Creek region was undertaken. To aid characterization, a new test was developed to determine the presence of intermediate-composition titanohematite (Fe2-yTiyO3; 0.5 ≤ y ≤ 0.7). Results from the rock magnetic characterization show that sediments from the Hell Creek should be reliable paleomagnetic recorders, so long as care is taken to remove goethite (a secondary mineral that previous magnetostratigraphic studies in the Hell Creek did not remove) and to avoid samples that have been heated above 200 °C. With the knowledge that sediments from the Hell Creek region are reliable magnetic recorders, I collected 14 new magnetostratigraphic sections and 18 new high-precision 40Ar/39Ar dates, which together constrain the timing and duration of chron C29r at unprecedented precision. This work enables correlation of our record in the Hell Creek to other KPB records around the globe, in addition to providing a test of the Paleocene astrochronologic timescale.
Meintker, Lisa; Haimerl, Maria; Ringwald, Jürgen; Krause, Stefan W
2013-11-01
Measurement of immature platelets was introduced into routine diagnostics by Sysmex as immature platelet fraction (IPF) some years ago and recently by Abbott as reticulated platelet fraction (rPT). Here, we compare both methods. We evaluated the precision and agreement of these parameters between the Sysmex XE-5000 and Abbott CD-Sapphire in three distinct thrombocytopaenic cohorts: 30 patients with beginning thrombocytopaenia and 64 patients with recovering platelets (PLT) after chemotherapy, 16 patients with immune thrombocytopaenia (ITP) or heparin-induced thrombocytopaenia type 2 (HIT), and 110 additional normal controls. Furthermore, we analysed how IPF/rPT differed between these thrombocytopaenic cohorts and controls. Both analysers demonstrated acceptable overall precision (repeatability) of IPF/rPT, with lower precision at low PLT counts. IPF/rPT artificially increased during storage of blood samples overnight. Inter-instrument comparison showed a moderate correlation (Pearson r²=0.38) and a systematic bias of 1.04 towards higher IPF values with the XE-5000. IPF/rPT was highest in recovering thrombopoiesis after chemotherapy and moderately increased in ITP/HIT. The normal range deduced from control samples was much narrower with the CD-Sapphire (1.0%-3.8%, established here for the first time) in comparison to the XE-5000 (0.8%-7.9%), leading to a smaller overlap between samples with increased PLT turnover and normal controls. IPF and rPT both give useful information on PLT turnover, although the two analysers only show a moderate inter-instrument correlation and have different reference ranges. A better separation of patient groups with high PLT turnover, like ITP/HIT, from normal controls is obtained by the CD-Sapphire.
SAGE III solar ozone measurements: Initial results
NASA Technical Reports Server (NTRS)
Wang, Hsiang-Jui; Cunnold, Derek M.; Trepte, Chip; Thomason, Larry W.; Zawodny, Joseph M.
2006-01-01
Results from two retrieval algorithms, o3-aer and o3-mlr, used for SAGE III solar occultation ozone measurements in the stratosphere and upper troposphere are compared. The main differences between these two retrieved (version 3.0) ozone products are found at altitudes above 40 km and below 15 km. Compared to correlative measurements, the SAGE II type ozone retrievals (o3-aer) provide better precision above 40 km and do not induce artificial hemispheric differences in upper stratospheric ozone. The multiple linear regression technique (o3-mlr), however, can yield slightly more accurate ozone (by a few percent) in the lower stratosphere and upper troposphere. By using SAGE III (version 3.0) ozone from both algorithms in their preferred regions, the agreement between SAGE III and correlative measurements is shown to be approx. 5% down to 17 km. Below 17 km, SAGE III ozone values are systematically higher, by 10% at 13 km, and a small hemispheric difference (a few percent) appears. Compared to SAGE III and HALOE, SAGE II ozone has the best accuracy in the lowest few kilometers of the stratosphere. Estimated precision in SAGE III ozone is about 5% or better between 20 and 40 km and approx. 10% at 50 km. The precision below 20 km is difficult to evaluate because of limited coincidences between SAGE III and sondes. SAGE III ozone values are systematically slightly larger (2-3%) than those from SAGE II, but the profile shapes are remarkably similar for altitudes above 15 km. There is no evidence of any relative drift or time-dependent differences between these two instruments for altitudes above 15-20 km.
Precision Topography of Pluvial Features in Nevada as Analogs for Possible Pluvial Landforms on Mars
NASA Astrophysics Data System (ADS)
Zimbelman, J. R.; Garry, W. B.; Irwin, R. P.
2009-12-01
Topographic measurements with better than 2 cm horizontal and 4 cm vertical precision were obtained for pluvial features in Nevada using a Trimble R8 Differential Global Positioning System (DGPS), making use of both real-time kinematic and post-processed kinematic techniques. We collected ten transects across shorelines in the southern end of Surprise Valley, near the California border in NW Nevada, on April 15-17, 2008, plus five transects of shorelines and eight transects of a wavecut scarp in Long Valley, near the Utah border in NE Nevada, on May 5-7, 2009. Each transect consists of topographic points keyed to field notes and photographs. In Surprise Valley, the highstand shoreline was noted at 1533.4 m elevation in 8 of the 10 transects, and several prominent intermediate shorelines could be correlated between two or more transects. In Long Valley, the well preserved highstand shoreline elevation of 1908.7 m correlated (within 0.6 m) to the base of the wavecut scarp along a horizontal distance of 1.2 km. These results demonstrate that adherence to a geopotential elevation level is one of the strongest indicators that a possible shoreline feature is the result of pluvial processes, and that elevation levels of features can be clearly detected and documented with precise topographic measurements. The High Resolution Imaging Science Experiment (HiRISE) is returning images of Mars that show potential shoreline features in remarkable detail (e.g., image PSP_009998_2165, 32 cm/pixel, showing a possible shoreline in NW Arabia). Our results from studying shorelines in Nevada will provide a basis for evaluating the plausibility of possible shoreline features on Mars, the implications of which are significant for the overall history of Mars.
Screening test for direct oral anticoagulants with the dilute Russell viper venom time.
Pratt, Jackie; Crispin, Philip
2018-06-01
To evaluate the dilute Russell viper venom time (DRVVT) for the detection of direct-acting oral anticoagulants (DOACs) and to investigate the effect of DOACs on coagulation assays. Patients on DOACs and controls had plasma levels determined by an anti-Xa assay and dilute thrombin clotting time (TCT). Levels were correlated with the DRVVT as well as the TCT, prothrombin time (PT), activated partial thromboplastin time (APTT), fibrinogen, protein C, protein S and antithrombin levels. The utility of the DRVVT for detecting clinically significant levels of DOACs was evaluated. There were 44 samples from patients taking dabigatran, 83 with rivaroxaban, 18 with apixaban and 55 controls. The PT and APTT failed to detect clinically significant doses of anticoagulants adequately. The TCT was increased in patients taking dabigatran and normal in controls and patients on FXa inhibitors. There was a linear correlation between all DOAC levels and the DRVVT, with moderate precision, but the DRVVT showed high sensitivity (95%) and specificity (90%) for clinically significant DOAC levels. The DRVVT detects clinically significant levels of DOACs and, in conjunction with the TCT, may be used as a screen for the presence and type of DOAC. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
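The reported sensitivity and specificity follow the usual 2x2 contingency definitions; a minimal sketch with purely illustrative counts (not the study's data):

```python
def sensitivity_specificity(results):
    """Compute screening-test sensitivity and specificity from
    (test_positive, truly_significant) boolean pairs.
    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for pos, sig in results if pos and sig)
    fn = sum(1 for pos, sig in results if not pos and sig)
    tn = sum(1 for pos, sig in results if not pos and not sig)
    fp = sum(1 for pos, sig in results if pos and not sig)
    return tp / (tp + fn), tn / (tn + fp)
```

For example, 19 true positives with 1 false negative and 9 true negatives with 1 false positive reproduce the 95%/90% figures quoted above.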
Science 101: How Do Atomic Clocks Work?
ERIC Educational Resources Information Center
Science and Children, 2008
2008-01-01
You might be wondering why in the world we need such precise measures of time. Well, many systems we use every day, such as Global Positioning Systems, require precise synchronization of time. This also comes into play in telecommunications and wireless communications. For purely scientific reasons, we can use precise measurement of time to…
Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability
ERIC Educational Resources Information Center
von Oertzen, Timo; Boker, Steven M.
2010-01-01
This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
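Time delay embedding itself is simple to sketch: the data matrix is built from overlapping, lagged windows of the series (a minimal illustration of the general technique, not the authors' implementation):

```python
def time_delay_embed(series, dim, lag=1):
    """Build a time-delay-embedded data matrix from a 1-D series.
    Row t is [x[t], x[t+lag], ..., x[t+(dim-1)*lag]], so consecutive
    rows are overlapping samples of the underlying function."""
    last = len(series) - (dim - 1) * lag
    return [[series[t + j * lag] for j in range(dim)]
            for t in range(last)]
```

Because each observation appears in up to `dim` rows, the embedded matrix carries local derivative information that a single-column representation lacks, which is the intuition behind the precision gain reported above.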
NASA Technical Reports Server (NTRS)
Owe, M.
1981-01-01
Using aerial photographs dating back to 1937, the historical trends of five land use classes (crop, forest, open field, urban and suburban) are determined. The relationships between these and various flow regime parameters are investigated. Annual runoff is found to be 7.5 inches greater now than in 1932. It is also found that growing-season runoff increased by 3.5 inches during the same period. This increase is approximately equivalent to 160 area inches of excess runoff during the 45-year period of observation. The increase in runoff is found to be positively correlated with the percent basin area in the urban, suburban and open field land use classes. A negative correlation is established with forest and crop land. Although poor correlations are found with high flow, low flow, flow interval and flow date data, it is thought that a more precise quantification of land use or a smaller basin area might have yielded more conclusive results for streamflow timing data.
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye
2018-05-01
Analyzing the correlations between the noise in different components of GPS stations helps in obtaining more accurate uncertainties of station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California, with a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations before and after removal of the CME, while the correlations between flicker noise amplitude vectors in the horizontal and vertical components are enhanced from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, each representing a unique function of the noise amplitude in two components, are of practical value after removing the CME. According to the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in the two components.
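The one-dimensional linear regression between the noise amplitude vectors of two components reduces to ordinary least squares plus a Pearson correlation coefficient; a minimal sketch (illustrative only, not the study's code):

```python
def linear_regression(x, y):
    """Ordinary least-squares fit y = slope*x + intercept for two
    amplitude vectors, returning (slope, intercept, r), where r is the
    Pearson correlation coefficient used to grade the relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r
```

Applied to, say, the white-noise amplitudes of the east and up components across stations, the fitted equation lets one predict the amplitude in one component from the other, which is the practical value the significance tests above assess.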
Tuijtel, Maarten W; Mulder, Aat A; Posthuma, Clara C; van der Hoeven, Barbara; Koster, Abraham J; Bárcena, Montserrat; Faas, Frank G A; Sharp, Thomas H
2017-09-05
Correlative light-electron microscopy (CLEM) combines the high spatial resolution of transmission electron microscopy (TEM) with the capability of fluorescence light microscopy (FLM) to locate rare or transient cellular events within a large field of view. CLEM is therefore a powerful technique to study cellular processes. Aligning images derived from both imaging modalities is a prerequisite to correlate the two microscopy data sets, and poor alignment can limit interpretability of the data. Here, we describe how uranyl acetate, a commonly used contrast agent for TEM, can be induced to fluoresce brightly at cryogenic temperatures (-195 °C) and imaged by cryoFLM using standard filter sets. This dual-purpose contrast agent can be used as a general tool for CLEM, whereby the equivalent staining allows direct correlation between fluorescence and TEM images. We demonstrate the potential of this approach by performing multi-colour CLEM of cells containing equine arteritis virus proteins tagged with either green- or red-fluorescent protein, and achieve high-precision localization of virus-induced intracellular membrane modifications. Using uranyl acetate as a dual-purpose contrast agent, we achieve an image alignment precision of ~30 nm, twice as accurate as when using fiducial beads, which will be essential for combining TEM with the evolving field of super-resolution light microscopy.
DORIS-based point mascons for the long term stability of precise orbit solutions
NASA Astrophysics Data System (ADS)
Cerri, L.; Lemoine, J. M.; Mercier, F.; Zelensky, N. P.; Lemoine, F. G.
2013-08-01
In recent years, non-tidal Time Varying Gravity (TVG) has emerged as the most important contributor to the error budget of Precision Orbit Determination (POD) solutions for altimeter satellite orbits. The Gravity Recovery And Climate Experiment (GRACE) mission has provided POD analysts with static and time-varying gravity models that are very accurate over the 2002-2012 time interval, but whose linear rates cannot be safely extrapolated before and after the GRACE lifespan. One such model, based on a combination of data from GRACE and Lageos from 2002-2010, is used in the dynamic POD solutions developed for the Geophysical Data Records (GDRs) of the Jason series of altimeter missions and the equivalent products from lower-altitude missions such as Envisat, Cryosat-2, and HY-2A. In order to accommodate long-term time-variable gravity variations not included in the background geopotential model, we assess the feasibility of using DORIS data to observe local mass variations using point mascons. In particular, we show that the point-mascon approach can stabilize the geographically correlated orbit errors, which are of fundamental interest for the analysis of regional Mean Sea Level trends based on altimeter data, and can therefore provide an interim solution in the event of GRACE data loss. The time series of point-mass solutions for Greenland and Antarctica show good agreement with independent series derived from GRACE data, indicating mass loss at rates of 210 Gt/year and 110 Gt/year, respectively.
Adaptive force sonorheometry for assessment of whole blood coagulation.
Mauldin, F William; Viola, Francesco; Hamer, Theresa C; Ahmed, Eman M; Crawford, Shawna B; Haverstick, Doris M; Lawrence, Michael B; Walker, William F
2010-05-02
Viscoelastic diagnostics that monitor the hemostatic function of whole blood (WB), such as thromboelastography, have been developed with demonstrated clinical utility. By measuring the cumulative effects of all components of hemostasis, viscoelastic diagnostics have circumvented many of the challenges associated with more common tests of blood coagulation. We describe a new technology, called sonorheometry, that adaptively applies acoustic radiation force to assess coagulation function in WB. The repeatability (precision) of coagulation parameters was assessed using citrated WB samples. A reference range of coagulation parameters, along with corresponding measurements from prothrombin time (PT) and partial thromboplastin time (PTT), were obtained from WB samples of 20 healthy volunteers. In another study, sonorheometry monitored anticoagulation with heparin (0-5 IU/ml) and reversal from varied dosages of protamine (0-10 IU/ml) in heparinized WB (2 IU/ml). Sonorheometry exhibited low CVs for parameters: clot initiation time (TC1), <7%; clot stabilization time (TC2), <6.5%; and clotting angle (theta), <3.5%. Good correlation was observed between clotting times, TC1 and TC2, and PTT (r=0.65 and 0.74 respectively; n=18). Linearity to heparin dosage was observed with average linearity r>0.98 for all coagulation parameters. We observed maximum reversal of heparin anticoagulation at protamine to heparin ratios of 1.4:1 from TC1 (P=0.6) and 1.2:1 from theta (P=0.55). Sonorheometry is a non-contact method for precise assessment of WB coagulation. Copyright 2010 Elsevier B.V. All rights reserved.
Höhn, K; Fuchs, J; Fröber, A; Kirmse, R; Glass, B; Anders-Össwein, M; Walther, P; Kräusslich, H-G; Dietrich, C
2015-08-01
In this study, we present a correlative microscopy workflow to combine detailed 3D fluorescence light microscopy data with ultrastructural information gained by 3D focused ion beam assisted scanning electron microscopy. The workflow is based on an optimized high-pressure freezing/freeze-substitution protocol that preserves good ultrastructural detail while retaining the fluorescence signal in the resin-embedded specimens. Consequently, cellular structures of interest can readily be identified and imaged by state-of-the-art 3D confocal fluorescence microscopy and are precisely referenced with respect to an imprinted coordinate system on the surface of the resin block. This allows precise guidance of the focused ion beam assisted scanning electron microscopy and limits the volume to be imaged to the structure of interest. This, in turn, minimizes the total acquisition time necessary for the time-consuming ultrastructural scanning electron microscope imaging while eliminating the risk of missing parts of the target structure. We illustrate the value of this workflow by targeting virus compartments, which are formed in HIV-pulsed mature human dendritic cells. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Application of troposphere model from NWP and GNSS data into real-time precise positioning
NASA Astrophysics Data System (ADS)
Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw
2016-04-01
Empirical tropospheric delay models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as the GPT (global pressure/temperature) model or the UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. A possible solution is to use regional troposphere models based on real-time or near-real-time measurements. We implement a regional troposphere model into the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing modes and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of the PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. An alternative solution is to introduce an external high-quality regional troposphere delay model to constrain the troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from the Numerical Weather Prediction model WRF (Weather Research and Forecasting) and ZTDs from ground-based GNSS stations, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.
Centennial to millennial variations of atmospheric methane during the early Holocene
NASA Astrophysics Data System (ADS)
Yang, Ji-Woong; Ahn, Jinho; Brook, Edward
2015-04-01
Atmospheric CH4 is one of the most important greenhouse gases. Ice core studies have revealed strong correlations between millennial CH4 variations and Greenland climate during the last glacial period. However, millennial to sub-millennial CH4 variations during interglacial periods are not well studied. Recently, several high-resolution data sets have been produced for the late Holocene, but it is difficult to distinguish natural from anthropogenic changes. In contrast, the methane budget of the early Holocene is not affected by anthropogenic disturbances, and thus may help us better understand natural CH4 control mechanisms under interglacial climate boundary conditions. Here we present our new high-precision and high-resolution atmospheric CH4 record from the Siple Dome ice core, Antarctica, that covers the early Holocene. We used our new wet extraction system at Seoul National University, which achieves a good precision of ~1 ppb. Our data show several tens of ppb of centennial to millennial CH4 variations and an anti-correlated evolution with Greenland climate on the millennial time scale. The CH4 record could have been affected by many different types of forcing, including temperature, precipitation (monsoon intensity), biomass burning, sea surface temperature, and solar activity. According to our data, early Holocene CH4 is well correlated with records of hematite-stained grains (HSG) in North Atlantic sediments, within age uncertainties. A red-noise spectral analysis yields peaks at frequencies of ~1270 and ~80 years, which are similar to solar frequencies, but further investigation is needed to determine the major controlling factor of atmospheric CH4 during the early Holocene.
Bello-Silva, Marina Stella; Wehner, Martin; Eduardo, Carlos de Paula; Lampert, Friedrich; Poprawe, Reinhart; Hermans, Martin; Esteves-Oliveira, Marcella
2013-01-01
This study aimed to evaluate the possibility of introducing ultra-short pulsed lasers (USPL) in restorative dentistry by maintaining the well-known benefits of lasers for caries removal, but also overcoming disadvantages, such as thermal damage of irradiated substrate. USPL ablation of dental hard tissues was investigated in two phases. Phase 1--different wavelengths (355, 532, 1,045, and 1,064 nm), pulse durations (picoseconds and femtoseconds) and irradiation parameters (scanning speed, output power, and pulse repetition rate) were assessed for enamel and dentin. Ablation rate was determined, and the temperature increase measured in real time. Phase 2--the most favorable laser parameters were evaluated to correlate temperature increase to ablation rate and ablation efficiency. The influence of cooling methods (air, air-water spray) on ablation process was further analyzed. All parameters tested provided precise and selective tissue ablation. For all lasers, faster scanning speeds resulted in better interaction and reduced temperature increase. The most adequate results were observed for the 1064-nm ps-laser and the 1045-nm fs-laser. Forced cooling caused moderate changes in temperature increase, but reduced ablation, being considered unnecessary during irradiation with USPL. For dentin, the correlation between temperature increase and ablation efficiency was satisfactory for both pulse durations, while for enamel, the best correlation was observed for fs-laser, independently of the power used. USPL may be suitable for cavity preparation in dentin and enamel, since effective ablation and low temperature increase were observed. If adequate laser parameters are selected, this technique seems to be promising for promoting the laser-assisted, minimally invasive approach.
Finite size of hadrons and Bose-Einstein correlations
NASA Astrophysics Data System (ADS)
Bialas, A.; Zalewski, K.
2013-11-01
It is observed that the finite size of hadrons produced in high energy collisions implies that their positions are correlated, since the probability to find two hadrons on top of each other is highly reduced. It is then shown that this effect can naturally explain the values of the correlation function below one, observed at LEP and LHC for pairs of identical pions. The aim of this note is to emphasize the role of inter-hadron correlations in the explanation of the observed negative values of C(p1,p2)-1 and to point out that a natural source of such inter-hadron correlations can be provided by the finite sizes of the produced hadrons. Several comments are in order. (i) Our use of the Θ-function to parametrize the excluded volume correlations is clearly only a crude approximation. For a precise description of the data, a more sophisticated parametrization of the effect will almost certainly be needed. In particular, note that with our parametrization the correlation in space-time does not affect the single-particle and two-particle non-symmetrized momentum distributions. The same comment applies to our use of Gaussians. (ii) It has been recently found [6,7] that in pp collisions at LHC, the volume of the system (as determined from the fitted HBT parameters) depends weakly on the multiplicity of the particles produced in the collision. This suggests that large multiplicity in an event is due to a longer emission time. If true, this should also be reflected in the HBT measurements, and it may be interesting to investigate this aspect of the problem in more detail. (iii) To investigate further the space and/or time correlations between the emitted particles, more information is needed. It would be interesting to study the minima in the correlation functions separately for the "side", "out" and "long" directions. Such studies may allow determination of the size of the "excluded volume" and comparison with other estimates [14,15].
We also feel that, with the present accuracy and statistics of the data, measurements of three-particle Bose-Einstein correlations have the potential to provide some essential information helping to understand what is really going on.
Performance Evaluation of Real-Time Precise Point Positioning Method
NASA Astrophysics Data System (ADS)
Alcay, Salih; Turgut, Muzeyyen
2017-12-01
Post-processed Precise Point Positioning (PPP) is a well-known zero-difference positioning method which provides accurate and precise results. After experimental tests, the IGS Real Time Service (RTS) officially provided real-time orbit and clock products for the GNSS community, allowing real-time (RT) PPP applications. Different software packages can be used for RT-PPP. In this study, in order to evaluate the performance of RT-PPP, 3 IGS stations are used. Results, obtained using the BKG Ntrip Client (BNC) software v2.12, are examined in terms of both accuracy and precision.
Noise analysis of GPS time series in Taiwan
NASA Astrophysics Data System (ADS)
Lee, You-Chia; Chang, Wu-Lung
2017-04-01
The Global Positioning System (GPS) is widely used in studies of plate tectonics and crustal deformation. Most studies have considered only time-independent noise (white noise) in GPS time series, but time-dependent noise (flicker noise, random walk noise), identified over nearly twenty years of observations, is also important to the precision of the data. The rate uncertainties of stations will be underestimated if only time-independent noise is assumed in the GPS time series. Studying the noise properties of GPS time series is therefore necessary in order to assess the precision and reliability of velocity estimates. Our GPS time series come from over 500 stations around Taiwan, with time spans from 2.5 years up to 20 years. The GPS stations include different monument types such as deep drill braced, roof, metal tripod, and concrete pier; the most common type in Taiwan is the metal tripod. We investigated the noise properties of continuous GPS time series by using the spectral index and amplitude of the power-law noise. During the process we first remove the data outliers, then estimate the linear trend, size of offsets, and seasonal signals, and finally estimate the amplitudes of the power-law and white noise simultaneously. Our preliminary results show that the noise amplitudes of the north component are smaller than those of the other two components, and the largest amplitudes are in the vertical. We also find that the amplitudes of white noise and power-law noise are positively correlated in all three components. Comparisons of noise amplitudes of different monument types in Taiwan reveal that the deep drill braced monuments have smaller data uncertainties and are therefore more stable than other monuments.
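The noise classification in the abstract above rests on the spectral index: white noise has a flat power spectrum, while random-walk noise falls off roughly as f^-2. The sketch below estimates the index as the log-log slope of a periodogram; it is a minimal stdlib-Python illustration, not the paper's estimator (which fits noise amplitudes jointly with trend, offsets, and seasonal terms).

```python
import cmath
import math
import random

def periodogram(x, freqs):
    """Power spectrum of a real series at the given normalized frequencies,
    via a direct DFT (O(N*F) is fine for a sketch)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    powers = []
    for f in freqs:
        s = sum(v * cmath.exp(-2j * math.pi * f * k) for k, v in enumerate(x))
        powers.append(abs(s) ** 2 / n)
    return powers

def spectral_slope(x, nfreq=40):
    """Least-squares slope of log10(power) vs log10(frequency): the spectral
    index estimate (~0 for white noise, ~-2 for random-walk noise)."""
    n = len(x)
    freqs = [(k + 1) / n for k in range(nfreq)]
    p = periodogram(x, freqs)
    lx = [math.log10(f) for f in freqs]
    ly = [math.log10(max(v, 1e-30)) for v in p]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

random.seed(1)
white = [random.gauss(0, 1) for _ in range(512)]
walk, acc = [], 0.0
for w in white:            # random walk = cumulative sum of white noise
    acc += w
    walk.append(acc)
print(spectral_slope(white), spectral_slope(walk))
```

The white-noise slope comes out near 0 and the random-walk slope near -2, separating the two noise regimes the abstract distinguishes.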
Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision
Yang, Bingwei; Xie, Xinhao; Li, Duan
2018-01-01
Time-of-flight (TOF) based light detection and ranging (LiDAR) calculates distance from the time of flight between start/stop signals. In our lab-built LiDAR, two systems measure the flight time between start/stop signals: a time-to-digital converter (TDC) that counts the time between trigger signals, and an analog-to-digital converter (ADC) that processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the ranging accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete-return-based ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a novel statistical method, WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Thus keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
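The core relation is range = c·t/2 for the round-trip TOF. A minimal sketch of waveform-based peak ranging follows; the parabolic sub-sample interpolation is a common generic peak estimator, assumed here for illustration rather than taken from the paper's WR-PK implementation.

```python
import math

C = 299792458.0  # speed of light, m/s

def gaussian_pulse(t, t0, sigma):
    """Idealized return pulse centered at time t0."""
    return math.exp(-0.5 * ((t - t0) / sigma) ** 2)

def peak_time(samples, dt):
    """Peak-detection ranging on a digitized waveform: find the maximum
    sample, then refine to sub-sample precision with a parabola fit
    through the three samples around the peak."""
    i = max(range(len(samples)), key=lambda k: samples[k])
    if 0 < i < len(samples) - 1:
        y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
        denom = y0 - 2.0 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom else 0.0
    else:
        frac = 0.0
    return (i + frac) * dt

# Simulate a 150 m target sampled by a 1 GS/s ADC.
dt = 1e-9
true_range = 150.0
tof = 2.0 * true_range / C                  # round-trip time, ~1.0007 us
samples = [gaussian_pulse(k * dt, tof, 5e-9) for k in range(2048)]
est_range = C * peak_time(samples, dt) / 2.0
print(est_range)  # close to 150 m, well below the 15 cm sample spacing
```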
NASA Astrophysics Data System (ADS)
Crosnier de Bellaistre, C.; Trefzger, C.; Aspect, A.; Georges, A.; Sanchez-Palencia, L.
2018-01-01
We study numerically the expansion dynamics of an initially confined quantum wave packet in the presence of a disordered potential and a uniform bias force. For white-noise disorder, we find that the wave packet develops asymmetric algebraic tails for any ratio of the force to the disorder strength. The exponent of the algebraic tails decays smoothly with that ratio, and no evidence of critical behavior in the wave density profile is found. Algebraic localization features a series of critical values of the force-to-disorder strength at which the m-th position moment of the wave packet diverges. Below the critical value for the m-th moment, we find fair agreement between the asymptotic long-time value of the m-th moment and the predictions of diagrammatic calculations. Above it, we find that the m-th moment grows algebraically in time. For correlated disorder, we find evidence of systematic delocalization, irrespective of the model of disorder. More precisely, we find a two-step dynamics, where both the center-of-mass position and the width of the wave packet show transient localization, similar to the white-noise case, at short times and delocalization at sufficiently long times. This correlation-induced delocalization is interpreted as due to the decrease of the effective de Broglie wavelength, which lowers the effective strength of the disorder in the presence of finite-range correlations.
2010-11-01
CDMA base stations are each synchronized by GPS receivers; they provide an indirect link to GPS system time and UTC time. The major stock...antenna synchronizes the Local Area Network (LAN) to within 10 microseconds of UTC using the IEEE-1588 Precision Time Protocol (PTP). This is an...activities. Understanding and measuring latency on the LAN is key to the success of HFTs. Without precise time synchronization below 1 millisecond
Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics.
The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
Estimating the effective system dead time parameter for correlated neutron counting
NASA Astrophysics Data System (ADS)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.
2017-11-01
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. 
The method is simple to apply compared to the predominant present approach which involves using a set of 252Cf sources of wide emission rate, it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather, varies systematically with gate width.
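For intuition about the dead-time model discussed above: in the simplest non-paralyzable picture, a true rate n is recorded as m = n/(1 + n·τ), which can be inverted for τ. The sketch below simulates and inverts that relation; it is a simplified rate-based illustration, not the moment-based gate method developed in the paper.

```python
import random

def apply_dead_time(event_times, tau):
    """Non-paralyzable (non-extending) dead time: after each recorded
    event, any arrival within tau seconds is lost."""
    recorded = []
    last = -float("inf")
    for t in event_times:
        if t - last >= tau:
            recorded.append(t)
            last = t
    return recorded

def estimate_tau(true_rate, measured_rate):
    """Invert m = n / (1 + n*tau) for tau."""
    return (true_rate - measured_rate) / (true_rate * measured_rate)

random.seed(2)
tau, rate, t_end = 1e-6, 2e5, 2.0    # illustrative values
# Poisson arrivals: exponential inter-event times
times, t = [], 0.0
while t < t_end:
    t += random.expovariate(rate)
    times.append(t)
n = len(times) / t_end
m = len(apply_dead_time(times, tau)) / t_end
print(estimate_tau(n, m))  # recovers tau ~ 1e-6 s up to counting statistics
```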
Shear-rate dependence of the viscosity of the Lennard-Jones liquid at the triple point
NASA Astrophysics Data System (ADS)
Ferrario, M.; Ciccotti, G.; Holian, B. L.; Ryckaert, J. P.
1991-11-01
High-precision molecular-dynamics (MD) data are reported for the shear viscosity η of the Lennard-Jones liquid at its triple point, as a function of the shear rate ɛ˙ for a large system (N=2048). The Green-Kubo (GK) value η(ɛ˙=0)=3.24±0.04 is estimated from a run of 3.6×10^6 steps (40 nsec). We find no numerical evidence of a t^(-3/2) long-time tail in the GK integrand (stress-stress time-correlation function). From our nonequilibrium MD results, obtained both at small and large values of ɛ˙, a consistent picture emerges that supports an analytical (quadratic at low shear rate) dependence of the viscosity on ɛ˙.
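The Green-Kubo estimate integrates the stress-stress time-correlation function, η = V/(k_B·T) ∫⟨P_xy(0)P_xy(t)⟩dt. A minimal sketch, with the V/(k_B·T) prefactor folded into one constant; the exponential correlation function used in the demo is a synthetic stand-in for MD data, chosen because its integral is known exactly.

```python
import math

def autocorrelation(x, max_lag):
    """Time-averaged autocorrelation <x(0)x(t)> of a stationary series."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

def green_kubo(acf, dt, prefactor=1.0):
    """Trapezoidal integral of a correlation function, times a prefactor
    standing in for V/(kB*T) in the Green-Kubo viscosity formula."""
    integral = sum(0.5 * (a + b) * dt for a, b in zip(acf, acf[1:]))
    return prefactor * integral

# Sanity check on a known case: for C(t) = C0 * exp(-t/tau) the integral
# is C0 * tau, so the estimate should land near 2.0 * 0.5 = 1.0.
dt, tau, c0 = 0.01, 0.5, 2.0
acf = [c0 * math.exp(-k * dt / tau) for k in range(5000)]
print(green_kubo(acf, dt))
```

With MD data one would pass `autocorrelation(stress_series, max_lag)` as `acf` instead of the synthetic curve.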
Modified interferometric imaging condition for reverse-time migration
NASA Astrophysics Data System (ADS)
Guo, Xue-Bao; Liu, Hong; Shi, Ying
2018-01-01
For reverse-time migration, high-resolution imaging mainly depends on the accuracy of the velocity model and the imaging condition. In practice, however, the small-scale components of the velocity model cannot be estimated by tomographic methods; therefore, the wavefields are not accurately reconstructed from the background velocity, and the imaging process generates artefacts. Some of the noise is due to cross-correlation of unrelated seismic events. The interferometric imaging condition suppresses imaging noise very effectively, especially the unknown random disturbance of the small-scale part. The conventional interferometric imaging condition is extended in this study to obtain a new imaging condition based on the pseudo-Wigner distribution function (WDF). Numerical examples show that the modified interferometric imaging condition improves imaging precision.
Comparing Optical Oscillators across the Air to Milliradians in Phase and 10^{-17} in Frequency.
Sinclair, Laura C; Bergeron, Hugo; Swann, William C; Baumann, Esther; Deschênes, Jean-Daniel; Newbury, Nathan R
2018-02-02
We demonstrate carrier-phase optical two-way time-frequency transfer (carrier-phase OTWTFT) through the two-way exchange of frequency comb pulses. Carrier-phase OTWTFT achieves frequency comparisons with a residual instability of 1.2×10^{-17} at 1 s across a turbulent 4-km free space link, surpassing previous OTWTFT by 10-20 times and enabling future high-precision optical clock networks. Furthermore, by exploiting the carrier phase, this approach is able to continuously track changes in the relative optical phase of distant optical oscillators to 9 mrad (7 as) at 1 s averaging, effectively extending optical phase coherence over a broad spatial network for applications such as correlated spectroscopy between distant atomic clocks.
A digital correlator upgrade for the Arcminute MicroKelvin Imager
NASA Astrophysics Data System (ADS)
Hickish, Jack; Razavi-Ghods, Nima; Perrott, Yvette C.; Titterington, David J.; Carey, Steve H.; Scott, Paul F.; Grainge, Keith J. B.; Scaife, Anna M. M.; Alexander, Paul; Saunders, Richard D. E.; Crofts, Mike; Javid, Kamran; Rumsey, Clare; Jin, Terry Z.; Ely, John A.; Shaw, Clive; Northrop, Ian G.; Pooley, Guy; D'Alessandro, Robert; Doherty, Peter; Willatt, Greg P.
2018-04-01
The Arcminute Microkelvin Imager (AMI) telescopes located at the Mullard Radio Astronomy Observatory near Cambridge have been significantly enhanced by the implementation of a new digital correlator with 1.2 MHz spectral resolution. This system has replaced a 750-MHz resolution analogue lag-based correlator, and was designed to mitigate the effects of radio frequency interference, particularly that from geostationary satellites which are visible from the AMI site when observing at low declinations. The upgraded instrument consists of 18 ROACH2 Field Programmable Gate Array platforms used to implement a pair of real-time FX correlators - one for each of AMI's two arrays. The new system separates the down-converted RF baseband signal from each AMI receiver into two sub-bands, each of which is filtered to a width of 2.3 GHz and digitized at 5 Gsps with 8 bits of precision. These digital data streams are filtered into 2048 frequency channels and cross-correlated using FPGA hardware, with a commercial 10 Gb Ethernet switch providing high-speed data interconnect. Images formed using data from the new digital correlator show over an order of magnitude improvement in dynamic range over the previous system. The ability to observe at low declinations has also been significantly improved.
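An FX correlator first channelizes each input with a Fourier transform (the "F" stage), then cross-multiplies conjugated spectra and accumulates (the "X" stage). A toy stdlib-Python sketch of that flow; a real system like AMI's uses polyphase filter banks on FPGAs, and the sizes here are illustrative.

```python
import cmath
import math

def dft(x):
    """Plain n-point DFT (O(n^2); fine for a toy)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * f * k / n) for k in range(n))
            for f in range(n)]

def fx_correlate(x, y, nchan, navg):
    """FX correlation: channelize consecutive blocks of each stream,
    then accumulate X[c] * conj(Y[c]) per frequency channel."""
    acc = [0j] * nchan
    for i in range(navg):
        xs = dft(x[i * nchan:(i + 1) * nchan])
        ys = dft(y[i * nchan:(i + 1) * nchan])
        for c in range(nchan):
            acc[c] += xs[c] * ys[c].conjugate()
    return acc

# A tone in channel 3, with a 0.5 rad phase offset between the two inputs:
nchan, navg = 16, 4
x = [cmath.exp(2j * math.pi * 3 * k / nchan) for k in range(nchan * navg)]
y = [v * cmath.exp(-0.5j) for v in x]
spec = fx_correlate(x, y, nchan, navg)
print(round(abs(spec[3]), 1), round(cmath.phase(spec[3]), 3))  # → 1024.0 0.5
```

The recovered cross-spectrum phase (0.5 rad in channel 3) is the interferometric observable from which visibilities are formed.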
Polarization and amplitude probes in Hanle effect EIT noise spectroscopy of a buffer gas cell
NASA Astrophysics Data System (ADS)
O'Leary, Shannon; Zheng, Aojie; Crescimanno, Michael
2015-05-01
Noise correlation spectroscopy on systems manifesting Electromagnetically Induced Transparency (EIT) holds promise as a simple, robust method for performing high-resolution spectroscopy in applications such as EIT-based atomic magnetometry and clocks. While the conversion of laser phase noise into intensity noise can diminish the precision of EIT applications, noise correlation techniques transform that noise into a useful spectroscopic tool that can improve the application's precision. We study intensity noise, originating from the large phase noise of a semiconductor diode laser's light, in Rb vapor EIT in the Hanle configuration. We report here on our recent experimental work on, and complementary theoretical modeling of, the effects of light polarization preparation and post-selection on the correlation spectrum and on the independent noise channel traces. We also explain our methodology and recent results for delineating the effects of residual laser amplitude fluctuations on the correlation noise resonance as compared to other contributing processes. Understanding these subtleties is essential for optimizing EIT-noise applications.
Analytical solution of tt¯ dilepton equations
NASA Astrophysics Data System (ADS)
Sonnenschein, Lars
2006-03-01
The top quark antiquark production system in the dilepton decay channel is described by a set of equations which is nonlinear in the unknown neutrino momenta. Its most precise and least time-consuming solution is of major importance for measurements of top quark properties like the top quark mass and tt¯ spin correlations. The initial system of equations can be transformed into two polynomial equations with two unknowns by means of elementary algebraic operations. These two polynomials of multidegree two can be reduced to one univariate polynomial of degree four by means of resultants. The obtained quartic equation is solved analytically.
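The reduction ends in a quartic solved analytically. Below is a sketch of the classical Cardano/Ferrari route (depress the quartic, solve the resolvent cubic, factor into two quadratics) with illustrative coefficients; the actual tt̄ kinematic coefficients are built from the measured lepton and b-jet momenta and are not reproduced here.

```python
import cmath

def cubic_root(a3, a2, a1, a0):
    """One (possibly complex) root of a3*m^3 + a2*m^2 + a1*m + a0 = 0,
    via Cardano's formula."""
    b, c, d = a2 / a3, a1 / a3, a0 / a3
    p = c - b * b / 3.0
    q = 2.0 * b ** 3 / 27.0 - b * c / 3.0 + d
    disc = cmath.sqrt((q / 2.0) ** 2 + (p / 3.0) ** 3)
    u = (-q / 2.0 + disc) ** (1.0 / 3.0)
    if abs(u) < 1e-14:
        u = (-q / 2.0 - disc) ** (1.0 / 3.0)
    if abs(u) < 1e-14:
        return -b / 3.0
    return u - p / (3.0 * u) - b / 3.0

def quadratic_roots(b, c):
    """Roots of x^2 + b*x + c = 0."""
    d = cmath.sqrt(b * b - 4.0 * c)
    return [(-b + d) / 2.0, (-b - d) / 2.0]

def solve_quartic(a, b, c, d, e):
    """All four (complex) roots of a*x^4 + b*x^3 + c*x^2 + d*x + e = 0."""
    b, c, d, e = b / a, c / a, d / a, e / a
    # Depress: x = y - b/4 gives y^4 + p*y^2 + q*y + r = 0.
    p = c - 3.0 * b * b / 8.0
    q = d - b * c / 2.0 + b ** 3 / 8.0
    r = e - b * d / 4.0 + b * b * c / 16.0 - 3.0 * b ** 4 / 256.0
    if abs(q) < 1e-12:                     # biquadratic special case
        ys = []
        for z in quadratic_roots(p, r):
            s = cmath.sqrt(z)
            ys += [s, -s]
    else:
        # Resolvent cubic: 8m^3 + 8p*m^2 + (2p^2 - 8r)m - q^2 = 0.
        m = cubic_root(8.0, 8.0 * p, 2.0 * p * p - 8.0 * r, -q * q)
        s = cmath.sqrt(2.0 * m)
        ys = (quadratic_roots(-s, p / 2.0 + m + q / (2.0 * s)) +
              quadratic_roots(s, p / 2.0 + m - q / (2.0 * s)))
    return [y - b / 4.0 for y in ys]

roots = solve_quartic(1.0, -11.0, 41.0, -61.0, 30.0)   # (x-1)(x-2)(x-3)(x-5)
print([round(v, 6) for v in sorted(y.real for y in roots)])  # → [1.0, 2.0, 3.0, 5.0]
```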
NASA Astrophysics Data System (ADS)
Carpi, Laura; Masoller, Cristina
2018-02-01
Many natural systems display transitions among different dynamical regimes, which are difficult to identify when the data are noisy and high dimensional. A technologically relevant example is a fiber laser, which can display complex dynamical behaviors that involve nonlinear interactions of millions of cavity modes. Here we study the laminar-turbulence transition that occurs when the laser pump power is increased. By applying various data analysis tools to empirical intensity time series we characterize their persistence and demonstrate that at the transition temporal correlations can be precisely represented by a surprisingly simple model.
Decorrelation of L-band and C-band interferometry to volcanic risk prevention
NASA Astrophysics Data System (ADS)
Malinverni, E. S.; Sandwell, D.; Tassetti, A. N.; Cappelletti, L.
2013-10-01
SAR has several strong key features: fine spatial resolution/precision and high temporal pass frequency. Moreover, the InSAR technique allows accurate detection of ground deformation. This high-potential technology can be invaluable for studying volcanoes: it provides important information on pre-eruption surface deformation, improving the understanding of volcanic processes and the ability to predict eruptions. As a downside, SAR measurements are influenced by artifacts such as atmospheric effects or bad topographic data. Correlation gives a measure of these interferences, quantifying the similarity of the phase of two SAR images. Different approaches exist to reduce these errors, but the main concern remains the ability to correlate images with different acquisition times: snow-covered or heavily vegetated areas produce seasonal changes on the surface. Minimizing the time between passes partly limits decorrelation. However, images with a short temporal baseline are not always available, and some artifacts affecting correlation are time-independent. This work studies the correlation of pairs of SAR images, focusing on the influence of surface and climate conditions, especially snow coverage and temperature. Furthermore, the effects of the acquisition band on correlation are taken into account by comparing L-band and C-band images. All the chosen images cover most of the Yellowstone caldera (USA) over a span of 4 years, sampling all the seasons. Interferograms and correlation maps are generated. To isolate temporal decorrelation, pairs of images with the shortest baseline are chosen. Correlation maps are analyzed in relation to snow depth and temperature. Results obtained with the ENVISAT and ERS satellites (C-band) are compared with those from ALOS (L-band). Results show good performance during winter and poor performance over wet snow (spring and fall). During summer, both L-band and C-band maintain good coherence, with L-band performing better over vegetation.
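The correlation maps discussed above are built from the interferometric coherence, conventionally estimated over a local window of co-registered complex pixels as |Σ s1·s2*| / sqrt(Σ|s1|² · Σ|s2|²). A minimal sketch with synthetic pixel values:

```python
import cmath
import math

def coherence(s1, s2):
    """InSAR-style coherence estimate over a window of co-registered
    complex pixels, in [0, 1]. High values mean the interferometric
    phase is reliable; low values mean decorrelation."""
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    p1 = sum(abs(a) ** 2 for a in s1)
    p2 = sum(abs(b) ** 2 for b in s2)
    return abs(num) / math.sqrt(p1 * p2)

window = [cmath.exp(1j * 0.3 * k) for k in range(8)]
same_phase = [v * cmath.exp(0.9j) for v in window]       # constant phase offset
scrambled = [cmath.exp(2j * math.pi * k / 8) for k in range(8)]
ones = [1.0 + 0j] * 8
print(round(coherence(window, same_phase), 6))  # → 1.0 (fully coherent)
print(round(coherence(scrambled, ones), 6))     # → 0.0 (decorrelated)
```

A deformation fringe is a constant phase offset, so it leaves coherence at 1; random surface change (wet snow, vegetation growth) scrambles the phase and drives coherence toward 0.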
Gating based on internal/external signals with dynamic correlation updates.
Wu, Huanmei; Zhao, Qingya; Berbeco, Ross I; Nishioka, Seiko; Shirato, Hiroki; Jiang, Steve B
2008-12-21
Precise localization of mobile tumor positions in real time is critical to the success of gated radiotherapy. Tumor positions are usually derived from either internal or external surrogates. Fluoroscopic gating based on internal surrogates, such as implanted fiducial markers, is accurate but requires a large imaging dose. Gating based on external surrogates, such as patient abdominal surface motion, is non-invasive but less accurate, owing to uncertainty in the correlation between the tumor location and the external surrogates. To address these complications, we propose an approach based on hybrid gating with dynamic internal/external correlation updates. In this approach, the external signal is acquired at high frequency (such as 30 Hz) while the internal signal is sparsely acquired (such as 0.5 Hz or less). The internal signal is used to validate and update the internal/external correlation during treatment. Tumor positions are derived from the external signal based on the newly updated correlation. Two dynamic correlation updating algorithms are introduced: one based on the motion amplitude and the other based on the motion phase. Nine patients with synchronized internal/external motion signals are simulated retrospectively to evaluate the effectiveness of hybrid gating. The influences of different clinical conditions on hybrid gating, such as the size of the gating window, the optimal timing for internal signal acquisition, and the acquisition frequency, are investigated. The results demonstrate that dynamically updating the internal/external correlation in or around the gating window reduces false positives at the cost of somewhat diminished treatment efficiency. This improvement will benefit patients with mobile tumors, and is greatest for early-stage lung cancers, in which the tumors are less attached or float freely in the lung.
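An amplitude-based correlation update can be pictured as a sliding-window linear refit between the sparse internal (imaged) samples and the dense external signal. The sketch below is an illustrative assumption of that idea: the class name, window size, and linear tumor = a·external + b model are ours, not the paper's algorithms.

```python
class HybridGating:
    """Toy amplitude-based internal/external correlation gating: refit a
    linear map from the most recent sparse internal samples, then apply
    it to the dense external signal."""

    def __init__(self, window=8):
        self.window = window
        self.pairs = []          # recent (external, internal) samples

    def update(self, external, internal):
        """Record one sparse internal observation paired with the
        simultaneous external reading; keep only the recent window."""
        self.pairs.append((external, internal))
        self.pairs = self.pairs[-self.window:]

    def predict(self, external):
        """Least-squares prediction of tumor position from the external
        signal, using the current windowed correlation."""
        n = len(self.pairs)
        mx = sum(e for e, _ in self.pairs) / n
        my = sum(t for _, t in self.pairs) / n
        sxx = sum((e - mx) ** 2 for e, _ in self.pairs)
        sxy = sum((e - mx) * (t - my) for e, t in self.pairs)
        a = sxy / sxx if sxx else 0.0
        return a * (external - mx) + my

g = HybridGating()
# The true relation drifts over time, as baseline shifts do in practice:
for step in range(40):
    a, b = 2.0 + 0.01 * step, 5.0 - 0.02 * step
    ext = (step * 7) % 10          # deterministic stand-in external signal
    g.update(ext, a * ext + b)
print(g.predict(4.0))  # follows the most recent samples' relation
```

Because only the last `window` pairs are kept, an old, stale correlation is gradually forgotten, which is the point of the dynamic update.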
Visual aided pacing in respiratory maneuvers
NASA Astrophysics Data System (ADS)
Rambaudi, L. R.; Rossi, E.; Mántaras, M. C.; Perrone, M. S.; Siri, L. Nicola
2007-11-01
A visual aid to pace self-controlled respiratory cycles in humans is presented. Respiratory manoeuvres must be performed in several clinical and research procedures, among them studies of Heart Rate Variability. Free-running respiration is difficult to correlate with other physiological variables, so voluntary self-control is asked of the individuals under study. Currently, an acoustic metronome is used to pace respiratory frequency; its main limitation is the impossibility of inducing predetermined timing of the stages within the respiratory cycle. In the present work, visually driven self-control was provided, with separate timing for the four stages of a normal respiratory cycle. This visual metronome (ViMet) was based on a microcontroller that switches an eight-LED bar on and off in a four-stage respiratory-cycle time series set by the operator on a handset. The precise timing is also shown on an alphanumeric display.
Spatial and Time Coincidence Detection of the Decay Chain of Short-Lived Radioactive Nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granja, Carlos; Jakubek, Jan; Platkevic, Michal
The quantum counting position-sensitive pixel detector Timepix, with per-pixel energy and time resolution, makes it possible to detect radioactive ions and register the consecutive decay chain by simultaneous position and time correlation. This spatial and timing coincidence technique in the same sensor is demonstrated by registering the decay chain ⁸He →(β⁻) ⁸Li and ⁸Li →(β⁻) ⁸Be → α + α, and by measuring the β-decay half-lives. Radioactive ions, selectively obtained from the Lohengrin fission-fragment spectrometer installed at the High Flux Reactor of the ILL Grenoble, are delivered to the Timepix silicon sensor, where the decays of the implanted ions and daughter nuclei are registered and visualized. We measure decay lifetimes in the range ≥ μs with precision limited only by counting statistics.
Precision Seismic Monitoring of Volcanic Eruptions at Axial Seamount
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Wilcock, W. S. D.; Tolstoy, M.; Baillard, C.; Tan, Y. J.; Schaff, D. P.
2017-12-01
Seven permanent ocean-bottom seismometers of the Ocean Observatories Initiative's real-time cabled observatory at Axial Seamount, off the coast of the western United States, have recorded seismic activity since 2014. The array captured the April 2015 eruption, shedding light on the detailed structure and dynamics of the volcano and the Juan de Fuca mid-ocean ridge system (Wilcock et al., 2016). After a period of continuously increasing seismic activity, primarily associated with the reactivation of caldera ring faults, and the subsequent seismic crisis of April 24, 2015, with 7000 events recorded that day, seismicity rates steadily declined; the array currently records an average of 5 events per day. Here we present results from ongoing efforts to automatically detect and precisely locate seismic events at Axial in real time, providing the computational framework and fundamental data for rapid characterization and analysis of spatio-temporal changes in seismogenic properties. We combine a kurtosis-based P- and S-phase onset picker with time-domain cross-correlation detection and phase-delay timing algorithms, together with single-event and double-difference location methods, to rapidly and precisely (to tens of meters) compute the locations and magnitudes of new events with respect to a 2-year, high-resolution background catalog of nearly 100,000 events within a 5×5 km region. We extend the real-time double-difference location software DD-RT to efficiently handle the anticipated high-rate, high-density earthquake activity during future eruptions. The modular monitoring framework will also allow real-time tracking of other seismic events, such as tremors and sea-floor lava explosions, whose timing and location can delineate lava flows and thus guide response research cruises to the most interesting sites.
Finally, rapid detection of eruption precursors and initiation will allow adaptive sampling by the OOI instruments for optimal recording of future eruptions. With a higher eruption recurrence rate than land-based volcanoes, the Axial OOI observatory offers the opportunity to monitor and study volcanic eruptions through multiple cycles.
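The cross-correlation phase-delay timing step mentioned above can be illustrated with a minimal sketch: the relative delay between two event waveforms is taken from the lag of the peak of their discrete cross-correlation. This is a generic textbook version (no normalization or sub-sample interpolation), not the DD-RT implementation; the function name is invented.

```python
import numpy as np

def cc_delay(x, y, dt):
    """Estimate the time delay (seconds) of waveform y relative to x
    from the peak of their full cross-correlation; dt is the sample
    interval. Positive result means y arrives later than x."""
    cc = np.correlate(y, x, mode="full")
    # numpy's 'full' output spans lags -(len(x)-1) .. len(y)-1
    lag = np.argmax(cc) - (len(x) - 1)
    return lag * dt
```

Double-difference relocation then uses many such pairwise delays between nearby events as data; that inversion step is beyond this sketch.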
The 22nd Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting
NASA Technical Reports Server (NTRS)
Sydnor, Richard L. (Editor)
1990-01-01
Papers presented at the 22nd Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting are compiled. The following subject areas are covered: Rb, Cs, and H-based frequency standards and cryogenic and trapped-ion technology; satellite laser tracking networks, GLONASS timing, intercomparison of national time scales and international telecommunications; telecommunications, power distribution, platform positioning, and geophysical survey industries; military communications and navigation systems; and dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communication satellites.
Submillihertz magnetic spectroscopy performed with a nanoscale quantum sensor
NASA Astrophysics Data System (ADS)
Schmitt, Simon; Gefen, Tuvia; Stürner, Felix M.; Unden, Thomas; Wolff, Gerhard; Müller, Christoph; Scheuer, Jochen; Naydenov, Boris; Markham, Matthew; Pezzagna, Sebastien; Meijer, Jan; Schwarz, Ilai; Plenio, Martin; Retzker, Alex; McGuinness, Liam P.; Jelezko, Fedor
2017-05-01
Precise timekeeping is critical to metrology, forming the basis by which standards of time, length, and fundamental constants are determined. Stable clocks are particularly valuable in spectroscopy because they define the ultimate frequency precision that can be reached. In quantum metrology, the qubit coherence time defines the clock stability, from which the spectral linewidth and frequency precision are determined. We demonstrate a quantum sensing protocol in which the spectral precision goes beyond the sensor coherence time and is limited instead by the stability of a classical clock. Using this technique, we observed a precision in frequency estimation scaling in time T as T^(-3/2) for classical oscillating fields. The narrow-linewidth magnetometer, based on single spins in diamond, is used to sense nanoscale magnetic fields with an intrinsic frequency resolution of 607 microhertz, eight orders of magnitude narrower than the linewidth set by the qubit coherence time.
NASA Astrophysics Data System (ADS)
Nelson, David; McManus, Barry; Shorter, Joanne; Zahniser, Mark; Ono, Shuhei
2014-05-01
The capacity for real-time, precise, in situ measurements of the isotopic ratios of a variety of trace gases at ambient concentrations continues to create new opportunities for the study of the exchanges and fluxes of gases in the environment. Aerodyne Research has made rapid progress in laser-based instruments since our 2007 introduction of the first truly field-worthy instrument for real-time measurement of the isotopologues of carbon dioxide. We have focused on two instrument design platforms, with either one or two lasers. Absorption cells with more than 200 m of path length allow precise measurements of trace gases with low ambient concentrations. Most of our systems employ mid-infrared quantum cascade lasers; however, recently available 3-micron antimonide-based diode lasers are also proving useful for isotopic measurements. By substituting different lasers and detectors, we can simultaneously measure the isotopic composition of a variety of gases, including H2O, CO2, CH4, N2O and CO. Our newest instrument for true simultaneous measurement of the isotopologues of CO2 (12CO2, 13CO2, 12C18O16O) has (1 s) precision better than 0.1 per mil for both ratios. The availability of 10 Hz measurements allows measurement of isotopic fluxes via eddy correlation. The single-laser instrument fits in a 19-inch rack and is only 25 cm tall. A two-laser instrument is larger, but with it we can also measure clumped isotopes of CO2, with 1-second precisions of 2.3 per mil for 13C18O16O and 6.7 per mil for 13C17O16O. The sample size for such a measurement corresponds to 0.2 micromole of pure CO2. Another variation on the two-laser instrument simultaneously measures isotopologues of CO2 (12CO2, 13CO2, 12C18O16O) and H2O (H216O, H218O, HD16O). Preliminary results for water ratio precisions (1 s) are 0.1 per mil for H218O and 0.3 per mil for HD16O, with simultaneous (1 s) precisions for the isotopologues of CO2 of ~0.1 per mil.
Methane, nitrous oxide and carbon monoxide have such low ambient concentrations that real-time isotopologue measurements are a serious challenge. For these gases, we typically use our 200 m absorption cell. Several of these instruments have already been used for long-term field measurements of the isotopologues of methane (12CH4, 13CH4), with a demonstrated (1 s) precision of 1.5 per mil. A new version of this instrument operating near 3.3 microns has recently been developed to quantify 13CH4 and CH3D simultaneously. In separate experiments at MIT, using trapped concentrated samples, we have made highly precise measurements of the abundance of the clumped isotopologue of methane, 13CH3D. We are also developing methods to monitor the isotopic abundance of CO and N2O. We have achieved a measurement precision for ambient 13CO (1 s) of 1.9 per mil. For the isotopologues of N2O (14N216O, 14N15N16O, 15N14N16O, 14N218O), we have demonstrated (1 s) precision at ambient levels (320 ppb) of ~3 per mil. For N2O, a quasi-continuous preconcentrator has been used to achieve even better precisions (<0.1 per mil), and one is being developed for CO.
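The per-mil values quoted throughout are delta notation: a sample's isotopologue ratio expressed as its fractional deviation from a standard ratio, in parts per thousand. A minimal sketch (the function name is invented; the VPDB 13C/12C value in the usage note is the commonly quoted figure, used here only for illustration):

```python
def delta_permil(r_sample, r_standard):
    """Convert an isotopologue ratio to delta notation in per mil:
    delta = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

For example, against a VPDB-like 13C/12C standard ratio of 0.011180, a sample ratio of 0.011292 gives `delta_permil(0.011292, 0.011180)` ≈ +10 per mil.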
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, Stephen; Santi, Peter A.; Henzlova, Daniela
The Feynman-Y statistic is a type of autocorrelation analysis. It is defined as the excess variance-to-mean ratio, Y = VMR - 1, of the number-count distribution formed by sampling a pulse train with a series of non-overlapping gates. It is a measure of the degree of correlation present on the pulse train, with Y = 0 for Poisson data. In the context of neutron coincidence counting we show that the same information can be obtained from the accidentals histogram acquired using the multiplicity shift-register method, which is currently the common autocorrelation technique applied in nuclear safeguards. In the case of multiplicity shift-register analysis, however, overlapping gates are used, triggered either by the incoming pulse stream or by a periodic clock. The overlap introduces additional covariance but does not alter the expectation values. In this paper we discuss, for a particular data set, the relative merits of the Feynman and shift-register methods in terms of both precision and dead-time correction. Traditionally the Feynman approach is applied with a gate width that is long compared to the die-away time, mainly so that the gate utilization factor can be taken as unity rather than treated as a system parameter to be determined at characterization/calibration. Because the random-trigger-interval gate utilization factor saturates slowly, this procedure requires a gate width many times the effective 1/e die-away time, which in the traditional approach limits the number of gates that can be fitted into a given assay duration. We show empirically that much shorter gates, similar in width to those used in traditional shift-register analysis, can be used. Because the correlated information present on the pulse train is extracted differently by the moments-based Feynman method and the various shift-register approaches, dead-time losses are manifested differently in the two cases. The resulting estimates for the dead-time-corrected first- and second-order reduced factorial moments should nevertheless be independent of the method, which allows the respective dead-time formalisms to be checked. We discuss how to make dead-time corrections in both the shift-register and Feynman approaches.
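The Feynman-Y definition given above (excess variance-to-mean ratio of non-overlapping gate counts) is straightforward to compute from a list of event times. A minimal sketch, not safeguards-grade code: it omits dead-time correction and the gate-utilization factor discussed in the abstract, and the function name is invented.

```python
import numpy as np

def feynman_y(event_times, gate_width):
    """Feynman-Y statistic of a pulse train: slice the train into
    consecutive non-overlapping gates of width gate_width, count events
    in each gate, and return Y = Var(n)/Mean(n) - 1.
    Y = 0 for an uncorrelated (Poisson) train; Y > 0 indicates
    correlated events (e.g. fission chains)."""
    t = np.asarray(event_times, dtype=float)
    n_gates = int(t.max() // gate_width)
    counts, _ = np.histogram(t, bins=n_gates,
                             range=(0.0, n_gates * gate_width))
    return counts.var() / counts.mean() - 1.0
```

On a Poisson train Y fluctuates around 0; duplicating every event (a crude stand-in for perfectly correlated pairs) drives Y toward 1.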
Emulating JWST Exoplanet Transit Observations in a Testbed laboratory experiment
NASA Astrophysics Data System (ADS)
Touli, D.; Beichman, C. A.; Vasisht, G.; Smith, R.; Krist, J. E.
2014-12-01
The transit technique is used for the detection and characterization of exoplanets. The combination of transit and radial-velocity (RV) measurements gives a planet's radius and mass, respectively, leading to an estimate of the planet's density (Borucki et al. 2011) and therefore to its composition and evolutionary history. Transit spectroscopy can provide information on atmospheric composition and structure (Fortney et al. 2013). Spectroscopic observations of individual planets have revealed atomic and molecular species such as H2O, CO2 and CH4 in the atmospheres of planets orbiting bright stars (e.g., Deming et al. 2013). Transit observations require extremely precise photometry: a Jupiter transit produces a 1% brightness decrease of a solar-type star, while the Earth causes only a 0.0084% decrease (84 ppm), and spectroscopic measurements require still greater precision (<30 ppm). The Precision Projector Laboratory (PPL) is a collaboration between the Jet Propulsion Laboratory (JPL) and the California Institute of Technology (Caltech) to characterize and validate detectors through emulation of science images. At PPL we have developed a testbed to project simulated spectra and other images onto a HgCdTe array in order to assess precision photometry for transits, weak lensing, etc., for mission concepts such as JWST, WFIRST and EUCLID. In this controlled laboratory experiment, the goal is to demonstrate the ability to extract weak transit spectra as expected for NIRCam, NIRISS and NIRSpec. Two lamps of variable intensity, along with spectral-line and photometric simulation masks, emulate the signals from a star only, a planet only, and a planet + star combination. Three masks, fabricated at JPL, have been used to simulate spectra in monochromatic light; they have a length of 1000 pixels and widths of 2, 10 and 1 pixels, corresponding respectively to the JWST instruments noted above.
From many-hour-long observing sequences, we obtain time-series photometry with deliberate offsets introduced to test sensitivity to pointing jitter and other effects. We can modify the star-planet brightness contrast by factors of up to 10^4:1. Using cross-correlation techniques we calculate positional shifts, which are then used to decorrelate from the photometry the effects of vertical and lateral offsets due to turbulence and instrumental vibrations. Using Principal Component Analysis (PCA), we reject correlated temporal noise to achieve a precision below 50 ppm (Clanton et al. 2012). In our current work, after decorrelation of vertical and lateral offsets along with PCA, we achieve a precision of ~20 ppm. To assess the photometric precision we use the Allan variance (Allan 1987), a statistical method for characterizing noise and stability that indicates whether performance is shot-noise limited. Testbed experiments are ongoing to provide quantitative information on the achievable spectroscopic precision using realistic exoplanet spectra, with the goal of defining optimized data-acquisition sequences for use with, for example, the James Webb Space Telescope.
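The Allan-variance check mentioned above can be sketched for evenly sampled photometry: bin the series into consecutive averages of length m and take half the mean squared successive difference of the bin averages. For pure white noise the resulting Allan deviation falls as 1/sqrt(m), which is the shot-noise-limited signature being tested for. This is the simple non-overlapping estimator, with invented names; it is not the PPL pipeline.

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation of series x at averaging length m
    (in samples): average consecutive bins of m points, then
    sigma^2(m) = 0.5 * <(ybar[i+1] - ybar[i])^2>."""
    x = np.asarray(x, dtype=float)
    n = len(x) // m                      # number of complete bins
    ybar = x[:n * m].reshape(n, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))
```

Plotting `allan_deviation(x, m)` against m on log-log axes and checking for a -1/2 slope is the usual diagnostic; a flattening of the curve reveals correlated (e.g. drift) noise.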
Relations between basic and specific motor abilities and player quality of young basketball players.
Marić, Kristijan; Katić, Ratko; Jelicić, Mario
2013-05-01
Subjects from 5 first-league clubs from Herzegovina were tested with the purpose of determining the relations between basic and specific motor abilities, as well as the effect of specific abilities on player efficiency, in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency was used on a sample of 83 basketball players. Two significant canonical correlations, i.e., linear combinations, explained the relation between the set of twelve variables of the basic motor space and the five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, power of the overarm throw, shooting and passing precision, and ball-handling skill. Underlying the second linear combination is the impact of the basic motor abilities of precision and balance on the specific abilities of passing and shooting precision and ball handling. Regression correlation analysis between the set of specific motor abilities and game efficiency showed that ball-handling ability has the largest impact on player quality in basketball cadets, followed by shooting precision, passing precision, and power of the overarm throw.
Reischauer, Carolin; Patzwahl, René; Koh, Dow-Mu; Froehlich, Johannes M; Gutzeit, Andreas
2018-04-01
To evaluate whole-lesion volumetric texture analysis of apparent diffusion coefficient (ADC) maps for assessing treatment response in prostate cancer bone metastases. Texture analysis is performed in 12 treatment-naïve patients with 34 metastases before treatment and at one, two, and three months after the initiation of androgen deprivation therapy. Four first-order and 19 second-order statistical texture features are computed on the ADC maps in each lesion at every time point. Repeatability, inter-patient variability, and changes in the feature values under therapy are investigated. Spearman's rank correlation coefficients are calculated across time to examine the relationship between the texture features and the serum prostate-specific antigen (PSA) levels. With few exceptions, the texture features exhibited moderate to high precision. At the same time, Friedman's tests revealed that all first-order and second-order statistical texture features changed significantly in response to therapy; the majority showed significant changes at all post-treatment time points relative to baseline. Bivariate analysis detected significant correlations between the great majority of texture features and the serum PSA levels, with three first-order and six second-order statistical features showing strong correlations across time. The findings indicate that whole-tumor volumetric texture analysis may be useful for response assessment in prostate cancer bone metastases and may serve as a complementary measure for treatment monitoring in conjunction with averaged ADC values. Copyright © 2018 Elsevier B.V. All rights reserved.
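First-order statistical texture features of the kind described above are simple moments of the lesion's voxel-value distribution. The sketch below computes a common minimal four-feature set (mean, variance, skewness, excess kurtosis); which four first-order features the study actually used is not stated in the abstract, so this selection and the function name are assumptions.

```python
import numpy as np

def first_order_features(adc_values):
    """First-order statistical features of a whole-lesion ADC sample,
    computed directly from the voxel-value distribution (no spatial
    information, unlike second-order features such as GLCM statistics)."""
    v = np.asarray(adc_values, dtype=float)
    mu = v.mean()
    var = v.var()
    sd = np.sqrt(var)
    skew = np.mean(((v - mu) / sd) ** 3)       # asymmetry of histogram
    kurt = np.mean(((v - mu) / sd) ** 4) - 3.0  # excess kurtosis
    return {"mean": mu, "variance": var, "skewness": skew, "kurtosis": kurt}
```

In a response-assessment setting these would be computed per lesion per time point and tracked against baseline, e.g. alongside serum PSA.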
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Straten, W., E-mail: vanstraten.willem@gmail.com
2013-01-15
A new method of polarimetric calibration is presented in which the instrumental response is derived from regular observations of PSR J0437-4715, based on the assumption that the mean polarized emission from this millisecond pulsar remains constant over time. The technique is applicable to any experiment in which high-fidelity polarimetry is required over long timescales; it is demonstrated by calibrating 7.2 years of high-precision timing observations of PSR J1022+1001 made at the Parkes Observatory. Application of the new technique, followed by arrival-time estimation using matrix template matching, yields post-fit residuals with an uncertainty-weighted standard deviation of 880 ns, two times smaller than that of arrival-time residuals obtained via conventional methods of calibration and arrival-time estimation. The precision achieved by this experiment yields the first significant measurements of the secular variation of the projected semimajor axis, the precession of periastron, and the Shapiro delay; it also places PSR J1022+1001 among the 10 best pulsars regularly observed as part of the Parkes Pulsar Timing Array (PPTA) project. It is shown that the timing accuracy of a large fraction of the pulsars in the PPTA is currently limited by systematic timing error due to instrumental polarization artifacts. More importantly, long-term variations of systematic error are correlated between different pulsars, which adversely affects the primary objectives of any pulsar timing array experiment. These limitations may be overcome by adopting the techniques presented in this work, which relax the demand for instrumental polarization purity and thereby have the potential to reduce the development cost of next-generation telescopes such as the Square Kilometre Array.
Hartmann, Anja; Becker, Kathrin; Karsten, Ulf; Remias, Daniel; Ganzera, Markus
2015-01-01
Mycosporine-like amino acids (MAAs), a group of small secondary metabolites found in algae, cyanobacteria, lichens and fungi, have become ecologically and pharmacologically relevant because of their pronounced UV-absorbing and photo-protective potential. Their analytical characterization is generally achieved by reversed phase HPLC and the compounds are often quantified based on molar extinction coefficients. As an alternative approach, in our study a fully validated hydrophilic interaction liquid chromatography (HILIC) method is presented. It enables the precise quantification of several analytes with adequate retention times in a single run, and can be coupled directly to MS. Excellent linear correlation coefficients (R2 > 0.9991) were obtained, with limit of detection (LOD) values ranging from 0.16 to 0.43 µg/mL. Furthermore, the assay was found to be accurate (recovery rates from 89.8% to 104.1%) and precise (intra-day precision: 5.6%, inter-day precision ≤6.6%). Several algae were assayed for their content of known MAAs like porphyra-334, shinorine, and palythine. Liquid chromatography-mass spectrometry (LC-MS) data indicated a novel compound in some of them, which could be isolated from the marine species Catenella repens and structurally elucidated by nuclear magnetic resonance spectroscopy (NMR) as (E)-3-hydroxy-2-((5-hydroxy-5-(hydroxymethyl)-2-methoxy-3-((2-sulfoethyl)amino)cyclohex-2-en-1-ylidene)amino) propanoic acid, a novel MAA called catenelline. PMID:26473886
Shawky, Eman; Sallam, Shaimaa M
2017-11-01
A new high-throughput method was developed for the simultaneous analysis of isoflavones and soyasaponins in soy (Glycine max L.) products by high-performance thin-layer chromatography with densitometry and multiple detection. Silica gel was used as the stationary phase and ethyl acetate:methanol:water:acetic acid (100:20:16:1, v/v/v/v) as the mobile phase. After chromatographic development, multi-wavelength scanning was carried out by (i) UV-absorbance measurement at 265 nm for genistin, daidzin and glycitin, and (ii) visible-absorbance measurement at 650 nm for soyasaponins I and III, after post-chromatographic derivatization with anisaldehyde/sulfuric acid reagent. The developed method was found to meet the acceptance criteria delineated by the ICH guidelines with respect to linearity, accuracy, precision, specificity and robustness. Calibrations were linear with correlation coefficients >0.994. Intra-day precisions (RSD%) of all substances in matrix were between 0.7 and 0.9%, while inter-day precisions (RSD%) ranged between 1.2 and 1.8%. The validated method was successfully applied to the determination of the studied analytes in soy-based infant formula and soybean products. The new method compares favorably with other reported methods, being equally accurate and precise while more feasible and cost-effective. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
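The validation figures quoted above (linearity as a correlation coefficient, precision as RSD%) follow standard formulas that are easy to reproduce. A minimal sketch with invented function names; it illustrates the conventional definitions rather than this paper's calculations.

```python
import numpy as np

def calibration_r2(conc, response):
    """Coefficient of determination (r^2) of a linear calibration line,
    as reported in method-validation linearity checks."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    fit = slope * conc + intercept
    ss_res = np.sum((response - fit) ** 2)        # residual sum of squares
    ss_tot = np.sum((response - response.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rsd_percent(values):
    """Relative standard deviation (%RSD), the usual intra-/inter-day
    precision metric (sample standard deviation over mean, times 100)."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()
```

For example, six replicate injections with peak areas spread ~1% around their mean would report an RSD near 1%, comparable to the intra-day figures above.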
Joint estimation of 2D-DOA and frequency based on space-time matrix and conformal array.
Wan, Liang-Tian; Liu, Lu-Tao; Si, Wei-Jian; Tian, Zuo-Xi
2013-01-01
Each element in a conformal array has a different pattern, which degrades the performance of conventional high-resolution direction-of-arrival (DOA) algorithms. In this paper, a joint frequency and two-dimensional DOA (2D-DOA) estimation algorithm for conformal arrays is proposed. The delay correlation function is used to suppress noise. Both spatial and time sampling are utilized to construct the space-time matrix. Frequency and 2D-DOA estimation are accomplished by parallel factor (PARAFAC) analysis without spectral peak searching or parameter pairing. The proposed algorithm needs only four guiding elements with precise positions to estimate frequency and 2D-DOA; the other instrumental elements can be arranged flexibly on the surface of the carrier. Simulation results demonstrate the effectiveness of the proposed algorithm.
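The noise-suppression role of the delay correlation function can be illustrated simply: for a signal plus additive white noise, correlating the sequence with a delayed copy of itself retains the signal's correlation while the white-noise term averages to zero at any nonzero lag. This one-function sketch shows only that principle, not the paper's space-time matrix construction or PARAFAC step; the function name is invented.

```python
import numpy as np

def delay_correlation(x, lag):
    """Sample correlation of a sequence with a copy of itself delayed by
    `lag` samples. White noise is uncorrelated across samples, so its
    contribution vanishes (on average) for any lag > 0, leaving the
    signal's autocorrelation."""
    x = np.asarray(x, dtype=float)
    return np.mean(x[:-lag] * x[lag:])
```

For a unit-amplitude sinusoid the lag-L value is 0.5·cos(omega·L) regardless of added white noise, whereas the zero-lag power would be inflated by the noise variance; this is why the estimation is built on delayed, rather than zero-lag, correlations.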
Ievlev, Anton V.; Belianinov, Alexei; Jesse, Stephen; ...
2017-12-06
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is one of the most powerful characterization tools for imaging the chemical properties of various systems and materials. It allows precise studies of chemical composition with sub-100-nm lateral and nanometer depth resolution. However, comprehensive interpretation of ToF-SIMS results is challenging because of the data volume and its multidimensionality. Furthermore, investigation of samples with pronounced topographical features is complicated by the spectral shift. In this work we developed an approach for comprehensive ToF-SIMS data interpretation based on data analytics and automated extraction of the sample topography from the time-of-flight shift. We applied this approach to investigate the correlation between biological function and chemical composition in Arabidopsis roots.
Ruggeri, Marco; de Freitas, Carolina; Williams, Siobhan; Hernandez, Victor M.; Cabot, Florence; Yesilirmak, Nilufer; Alawa, Karam; Chang, Yu-Cherng; Yoo, Sonia H.; Gregori, Giovanni; Parel, Jean-Marie; Manns, Fabrice
2016-01-01
Two SD-OCT systems and a dual channel accommodation target were combined and precisely synchronized to simultaneously image the anterior segment and the ciliary muscle during dynamic accommodation. The imaging system simultaneously generates two synchronized OCT image sequences of the anterior segment and ciliary muscle with an imaging speed of 13 frames per second. The system was used to acquire OCT image sequences of a non-presbyopic and a pre-presbyopic subject accommodating in response to step changes in vergence. The image sequences were processed to extract dynamic morphological data from the crystalline lens and the ciliary muscle. The synchronization between the OCT systems allowed the precise correlation of anatomical changes occurring in the crystalline lens and ciliary muscle at identical time points during accommodation. To describe the dynamic interaction between the crystalline lens and ciliary muscle, we introduce accommodation state diagrams that display the relation between anatomical changes occurring in the accommodating crystalline lens and ciliary muscle. PMID:27446660
Psoma, A K; Pasias, I N; Rousis, N I; Barkonikos, K A; Thomaidis, N S
2014-05-15
A rapid, sensitive, accurate and precise method for the determination of Pb, Cd, As and Cu in seafood and fish-feed samples by simultaneous electrothermal atomic absorption spectrometry was developed with regard to Council Directive 333/2007EC and ISO/IEC 17025 (2005). Different approaches were investigated to shorten the analysis time while maintaining sensitivity. For method validation, precision (repeatability and reproducibility) and accuracy, assessed by recovery tests with standard additions, were used as performance criteria. Expanded uncertainties were calculated following the Eurachem/CITAC guidelines. The method was accredited by the Hellenic Accreditation System and applied in an 8-year study of seafood (n=202) and fish feeds (n=275) from the Greek market. The annual and seasonal variation of the elemental content was determined, and correlations between the elemental content of fish feeds and the corresponding fish samples were examined. Copyright © 2013 Elsevier Ltd. All rights reserved.
Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata Linn. Leaves.
Katekhaye, S; Kale, M S; Laddha, K S
2012-01-01
A rapid, simple and specific reversed-phase HPLC method has been developed for analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C18 column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity and specificity. Validation revealed the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r2 > 0.997) were obtained for calibration plots in the ranges tested. The limit of detection was 4.35 μg and the limit of quantification was 16.56 μg. Intra- and inter-day RSDs of retention times and peak areas were less than 1.24%, and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate, enabling efficient quantitative analysis of karanjin in Pongamia pinnata leaves.
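A hedged sketch of the calibration figures of merit quoted above (r2, LOD, LOQ): the concentrations and peak areas below are invented example data, and the LOD/LOQ formulas follow the common ICH-style 3.3σ/S and 10σ/S conventions rather than anything stated in the abstract.

```python
# Illustrative check of HPLC calibration figures of merit: linear fit,
# coefficient of determination (r^2), and ICH-style LOD/LOQ estimates.
# Concentrations and peak areas are made-up, not the paper's measurements.

def linear_fit(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def r_squared(x, y, slope, intercept):
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

conc = [5.0, 10.0, 20.0, 40.0, 80.0]      # hypothetical μg/ml levels
area = [12.1, 24.3, 48.0, 97.2, 193.5]    # hypothetical peak areas

m, b = linear_fit(conc, area)
r2 = r_squared(conc, area, m, b)

# Residual standard deviation of the regression (n - 2 degrees of freedom)
n = len(conc)
sd_res = (sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(conc, area))
          / (n - 2)) ** 0.5

lod = 3.3 * sd_res / m    # ICH Q2(R1) convention for limit of detection
loq = 10.0 * sd_res / m   # and limit of quantification
```

A calibration accepted for use would typically require r2 above a preset threshold (the abstract uses 0.997) and LOD/LOQ well below the working range.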
A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing
NASA Astrophysics Data System (ADS)
Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai
2015-11-01
High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, so ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, synchronization between such high-frequency illumination and the bucket detector requires nanosecond trigger precision, so developing a synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high-precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art, an important step towards ghost imaging of dynamic scenes. In addition, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
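The correlation step at the heart of computational ghost imaging can be sketched as follows: a toy differential-correlation reconstruction of a 4×4 scene from pseudo-random patterns and single-pixel "bucket" values, not the authors' SLM/photodiode implementation.

```python
import random

# Toy computational ghost imaging: a single-pixel bucket detector records
# the inner product of each random illumination pattern with the scene;
# the scene is recovered by correlating bucket values with the known
# patterns (differential correlation, mean bucket value subtracted).

random.seed(0)
N = 4
scene = [[1.0 if (r + c) % 2 == 0 else 0.0 for c in range(N)] for r in range(N)]

M = 4000  # number of pattern/bucket measurement pairs
patterns, buckets = [], []
for _ in range(M):
    p = [[random.random() for _ in range(N)] for _ in range(N)]
    b = sum(p[r][c] * scene[r][c] for r in range(N) for c in range(N))
    patterns.append(p)
    buckets.append(b)

b_mean = sum(buckets) / M
# Differential correlation: G(r,c) = <(B - <B>) * P(r,c)>
recon = [[sum((buckets[k] - b_mean) * patterns[k][r][c] for k in range(M)) / M
          for c in range(N)] for r in range(N)]

# Bright scene pixels should correlate more strongly than dark ones.
bright = [recon[r][c] for r in range(N) for c in range(N) if scene[r][c] == 1.0]
dark = [recon[r][c] for r in range(N) for c in range(N) if scene[r][c] == 0.0]
```

The need for thousands of pattern/bucket pairs per frame is exactly why the paper's acquisition speed matters: reconstruction quality grows with the number of correlated measurements.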
Express-evaluation of the psycho-physiological condition of Paralympic athletes
Drozdovski, Alexander; Gromova, Irina; Korotkov, Konstantin; Shelkov, Oleg; Akinnagbe, Femi
2012-01-01
Objective: Evaluation of elite athletes’ psycho-physiological condition at various stages of preparation and in international competition. Design: Athletes were tested during training and participation in international competition using methods of galvanic skin response (GSR) and gas discharge visualization (GDV). Setting: Saint Petersburg Federal Research Institute of Physical Culture and Sport, Russia, and a Paralympic athletic training camp, Norway. Participants: Eighteen athletes from Russia’s Skiing and Biathlon Paralympic Team. All athletes had some level of damage to their musculoskeletal system. Main outcome measures: Stress level (SL), energy potential (EP), and psycho-emotional tension (PET). Results: It was found that the higher the level of EP achieved by the athlete in the training period, the lower the SL at competition time. The SL of an athlete recorded in the training period significantly correlates with the SL both before and at the time of competition. The PET and SL before the World Cup were negatively correlated with the results of skiing competitions. Conclusion: Evaluation of PET, EP, and SL through GSR and GDV offers a fast, highly precise, non-invasive method to assess an athlete’s level of readiness during both training and competition. PMID:24198605
Sastrawan, J; Jones, C; Akhalwaya, I; Uys, H; Biercuk, M J
2016-08-01
We introduce concepts from optimal estimation to the stabilization of precision frequency standards limited by noisy local oscillators. We develop a theoretical framework casting various measures for frequency standard variance in terms of frequency-domain transfer functions, capturing the effects of feedback stabilization via a time series of Ramsey measurements. Using this framework, we introduce an optimized hybrid predictive feedforward measurement protocol that employs results from multiple past measurements and transfer-function-based calculations of measurement covariance to improve the accuracy of corrections within the feedback loop. In the presence of common non-Markovian noise processes these measurements will be correlated in a calculable manner, providing a means to capture the stochastic evolution of the local oscillator frequency during the measurement cycle. We present analytic calculations and numerical simulations of oscillator performance under competing feedback schemes and demonstrate benefits in both correction accuracy and long-term oscillator stability using hybrid feedforward. Simulations verify that in the presence of uncompensated dead time and noise with significant spectral weight near the inverse cycle time predictive feedforward outperforms traditional feedback, providing a path towards developing a class of stabilization software routines for frequency standards limited by noisy local oscillators.
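The feedback-versus-feedforward contrast described above can be caricatured with a toy servo: an oscillator whose frequency offset drifts linearly, corrected either by the last noisy measurement (plain feedback) or by a linear extrapolation from the two most recent measurements. The drift and noise values are arbitrary; this is a caricature of the idea, not the paper's transfer-function formalism.

```python
import random

# Toy illustration of predictive feedforward vs. plain feedback for a
# local oscillator whose frequency offset drifts linearly, measured with
# white noise each servo cycle. All parameter values are invented.

random.seed(1)
drift = 0.05       # frequency drift per cycle (arbitrary units)
meas_noise = 0.01  # measurement noise standard deviation

def run(predictive, cycles=2000):
    true_offset, correction = 0.0, 0.0
    prev_est = None
    total_err = 0.0
    for _ in range(cycles):
        true_offset += drift                     # oscillator drifts each cycle
        residual = true_offset - correction      # what the servo fails to remove
        total_err += abs(residual)
        # Noisy estimate of the absolute offset from the residual measurement
        est = correction + residual + random.gauss(0.0, meas_noise)
        if predictive and prev_est is not None:
            # Feedforward: extrapolate one cycle ahead from the last two estimates
            correction = est + (est - prev_est)
        else:
            correction = est                     # plain feedback on last estimate
        prev_est = est
    return total_err / cycles                    # mean absolute residual offset

fb = run(predictive=False)
ff = run(predictive=True)
```

For a drifting oscillator the plain feedback loop always lags by one cycle of drift, while the extrapolating corrector pays only a (larger) noise penalty; which wins depends on the noise spectrum, which is the trade-off the paper analyzes rigorously.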
Mapping Cryo-volcanic Activity from Enceladus’ South Polar Region
NASA Astrophysics Data System (ADS)
Tigges, Mattie; Spitale, Joseph N.
2017-10-01
Using Cassini images taken of Enceladus’ south polar plumes at various times and orbital locations, we are producing maps of eruptive activity at various times. The purpose of this experiment is to understand the mechanism that controls the cryo-volcanic eruptions. The current hypothesis is that Tiger Stripe activity is modulated by tidal forcing, which would predict a correlation between orbital phase and the amount and distribution of eruptive activity. The precise nature of those correlations depends on how the crust is failing and how the plumbing system is organized. We use simulated curtains of ejected material that are superimposed over Cassini images, obtained during thirteen different flybys, taken between mid-2009 and mid-2012. Each set represents a different time and location in Enceladus’ orbit about Saturn, and contains images of the plumes from various angles. Shadows cast onto the backlit ejected material by the terminator of the moon are used to determine which fractures were active at that point in the orbit. Maps of the spatial distribution of eruptive activity at various orbital phases can be used to evaluate various hypotheses about the failure modes that produce the eruptions.
High precision pulsar timing and spin frequency second derivatives
NASA Astrophysics Data System (ADS)
Liu, X. J.; Bassa, C. G.; Stappers, B. W.
2018-05-01
We investigate the impact of intrinsic, kinematic and gravitational effects on high precision pulsar timing. We present an analytical derivation and a numerical computation of the impact of these effects on the first and second derivative of the pulsar spin frequency. In addition, in the presence of white noise, we derive an expression to determine the expected measurement uncertainty of a second derivative of the spin frequency for a given timing precision, observing cadence and timing baseline, and find that it strongly depends on the latter (∝ t^-7/2). We show that for pulsars with significant proper motion, the spin frequency second derivative is dominated by a term dependent on the radial velocity of the pulsar. Considering the data sets from three Pulsar Timing Arrays, we find that for PSR J0437-4715 a detectable spin frequency second derivative will be present if the absolute value of the radial velocity exceeds 33 km s^-1. Similarly, at the current timing precision and cadence, continued timing observations of PSR J1909-3744 for about another eleven years will allow the measurement of its frequency second derivative and determine the radial velocity with an accuracy better than 14 km s^-1. With ever increasing timing precision and observing baselines, the impact of the largely unknown radial velocities of pulsars on high precision pulsar timing cannot be neglected.
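The quoted ∝ t^-7/2 baseline dependence can be checked numerically: for a polynomial phase model fitted by least squares to white-noise timing data at fixed cadence, the variance of the cubic coefficient (which carries the spin-frequency second derivative) falls as baseline^-7, i.e. its standard error as baseline^-7/2. A stdlib-only sketch, not the authors' code:

```python
# Least-squares check that the variance of the t^3 coefficient of a phase
# model phi(t) = c0 + c1 t + c2 t^2 + c3 t^3, fitted to unit white noise
# at fixed cadence dt, scales as T^-7 with observing baseline T.

def solve4(a, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def var_c3(T, dt=0.02):
    """Variance of the fitted cubic coefficient for unit noise, cadence dt."""
    n = int(round(T / dt))
    ts = [-T / 2 + k * dt for k in range(n + 1)]  # centered sampling times
    # Normal matrix N = X^T X for the design row X[i] = (1, t, t^2, t^3)
    N = [[sum(t ** (i + j) for t in ts) for j in range(4)] for i in range(4)]
    # (N^-1)[3][3] is the last entry of the solution of N x = e3
    return solve4(N, [0.0, 0.0, 0.0, 1.0])[3]

ratio = var_c3(20.0) / var_c3(10.0)  # doubling the baseline
# variance scales as T^-7, so the ratio should be close to 2^-7 = 1/128
```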
2002-12-01
34th Annual Precise Time and Time Interval (PTTI) Meeting: IEEE-1588™ Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems. In cyclic systems, timing is periodic and is usually defined by the characteristics of a cyclic network or bus; timing schedules for each device are easily implemented, and synchronization accuracy depends on the accuracy of the common clock.
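For context, the offset/delay computation at the core of IEEE 1588 uses the four timestamps of the Sync/Delay_Req exchange; assuming a symmetric network path, the slave's clock offset and the one-way delay follow directly. The timestamp values below are an invented example.

```python
# IEEE 1588 (PTP) delay request-response exchange: t1 = Sync sent (master
# clock), t2 = Sync received (slave clock), t3 = Delay_Req sent (slave
# clock), t4 = Delay_Req received (master clock). With a symmetric path:

def ptp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave time minus master time
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Example: slave clock 5 units ahead of master, symmetric 3-unit delay.
# Master sends at t1=100 (master time); slave receives at t2=100+3+5=108
# (slave time); slave sends at t3=110 (slave time); master receives at
# t4=110-5+3=108 (master time).
off, dly = ptp_offset_delay(100.0, 108.0, 110.0, 108.0)
```

Path asymmetry violates the symmetric-delay assumption and maps directly into offset error, which is why precise trigger-level synchronization remains hard in practice.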
López-Miguel, Alberto; Martínez-Almeida, Loreto; González-García, María J; Coco-Martín, María B; Sobrado-Calvo, Paloma; Maldonado, Miguel J
2013-02-01
To assess the intrasession and intersession precision of ocular, corneal, and internal higher-order aberrations (HOAs) measured using an integrated topographer and Hartmann-Shack wavefront sensor (Topcon KR-1W) in refractive surgery candidates. IOBA-Eye Institute, Valladolid, Spain. Evaluation of diagnostic technology. To analyze intrasession repeatability, 1 experienced examiner measured eyes 9 times successively. To study intersession reproducibility, the same clinician obtained measurements from another set of eyes in 2 consecutive sessions 1 week apart. Ocular, corneal, and internal HOAs were obtained. Coma and spherical aberrations, 3rd- and 4th-order aberrations, and total HOAs were calculated for a 6.0 mm pupil diameter. For intrasession repeatability (75 eyes), excellent intraclass correlation coefficients (ICCs) were obtained (ICC >0.87), except for internal primary coma (ICC = 0.75) and 3rd-order (ICC = 0.72) HOAs. Repeatability precision (1.96 × Sw) values ranged from 0.03 μm (corneal primary spherical) to 0.08 μm (ocular primary coma). For intersession reproducibility (50 eyes), ICCs were good (>0.8) for ocular primary spherical, 3rd-order, and total higher-order aberrations; reproducibility precision values ranged from 0.06 μm (corneal primary spherical) to 0.21 μm (internal 3rd order), with internal HOAs having the lowest precision (≥0.12 μm). No systematic bias was found between examinations on different days. The intrasession repeatability was high; therefore, the device's ability to measure HOAs in a reliable way was excellent. Under intersession reproducibility conditions, dependable corneal primary spherical aberrations were provided. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
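The repeatability statistic quoted above (1.96 × Sw) is derived from the within-subject standard deviation, the square root of the mean within-subject variance over all subjects; a minimal sketch with made-up repeated aberration measurements in μm:

```python
# Within-subject standard deviation S_w and repeatability precision
# 1.96 * S_w (the half-width within which a repeat measurement of the
# same subject is expected about 95% of the time). Values are invented.

data = {  # eye -> repeated HOA measurements (hypothetical, μm)
    "eye1": [0.31, 0.33, 0.30],
    "eye2": [0.45, 0.44, 0.47],
    "eye3": [0.28, 0.26, 0.27],
}

def within_subject_sd(groups):
    ss, dof = 0.0, 0
    for vals in groups.values():
        m = sum(vals) / len(vals)
        ss += sum((v - m) ** 2 for v in vals)  # within-subject sum of squares
        dof += len(vals) - 1
    return (ss / dof) ** 0.5

sw = within_subject_sd(data)
repeatability = 1.96 * sw
```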
NASA Astrophysics Data System (ADS)
Chu, Zhuyin; He, Huaiyu; Ramezani, Jahandar; Bowring, Samuel A.; Hu, Dongyu; Zhang, Lijun; Zheng, Shaolin; Wang, Xiaolin; Zhou, Zhonghe; Deng, Chenglong; Guo, Jinghui
2016-10-01
The Yanliao Biota of northeastern China comprises the oldest feathered dinosaurs, transitional pterosaurs, as well as the earliest eutherian mammals, multituberculate mammals, and new euharamiyidan species that are key elements of the Mesozoic biotic record. Recent discoveries of the Yanliao Biota in the Daxishan section near the town of Linglongta, Jianchang County in western Liaoning Province have greatly enhanced our knowledge of the transition from dinosaurs to birds, primitive to derived pterosaurs, and the early evolution of mammals. Nevertheless, fundamental questions regarding the correlation of fossil-bearing strata, rates of dinosaur and mammalian evolution, and their relationship to environmental change in deep time remain unresolved due to the paucity of precise and accurate temporal constraints. These limitations underscore the importance of placing the rich fossil record of Jianchang within a high-resolution chronostratigraphic framework, which has thus far been hampered by the relatively low precision of in situ radioisotopic dating techniques. Here we present high-precision U-Pb zircon geochronology by chemical abrasion isotope dilution thermal ionization mass spectrometry (CA-ID-TIMS) from three interstratified ash beds previously dated by the secondary-ion mass spectrometry (SIMS) technique. The results constrain the key fossil horizons of the Daxishan section to an interval spanning 160.89 to 160.25 Ma with 2σ analytical uncertainties that range from ±46 to ±69 kyr. These data place the Yanliao Biota from Jianchang in the Oxfordian Stage of the Late Jurassic, and mark the Daxishan section as the site of Earth's oldest precisely dated feathered dinosaurs and eutherian mammals.
Distribution and Characteristics of Repeating Earthquakes in Northern California
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.; Zechar, J. D.; Shaw, B. E.
2012-12-01
Repeating earthquakes are playing an increasingly important role in the study of fault processes and behavior, and have the potential to improve hazard assessment, earthquake forecasting, and seismic monitoring capabilities. These events rupture the same fault patch repeatedly, generating virtually identical seismograms. In California, repeating earthquakes have been found predominantly along the creeping section of the central San Andreas Fault, where they are believed to represent failing asperities on an otherwise creeping fault. Here, we use the northern California double-difference catalog of 450,000 precisely located events (1984-2009) and associated database of 2 billion waveform cross-correlation measurements to systematically search for repeating earthquakes across various tectonic regions. An initial search for pairs of earthquakes with high correlation coefficients and similar magnitudes resulted in 4,610 clusters including a total of over 26,000 earthquakes. A subsequent double-difference re-analysis of these clusters resulted in 1,879 sequences (8,640 events) where a common rupture area can be resolved to the precision of a few tens of meters or less. These repeating earthquake sequences (RES) include between 3 and 24 events with magnitudes up to ML=4. We compute precise relative magnitudes between events in each sequence from differential amplitude measurements. Differences between these and standard coda-duration magnitudes have a standard deviation of 0.09. The RES occur throughout northern California, but RES with 10 or more events (6%) only occur along the central San Andreas and Calaveras faults. We are establishing baseline characteristics for each sequence, such as recurrence intervals and their coefficient of variation (CV), in order to compare them across tectonic regions. CVs for these clusters range from 0.002 to 2.6, indicating a range of behavior between periodic occurrence (CV~0), random occurrence, and temporal clustering.
10% of the RES show burst-like behavior with mean recurrence times smaller than one month. 5% of the RES have mean recurrence times greater than one year and include more than 10 earthquakes. Earthquakes in the 50 most periodic sequences (CV<0.2) do not appear to be predictable by either time- or slip-predictable models, consistent with previous findings. We demonstrate that changes in recurrence intervals of repeating earthquakes can be routinely monitored. This is especially important for sequences with CV~0, as they may indicate changes in the loading rate. We also present results from retrospective forecast experiments based on near-real time hazard functions.
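The coefficient of variation used above to separate periodic, random, and clustered sequences is simply the standard deviation of recurrence intervals over their mean; a short sketch with hypothetical occurrence times:

```python
# Coefficient of variation (CV) of earthquake recurrence intervals:
# CV ~ 0 for quasi-periodic sequences, ~ 1 for Poissonian occurrence,
# > 1 for temporally clustered sequences. Event times are invented.

def recurrence_cv(event_times):
    times = sorted(event_times)
    intervals = [b - a for a, b in zip(times, times[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return (var ** 0.5) / mean

periodic = recurrence_cv([0, 10, 20, 30, 40])        # evenly spaced: CV = 0
clustered = recurrence_cv([0, 1, 2, 30, 31, 32, 60])  # bursts: CV > 1
```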
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
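The subtracted-image noise measurement described above relies on the fact that fixed anatomy cancels in the difference of two repeated scans while uncorrelated quantum noise adds in quadrature; a synthetic 1-D sketch (the anatomy pattern and true noise level are chosen arbitrarily):

```python
import random

# Estimating per-image quantum noise from two repeated scans: the fixed
# "anatomy" cancels in the difference, whose standard deviation equals
# sqrt(2) times the single-image noise. All values are synthetic.

random.seed(7)
anatomy = [100.0 + 20.0 * ((i % 10) / 10.0) for i in range(10000)]  # fixed texture
sigma = 5.0  # true per-pixel quantum noise of one scan

scan1 = [a + random.gauss(0.0, sigma) for a in anatomy]
scan2 = [a + random.gauss(0.0, sigma) for a in anatomy]

diff = [x - y for x, y in zip(scan1, scan2)]
mean_d = sum(diff) / len(diff)
sd_diff = (sum((d - mean_d) ** 2 for d in diff) / (len(diff) - 1)) ** 0.5
noise_estimate = sd_diff / 2 ** 0.5  # anatomy cancels; noise adds in quadrature
```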
Ambrosio, Leire; Portillo, Mari Carmen; Rodríguez-Blázquez, Carmen; Rodriguez-Violante, Mayela; Castrillo, Juan Carlos Martínez; Arillo, Víctor Campos; Garretto, Nélida Susana; Arakaki, Tomoko; Dueñas, Marcos Serrano; Álvarez, Mario; Ibáñez, Ivonne Pedroso; Carvajal, Ana; Martínez-Martín, Pablo
2016-01-01
Understanding how a person lives with a chronic illness, such as Parkinson’s disease (PD), is necessary to provide individualized care, and professionals’ role in person-centered care at clinical and community levels is paramount. The present study aimed to analyze the psychometric properties of the Living with Chronic Illness-PD Scale (EC-PC) in a wide Spanish-speaking population with PD. An international cross-sectional study with retest was carried out with 324 patients from four Latin American countries and Spain. Feasibility, acceptability, scaling assumptions, reliability, precision, and construct validity were tested. The study included 324 patients, with age (mean±s.d.) 66.67±10.68 years. None of the EC-PC items had missing values and all acceptability parameters fulfilled the standard criteria. Around two-thirds of the items (61.54%) met scaling assumption standards. Concerning internal consistency, Cronbach’s alpha values were 0.68–0.88; item-total correlation was >0.30, except for two items; the item homogeneity index was >0.30; and inter-item correlation values were 0.14–0.76. The intraclass correlation coefficient for EC-PC stability was 0.76 and the standard error of measurement (s.e.m.) for precision was 8.60 (for an EC-PC s.d.=18.57). The EC-PC presented strong correlation with social support (rS=0.61) and moderate correlation with life satisfaction (rS=0.46). Weak and negligible correlations were found with the other scales. Internal validity correlations ranged from 0.46 to 0.78. EC-PC total scores were significantly different for each severity level based on the Hoehn and Yahr scale and the Clinical Impression of Severity Index, but not for the Patient Global Impression of Severity. The EC-PC has satisfactory acceptability, reliability, precision, and validity to evaluate living with PD. PMID:28725703
Pineau, V; Lebel, B; Gouzy, S; Dutheil, J-J; Vielpeau, C
2010-10-01
The use of dual mobility cups is an effective method to prevent dislocations. However, the specific design of these implants can raise suspicion of increased wear and subsequent periprosthetic osteolysis. Using radiostereometric analysis (RSA), migration of the femoral head inside the cup of a dual mobility implant can be measured to estimate the polyethylene wear rate. The study aimed to establish the precision of RSA measurement of femoral head migration in the cup of a dual mobility implant, and its intra- and interobserver variability. A total hip prosthesis phantom was implanted and placed under weight-loading conditions in a simulator. Model-based RSA measurement of implant penetration involved specially machined polyethylene liners with increasing concentric wear (no wear, then 0.25, 0.5 and 0.75mm). Three examiners, blinded to the level of wear, analyzed (10 times) the radiostereometric films of the four liners. There was one experienced, one trained, and one inexperienced examiner. Statistical analysis measured the accuracy, precision, and intra- and interobserver variability by calculating the Root Mean Square Error (RMSE), Concordance Correlation Coefficient (CCC), Intraclass Correlation Coefficient (ICC), and Bland-Altman plots. Our protocol, which used a simple geometric model rather than the manufacturer's CAD files, showed precision of 0.072mm and accuracy of 0.034mm, comparable with machining tolerances, with low variability. Correlation between wear measurement and true value was excellent, with a CCC of 0.9772. Intraobserver reproducibility was very good, with an ICC of 0.9856, 0.9883 and 0.9842, respectively, for examiners 1, 2 and 3. Interobserver reproducibility was excellent, with a CCC of 0.9818 between examiners 2 and 1, and 0.9713 between examiners 3 and 1. Quantification of wear is indispensable for the surveillance of dual mobility implants. This in vitro study validates our measurement method.
Our results, and comparison with other studies using different measurement technologies (RSA, standard radiographs, Martell method) make model-based RSA the reference method for measuring the wear of total hip prostheses in vivo. Level 3. Prospective diagnostic study. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
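Lin's concordance correlation coefficient used in the study penalizes both scatter and systematic bias, unlike the Pearson correlation; a sketch with the four machined wear levels and hypothetical RSA readings:

```python
# Lin's concordance correlation coefficient (CCC): agreement between a
# measurement and a reference, equal to 1 only for perfect identity.
# The "measured" values below are invented, not the study's data.

def ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Pearson-like covariance term, penalized by variance and mean mismatch
    return 2 * cov / (vx + vy + (mx - my) ** 2)

true_wear = [0.00, 0.25, 0.50, 0.75]  # machined liner wear levels (mm)
measured = [0.02, 0.24, 0.52, 0.73]   # hypothetical RSA readings (mm)

agreement = ccc(true_wear, measured)
```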
A 6He production facility and an electrostatic trap for measurement of the beta-neutrino correlation
NASA Astrophysics Data System (ADS)
Mukul, I.; Hass, M.; Heber, O.; Hirsh, T. Y.; Mishnayot, Y.; Rappaport, M. L.; Ron, G.; Shachar, Y.; Vaintraub, S.
2018-08-01
A novel experiment has been commissioned at the Weizmann Institute of Science for the study of weak interactions via a high-precision measurement of the beta-neutrino angular correlation in the radioactive decay of short-lived 6He. The facility consists of a 14 MeV d + t neutron generator to produce atomic 6He, followed by ionization and bunching in an electron beam ion source, and injection into an electrostatic ion beam trap. This ion trap has been designed for efficient detection of the decay products from trapped light ions. The storage time in the trap for different stable ions was found to be in the range of 0.6 to 1.2 s at a chamber pressure of ∼7 × 10^-10 mbar. We present the initial test results of the facility, and also demonstrate an important upgrade of an existing method (Stora et al., 2012) for production of light radioactive atoms, viz. 6He, for the precision measurement. The production rate of 6He atoms in the present setup has been estimated to be ∼1.45 × 10^-4 atoms per neutron, and the system efficiency was found to be 4.0 ± 0.6%. An improvement to this setup is also presented for the enhanced production and diffusion of radioactive atoms for future use.
Study on individual stochastic model of GNSS observations for precise kinematic applications
NASA Astrophysics Data System (ADS)
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, which comprises functional and stochastic models, is a prerequisite for obtaining optimal estimates of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified definition of the stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with the same variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complete calibration procedure for GNSS equipment.
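An elevation-dependent variance model of the kind discussed above is commonly written σ²(e) = a² + b²/sin²(e); the sketch below builds a per-epoch diagonal variance-covariance matrix and the corresponding weights. The a and b values are illustrative, not calibrated for any receiver, and the cross-correlation and time-correlation terms the abstract mentions are omitted.

```python
import math

# Elevation-dependent observation variance, a common GNSS stochastic
# model: low-elevation satellites get larger variance (more multipath
# and atmospheric noise) and hence smaller weight in the adjustment.

def obs_variance(elev_deg, a=0.003, b=0.003):  # a, b in meters, illustrative
    e = math.radians(elev_deg)
    return a ** 2 + (b / math.sin(e)) ** 2

# Diagonal variance-covariance matrix for one epoch (satellites treated as
# independent here; a full model would add cross-correlation terms).
elevations = [15.0, 35.0, 60.0, 85.0]
cov = [[obs_variance(el) if i == j else 0.0
        for j in range(len(elevations))]
       for i, el in enumerate(elevations)]

# Weights are the inverse diagonal variances
weights = [1.0 / cov[i][i] for i in range(len(elevations))]
```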
DoE optimization of a mercury isotope ratio determination method for environmental studies.
Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico
2016-05-15
By using the experimental design (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotopic ratio measurements with suitable internal precision. In 32 experiments, the influence of mercury and thallium internal standard concentrations, total measuring time and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g^-1. The model identifies correlations among the parameters that affect measurement precision and predicts suitable sample measurement precision for Hg concentrations of 5 ng g^-1 and upwards. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass-dependent fractionation (MDF) and mass-independent fractionation (MIF) phenomena in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing the use of Hg isotopic ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Iwata, Tetsuo; Taga, Takanori; Mizuno, Takahiko
2018-02-01
We have constructed a high-efficiency, photon-counting phase-modulation fluorometer (PC-PMF) using a field-programmable gate array; it is a modified version of the photon-counting fluorometer (PCF) that works in a pulsed-excitation mode (Iwata and Mizuno in Meas Sci Technol 28:075501, 2017). The common working principle for both is the simultaneous detection of the photoelectron pulse train, which covers 64 ns with a 1.0-ns resolution time (1.0 ns/channel). The signal-gathering efficiency was improved more than 100-fold over that of conventional time-correlated single-photon counting, at the expense of a resolution time that depends on the number of channels. The system dead time for building a histogram was eliminated, markedly shortening the measurement time for fluorescent samples with moderately high quantum yields. We describe the PC-PMF and briefly compare its precision with that of the pulsed-excitation PCF, demonstrating the potential advantage of the PC-PMF.
Quantifying time in sedimentary successions by radio-isotopic dating of ash beds
NASA Astrophysics Data System (ADS)
Schaltegger, Urs
2014-05-01
Sedimentary rock sequences are an accurate record of geological, chemical and biological processes throughout the history of our planet. If we want to know more about the duration or the rates of some of these processes, we can apply methods of absolute age determination, i.e. of radio-isotopic dating. Data of highest precision and accuracy, and therefore of highest degree of confidence, are obtained by chemical abrasion, isotope-dilution, thermal ionization mass spectrometry (CA-ID-TIMS) 238U-206Pb dating techniques, applied to magmatic zircon from ash beds that are interbedded with the sediments. This technique allows high-precision estimates of age at 0.1% uncertainty for single analyses, and down to 0.03% uncertainty for groups of statistically equivalent 206Pb/238U dates. Such high precision is needed, since we would like the precision to be approximately equivalent to or better than the (interpolated) duration of ammonoid zones in the Mesozoic (e.g., Ovtcharova et al. 2006), or to match short feedback rates of biological, climatic, or geochemical cycles after giant volcanic eruptions in large igneous provinces (LIPs), e.g., at the Permian/Triassic or the Triassic/Jurassic boundaries. We also wish to establish as precisely as possible temporal coincidence between the sedimentary record and short-lived volcanic events within the LIPs. Precision and accuracy of the U-Pb data have to be traceable and quantifiable in absolute terms, achieved by direct reference to the international kilogram, via an absolute calibration of the standard and isotopic tracer solutions. Only with perfect control of the precision and accuracy of radio-isotopic data can we confidently determine whether two ages of geological events are really different, and avoid mistaking interlaboratory or interchronometer biases for age differences.
The development of unprecedented precision of CA-ID-TIMS 238U-206Pb dates led to the recognition of protracted growth of zircon in a magmatic liquid (see, e.g., Schoene et al. 2012), which then becomes transferred into volcanic ashes as excess dispersion of 238U-206Pb dates (see, e.g., Guex et al. 2012). Zircon crystallizes in the magmatic liquid shortly before the volcanic eruption; we therefore aim at finding the youngest zircon date, or the youngest statistically equivalent cluster of 238U-206Pb dates, as an approximation of ash deposition (Wotzlaw et al. 2013). Time gaps between last zircon crystallization and eruption ("Δt") may be as large as 100-200 ka, at the limits of analytical precision. Understanding the magmatic crystallization history of zircon is the fundamental background for interpreting ash bed dates in a sedimentary succession. Ash beds of different stratigraphic position and age may be generated within different magmatic systems, showing different crystallization histories. A sufficient number of samples (N) is therefore of paramount importance, so as not to lose the stratigraphic age control in a given section, and to be able to discard samples with large Δt - but how large does "N" have to be? In order to use the youngest zircon or zircons as an approximation of the age of eruption and ash deposition, we need to be sure that we have quantitatively solved the problem of post-crystallization lead loss - but how can we be sure? Ash bed zircons are prone to partial loss of radiogenic lead, because the ashes have been flushed by volcanic gases, as well as by brines during sediment compaction. We therefore need to analyze a sufficient number of zircons (n) to be sure not to miss the youngest - but how large does "n" have to be?
Analysis of trace elements, or of oxygen and hafnium isotopic compositions, in dated zircon may sometimes help to distinguish zircon that is in equilibrium with the last magmatic liquid from zircon recycled from earlier crystallization episodes, or to recognize zircon with partial lead loss (Schoene et al. 2010). Respecting these constraints, we may arrive at accurate correlations of periods of global environmental and biotic disturbance (from ash bed analysis in biostratigraphically or cyclostratigraphically well constrained marine sections) with volcanic activity; examples are the Triassic-Jurassic boundary and the Central Atlantic Magmatic Province (Schoene et al. 2010), or the lower Toarcian oceanic anoxic event and the Karoo Province volcanism (Sell et al., in prep.). High-precision temporal correlations may also be obtained by combining high-precision U-Pb dating with biochronology in the Middle Triassic (Ovtcharova et al., in prep.), or by comparing U-Pb dates with astronomical timescales in the Upper Miocene (Wotzlaw et al., in prep.).
References
Guex, J., Schoene, B., Bartolini, A., Spangenberg, J., Schaltegger, U., O'Dogherty, L., et al. (2012). Geochronological constraints on post-extinction recovery of the ammonoids and carbon cycle perturbations during the Early Jurassic. Palaeogeography, Palaeoclimatology, Palaeoecology, 346-347(C), 1-11.
Ovtcharova, M., Bucher, H., Schaltegger, U., Galfetti, T., Brayard, A., & Guex, J. (2006). New Early to Middle Triassic U-Pb ages from South China: Calibration with ammonoid biochronozones and implications for the timing of the Triassic biotic recovery. Earth and Planetary Science Letters, 243(3-4), 463-475.
Ovtcharova, M., Goudemand, N., Galfetti, Th., Guodun, K., Hammer, O., Schaltegger, U., & Bucher, H. Improving accuracy and precision of radio-isotopic and biochronological approaches in dating geological boundaries: The Early-Middle Triassic boundary case. In preparation.
Schoene, B., Schaltegger, U., Brack, P., Latkoczy, C., Stracke, A., & Günther, D. (2012). Rates of magma differentiation and emplacement in a ballooning pluton recorded by U-Pb TIMS-TEA, Adamello batholith, Italy. Earth and Planetary Science Letters, 355-356, 162-173.
Schoene, B., Latkoczy, C., Schaltegger, U., & Günther, D. (2010). A new method integrating high-precision U-Pb geochronology with zircon trace element analysis (U-Pb TIMS-TEA). Geochimica et Cosmochimica Acta, 74(24), 7144-7159.
Schoene, B., Guex, J., Bartolini, A., Schaltegger, U., & Blackburn, T. J. (2010). Correlating the end-Triassic mass extinction and flood basalt volcanism at the 100 ka level. Geology, 38(5), 387-390.
Sell, B., Ovtcharova, M., Guex, J., Jourdan, F., & Schaltegger, U. Evaluating the link between the Karoo LIP and climatic-biologic events of the Toarcian Stage with high-precision U-Pb geochronology. In preparation.
Wotzlaw, J. F., Schaltegger, U., Frick, D. A., Dungan, M. A., Gerdes, A., & Günther, D. (2013). Tracking the evolution of large-volume silicic magma reservoirs from assembly to supereruption. Geology, 41(8), 867-870.
Wotzlaw, J. F., Hüsing, S. K., Hilgen, F. J., & Schaltegger, U. Testing the gold standard of geochronology against astronomical time: High-precision U-Pb geochronology of orbitally tuned ash beds from the Mediterranean Miocene. In preparation.
Boahen, Kwabena
2013-01-01
A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
Browning, James V.; Miller, Kenneth G.; Sugarman, Peter J.; Barron, John; McCarthy, Francine M.G.; Kulhanek, Denise K.; Katz, Miriam E.; Feigenson, Mark D.
2013-01-01
Integrated Ocean Drilling Program Expedition 313 continuously cored and logged latest Eocene to early-middle Miocene sequences at three sites (M27, M28, and M29) on the inner-middle continental shelf offshore New Jersey, providing an opportunity to evaluate the ages, global correlations, and significance of sequence boundaries. We provide a chronology for these sequences using integrated strontium isotopic stratigraphy and biostratigraphy (primarily calcareous nannoplankton, diatoms, and dinocysts [dinoflagellate cysts]). Despite challenges posed by shallow-water sediments, age resolution is typically ±0.5 m.y. and in many sequences is as good as ±0.25 m.y. Three Oligocene sequences were sampled at Site M27 on sequence bottomsets. Fifteen early to early-middle Miocene sequences were dated at Sites M27, M28, and M29 across clinothems in topsets, foresets (where the sequences are thickest), and bottomsets. A few sequences have coarse (∼1 m.y.) or little age constraint due to barren zones; we constrain the age estimates of these less well dated sequences by applying the principle of superposition, i.e., sediments above sequence boundaries in any site are younger than the sediments below the sequence boundaries at other sites. Our age control provides constraints on the timing of deposition in the clinothem; sequences on the topsets are generally the youngest in the clinothem, whereas the bottomsets generally are the oldest. The greatest amount of time is represented on foresets, although we have no evidence for a correlative conformity. 
Our chronology provides a baseline for regional and interregional correlations and sea-level reconstructions: (1) we correlate a major increase in sedimentation rate precisely with the timing of the middle Miocene climate changes associated with the development of a permanent East Antarctic Ice Sheet; and (2) the timing of sequence boundaries matches the deep-sea oxygen isotopic record, implicating glacioeustasy as a major driver for forming sequence boundaries.
Le Berre, Maël; Aubertin, Johannes; Piel, Matthieu
2012-11-01
The quest to understand how the mechanical and geometrical environment of cells impacts their behavior and fate has been a major force driving the recent development of new technologies in cell biology research. Despite rapid advances in this field, many challenges remain in order to bridge the gap between the classical and simple cell culture plate and the biological reality of actual tissue. In tissues, cells have their physical space constrained by neighboring cells and the extracellular matrix. Here, we propose a simple and versatile device to precisely and dynamically control this confinement parameter in cultured cells. We show that there is a precise threshold deformation above which the nuclear lamina breaks and reconstructs, whereas nuclear volume changes. We also show that different nuclear deformations correlate with the expression of specific sets of genes, including nuclear factors and classical mechanotransduction pathways. This versatile device thus enables the precise control of cell and nuclear deformation by confinement and the correlative study of the associated molecular events.
Sorensen, Mathew D; Teichman, Joel M H; Bailey, Michael R
2009-07-01
Proof-of-principle in vitro experiments evaluated a prototype ultrasound technology for sizing kidney stone fragments. Nineteen human stones were measured using manual calipers. A 10-MHz, 1/8'' (10F) ultrasound transducer probe pinged each stone on a kidney tissue phantom submerged in water, using two methods. In Method 1, the instrument was aligned such that the ultrasound pulse traveled through the stone. In Method 2, the instrument was aligned partially over the stone such that the ultrasound pulse traveled through water. For Method 1, the correlation between caliper- and ultrasound-determined stone size was r² = 0.71 (P < 0.0001). All but two stone measurements were accurate and precise to within 1 mm. For Method 2, the correlation was r² = 0.99 (P < 0.0001), and measurements were accurate and precise to within 0.25 mm. The prototype technology, with either method, measured stone size with good accuracy and precision. It may be possible to incorporate this technology into ureteroscopy.
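The abstract does not give the sizing formula. As a hedged sketch of the pulse-echo geometry of Method 1, stone width can be recovered from the delay between the front-face and back-face echoes; the sound speed below is an assumed round number for a calcified stone, not a value from the study:

```python
# Illustrative pulse-echo sizing (Method 1 geometry). The pulse crosses the
# stone twice (in and back), so width = c * dt / 2.
C_STONE_M_PER_S = 3000.0  # assumed longitudinal sound speed in the stone

def stone_width_mm(echo_delay_us):
    """Stone width in mm from the front-to-back echo delay in microseconds."""
    return C_STONE_M_PER_S * (echo_delay_us * 1e-6) / 2.0 * 1e3

# At this assumed speed, a 4 us echo delay corresponds to a 6 mm stone.
width = stone_width_mm(4.0)
```

The sub-millimeter precision reported above then hinges on how precisely the echo delay can be picked, since width scales linearly with it.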
A volcanic connection between the Pennsylvanian of the Mid-Continent and Appalachian regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyons, P.C.; Congdon, R.D.; Outerbridge, W.F.
1993-02-01
Until now, it has not been possible to find key beds that precisely connect the Pennsylvanian section in the Mid-Continent and Appalachian regions. Altered volcanic ash deposits (tonsteins) offer the potential for high-precision stratigraphic correlation. The Fire Clay tonstein, which is chemically distinct from the other five Middle Pennsylvanian tonstein beds in the central Appalachian basin, has a unimodal, rhyolitic "fingerprint" based on glass-inclusion data (n = 109) from volcanic quartz. This tonstein has been correlated over a distance of about 400 km in KY, WV, VA, and TN. Analyses of glass inclusions (n = 12) in volcanic quartz from a mixed-layer (illite/smectite) tonstein (K-bentonite) from near the Morrowan-Atokan boundary, recovered from cuttings in Arkansas wells (Phillips Petroleum Co. #2 Johnson City; Carter Oil Co. #1 Williams, Conway City), are identical, within the limits of analytical precision, to those from the Fire Clay tonstein.
Cross-correlation-based earthquake relocation and ambient noise imaging at Axial Seamount
NASA Astrophysics Data System (ADS)
Tan, Y. J.; Waldhauser, F.; Tolstoy, M.; Wilcock, W. S. D.
2016-12-01
The seismic network that was installed on Axial Seamount as part of the Ocean Observatory Initiative's Cabled Array has been streaming live data since November 2014, encompassing an eruption in April-May of 2015. The network includes two broadband and five short-period seismometers spanning the southern half of the caldera. Almost 200,000 local earthquakes were detected in the first year of operation. Earthquake locations based on phase picks delineate outward dipping ring faults inferred to have accommodated deflation and guided dike propagation during the eruption (Wilcock et al., submitted). We will present results from our current effort of computing cross-correlation-based double-difference hypocenter locations to derive a more detailed image of the structures that provide insight into the active processes leading up to, during, and after the volcano's eruption. The new high-resolution hypocenters will form the base catalog for real-time double-difference monitoring of the seismicity recorded by the Cabled Array, allowing for high-precision evaluation of variation in seismogenic properties. We will also present results of measurements of temporal velocity changes associated with the eruption using seismic noise cross-correlations. This method has the potential to reveal areas of dike injection and magma withdrawal, as well as for real-time monitoring of temporal velocity variations associated with active volcanic processes.
Increasing the computational efficiency of digital cross correlation by a vectorization method
NASA Astrophysics Data System (ADS)
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times compared with performance values obtained from looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method, as well as experiments to measure the quantitative transient displacement response under dynamic impact loading. The experiment involved the use of a high-speed camera as well as a fiber optic system to measure the transient displacement of a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
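The abstract does not reproduce the MATLAB source. As a hedged illustration of the same loop-versus-vectorization idea in Python/NumPy (all names here are my own, not from the paper), a looped cross-correlation can be checked against an FFT-based vectorized one:

```python
import numpy as np

def xcorr_loop(a, b):
    """Looped cross-correlation r[lag] = sum_i a[i] * b[i - lag], lags -n+1..n-1."""
    n = len(a)
    return np.array([sum(a[i] * b[i - lag] for i in range(max(lag, 0), min(n, n + lag)))
                     for lag in range(-n + 1, n)])

def xcorr_vectorized(a, b):
    """Vectorized equivalent: corr(a, b) = IFFT(FFT(a) * conj(FFT(b))), zero-padded."""
    n = len(a)
    m = 2 * n - 1  # pad so circular correlation equals linear correlation
    c = np.fft.irfft(np.fft.rfft(a, m) * np.conj(np.fft.rfft(b, m)), m)
    # c[0..n-1] holds lags 0..n-1; c[n..2n-2] holds lags -(n-1)..-1.
    return np.concatenate((c[n:], c[:n]))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(256), rng.standard_normal(256)
assert np.allclose(xcorr_loop(a, b), xcorr_vectorized(a, b))
```

The vectorized form trades the O(n²) nested loop for two FFTs and a pointwise product, which is where the reported speedups come from.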
NASA Astrophysics Data System (ADS)
Hansen, Sandra; Quiroga-González, Enrique; Carstensen, Jürgen; Adelung, Rainer; Föll, Helmut
2017-05-01
Perfectly aligned silicon microwire arrays show exceptionally high cycling stability with record-setting (high) areal capacities of 4.25 mAh cm-2. The wires have a specially modified length and thickness in order to perform this well. Geometry and size are the most important parameters of an anode for obtaining batteries with high cycling stability and without irreversible losses. The wires are prepared with a unique etching fabrication method, which allows wires of very precise sizes to be fabricated. To investigate how well randomly oriented silicon wires perform in contrast to the perfect order of the array, the wires are embedded in a paste. This study reveals the fundamental correlation between geometry, mechanics, and charge transfer kinetics of silicon electrodes. A suitable RC equivalent circuit allows the evaluation of data from cyclic voltammetry and simultaneous FFT-impedance spectroscopy (FFT-IS), yielding time-resolved resistances, time constants, and their direct correlation to the phase transformations. The change in the resistances during lithiation and delithiation correlates with kinetics and charge transfer mechanisms. This study demonstrates how the mechanical and physiochemical interactions at the silicon/paste interface inside the paste electrodes lead to void formation around the silicon and, with it, to material loss and capacity fading.
The progress on time & frequency during the past 5 decades
NASA Astrophysics Data System (ADS)
Wang, Zheng-Ming
2002-06-01
The number and variety of applications using precise timing are astounding and increasing along with new technology in communication, computer science, space science, and other fields. The world has evolved into the information age, and precise timing is at the heart of managing the flow of that information, which in turn drives rapid progress in precise timing itself. The development of time scales, UT1 determination, frequency standards, time transfer, and time dissemination over the past half century, worldwide and in China, is described in this paper. Expectations for this field are discussed.
The 25th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting
NASA Technical Reports Server (NTRS)
Sydnor, Richard L. (Editor)
1994-01-01
Papers in the following categories are presented: recent developments in rubidium, cesium, and hydrogen-based frequency standards, and in cryogenic and trapped-ion technology; international and transnational applications of precise time and time interval (PTTI) technology with emphasis on satellite laser tracking networks, GLONASS timing, intercomparison of national time scales and international telecommunication; applications of PTTI technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; application of PTTI technology to evolving military communications and navigation systems; and dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communications satellites.
Low Cost Precision Lander for Lunar Exploration
NASA Astrophysics Data System (ADS)
Head, J. N.; Gardner, T. G.; Hoppa, G. V.; Seybold, K. G.
2004-12-01
For 60 years the US Defense Department has invested heavily in producing small, low mass, precision guided vehicles. The technologies matured under these programs include terrain-aided navigation, closed loop terminal guidance algorithms, robust autopilots, high thrust-to-weight propulsion, autonomous mission management software, sensors, and data fusion. These technologies will aid NASA in addressing New Millennium Science and Technology goals as well as the requirements flowing from the Vision articulated in January 2004. Establishing and resupplying a long term lunar presence will require automated landing precision not yet demonstrated. Precision landing will increase safety and assure mission success. In the DOD world, such technologies are used routinely and reliably. Hence, it is timely to generate a point design for a precise planetary lander useful for lunar exploration. In this design science instruments amount to 10 kg, 16% of the lander vehicle mass. This compares favorably with 7% for Mars Pathfinder and less than 15% for Surveyor. The mission design flies the lander in an inert configuration to the moon, relying on a cruise stage for navigation and TCMs. The lander activates about a minute before impact. A solid booster reduces the vehicle speed to 300-450 m/s. The lander is now about 2 minutes from touchdown and has 600 to 700 m/s delta-v capability, allowing for about 10 km of vehicle divert during terminal descent. This concept of operations is chosen because it closely mimics missile operational timelines used for decades: the vehicle remains inert in a challenging environment, then must execute its mission flawlessly on a moment's notice. The vehicle design consists of a re-plumbed propulsion system, using propellant tanks and thrusters from exoatmospheric programs. A redesigned truss provides hard points for landing gear, electronics, power supply, and science instruments. 
A radar altimeter and a Digital Scene Matching Area Correlator (DSMAC) provide data for the terminal guidance algorithms. DSMAC acquires high-resolution images for real-time correlation with a reference map. This system provides ownship position with a resolution comparable to the map. Since the DSMAC can sample at 1.5 mrad, any imaging acquired below 70 km altitude will surpass the resolution available from previous missions. DSMAC has a mode in which image data are compressed and downlinked. This capability could be used to downlink live images during terminal guidance. Approximately 500 kbit/s of telemetry would be required to provide the first live descent imaging sequence since Ranger. This would provide unique geologic context imaging for the landing site. The development path to produce such a vehicle is that used to develop missiles. First, a pathfinder vehicle is designed and built as a test bed for hardware integration, including science instruments. Second, a hover test vehicle would be built. Equipped with mass mockups for the science payload, the vehicle would otherwise be an exact copy of the flight vehicle. The hover vehicle would be flown on earth to demonstrate the proper function and integration of the propulsion system, autopilots, navigation algorithms, and guidance sensors. There is sufficient delta-v in the proposed design to take off from the ground, fly a ballistic arc to over 100 m altitude, then guide to a precision soft landing. Once the vehicle has flown safely on earth, the validated design would be used to produce the flight vehicle. Since this leverages the billions of dollars the DOD has invested in these technologies, it should be possible to land useful science payloads precisely on the lunar surface at relatively low cost.
MINOS Timing and GPS Precise Point Positioning
2012-01-01
MINOS Timing Spec
• Neutrinos created in bunches separated by 19 ns
• ~1 neutrino/day detected in Soudan Mine - 2 milliseconds travel time...calibration
• No low-cost Fermilab to Soudan Mine connections known
• Not yet tested for operational time transfer
Clock Options
• High-Performance... UNDERGROUND LABORATORY
GPS Precise Point Positioning - A Brief Overview
What is GPS PPP? GPS PPP is a way to use precise ephemerides
Collette, Laurence; Burzykowski, Tomasz; Carroll, Kevin J; Newling, Don; Morris, Tom; Schröder, Fritz H
2005-09-01
The long duration of phase III clinical trials with overall survival (OS) as the end point slows down the treatment-development process. Trials could be shortened by using surrogate end points. Prostate-specific antigen (PSA) is the most studied biomarker in prostate cancer (PCa). This study attempts to validate PSA end points as surrogates for OS in advanced PCa. Individual data from 2,161 advanced PCa patients treated in studies comparing bicalutamide to castration were used in a meta-analytic approach to surrogate end-point validation. PSA response, PSA normalization, time to PSA progression, and longitudinal PSA measurements were considered. The known association between PSA and OS at the individual patient level was confirmed. The association between the effect of intervention on any PSA end point and on OS was generally low (determination coefficient < 0.69). It is a common misconception that a high correlation between a biomarker and the true end point justifies the use of the former as a surrogate. To statistically validate surrogate end points, a high correlation between the treatment effects on the surrogate and the true end point needs to be established across groups of patients treated with two alternative interventions. The levels of association observed in this study indicate that the effect of hormonal treatment on OS cannot be predicted with a high degree of precision from observed treatment effects on PSA end points, and thus statistical validity is unproven. In practice, non-null treatment effects on OS can be predicted only from precisely estimated large effects on time to PSA progression (TTPP; hazard ratio < 0.50).
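The trial-level criterion described here — regressing treatment effects on OS against treatment effects on a PSA end point across patient groups and examining the determination coefficient — can be sketched as follows. The effect values are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical per-unit treatment effects (e.g., log hazard ratios) on a PSA
# end point and on OS; not data from the study.
effect_psa = np.array([-0.42, -0.31, -0.05, -0.22, -0.15, -0.38, -0.10])
effect_os  = np.array([-0.20, -0.05, -0.02, -0.25, -0.01, -0.12, -0.11])

# Trial-level surrogacy: fit OS effects as a linear function of surrogate
# effects and compute the determination coefficient R^2 of that regression.
slope, intercept = np.polyfit(effect_psa, effect_os, 1)
predicted = slope * effect_psa + intercept
r2 = 1.0 - (np.sum((effect_os - predicted) ** 2)
            / np.sum((effect_os - effect_os.mean()) ** 2))
# An R^2 well below ~0.7, as reported in the study, means OS effects cannot
# be predicted precisely from surrogate effects: surrogacy is not established.
```

This is distinct from the individual-level PSA-OS association, which the study confirms but which, as the abstract stresses, does not by itself validate a surrogate.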
NASA Astrophysics Data System (ADS)
Tung, Hsin; Chen, Horng-Yue; Hu, Jyr-Ching; Ching, Kuo-En; Chen, Hongey; Yang, Kuo-Hsin
2016-12-01
We present precise deformation velocity maps of the northern Taiwan area (Taipei) for the two-year period from September 2011 to July 2013, obtained with the persistent scatterer interferometry (PSI) technique applied to 18 high-resolution X-band synthetic aperture radar (SAR) images archived from the COSMO-SkyMed (CSK) constellation. The highest subsidence rates are found in the Luzou and Wuku areas, at about 15 mm/yr and 10 mm/yr respectively over the whole dataset. However, a dramatic change from severe subsidence to uplift was revealed in the Taipei Basin across two time spans: 2011/09-2012/09 and 2012/09-2013/07. This result shows good agreement with robust continuous GPS measurements and precise leveling survey data across the central Taipei Basin. Moreover, it also correlates strongly with the groundwater table. From 8 well records in the Taipei Basin, the storativity is roughly constant across most of the aquifer, with values between 0.5 × 10^-4 and 1.6 × 10^-3 in the Jingmei Formation and 0.8 × 10^-4 and 1.4 × 10^-3 in the Wuku Formation. This strong correlation indicates that a one-meter change in groundwater level could induce about 9 and 16 mm of surface deformation change in the Luzou and Wuku areas respectively, which is about eight times faster than the long-term tectonic deformation rate in this area. Thus, to assess the activity of the Shanchiao Fault, it is important to discriminate tectonic movement from anthropogenic or seasonal effects in the Taipei Basin, to better understand geohazards and their mitigation in the Taipei metropolitan area.
Can the prognosis of individual patients with glioblastoma be predicted using an online calculator?
Parks, Christopher; Heald, James; Hall, Gregory; Kamaly-Asl, Ian
2013-01-01
Background In an exploratory subanalysis of the European Organisation for Research and Treatment of Cancer and National Cancer Institute of Canada (EORTC/NCIC) trial data, Gorlia et al. identified a variety of factors that were predictive of overall survival, including therapy administered, age, extent of surgery, mini-mental score, administration of corticosteroids, World Health Organization (WHO) performance status, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Gorlia et al. developed 3 nomograms, each intended to predict the survival times of patients with newly diagnosed glioblastoma on the basis of individual-specific combinations of prognostic factors. These are available online as a “GBM Calculator” and are intended for use in patient counseling. This study is an external validation of this calculator. Methods One hundred eighty-seven patients from 2 UK neurosurgical units who had histologically confirmed glioblastoma (WHO grade IV) had their information at diagnosis entered into the GBM calculator. A record was made of the actual and predicted median survival time for each patient. Statistical analysis was performed to assess the accuracy, precision, correlation, and discrimination of the calculator. Results The calculator gives both inaccurate and imprecise predictions. Only 23% of predictions were within 25% of the actual survival, and the percentage bias is 140% in our series. The coefficient of variation is 76%, where a smaller percentage would indicate greater precision. There is only a weak positive correlation between the predicted and actual survival among patients (R² of 0.07). Discrimination is inadequate as measured by a C-index of 0.62. Conclusions The authors would not recommend the use of this tool in patient counseling. If departments were considering its use, we would advise that a similar validating exercise be undertaken. PMID:23543729
NASA Astrophysics Data System (ADS)
Majorowicz, Jacek A.; Safanda, Jan; Harris, Robert N.; Skinner, Walter R.
1999-05-01
New temperature logs in wells located in the grassland ecozone of the southern Canadian Prairies in Saskatchewan, where surface disturbance is considered minor, show a large curvature in the upper 100 m. The character of this curvature is consistent with ground surface temperature (GST) warming in the 20th century. Repetition of precise temperature logs in southern Saskatchewan (in 1986 and 1997) shows the conductive nature of the warming of the subsurface sediments. The magnitude of surface temperature change over that time (11 years) is high (0.3-0.4°C). To assess the conductive nature of temperature variations at the grassland surface interface, several precise air and soil temperature time series in the southern Canadian Prairies (1965-1995) were analyzed. The combined anomalies correlated at 0.85. Application of the functional space inversion (FSI) technique with the borehole temperature logs and site-specific lithology indicates a warming to date of approximately 2.5°C since a minimum in the late 18th century to mid 19th century. This warming represents an approximate increase from 4°C around 1850 to 6.5°C today. The significance of this record is that it suggests almost half of the warming occurred prior to 1900, before the dramatic build-up of atmospheric greenhouse gases. This result correlates well with the proxy record of climatic change further to the north, beyond the Arctic Circle [Overpeck, J., Hughen, K., Hardy, D., Bradley, R., Case, R., Douglas, M., Finney, B., Gajewski, K., Jacoby, G., Jennings, A., Lamourex, S., Lasca, A., MacDonald, G., Moore, J., Retelle, M., Smith, S., Wolfe, A., Zielinski, G., 1997. Arctic environmental change of the last four centuries, Science 278, 1251-1256.].
NASA Astrophysics Data System (ADS)
Özcan, Ercan; Less, György; Báldi-Beke, Mária; Kollányi, Katalin; Acar, Ferhat
2009-05-01
The marine Oligo-Miocene units of the western Taurides, deposited under different tectonic regimes (on the Bey Dağları platform in the foreland, and coeval sequences in the hinterland), were studied to establish a high-resolution biostratigraphic framework. Biometric study of the full spectrum of larger foraminifera at a regional scale allowed us to correlate them with the shallow benthic zonation (SBZ) system introduced by [Cahuzac, B., Poignant, A., 1997. Essai de biozonation de l'Oligo-Miocène dans les bassins européens à l'aide des grands foraminifères néritiques. Bulletin de la Société géologique de France 168, 155-169], and to determine the ages of these sites with zonal precision for the first time. In correlating these assemblages to standard shallow benthic zones, planktonic data were also used whenever possible. Taxa classified under the genera Nummulites, Miogypsina, Miolepidocyclina, Nephrolepidina, Eulepidina, Heterostegina, Operculina and Cycloclypeus (?), and their assemblages, closely resemble the fauna described from European basins. These groups characterize zones SBZ 22B to 25, referring to a time interval from the early Chattian to the Burdigalian. However, a major gap in the late Chattian (SBZ 23) and in the early part of the Aquitanian (SBZ 24) is also recorded in the platform succession. Meanwhile, rare Eulepidina in the Burdigalian levels suggest a clear Indo-Pacific influence. Based on the discovery of early Chattian (SBZ 22B) deposits (previously mapped as Eocene/Miocene units), the Oligo-Miocene stratigraphy of the Bey Dağları platform is also revised. A more precise chronology for the regional Miocene transgression is presented based on the miogypsinid evolutionary scale.
Use of precision time and time interval (PTTI)
NASA Technical Reports Server (NTRS)
Taylor, J. D.
1974-01-01
A review of range time synchronization methods is presented, as these are an important aspect of range operations. The overall capabilities of various missile ranges to determine the precise time of day by synchronizing to available references, and to apply this time point to instrumentation for time interval measurements, are described.
NASA Astrophysics Data System (ADS)
Suarez, S. E.; Brookfield, M. E.; Catlos, E. J.; Stockli, D. F.; Batchelor, R. A.
2016-12-01
The end of the Ordovician marks one of the greatest of Earth's mass extinctions. One hypothesis explains this mass extinction as the result of a short-lived, major glaciation preceded by episodes of increased volcanism brought on by the Taconic orogeny. K-bentonites, weathered volcanic ash, provide evidence for increased volcanism. However, there is a lack of modern precise U-Pb dating of these ashes and some confusion in the biostratigraphy. The aim of this study is to obtain more precise U-Pb zircon ages from biostratigraphically constrained bentonites, which will lead to better correlation of the Upper Ordovician and Lower Silurian relative time scales, as well as to timing the pulses of eruption. Zircon grains were extracted from the samples by heavy mineral separation and U-Pb dated using the Laser Ablation-Inductively Coupled Plasma-Mass Spectrometer at the University of Texas-Austin. We report here 3 precise U-Pb zircon ages from the Trenton Group, Ontario, Canada, and Dob's Linn, Scotland. The youngest age from the top of the Kirkfield Formation in Ontario is 448.0 +/- 18 Ma, which fits with existing late Ordovician stratigraphic ages. At Dob's Linn, Scotland, the site of the Ordovician/Silurian Global Boundary Stratigraphic Section and Point (GSSP), the youngest age for DL7, a bentonite 5 meters below the GSSP, is 402.0 +/- 12.0 Ma, and for DL24L, a bentonite 8 meters above the GSSP, is 358.2 +/- 7.9 Ma. These are Devonian ages in current timescales - the current age for the GSSP is 443.8 +/- 1.8 Ma, based on a U-Pb date from a bentonite 1.6 meters above the GSSP at Dob's Linn. We are confident that our techniques rule out contamination, and the most likely explanation is that the small zircons we analyzed either suffered Pb loss or grew overgrowths during low-grade hydrothermal metamorphism of the sediments during the intrusion of the Southern Upland Devonian granites in the Caledonian orogeny.
These Devonian ages suggest that the 443.8 +/- 1.8 Ma age may also be suspect. The Dob's Linn site is therefore unsuitable for calibrating the biostratigraphic horizons. Work in progress will provide more U-Pb dating of bentonites from around the Ordovician-Silurian boundary in Canada, the United States, Britain, and Scandinavia, with the aim of calibrating the local series and stages in order to help with international correlations.
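The age arithmetic behind such U-Pb dates follows directly from the 238U decay law, t = ln(1 + 206Pb/238U)/λ. A minimal sketch; the decay constant is the standard 238U value and the ratios are illustrative, not measurements from this study:

```python
import math

# Standard decay constant for 238U (per year); assumed here for illustration.
LAMBDA_238U = 1.55125e-10

def u_pb_age(pb206_u238):
    """Age in Ma from a radiogenic 206Pb/238U ratio: t = ln(1 + R) / lambda."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U / 1e6

def ratio_for_age(age_ma):
    """Inverse: the 206Pb/238U ratio a closed system accumulates over age_ma Myr."""
    return math.exp(LAMBDA_238U * age_ma * 1e6) - 1.0

# A grain that crystallized at ~443.8 Ma (the quoted GSSP age) round-trips
# through both functions; Pb loss would lower the ratio and hence the age.
r = ratio_for_age(443.8)
age = u_pb_age(r)
```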
Precise measurement of scleral radius using anterior eye profilometry.
Jesus, Danilo A; Kedzia, Renata; Iskander, D Robert
2017-02-01
To develop a new and precise methodology to measure the scleral radius based on the anterior eye surface. The Eye Surface Profiler (ESP, Eaglet-Eye, Netherlands) was used to acquire the anterior eye surface of 23 emmetropic subjects aged 28.1 ± 6.6 years (mean ± standard deviation), ranging from 20 to 45 years. Scleral radius was obtained by approximating the topographical scleral data to a sphere using least squares fitting and considering the axial length as a reference point. To better understand the role of scleral radius in ocular biometry, measurements of corneal radius, central corneal thickness, anterior chamber depth and white-to-white corneal diameter were acquired with the IOLMaster 700 (Carl Zeiss Meditec AG, Jena, Germany). The estimated scleral radius (11.2 ± 0.3 mm) was shown to be highly precise, with a coefficient of variation of 0.4%. A statistically significant correlation between axial length and scleral radius (R² = 0.957, p < 0.001) was observed. Moreover, corneal radius (R² = 0.420, p < 0.001), anterior chamber depth (R² = 0.141, p = 0.039) and white-to-white corneal diameter (R² = 0.146, p = 0.036) also showed statistically significant correlations with the scleral radius. Lastly, no correlation was observed between scleral radius and central corneal thickness (R² = 0.047, p = 0.161). Three-dimensional topography of the anterior eye acquired with the Eye Surface Profiler, together with a given estimate of the axial length, can be used to calculate the scleral radius with high precision. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
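The sphere approximation described above can be sketched as an algebraic least-squares fit. The code below is an illustrative reconstruction on synthetic hemisphere data (not the ESP topographies), assuming a typical 11.2 mm scleral radius:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (N, 3) array of surface points.
    Solves x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d for the center (a, b, c);
    the radius is sqrt(d + a^2 + b^2 + c^2)."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * P, np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    return center, np.sqrt(d + center @ center)

# Noisy samples over a hemisphere of radius 11.2 mm centered at the origin.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 2, 500)
phi = rng.uniform(0, 2 * np.pi, 500)
pts = 11.2 * np.column_stack([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])
pts += rng.normal(scale=0.01, size=pts.shape)   # ~10 µm measurement noise
center, radius = fit_sphere(pts)
```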
Dunet, Vincent; Klein, Ran; Allenbach, Gilles; Renaud, Jennifer; deKemp, Robert A; Prior, John O
2016-06-01
Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model. Comparisons used Pearson's correlation ρ (measuring precision), Bland-Altman limits of agreement and Lin's concordance correlation ρc = ρ·Cb (Cb measuring systematic bias). Lin's concordance and Pearson's correlation values were very similar, suggesting no systematic bias between software packages, with excellent precision for MBF (ρ = 0.97, ρc = 0.96, Cb = 0.99) and good precision for MFR (ρ = 0.83, ρc = 0.76, Cb = 0.92). On a per-segment basis, no mean bias was observed on Bland-Altman plots, although PMOD provided slightly higher values than FlowQuant at higher MBF and MFR values (P < .0001). Concordance between software packages was excellent for MBF and MFR, despite higher values by PMOD at higher MBF values. Both software packages can be used interchangeably for quantification in daily practice of Rb-82 cardiac PET.
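Lin's concordance correlation and its bias factor Cb follow directly from their definitions. A small illustrative sketch with hypothetical paired flow readings (not the study data): y tracks x closely but with a slight offset, so precision ρ stays high while Cb reveals the systematic bias:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation: rho_c = 2*cov(x, y) /
    (var_x + var_y + (mean_x - mean_y)^2). Returns (rho, rho_c, C_b),
    where rho is Pearson's correlation and C_b = rho_c / rho."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population (1/n) variances
    cov = ((x - mx) * (y - my)).mean()
    rho = cov / np.sqrt(vx * vy)
    rho_c = 2.0 * cov / (vx + vy + (mx - my) ** 2)
    return rho, rho_c, rho_c / rho

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 3.0, 200)                   # hypothetical MBF values
y = x + 0.1 + rng.normal(scale=0.05, size=200)   # small constant offset + noise
rho, rho_c, c_b = lin_ccc(x, y)
```

Because the offset enters only the denominator of ρc, ρc < ρ here even though the two readings are almost perfectly correlated.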
NASA Astrophysics Data System (ADS)
Couturier, C.; Riffard, Q.; Sauzet, N.; Guillaudin, O.; Naraghi, F.; Santos, D.
2017-11-01
Low-pressure gaseous TPCs are well-suited detectors to correlate the directions of nuclear recoils to the galactic Dark Matter (DM) halo. Indeed, in addition to providing a measure of the energy deposition due to the elastic scattering of a DM particle on a nucleus in the target gas, they allow for the reconstruction of the track of the recoiling nucleus. In order to exclude the background events originating from radioactive decays on the surfaces of the detector materials within the drift volume, efforts are ongoing to precisely localize the nuclear recoil track in the drift volume along the axis perpendicular to the cathode plane. We report here the implementation of a measurement of the signal induced on the cathode by the motion of the primary electrons toward the anode in a MIMAC chamber. As a validation, we performed an independent measurement of the drift velocity of the electrons in the considered gas mixture, correlating in time the cathode signal with the measured arrival times of the electrons on the anode.
Measurement of the electron shake-off in the β-decay of laser-trapped 6He atoms
NASA Astrophysics Data System (ADS)
Hong, Ran; Bagdasarova, Yelena; Garcia, Alejandro; Storm, Derek; Sternberg, Matthew; Swanson, Erik; Wauters, Frederik; Zumwalt, David; Bailey, Kevin; Leredde, Arnaud; Mueller, Peter; O'Connor, Thomas; Flechard, Xavier; Liennard, Etienne; Knecht, Andreas; Naviliat-Cuncic, Oscar
2016-03-01
Electron shake-off is an important process in many high-precision nuclear β-decay measurements searching for physics beyond the standard model. 6He, being one of the lightest β-decaying isotopes, has a simple atomic structure; thus, it is well suited for testing calculations of shake-off effects. Shake-off probabilities from the 2³S₁ and 2³P₂ initial states of laser-trapped 6He are relevant to the ongoing beta-neutrino correlation study at the University of Washington. These probabilities are obtained by analyzing the time-of-flight distribution of the recoil ions detected in coincidence with the beta particles. A β-neutrino-correlation-independent analysis approach was developed. The measured upper limit of the double shake-off probability is 2 × 10⁻⁴ at 90% confidence level. This result is ~100 times lower than the most recent calculation by Schulhoff and Drake. This work is supported by DOE, Office of Nuclear Physics, under Contract Nos. DE-AC02-06CH11357 and DE-FG02-97ER41020.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Młyńczak, J.; Sawicz-Kryniger, K.; Fry, A. R.
2014-01-01
The Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory (SLAC) is the world's first hard X-ray free-electron laser (XFEL) and is capable of producing high-energy, femtosecond-duration X-ray pulses. To study fast-timescale physical phenomena, various "pump/probe" techniques are used. In these techniques there are two lasers, one optical and one X-ray, that work as a pump and as a probe to study dynamic processes in atoms and molecules. In order to resolve phenomena that occur on femtosecond timescales, it is imperative to have very precise timing between the optical lasers and X-rays (on the order of ~20 fs or better). The lasers are synchronized to the same RF source that drives the accelerator and produces the X-ray laser. However, elements in the lasers cause some drift and time jitter, thereby de-synchronizing the system. This paper considers a cross-correlation technique as a way to quantify the drift and jitter caused by the regenerative amplifier of the ultrafast optical laser.
Ultrasonic wave velocity measurement in small polymeric and cortical bone specimens
NASA Technical Reports Server (NTRS)
Kohles, S. S.; Bowers, J. R.; Vailas, A. C.; Vanderby, R. Jr
1997-01-01
A system was refined for the determination of the bulk ultrasonic wave propagation velocity in small cortical bone specimens. Longitudinal and shear wave propagations were measured using ceramic, piezoelectric 20 and 5 MHz transducers, respectively. Results of the pulse transmission technique were refined via the measurement of the system delay time. The precision and accuracy of the system were quantified using small specimens of polyoxymethylene, polystyrene-butadiene, and high-density polyethylene. These polymeric materials had known acoustic properties, propagation velocities similar to cortical bone, and minimal sample inhomogeneity. The dependence of propagation times upon longitudinal and transverse specimen dimensions was quantified. To confirm the consistency of longitudinal wave propagation in small cortical bone specimens (< 1.0 mm), cut-down specimens were prepared from a normal rat femur. Finally, cortical samples were prepared from each of ten normal rat femora, and Young's moduli (Eii), shear moduli (Gij), and Poisson's ratios (νij) were measured. For all specimens (bone, polyoxymethylene, polystyrene-butadiene, and high-density polyethylene), strong linear correlations (R² > 0.997) were maintained between propagation time and distance throughout the size ranges down to less than 0.4 mm. Results for polyoxymethylene, polystyrene-butadiene, and high-density polyethylene were accurate to within 5 percent of reported literature values. Measurement repeatability (precision) improved with an increase in the wave transmission distance (propagating dimension). No statistically significant effect due to the transverse dimension was detected.
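The pulse-transmission refinement above amounts to a linear fit of transit time against propagating distance: the slope gives the inverse velocity and the intercept gives the system delay. A toy sketch with made-up numbers (a 2.0 mm/µs material and a 0.30 µs delay, both illustrative):

```python
import numpy as np

# Hypothetical specimen thicknesses (mm) and measured transit times (µs),
# generated noise-free from velocity = 2.0 mm/µs and delay = 0.30 µs.
d = np.array([0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0])
t = 0.30 + d / 2.0

# Linear fit: t = (1/v) * d + delay
slope, intercept = np.polyfit(d, t, 1)
velocity = 1.0 / slope      # mm/µs
system_delay = intercept    # µs
```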
Kim, Hyo Seon; Chun, Jin Mi; Kwon, Bo-In; Lee, A-Reum; Kim, Ho Kyoung; Lee, A Yeong
2016-10-01
Ultra-performance convergence chromatography, which integrates the advantages of supercritical fluid chromatography and ultra high performance liquid chromatography technologies, is an environmentally friendly analytical method that uses dramatically reduced amounts of organic solvents. An ultra-performance convergence chromatography method was developed and validated for the quantification of decursinol angelate and decursin in Angelica gigas using a CSH Fluoro-Phenyl column (2.1 mm × 150 mm, 1.7 μm) with a run time of 4 min. The method had an improved resolution and a shorter analysis time in comparison to the conventional high-performance liquid chromatography method. This method was validated in terms of linearity, precision, and accuracy. The limits of detection were 0.005 and 0.004 μg/mL for decursinol angelate and decursin, respectively, while the limits of quantitation were 0.014 and 0.012 μg/mL, respectively. The two components showed good regression (correlation coefficient r² > 0.999), excellent precision (RSD < 2.28%), and acceptable recoveries (99.75-102.62%). The proposed method can be used to efficiently separate, characterize, and quantify decursinol angelate and decursin in Angelica gigas and its related medicinal materials or preparations, with the advantages of a shorter analysis time, greater sensitivity, and better environmental compatibility. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
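The phase correlation localization step can be illustrated with the standard FFT-based algorithm. The sketch below recovers a known integer translation between two synthetic images; it omits the paper's artifact simulation and sub-pixel refinement:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer translation of image a relative to b from the peak of the
    inverse FFT of the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                 # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# A toy "artifact" image, circularly shifted by a known (5, -3) pixels.
rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
dy, dx = phase_correlation_shift(shifted, img)
```

In the paper's setting, the template would be a simulated MR artifact image rather than a shifted copy, and the peak would be interpolated to sub-pixel precision.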
A phase match based frequency estimation method for sinusoidal signals
NASA Astrophysics Data System (ADS)
Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao
2015-04-01
Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. Analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that it has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the precision of LFMCW radars.
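The linear prediction property of a sinusoid mentioned above, x[n+1] + x[n-1] = 2 cos(ω) x[n], already yields a simple least-squares frequency estimator. A generic sketch of that property (not the paper's phase-match method):

```python
import numpy as np

def lp_freq_estimate(x, fs):
    """Frequency of a single real sinusoid via its linear-prediction
    property x[n+1] + x[n-1] = 2*cos(w)*x[n], solved in least squares:
    cos(w) = sum(x[n]*(x[n+1]+x[n-1])) / (2*sum(x[n]^2))."""
    mid = x[1:-1]
    s = x[2:] + x[:-2]
    cos_w = (mid @ s) / (2.0 * (mid @ mid))
    return np.arccos(np.clip(cos_w, -1.0, 1.0)) * fs / (2.0 * np.pi)

fs = 1000.0                                     # Hz, assumed sampling rate
n = np.arange(512)
x = np.cos(2 * np.pi * 123.4 * n / fs + 0.7)    # noise-free test tone
f_hat = lp_freq_estimate(x, fs)
```

On a noise-free tone the recurrence holds exactly, so the estimate is exact; with noise, the autocorrelation- and cross-correlation-based refinements the paper discusses become necessary.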
A real-time surface inspection system for precision steel balls based on machine vision
NASA Astrophysics Data System (ADS)
Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen
2016-07-01
Precision steel balls are one of the most fundamental components for motion and power transmission parts and are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested under feeding speeds of 4 pcs s⁻¹ with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.
NASA Astrophysics Data System (ADS)
Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.
1998-08-01
One of the factors limiting the precision of inductively coupled plasma mass spectrometry is counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of isotopic measurements of Pb and Sr is examined. The time of measurement is optimally shared among the isotopes, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms of mass bias correction are also taken into account and evaluated in terms of improvement in overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, this benefit is more important for equipment that originally yields a precision close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotopic dilution.
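The optimal sharing of measurement time can be illustrated in the simplest case: minimizing the summed relative Poisson variance Σ 1/(rate_i·t_i) under a fixed total time gives dwell times proportional to 1/√rate_i. A sketch with hypothetical ion fluxes (the study's actual simulation and mass-bias treatment are more involved):

```python
import numpy as np

def optimal_dwell_times(rates, total_time):
    """Split a fixed acquisition time between isotopes so that the summed
    relative Poisson variance sum_i 1/(rate_i * t_i) is minimized.
    The Lagrange condition gives t_i proportional to 1/sqrt(rate_i)."""
    w = 1.0 / np.sqrt(np.asarray(rates, dtype=float))
    return total_time * w / w.sum()

def rel_variance(rates, times):
    """Summed relative variance from counting statistics."""
    return float(np.sum(1.0 / (np.asarray(rates, float) * np.asarray(times, float))))

# Hypothetical ion fluxes (counts/s) for three Pb isotopes, 60 s total.
rates = [5e4, 4e4, 1e5]
t_opt = optimal_dwell_times(rates, 60.0)
t_equal = np.full(3, 20.0)
```

The weakest beam gets the most time, and the optimal split always beats an equal split unless all fluxes are identical.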
NASA Astrophysics Data System (ADS)
Kim, Jungrack; Kim, Younghwi; Park, Minseong
2016-10-01
At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, precisely measuring the spatial translation of Martian dunes has succeeded only a very few times, for example in the Nili Patera study (Bridges et al. 2012) using change-detection algorithms and orbital imagery. Therefore, in this study, we developed a generic procedure to precisely measure the migration of dune fields with recently introduced 25-cm resolution orbital imagery, specifically using a high-accuracy photogrammetric processor. The processor was designed to trace estimated dune migration, albeit slight, over the Martian surface by 1) the introduction of very high resolution ortho images and stereo analysis based on hierarchical geodetic control for better initial point settings; 2) positioning error removal throughout the sensor model refinement with a non-rigorous bundle block adjustment, which makes possible the co-alignment of all images in a time series; and 3) improved sub-pixel co-registration algorithms using optical flow with a refinement stage conducted on a pyramidal grid processor and a blunder classifier. Moreover, volumetric changes of Martian dunes were additionally traced by means of stereo analysis and photoclinometry. The established algorithms have been tested using high-resolution HiRISE time-series images over several Martian dune fields. Dune migrations were iteratively processed both spatially and volumetrically, and the results were integrated to be compared to the Martian climate model. Migrations over well-known crater dune fields appeared to be almost static over considerable temporal periods and were only weakly correlated with wind directions estimated by the Mars Climate Database (Millour et al. 2015). As a result, a number of measurements over dune fields in the Mars Global Dune Database (Hayward et al. 2014), covering polar areas and mid-latitudes, will be demonstrated.
Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement Nr. 607379.
Cawello, Willi; Schäfer, Carina
2014-08-01
Frequent plasma sampling to monitor the pharmacokinetic (PK) profile of antiepileptic drugs (AEDs) is invasive, costly and time consuming. For drugs with a well-defined PK profile, such as the AED lacosamide, equations can accurately approximate PK parameters from one steady-state plasma sample. Equations were derived to approximate steady-state peak and trough lacosamide plasma concentrations (Cpeak,ss and Ctrough,ss, respectively) and the area under the concentration-time curve during the dosing interval (AUCτ,ss) from one plasma sample. Lacosamide (ka: ~2 h⁻¹; ke: ~0.05 h⁻¹, corresponding to a half-life of 13 h) was calculated to reach Cpeak,ss after ~1 h (tmax,ss). Equations were validated by comparing approximations to reference PK parameters obtained from single plasma samples drawn 3-12 h following lacosamide administration, using data from a double-blind, placebo-controlled, parallel-group PK study. Values of relative bias (accuracy) between -15% and +15%, and root mean square error (RMSE) values ≤ 15% (precision), were considered acceptable for validation. Thirty-five healthy subjects (12 young males; 11 elderly males, 12 elderly females) received lacosamide 100 mg/day for 4.5 days. Equation-derived PK values were compared to reference mean Cpeak,ss, Ctrough,ss and AUCτ,ss values. Equation-derived PK data had a precision of 6.2% and accuracies of -8.0%, 2.9%, and -0.11%, respectively. Equation-derived versus reference PK values for individual samples obtained 3-12 h after lacosamide administration showed a correlation (R²) range of 0.88-0.97 for AUCτ,ss. The correlation range for Cpeak,ss and Ctrough,ss was 0.65-0.87. Error analyses for individual sample comparisons were independent of time. The derived equations approximated lacosamide Cpeak,ss, Ctrough,ss and AUCτ,ss from one steady-state plasma sample within the validation range. Approximated PK parameters were within accepted validation criteria when compared to reference PK values. Copyright © 2014 Elsevier B.V. All rights reserved.
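A steady-state one-compartment oral model with the quoted ka and ke reproduces the qualitative behaviour behind such equations. The dose, volume of distribution, dosing interval and F = 1 below are illustrative assumptions, not study values:

```python
import numpy as np

# One-compartment oral model at steady state (superposition over doses).
# ka and ke are the abstract's values; everything else is hypothetical.
ka, ke, tau = 2.0, 0.05, 12.0      # h^-1, h^-1, h (assume 50 mg b.i.d.)
dose, V = 50.0, 50.0               # mg, L (illustrative)

def c_ss(t):
    """Steady-state concentration over one dosing interval, 0 <= t <= tau."""
    pref = dose * ka / (V * (ka - ke))
    return pref * (np.exp(-ke * t) / (1.0 - np.exp(-ke * tau))
                   - np.exp(-ka * t) / (1.0 - np.exp(-ka * tau)))

t = np.linspace(0.0, tau, 10001)
c = c_ss(t)
c_peak, t_max = float(c.max()), float(t[np.argmax(c)])
c_trough = float(c_ss(tau))
auc_tau = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))  # trapezoid
auc_theory = dose / (V * ke)       # AUC over tau at steady state = F*D/CL
```

With these parameters the model yields a peak shortly after dosing and an AUCτ,ss equal to dose/clearance, which is what makes a single well-timed sample informative.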
Dwyer, Michael G; Bergsland, Niels; Zivadinov, Robert
2014-04-15
SIENA and similar techniques have demonstrated the utility of performing "direct" measurements as opposed to post-hoc comparison of cross-sectional data for the measurement of whole brain (WB) atrophy over time. However, gray matter (GM) and white matter (WM) atrophy are now widely recognized as important components of neurological disease progression, and are being actively evaluated as secondary endpoints in clinical trials. Direct measures of GM/WM change with advantages similar to SIENA have been lacking. We created a robust and easily-implemented method for direct longitudinal analysis of GM/WM atrophy, SIENAX multi-time-point (SIENAX-MTP). We built on the basic halfway-registration and mask composition components of SIENA to improve the raw output of FMRIB's FAST tissue segmentation tool. In addition, we created LFAST, a modified version of FAST incorporating a 4th dimension in its hidden Markov random field model in order to directly represent time. The method was validated by scan-rescan, simulation, comparison with SIENA, and two clinical effect size comparisons. All validation approaches demonstrated improved longitudinal precision with the proposed SIENAX-MTP method compared to SIENAX. For GM, simulation showed better correlation with experimental volume changes (r=0.992 vs. 0.941), scan-rescan showed lower standard deviations (3.8% vs. 8.4%), correlation with SIENA was more robust (r=0.70 vs. 0.53), and effect sizes were improved by up to 68%. Statistical power estimates indicated a potential drop of 55% in the number of subjects required to detect the same treatment effect with SIENAX-MTP vs. SIENAX. The proposed direct GM/WM method significantly improves on the standard SIENAX technique by trading a small amount of bias for a large reduction in variance, and may provide more precise data and additional statistical power in longitudinal studies. Copyright © 2013 Elsevier Inc. All rights reserved.
Modeling long-term human activeness using recurrent neural networks for biometric data.
Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin
2017-05-18
With the invention of fitness trackers, it has been possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered by seven users independently. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance of predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation, and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that although the RNN models produced the best results on modeling the user's activeness, the difference was marginal, and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible.
Such information can be utilized by a health-related application to proactively recommend suitable events or services to the user.
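The short-term autocorrelation noted above can be checked with a simple lag correlation. A sketch on synthetic "activeness" series (the real dataset is not reproduced here): a smooth daily rhythm plus noise shows strong lag-1 autocorrelation, while white noise shows almost none:

```python
import numpy as np

def lag_autocorr(x, lag=1):
    """Pearson correlation between a series and its lagged copy."""
    x = np.asarray(x, dtype=float)
    return float(np.corrcoef(x[:-lag], x[lag:])[0, 1])

rng = np.random.default_rng(3)
minutes = np.arange(24 * 60)
# Toy step counts: sinusoidal daily rhythm plus per-minute noise.
steps = 50 + 40 * np.sin(2 * np.pi * minutes / (24 * 60)) \
        + rng.normal(0, 5, minutes.size)
noise = rng.normal(0, 5, minutes.size)

r_steps = lag_autocorr(steps, lag=1)
r_noise = lag_autocorr(noise, lag=1)
```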
Proceedings of the 8th Precise Time and Time Interval (PTTI) Applications and Planning Meeting
NASA Technical Reports Server (NTRS)
1977-01-01
The Proceedings contain the papers presented at the Eighth Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting. The edited record of the discussions following the papers and the panel discussions are also included. This meeting provided a forum for the exchange of information on precise time and frequency technology among members of the scientific community and persons with program applications. The 282 registered attendees came from various U.S. Government agencies, private industry, and universities, and a number of foreign countries were represented. In this meeting, papers were presented that emphasized: (1) definitions and international regulations of precise time sources and users, (2) the scientific foundations of hydrogen maser standards, current developments in this field and application experience, and (3) how to measure the stability performance properties of precise standards. As in previous meetings, updated and new papers were presented on system applications with past, present and future requirements identified.
Analysis of spectra using correlation functions
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Norton, Robert H.
1988-01-01
A novel method is presented for the quantitative analysis of spectra based on the properties of the cross correlation between a real spectrum and either a numerical synthesis or laboratory simulation. A new goodness-of-fit criterion called the heteromorphic coefficient H is proposed that has the property of being zero when a fit is achieved and varying smoothly through zero as the iteration proceeds, providing a powerful tool for automatic or near-automatic analysis. It is also shown that H can be rendered substantially noise-immune, permitting the analysis of very weak spectra well below the apparent noise level and, as a byproduct, providing Doppler shift and radial velocity information with excellent precision. The technique is in regular use in the Atmospheric Trace Molecule Spectroscopy (ATMOS) project and operates in an interactive, realtime computing environment with turn-around times of a few seconds or less.
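The Doppler-shift byproduct of the cross-correlation approach can be sketched with a synthetic spectrum: the peak of the circular cross-correlation locates a known displacement between a reference and an observed spectrum. (The heteromorphic coefficient H itself is specific to the paper and is not reproduced here.)

```python
import numpy as np

def xcorr_shift(ref, obs):
    """Integer bin shift of obs relative to ref from the peak of their
    mean-removed circular cross-correlation (computed via FFT)."""
    a = ref - ref.mean()
    b = obs - obs.mean()
    corr = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    k = int(np.argmax(corr))
    return k - len(ref) if k > len(ref) // 2 else k

# Synthetic continuum with Gaussian absorption lines, then a copy displaced
# by 7 bins (standing in for a Doppler shift) plus noise.
rng = np.random.default_rng(4)
x = np.arange(2048)
ref = np.ones_like(x, dtype=float)
for c in (300, 700, 1500):
    ref -= 0.5 * np.exp(-0.5 * ((x - c) / 4.0) ** 2)
obs = np.roll(ref, 7) + rng.normal(0, 0.02, ref.size)
shift = xcorr_shift(ref, obs)
```

The cross-correlation averages over every line at once, which is why the shift survives noise that would swamp any single line, echoing the paper's point about noise immunity.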
Benítez, Alfredo; Santiago, Ulises; Sanchez, John E; Ponce, Arturo
2018-01-01
In this work, an innovative cathodoluminescence (CL) system is coupled to a scanning electron microscope and synchronized with a Raspberry Pi computer running an innovative signal-processing scheme. The post-processing is based on a Python algorithm that correlates the CL and secondary electron (SE) images with a precise dwell time correction. For CL imaging, the emission signal is collected through an optical fiber and transduced to an electrical signal via a photomultiplier tube (PMT). CL images are registered in a panchromatic mode and can be filtered using a monochromator connected between the optical fiber and the PMT to produce monochromatic CL images. The designed system has been employed to study ZnO samples prepared by electrical arc discharge and microwave methods. CL images are compared with SE images and chemical elemental mapping images to correlate the emission regions of the sample.
Correlation between spin structure oscillations and domain wall velocities
Bisig, André; Stärk, Martin; Mawass, Mohamad-Assaad; Moutafis, Christoforos; Rhensius, Jan; Heidler, Jakoba; Büttner, Felix; Noske, Matthias; Weigand, Markus; Eisebitt, Stefan; Tyliszczak, Tolek; Van Waeyenberge, Bartel; Stoll, Hermann; Schütz, Gisela; Kläui, Mathias
2013-01-01
Magnetic sensing and logic devices based on the motion of magnetic domain walls rely on the precise and deterministic control of the position and the velocity of individual magnetic domain walls in curved nanowires. Varying domain wall velocities have been predicted to result from intrinsic effects such as oscillating domain wall spin structure transformations and extrinsic pinning due to imperfections. Here we use direct dynamic imaging of the nanoscale spin structure that allows us for the first time to directly check these predictions. We find a new regime of oscillating domain wall motion even below the Walker breakdown correlated with periodic spin structure changes. We show that the extrinsic pinning from imperfections in the nanowire only affects slow domain walls and we identify the magnetostatic energy, which scales with the domain wall velocity, as the energy reservoir for the domain wall to overcome the local pinning potential landscape. PMID:23978905
Richter, Craig G; Thompson, William H; Bosman, Conrado A; Fries, Pascal
2015-07-01
The quantification of covariance between neuronal activities (functional connectivity) requires the observation of correlated changes and therefore multiple observations. The strength of such neuronal correlations may itself undergo moment-by-moment fluctuations, which might e.g. lead to fluctuations in single-trial metrics such as reaction time (RT), or may co-fluctuate with the correlation between activity in other brain areas. Yet, quantifying the relation between moment-by-moment co-fluctuations in neuronal correlations is precluded by the fact that neuronal correlations are not defined per single observation. The proposed solution quantifies this relation by first calculating neuronal correlations for all leave-one-out subsamples (i.e. the jackknife replications of all observations) and then correlating these values. Because the correlation is calculated between jackknife replications, we address this approach as jackknife correlation (JC). First, we demonstrate the equivalence of JC to conventional correlation for simulated paired data that are defined per observation and therefore allow the calculation of conventional correlation. While the JC recovers the conventional correlation precisely, alternative approaches, like sorting-and-binning, result in detrimental effects of the analysis parameters. We then explore the case of relating two spectral correlation metrics, like coherence, that require multiple observation epochs, where the only viable alternative analysis approaches are based on some form of epoch subdivision, which results in reduced spectral resolution and poor spectral estimators. We show that JC outperforms these approaches, particularly for short epoch lengths, without sacrificing any spectral resolution. Finally, we note that the JC can be applied to relate fluctuations in any smooth metric that is not defined on single observations. Copyright © 2015. Published by Elsevier Inc.
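The jackknife correlation procedure is straightforward to sketch. For a smooth per-subsample metric such as the leave-one-out mean, JC recovers the conventional correlation exactly, matching the paper's demonstration for paired data; real use cases would substitute coherence or another spectral metric computed per subsample:

```python
import numpy as np

def jackknife_correlation(x, y):
    """Correlate two metrics defined only across observations: compute each
    metric on every leave-one-out subsample (the jackknife replications),
    then take the Pearson correlation of the two replication series.
    Here the metric is the leave-one-out mean, for which JC is exact."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    x_jk = (x.sum() - x) / (n - 1)   # i-th entry: mean with observation i left out
    y_jk = (y.sum() - y) / (n - 1)
    return float(np.corrcoef(x_jk, y_jk)[0, 1])

rng = np.random.default_rng(5)
x = rng.normal(size=300)
y = 0.6 * x + rng.normal(scale=0.8, size=300)
jc = jackknife_correlation(x, y)
r = float(np.corrcoef(x, y)[0, 1])
```

Since each leave-one-out mean is a linear function of the left-out observation, the correlation of the replications equals the conventional correlation to machine precision.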
Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin
2011-01-01
The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities.
Precision medicine of aneurysmal subarachnoid hemorrhage, vasospasm and delayed cerebral ischemia.
Burrell, Christian; Avalon, Nicole E; Siegel, Jason; Pizzi, Michael; Dutta, Tumpa; Charlesworth, M Cristine; Freeman, William D
2016-11-01
Precision medicine provides individualized treatment of diseases by leveraging patient-to-patient variation. Aneurysmal subarachnoid hemorrhage carries tremendous morbidity and mortality, with cerebral vasospasm and delayed cerebral ischemia proving devastating and unpredictable. The lack of treatment measures for these conditions could be addressed through precision medicine. Areas covered: Discussed are the pathophysiology of CV and DCI, treatment guidelines, and evidence for precision medicine used for prediction and prevention of poor outcomes following aSAH. A PubMed search was performed using the keywords cerebral vasospasm or delayed cerebral ischemia and either biomarkers, precision medicine, metabolomics, proteomics, or genomics. Over 200 peer-reviewed articles were evaluated. The studies presented cover biomarkers identified as predictive markers or therapeutic targets following aSAH. Expert commentary: The biomarkers reviewed here correlate with CV, DCI, and neurologic outcomes after aSAH. Though their practical use in the clinical management of aSAH is not well established, using these biomarkers as predictive tools or therapeutic targets demonstrates the potential of precision medicine.
NASA Technical Reports Server (NTRS)
Hellwig, H.; Stein, S. R.; Walls, F. L.; Kahan, A.
1978-01-01
The relationship between system performance and clock or oscillator performance is discussed. Tradeoffs discussed include: short term stability versus bandwidth requirements; frequency accuracy versus signal acquisition time; flicker of frequency and drift versus resynchronization time; frequency precision versus communications traffic volume; spectral purity versus bit error rate; and frequency standard stability versus frequency selection and adjustability. The benefits and tradeoffs of using precise frequency and time signals at various levels of precision and accuracy are emphasized.
Nanosecond time transfer via shuttle laser ranging experiment
NASA Technical Reports Server (NTRS)
Reinhardt, V. S.; Premo, D. A.; Fitzmaurice, M. W.; Wardrip, S. C.; Cervenka, P. O.
1978-01-01
A method is described for using a proposed shuttle laser ranging experiment to transfer time with nanosecond precision. All that need be added to the original experiment are low-cost ground stations and an atomic clock on the shuttle. It is shown that global time transfer can be accomplished with 1 ns precision, and that transfer over distances of up to 2000 km can be accomplished with better than 100 ps precision.
Non-contact measurement of linear external dimensions of the mouse eye
Wisard, Jeffrey; Chrenek, Micah A.; Wright, Charles; Dalal, Nupur; Pardue, Machelle T.; Boatright, Jeffrey H.; Nickerson, John M.
2010-01-01
Biometric analyses of quantitative traits in eyes of mice can reveal abnormalities related to refractive or ocular development. Due to the small size of the mouse eye, highly accurate and precise measurements are needed to detect meaningful differences. We sought a non-contact measuring technique to obtain highly accurate and precise linear dimensions of the mouse eye. Laser micrometry was validated with gauge block standards. Simple procedures to measure eye dimensions on three axes were devised. Mouse eyes from C57BL/6J and rd10 on a C57BL/6J background were dissected and extraocular muscle and fat removed. External eye dimensions of axial length (anterior-posterior (A-P) axis) and equatorial diameter (superior-inferior (S-I) and nasal-temporal (N-T) axes) were obtained with a laser micrometer. Several approaches to prevent or ameliorate evaporation due to room air were employed. The resolution of the laser micrometer was less than 0.77 microns, and it provided accurate and precise non-contact measurements of eye dimensions on three axes. External dimensions of the eye strongly correlated with eye weight. The N-T and S-I dimensions of the eye correlated with each other most closely from among the 28 pair-wise combinations of the several parameters that were collected. The equatorial axis measurements correlated well from the right and left eye of each mouse. The A-P measurements did not correlate or correlated poorly in each pair of eyes. The instrument is well suited for the measurement of enucleated eyes and other structures from most commonly used species in experimental vision research and ophthalmology. PMID:20067806
Schneider, David M; Woolley, Sarah M N
2010-06-01
Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
Performance evaluation of Abbott CELL-DYN Ruby for routine use.
Lehto, T; Hedberg, P
2008-10-01
CELL-DYN Ruby is a new automated hematology analyzer suitable for routine use in small laboratories and as a back-up or emergency analyzer in medium- to high-volume laboratories. The analyzer was evaluated by comparing the results from the CELL-DYN Ruby with those obtained from the CELL-DYN Sapphire. Precision, linearity, and carryover between patient samples were also assessed. Precision was good at all levels for the routine complete blood count (CBC) parameters, CV% being
Sampling networks with prescribed degree correlations
NASA Astrophysics Data System (ADS)
Del Genio, Charo; Bassler, Kevin; Erdos, Péter; Miklos, István; Toroczkai, Zoltán
2014-03-01
A feature of a network known to affect its structural and dynamical properties is the presence of correlations amongst the node degrees. Degree correlations are a measure of how much the connectivity of a node influences the connectivity of its neighbours, and they are fundamental in the study of processes such as the spreading of information or epidemics, the cascading failures of damaged systems and the evolution of social relations. We introduce a method, based on novel mathematical results, that allows the exact sampling of networks where the number of connections between nodes of any given connectivity is specified. Our algorithm provides a weight associated with each sample, thereby allowing network observables to be measured according to any desired distribution, and it is guaranteed to always terminate successfully in polynomial time. Thus, our new approach provides a preferred tool for scientists to model complex systems of current relevance, and enables researchers to precisely study correlated networks with broad societal importance. CIDG acknowledges support by the European Commission's FP7 through grant No. 288021. KEB acknowledges support from the NSF through grant DMR-1206839. KEB, PE, IM and ZT acknowledge support from AFOSR and DARPA through grant FA-9550-12-1-0405.
Consistency relations for sharp inflationary non-Gaussian features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris
If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three-point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.
Multi-Station Broad Regional Event Detection Using Waveform Correlation
NASA Astrophysics Data System (ADS)
Slinkard, M.; Stephen, H.; Young, C. J.; Eckert, R.; Schaff, D. P.; Richards, P. G.
2013-12-01
Previous waveform correlation studies have established the occurrence of repeating seismic events in various regions, and the utility of waveform-correlation event-detection on broad regional or even global scales to find events currently not included in traditionally-prepared bulletins. The computational burden, however, is high, limiting previous experiments to relatively modest template libraries and/or processing time periods. We have developed a distributed computing waveform correlation event detection utility that allows us to process years of continuous waveform data with template libraries numbering in the thousands. We have used this system to process several years of waveform data from IRIS stations in East Asia, using libraries of template events taken from global and regional bulletins. Detections at a given station are confirmed by 1) comparison with independent bulletins of seismicity, and 2) consistent detections at other stations. We find that many of the detected events are not in traditional catalogs, hence the multi-station comparison is essential. In addition to detecting the similar events, we also estimate magnitudes very precisely based on comparison with the template events (when magnitudes are available). We have investigated magnitude variation within detected families of similar events, false alarm rates, and the temporal and spatial reach of templates.
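The matched-filtering step at the core of such detectors can be sketched as a sliding normalized cross-correlation of a template against continuous data (an illustrative single-station sketch with hypothetical names, not the authors' distributed multi-station system):

```python
import numpy as np

def correlation_detections(continuous, template, threshold=0.8):
    """Slide `template` along `continuous` data and return (offset, cc)
    pairs where the normalized cross-correlation exceeds `threshold`."""
    n = len(template)
    t = (template - template.mean()) / template.std()  # z-scored template
    hits = []
    for i in range(len(continuous) - n + 1):
        window = continuous[i:i + n]
        sd = window.std()
        if sd == 0.0:              # flat segment: correlation undefined
            continue
        cc = float(np.dot((window - window.mean()) / sd, t)) / n
        if cc >= threshold:
            hits.append((i, cc))
    return hits
```

In practice the per-station detections would then be cross-checked against other stations, as the abstract describes, to suppress false alarms.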
The atmosphere- and hydrosphere-correlated signals in GPS observations
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Boy, Jean-Paul; Klos, Anna; Figurski, Mariusz
2015-04-01
The circulation of surface geophysical fluids (e.g. atmosphere, ocean, continental hydrology, etc.) induces global mass redistribution at the Earth's surface, and hence surface deformations and gravity variations. These deformations can nowadays be reliably recorded by permanent GPS observations. The loading effects can be precisely modelled by convolving outputs from global general circulation models with Green's functions describing the Earth's response. Previously published papers showed that both surface gravity records and space-based observations can be efficiently corrected for atmospheric loading effects using surface pressure fields from atmospheric models. In a similar way, loading effects due to continental hydrology can be corrected in precise positioning observations. We evaluated 3-D displacement due to atmospheric, oceanic and hydrological circulation at selected ITRF2008 core sites that belong to the IGS (International GNSS Service) network, using different models. Atmospheric and induced oceanic loading estimates were computed using the ECMWF (European Centre for Medium Range Weather Forecasts) operational and reanalysis (ERA interim) surface pressure fields, assuming an inverted barometer ocean response or a barotropic ocean model forced by air pressure and winds (MOG2D). The IB (Inverted Barometer) hypothesis was classically chosen, in which atmospheric pressure variations are fully compensated by static sea height variations. This approximation is valid for periods exceeding typically 5 to 20 days; at higher frequencies, dynamic effects cannot be neglected. Hydrological loading estimates were computed using MERRA-Land (Modern-Era Retrospective Analysis for Research and Applications, a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System, Version 5 (GEOS-5)) for the different stations. We then compared the results to the GPS-derived time series of the North, East and Up components. The satellite data were analysed in two ways: first, time series from a network solution (NS) processed in Bernese 5.0 software by the Military University of Technology EPN Local Analysis Centre; second, PPP (Precise Point Positioning) time series from JPL (Jet Propulsion Laboratory) processing in Gipsy-Oasis. Both were modelled with wavelet decomposition using the Meyer orthogonal mother wavelet. Nine levels of decomposition were applied, and the eighth detail was interpreted as changes with a period close to one year. In this way, both the NS and PPP time series were represented as annual curves with amplitudes and phases changing over time. The same analysis was performed for the atmospheric (ATM) and hydrospheric (HYDR) models. All annual curves (modelled from NS, PPP, ATM and HYDR) were then compared to each other to investigate whether GPS observations contain atmosphere- and hydrosphere-correlated signals and in what way their amplitudes may disrupt the GPS time series.
Accuracy of a new partial coherence interferometry analyser for biometric measurements.
Holzer, M P; Mamusa, M; Auffarth, G U
2009-06-01
Precise biometry is an essential preoperative measurement for refractive surgery as well as cataract surgery. A new device based on partial coherence interferometry technology was tested and evaluated for accuracy of measurements. In a prospective study, 200 eyes of 100 healthy phakic volunteers were examined with a functional prototype of the new ALLEGRO BioGraph (Wavelight AG)/LENSTAR LS 900 (Haag Streit AG) biometer and with the IOLMaster V.5 (Carl Zeiss Meditec AG). As recommended by the manufacturers, repeated measurements were performed with both devices and the results compared using Spearman correlation calculations (WinSTAT). Spearman correlation showed high correlations between the two devices for axial length and keratometry measurements. Anterior chamber depth, however, had a lower correlation between the two biometry devices. In addition, the mean values of the anterior chamber depth differed (IOLMaster 3.48 (SD 0.42) mm versus BioGraph/LENSTAR 3.64 (SD 0.26) mm); however, this difference was not statistically significant (p>0.05, t test). The new biometer provided results that correlated very well with those of the IOLMaster. The ALLEGRO BioGraph/LENSTAR LS 900 is a precise device containing additional features that will be helpful tools for any cataract or refractive surgeon.
Estimation of stress relaxation time for normal and abnormal breast phantoms using optical technique
NASA Astrophysics Data System (ADS)
Udayakumar, K.; Sujatha, N.
2015-03-01
Many early-occurring micro-anomalies in the breast may later transform into a deadly cancerous tumor. The probability of curing early abnormalities in the breast is higher if they are correctly identified. Even in mammography, considered the gold standard technique for breast imaging, it is hard to pick up early changes in breast tissue, due to the difference in mechanical behavior of normal and abnormal tissue when subjected to compression prior to x-ray or laser exposure. In this paper, an attempt has been made to estimate the stress relaxation time of normal and abnormal breast-mimicking phantoms using laser speckle image correlation. A phantom mimicking the normal breast was prepared and subjected to precise mechanical compression. The phantom was illuminated by a helium-neon laser and, using a CCD camera, a sequence of strained phantom speckle images was captured and correlated via the image mean intensity value at specific time intervals. From the relation between mean intensity and time, the tissue stress relaxation time was quantified. Experiments were repeated for phantoms with increased stiffness, mimicking abnormal tissue, over similar ranges of applied loading. Results show that the stiffer phantom, representing abnormal tissue, exhibits uniform relaxation for varying loads within the selected range, whereas the less stiff phantom, representing normal tissue, shows irregular behavior for varying loadings in the same range.
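As a rough illustration of extracting a relaxation time constant from an intensity-versus-time relation, here is a generic log-linear fit of a single-exponential decay (an assumption for illustration only; the paper's speckle-correlation estimator is not reproduced here, and the function name is hypothetical):

```python
import numpy as np

def relaxation_time(t, intensity):
    """Estimate tau from I(t) = I0 * exp(-t / tau) by a least-squares
    fit of log(I) against t; assumes strictly positive intensities."""
    slope, _intercept = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope
```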
GPS signal loss in the wide area monitoring system: Prevalence, impact, and solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Wenxuan; Zhou, Dao; Zhan, Lingwei
2017-03-19
Phasor measurement units (PMUs), equipped with Global Positioning System (GPS) receivers for precise time synchronization, provide measurements of voltage and current phasors at different nodes of the wide area monitoring system. However, GPS receivers are likely to lose satellite signals due to various unpredictable factors. The prevalence of GPS signal loss (GSL) on PMUs is first investigated using real PMU data. The historical GSL events are extracted from a phasor data concentrator (PDC) and the FNET/GridEye server. The correlations between GSL and time, spatial location, and solar activity are explored via comprehensive statistical analysis. Furthermore, the impact of GSL on phasor measurement accuracy has been studied via experiments. Finally, several potential solutions to mitigate the impact of GSL on PMUs are discussed and compared.
NASA Astrophysics Data System (ADS)
Santos, Abel; Yoo, Jeong Ha; Rohatgi, Charu Vashisth; Kumeria, Tushar; Wang, Ye; Losic, Dusan
2016-01-01
This study is the first realisation of true optical rugate filters (RFs) based on nanoporous anodic alumina (NAA) by sinusoidal waves. An innovative and rationally designed sinusoidal pulse anodisation (SPA) approach in galvanostatic mode is used with the aim of engineering the effective medium of NAA in a sinusoidal fashion. A precise control over the different anodisation parameters (i.e. anodisation period, anodisation amplitude, anodisation offset, number of pulses, anodisation temperature and pore widening time) makes it possible to engineer the characteristic reflection peaks and interferometric colours of NAA-RFs, which can be finely tuned across the UV-visible-NIR spectrum. The effect of the aforementioned anodisation parameters on the photonic properties of NAA-RFs (i.e. characteristic reflection peaks and interferometric colours) is systematically assessed in order to establish for the first time a comprehensive rationale towards NAA-RFs with fully controllable photonic properties. The experimental results are correlated with a theoretical model (Looyenga-Landau-Lifshitz - LLL), demonstrating that the effective medium of these photonic nanostructures can be precisely described by the effective medium approximation. NAA-RFs are also demonstrated as chemically selective photonic platforms combined with reflectometric interference spectroscopy (RIfS). The resulting optical sensing system is used to assess the reversible binding affinity between a model drug (i.e. indomethacin) and human serum albumin (HSA) in real-time. Our results demonstrate that this system can be used to determine the overall pharmacokinetic profile of drugs, which is a critical aspect to be considered for the implementation of efficient medical therapies.
LYSO based precision timing calorimeters
NASA Astrophysics Data System (ADS)
Bornheim, A.; Apresyan, A.; Ronzhin, A.; Xie, S.; Duarte, J.; Spiropulu, M.; Trevor, J.; Anderson, D.; Pena, C.; Hassanshahi, M. H.
2017-11-01
In this report we outline the study of the development of calorimeter detectors using bright scintillating crystals. We discuss how timing information with a precision of a few tens of picoseconds and below can significantly improve the reconstruction of physics events under the challenging high-pileup conditions to be faced at the High-Luminosity LHC or a future hadron collider. The particular challenge in measuring the time of arrival of a high-energy photon lies in the stochastic component of the distance to the initial conversion and the size of the electromagnetic shower. We present studies and measurements from test beams for calorimeter-based timing measurements to explore the ultimate timing precision achievable for high-energy photons of 10 GeV and above. We focus on techniques to measure the timing with high precision in association with the energy of the photon. We present test-beam studies and results on the timing performance and characterization of the time resolution of LYSO-based calorimeters. We demonstrate that a time resolution of 30 ps is achievable for a particular design.
Prochazka, Ivan; Kodet, Jan; Panek, Petr
2012-11-01
We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The design of the circuit enables time tagging of pulses of interest to be carried out in parallel with the comparison of the time scales of the two timing devices. The key timing parameters of the circuit are: temperature dependence of the delay below 100 fs/K, timing stability (time deviation) better than 8 fs for averaging times from minutes to hours, sub-picosecond time transfer precision, and a time transfer accuracy of a few picoseconds.
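The textbook two-way time-transfer relation that such circuits exploit can be written down directly (a sketch of the standard symmetric-path formula, not the authors' specific hardware; names are illustrative): with timestamps t1 (A transmits, A's clock), t2 (B receives, B's clock), t3 (B transmits, B's clock) and t4 (A receives, A's clock), the path delay cancels in the offset estimate.

```python
def two_way_offset(t1, t2, t3, t4):
    """Clock offset of station B relative to station A from one two-way
    exchange, assuming equal propagation delay in both directions:
    offset = ((t2 - t1) - (t4 - t3)) / 2."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

def path_delay(t1, t2, t3, t4):
    """One-way propagation delay under the same symmetry assumption:
    delay = ((t2 - t1) + (t4 - t3)) / 2."""
    return ((t2 - t1) + (t4 - t3)) / 2.0
```

For example, if B's clock runs 5 units ahead and the one-way delay is 2 units, then t1=0 gives t2=7, and t3=10 gives t4=7, from which the offset 5 and delay 2 are recovered.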
Multi-GNSS real-time precise orbit/clock/UPD products and precise positioning service at GFZ
NASA Astrophysics Data System (ADS)
Li, Xingxing; Ge, Maorong; Liu, Yang; Fritsche, Mathias; Wickert, Jens; Schuh, Harald
2016-04-01
The rapid development of multi-constellation GNSSs (Global Navigation Satellite Systems, e.g., BeiDou, Galileo, GLONASS, GPS) and the IGS (International GNSS Service) Multi-GNSS Experiment (MGEX) bring great opportunities and challenges for real-time precise positioning service. In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all these four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSSs together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using the Multi-GNSS Experiment (MGEX) and International GNSS Service (IGS) data streams, including stations all over the world. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly decreased, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff.
Template optimization and transfer in perceptual learning.
Kurki, Ilmari; Hyvärinen, Aapo; Saarinen, Jussi
2016-08-01
We studied how learning changes the processing of a low-level Gabor stimulus, using a classification-image method (psychophysical reverse correlation) and a task where observers discriminated between slight differences in the phase (relative alignment) of a target Gabor in visual noise. The method estimates the internal "template" that describes how the visual system weights the input information for decisions. One popular idea has been that learning makes the template more like an ideal Bayesian weighting; however, the evidence has been indirect. We used a new regression technique to directly estimate the template weight change and to test whether the direction of reweighting is significantly different from an optimal learning strategy. The subjects trained on the task for six daily sessions, and we tested the transfer of training to a target in an orthogonal orientation. Strong learning and partial transfer were observed. We tested whether task precision (difficulty) had an effect on template change and transfer: observers trained on either a high-precision (small, 60° phase difference) or a low-precision (180°) task. Task precision did not have an effect on the amount of template change or transfer, suggesting that task precision per se does not determine whether learning generalizes. Classification images show that training made observers use more task-relevant features and unlearn some irrelevant features. The transfer templates resembled partially optimized versions of the templates from the training sessions. The direction of template change resembled ideal learning significantly, but not completely. The amount of template change was highly correlated with the amount of learning.
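The classification-image estimate at the heart of this method can be sketched with synthetic data; the stimulus dimensionality, trial count, generating template, and decision rule below are illustrative assumptions, not the study's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_pix = 2000, 64
noise = rng.standard_normal((n_trials, n_pix))   # per-trial noise fields
signal = np.sin(np.linspace(0.0, np.pi, n_pix))  # toy internal template
# Simulated observer: responds "1" when template-weighted evidence is positive.
resp = (noise @ signal + 0.5 * rng.standard_normal(n_trials)) > 0

# Classification image: mean noise on "1" trials minus mean noise on "0" trials.
template = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

With enough trials the recovered `template` is proportional to the weighting the simulated observer actually used, which is why the technique can track template change across training sessions.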
FRW Solutions and Holography from Uplifted AdS/CFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xi; Horn, Bart (Stanford U., ITP; Stanford U., Phys. Dept.; SLAC)
2012-02-15
Starting from concrete AdS/CFT dual pairs, one can introduce ingredients which produce cosmological solutions, including metastable de Sitter and its decay to non-accelerating FRW. We present simple FRW solutions sourced by magnetic flavor branes and analyze correlation functions and particle and brane dynamics. To obtain a holographic description, we exhibit a time-dependent warped metric on the solution and interpret the resulting redshifted region as a Lorentzian low energy effective field theory in one fewer dimension. At finite times, this theory has a finite cutoff, a propagating lower dimensional graviton and a finite covariant entropy bound, but at late times the lower dimensional Planck mass and entropy go off to infinity in a way that is dominated by contributions from the low energy effective theory. This opens up the possibility of a precise dual at late times. We reproduce the time-dependent growth of the number of degrees of freedom in the system via a count of available microscopic states in the corresponding magnetic brane construction.
FRW solutions and holography from uplifted AdS/CFT systems
NASA Astrophysics Data System (ADS)
Dong, Xi; Horn, Bart; Matsuura, Shunji; Silverstein, Eva; Torroba, Gonzalo
2012-05-01
Starting from concrete AdS/CFT dual pairs, one can introduce ingredients which produce cosmological solutions, including metastable de Sitter and its decay to nonaccelerating Friedmann-Robertson-Walker. We present simple Friedmann-Robertson-Walker solutions sourced by magnetic flavor branes and analyze correlation functions and particle and brane dynamics. To obtain a holographic description, we exhibit a time-dependent warped metric on the solution and interpret the resulting redshifted region as a Lorentzian low energy effective field theory in one fewer dimension. At finite times, this theory has a finite cutoff, a propagating lower-dimensional graviton, and a finite covariant entropy bound, but at late times the lower-dimensional Planck mass and entropy go off to infinity in a way that is dominated by contributions from the low energy effective theory. This opens up the possibility of a precise dual at late times. We reproduce the time-dependent growth of the number of degrees of freedom in the system via a count of available microscopic states in the corresponding magnetic brane construction.
Lee, Sohee; Park, Seulkee; Lee, Cho Rok; Son, Haiyoung; Kim, Jungwoo; Kang, Sang-Wook; Jeong, Jong Ju; Nam, Kee-Hyun; Chung, Woong Youn; Park, Cheong Soo
2013-07-01
Robotic applications have achieved safe and precise thyroidectomy with notable cosmetic and functional benefits. This study was designed to document the influence of body habitus on robotic thyroidectomy in papillary thyroid carcinoma (PTC) patients. From July 2009 to February 2010, 352 patients underwent robotic thyroidectomy using a gasless, transaxillary single-incision approach at Yonsei University Health System. Body habitus was described using body mass index category (normal weight, overweight, obese), neck length, shoulder width, and shoulder width to neck length ratios. The impact of body habitus on surgical outcomes was analyzed with respect to operation time, number of retrieved central nodes, bleeding amount, and postoperative complications. Of the 352 patients, 217 underwent less than total thyroidectomy and 135 underwent total thyroidectomy. Operative variables (i.e., operation times, bleeding amounts, and numbers of retrieved central nodes) showed no significant differences among the three BMI groups for less than total thyroidectomy. However, total operation and working space times were longer for obese patients during total thyroidectomy. In particular, shoulder width was positively correlated with total operation time, working space time, console time, and number of retrieved central nodes. On the other hand, postoperative complications were not significantly different among the three BMI groups and showed no significant correlation with the other indices of body habitus. Standardized robotic thyroidectomy can be performed safely and feasibly in patients with a large body habitus despite longer operation times.
A Catalog of Transit Timing Posterior Distributions for all Kepler Planet Candidate Transit Events
NASA Astrophysics Data System (ADS)
Montet, Benjamin Tyler; Becker, Juliette C.; Johnson, John Asher
2015-12-01
Kepler has ushered in a new era of planetary dynamics, enabling the detection of interactions between multiple planets in hundreds of transiting systems. These interactions, observed as transit timing variations (TTVs), have been used to find non-transiting companions to transiting systems and to measure masses, eccentricities, and inclinations of transiting planets. Often, physical parameters are inferred by comparing the observed light curve to the result of a photodynamical model, a time-intensive process that often ignores the effects of correlated noise in the light curve. Catalogs of transit timing observations have previously neglected non-Gaussian uncertainties in the times of transit, uncertainties in the transit shape, and short cadence data. Here, I present a catalog of not only times of transit centers, but also posterior distributions on the time of transit for every planet candidate transit event in the Kepler data, developed through importance sampling of each transit. This catalog allows one to marginalize over uncertainties in the transit shape and incorporate short cadence data, the effects of correlated noise, and non-Gaussian posteriors. Our catalog will enable dynamical studies that accurately reflect the precision of Kepler and its limitations without requiring the computational power to model the light curve completely with every integration. I will also present our open-source N-body photodynamical modeling code, which integrates planetary and stellar orbits accounting for the effects of GR, tidal effects, and Doppler beaming.
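The importance-sampling idea behind such a posterior catalog can be sketched in miniature; the proposal width, likelihood, and timing values below are toy stand-ins for the actual light-curve fit, not the catalog's numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
# Broad proposal samples for one transit's mid-time (days); toy numbers.
samples = rng.normal(100.0, 0.05, size=10_000)
# Toy log-likelihood of each candidate time against the light curve
# (a Gaussian centered on the "true" time with 0.01-day timing precision).
logw = -0.5 * ((samples - 100.0) / 0.01) ** 2
w = np.exp(logw - logw.max())
w /= w.sum()

# Importance-weighted posterior summaries; the weighted samples themselves
# retain any non-Gaussian shape of the posterior.
post_mean = np.sum(w * samples)
post_std = np.sqrt(np.sum(w * (samples - post_mean) ** 2))
```

Storing the weighted samples rather than a single best-fit time is what lets downstream dynamical fits marginalize over non-Gaussian timing uncertainty.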
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.
2013-01-01
Background: Cluster-randomized experiments that assign intact groups such as schools or school districts to treatment conditions are increasingly common in educational research. Such experiments are inherently multilevel designs whose sensitivity (statistical power and precision of estimates) depends on the variance decomposition across levels.…
Markert, Sebastian Matthias; Britz, Sebastian; Proppert, Sven; Lang, Marietta; Witvliet, Daniel; Mulcahy, Ben; Sauer, Markus; Zhen, Mei; Bessereau, Jean-Louis; Stigloher, Christian
2016-10-01
Correlating molecular labeling at the ultrastructural level with high confidence remains challenging. Array tomography (AT) allows for a combination of fluorescence and electron microscopy (EM) to visualize subcellular protein localization on serial EM sections. Here, we describe an application for AT that combines near-native tissue preservation via high-pressure freezing and freeze substitution with super-resolution light microscopy and high-resolution scanning electron microscopy (SEM) analysis on the same section. We established protocols that combine SEM with structured illumination microscopy (SIM) and direct stochastic optical reconstruction microscopy (dSTORM). We devised a method for easy, precise, and unbiased correlation of EM images and super-resolution imaging data using endogenous cellular landmarks and freely available image processing software. We demonstrate that these methods allow us to identify and label gap junctions in Caenorhabditis elegans with precision and confidence, and imaging of even smaller structures is feasible. With the emergence of connectomics, these methods will allow us to fill in the gap: acquiring the correlated ultrastructural and molecular identity of electrical synapses.
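Landmark-based correlation of light- and electron-microscope images typically reduces to fitting a coordinate transform between matched landmark positions; a minimal affine least-squares sketch with hypothetical landmark pairs (the coordinates and the 2.5x-plus-offset transform are invented):

```python
import numpy as np

# Hypothetical matched landmark coordinates (e.g., endogenous fiducials)
# in the light-microscope (lm) and electron-microscope (em) pixel frames.
lm = np.array([[10.0, 12.0], [40.0, 15.0], [22.0, 48.0], [55.0, 60.0]])
em = lm * 2.5 + np.array([100.0, 50.0])          # toy ground-truth transform

A = np.hstack([lm, np.ones((len(lm), 1))])       # [x, y, 1] design matrix
coeffs, *_ = np.linalg.lstsq(A, em, rcond=None)  # 3x2 affine parameters
mapped = A @ coeffs                              # LM landmarks in the EM frame
```

Once `coeffs` is known, any ROI outlined in the LM image can be mapped into EM coordinates the same way, which is the "easy, precise, and unbiased" relocation step in spirit.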
Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F
2012-09-14
We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
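The singular-value-decomposition compression step can be illustrated on a two-dimensional toy amplitude; the exp(-|x - y|) kernel mimics a correlation cusp but is purely illustrative, not the actual six-dimensional wave function:

```python
import numpy as np

# Toy two-particle amplitude on a 64x64 grid (stand-in for the 6-D case).
x = np.linspace(-1.0, 1.0, 64)
psi = np.exp(-np.abs(np.subtract.outer(x, x)))

U, s, Vt = np.linalg.svd(psi, full_matrices=False)
k = 8                                    # keep only the k largest singular values
psi_k = (U[:, :k] * s[:k]) @ Vt[:k]      # low-rank reconstruction
err = np.linalg.norm(psi - psi_k) / np.linalg.norm(psi)
```

The rapid decay of the singular values is what makes storing only `U[:, :k]`, `s[:k]`, and `Vt[:k]` far cheaper than the full grid, the same trade the paper exploits at much higher dimension and precision.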
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Liu, Chong
2016-10-01
Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGA. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that the TDC precision beyond the cell delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using the bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.
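TDC bin widths are conventionally calibrated by the code-density method, on which on-line calibration schemes of this kind are typically built; a toy sketch (the tap count, hit statistics, and covered period are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
# Code-density calibration: with hits uniformly distributed in time, each
# tap's hit count is proportional to its bin width.
counts = rng.integers(800, 1200, size=32).astype(float)   # hits per tap (toy)
total_period = 1000.0                                     # ps spanned by the line
widths = counts / counts.sum() * total_period             # per-bin width (ps)
bin_edges = np.concatenate(([0.0], np.cumsum(widths)))    # calibrated edges (ps)
```

A raw tap index is then converted to picoseconds by looking up (or interpolating within) `bin_edges`, which is how non-uniform delay granularity stops limiting precision.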
[Application of elastic registration based on Demons algorithm in cone beam CT].
Pang, Haowen; Sun, Xiaoyang
2014-02-01
We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time monitoring of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program in Matlab and verified it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the minimum mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, MSE decreased by 40.1% and CC increased by 7.2%. Both variants of the Demons algorithm achieved the desired results, but the remaining differences indicate limited precision, and the total registration time was somewhat long; both accuracy and running time need further improvement.
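The two similarity metrics used to score the registration can be computed directly; the images below are synthetic toys standing in for CBCT slices:

```python
import numpy as np

def mse(a, b):
    """Mean squared intensity difference between two images."""
    return float(np.mean((a - b) ** 2))

def cc(a, b):
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

rng = np.random.default_rng(3)
fixed = rng.random((32, 32))                          # reference slice (toy)
moving = fixed + 0.1 * rng.standard_normal((32, 32))  # imperfectly aligned slice
```

A successful elastic registration drives `mse(fixed, warped)` down and `cc(fixed, warped)` up, which is exactly how the percentage improvements above are reported.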
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.
Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and illustrate the approach with a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
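The "simplistic" single-time-point approach that the paper improves upon is ordinary inverse-variance pooling; a minimal sketch with made-up per-study effect sizes and variances:

```python
import numpy as np

def fixed_effect(effects, variances):
    """Inverse-variance pooled effect and its standard error (one time point)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Made-up effect sizes and variances from three studies at a single time point.
est, se = fixed_effect([0.5, 0.7, 0.6], [0.04, 0.09, 0.01])
```

Repeating this independently at 6, 12, 18 and 24 months ignores the within-study serial correlation between time points, which is the gap the mixed-model formulation closes.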
NASA Astrophysics Data System (ADS)
Muzy, Jean-François; Baïle, Rachel; Bacry, Emmanuel
2013-04-01
In this paper we propose a new model for volatility fluctuations in financial time series. This model relies on a nonstationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to continuous cascade models. These latter models are indeed well known to reproduce faithfully the main stylized facts of financial time series. However, it involves a large-scale parameter (the so-called “integral scale” where the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general deeply correlated to the overall length of the sample. This feature is precisely predicted by our model, which, as illustrated by various examples from daily stock index data, quantitatively reproduces the empirical observations.
Erratum: Evidence That a Deep Meridional Flow Sets the Sunspot Cycle Period
NASA Technical Reports Server (NTRS)
Hathaway, David H.; Nandy, Dibyendu; Wilson, Robert M.; Reichmann, Edwin J.
2004-01-01
An error was made in entering the data. This changes the results concerning the length of the time lag between the variations in the meridional flow speed and those in the cycle amplitude. The final paragraph on page 667 should read: Finally, we study the relationship between the drift velocities and the amplitudes of the hemisphere/cycles. We compare the drift velocity at the maximum of the cycle to the amplitude of that cycle for that hemisphere. There is a positive (0.5) and significant (95%) correlation between the two. However, an even stronger relationship is found between the drift velocity and the amplitude of the N + 2 cycle. The correlation is stronger (0.7) and more significant (99%), as shown. This relationship is suggestive of a "memory" in the solar cycle, again a property of dynamo models that use meridional circulation. Indeed, the two-cycle lag is precisely the relationship found by Charbonneau & Dikpati. This behavior is, however, more difficult to interpret, and we elaborate on this in the next section. In either case, these correlations only explain part of the variance in cycle amplitude (25% for the current cycle and 50% for the N + 2 cycle). Obviously, other mechanisms, such as variations in the gradient in the rotation rate, also contribute to the cycle amplitude variations. Our investigation of possible connections between drift rates and the amplitudes of the N + 1 and N + 3 cycles gives no significant correlations at these alternative time lags.
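The lagged-correlation analysis behind the N + 2 relationship can be illustrated on synthetic series; the series length, lag of two "cycles", and noise level below are invented for illustration:

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation of x against y shifted forward by `lag` samples."""
    if lag > 0:
        return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(4)
driver = rng.standard_normal(200)                  # e.g., flow-speed proxy
response = np.roll(driver, 2) + 0.3 * rng.standard_normal(200)  # echoes 2 steps later
best_lag = max(range(4), key=lambda k: lagged_corr(driver, response, k))
```

Scanning the correlation over candidate lags and finding the peak at a two-step offset is the same logic that singles out the N + 2 cycle in the corrected analysis.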
Island colonisation and the evolutionary rates of body size in insular neonate snakes
Aubret, F
2015-01-01
Island colonisation by animal populations is often associated with dramatic shifts in body size. However, little is known about the rates at which these evolutionary shifts occur, under what precise selective pressures, and the putative role played by adaptive plasticity in driving such changes. Isolation time played a significant role in the evolution of body size in island Tiger snake populations, where adaptive phenotypic plasticity followed by genetic assimilation fine-tuned neonate body and head size (hence swallowing performance) to prey size. Here I show that in long-isolated islands (>6000 years old) and mainland populations, neonate body mass and snout-vent length are tightly correlated with the average prey body mass available at each site. Regression line equations were used to calculate body size values to match prey size in four recently isolated populations of Tiger snakes. Rates of evolution in body mass and snout-vent length, calculated for seven island snake populations, were significantly correlated with isolation time. Finally, rates of evolution in body mass per generation were significantly correlated with levels of plasticity in head growth rates. This study shows that body size evolution occurs at a faster pace in recently isolated populations and suggests that the level of adaptive plasticity for swallowing abilities may correlate with rates of body mass evolution. I hypothesise that, in the early stages of colonisation, adaptive plasticity and directional selection may combine and generate accelerated evolution towards an 'optimal' phenotype. PMID:25074570
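The regression-line prediction step can be sketched as follows; the prey-mass and snout-vent-length numbers are invented for illustration and are not the study's data:

```python
import numpy as np

# Invented calibration data: average prey mass (g) at a site vs neonate
# snout-vent length (mm) in long-isolated and mainland populations.
prey_mass = np.array([5.0, 8.0, 12.0, 20.0, 35.0])
svl = np.array([140.0, 150.0, 162.0, 178.0, 205.0])

# Fit the regression line, then predict the neonate size expected to
# match prey size at a recently isolated site.
slope, intercept = np.polyfit(prey_mass, svl, 1)
predicted_svl = slope * 15.0 + intercept   # expected SVL for 15 g prey
```

The gap between such a predicted value and the observed size in a recently isolated population, divided by isolation time, is what yields a rate of evolution.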
Tian, Chao; Wang, Lixin; Novick, Kimberly A
2016-10-15
High-precision analysis of atmospheric water vapor isotope compositions, especially δ(17)O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., to differentiate equilibrium from kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of vapor δ(17)O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ(2)H, δ(18)O and δ(17)O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ(2)H, δ(18)O and δ(17)O measurements were high. Across all vapor concentrations, the accuracy of δ(2)H, δ(18)O and δ(17)O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with higher accuracy and precision generally observed at moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. The precision was also sensitive to the range of delta values, although this effect was smaller than that of concentration. The precision was much less sensitive to averaging time than to concentration and delta range. In summary, the accuracy and precision of the T-WVIA depend on concentration but depend less on the delta value and averaging time.
The instrument can simultaneously and continuously measure δ(2)H, δ(18)O and δ(17)O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.
Di Donato, Guido; Laufer-Amorim, Renée; Palmieri, Chiara
2017-10-01
Ten normal prostates, 22 cases of benign prostatic hyperplasia (BPH) and 29 prostate cancers (PC) were morphometrically analyzed with regard to mean nuclear area (MNA), mean nuclear perimeter (MNP), mean nuclear diameter (MND), coefficient of variation of the nuclear area (NACV), mean nuclear diameter maximum (MDx), mean nuclear diameter minimum (MDm), mean nuclear form ellipse (MNFe) and form factor (FF). The relationships between nuclear morphometric parameters and histological type, Gleason score, method of sample collection, presence of metastases and survival time in canine PC were also investigated. Overall, nuclei from neoplastic cells were larger, with greater variation in nuclear size and shape, compared to normal and hyperplastic cells. Significant differences were found between more (small acinar/ductal) and less (cribriform, solid) differentiated PCs with regard to FF (p<0.05). MNA, MNP, MND, MDx, and MDm were significantly correlated with the Gleason score of PC (p<0.05). MNA, MNP, MDx and MNFe may also have important prognostic implications in canine prostatic cancer, since they were negatively correlated with survival time. Biopsy specimens contained nuclei that were smaller and more irregular than those in prostatectomy and necropsy specimens, and therefore factors associated with tissue sampling and processing may influence the overall morphometric evaluation. The results indicate that nuclear morphometric analysis in combination with Gleason score can help in canine prostate cancer grading, thus contributing to a more precise prognosis and better patient management. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pneumococcusuria: From bench to bedside.
Krishna, S; Sanjeevan, K V; Sudheer, A; Dinesh, K R; Kumar, A; Karim, S
2012-01-01
The present study highlights six cases of pneumococcusuria seen during the period May 2008 to May 2010. All the patients had a co-existing predisposing factor along with the isolation of Streptococcus pneumoniae in urine. Five of the six patients had signs and symptoms of urinary tract infection (UTI) and were treated and cured. It is essential to consider pneumococcal UTI in the presence of clinical signs and symptoms associated with urinary tract abnormalities such as hydronephrosis and renal stones. S. pneumoniae may be regarded as an emerging pathogen in UTI. A precise microbiological diagnosis must be correlated with the clinical signs and symptoms for the administration of appropriate antibiotic therapy.
Widely tunable chaotic fiber laser for WDM-PON detection
NASA Astrophysics Data System (ADS)
Zhang, Juan; Yang, Ling-zhen; Xu, Nai-jun; Wang, Juan-fen; Zhang, Zhao-xia; Liu, Xiang-lian
2014-05-01
A widely tunable high precision chaotic fiber laser is proposed and experimentally demonstrated. A tunable fiber Bragg grating (TFBG) filter is used as the tuning element, giving a tuning range from 1533 nm to 1558 nm with a linewidth of 0.5 nm at any wavelength. The wide tuning range is capable of supporting 32 wavelength-division multiplexing (WDM) channels with 100 GHz channel spacing. All single wavelengths are found to be chaotic with 10 GHz bandwidth. The full width at half maximum (FWHM) of the chaotic correlation curve at the different wavelengths is on a picosecond time scale, thereby offering millimeter spatial resolution in WDM detection.
Lucid dreaming verified by volitional communication during REM sleep.
La Berge, S P; Nagel, L E; Dement, W C; Zarcone, V P
1981-06-01
The occurrence of lucid dreaming (dreaming while being conscious that one is dreaming) has been verified for 5 selected subjects who signaled that they knew they were dreaming while continuing to dream during unequivocal REM sleep. The signals consisted of particular dream actions having observable concomitants and were performed in accordance with a pre-sleep agreement. The ability of proficient lucid dreamers to signal in this manner makes possible a new approach to dream research: such subjects, while lucid, could carry out diverse dream experiments marking the exact time of particular dream events, allowing derivation of precise psychophysiological correlations and methodical testing of hypotheses.
Krzek, Jan; Piotrowska, Joanna
2011-01-01
A fast spectrophotometric method has been developed for bacitracin identification and determination after a condensation reaction with dabsyl chloride. In addition, the stability of the dye (sulfonamide derivative) was determined and the molar ratio of reagents was identified at various time points. The developed method shows good linearity over a broad concentration range (correlation coefficient r = 0.9972), good precision (RSD = 1.54 +/- 0.11%), and recoveries at three different concentration levels between 98.33% and 103.47%. The usefulness of the method was demonstrated by positive results obtained during determination of bacitracin concentration in the bulk drug.
Automatic Target Recognition Based on Cross-Plot
Wong, Kelvin Kian Loong; Abbott, Derek
2011-01-01
Automatic target recognition that relies on rapid feature extraction of real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target by high-speed capture of the crucial spatial features using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having its identity in a target repository. PMID:21980508
Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun
2018-03-29
This study proposes two models for precise time transfer using the BeiDou Navigation Satellite System triple-frequency signals: an ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and an ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time frequency) and a long baseline are used for performance assessments. The results show that the IF-PPP1 and IF-PPP2 models can both be used for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals, and the accuracy and stability of time transfer are the same in both cases, except for a constant system bias caused by the hardware delay of different frequencies, which can be removed by parameter estimation and prediction with long-term datasets or by a priori calibration.
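The dual-frequency ionosphere-free combination that both models build on is the standard one; a sketch using published BDS B1I/B3I carrier frequencies and a toy observation (the geometric range and delay values are invented):

```python
def iono_free(p1, p2, f1, f2):
    """Dual-frequency ionosphere-free combination: the first-order
    ionospheric delay cancels exactly."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Published BDS carrier frequencies (Hz) for B1I and B3I.
F_B1I, F_B3I = 1561.098e6, 1268.520e6

rho = 22_000_000.0     # toy geometric range plus clock terms (m)
iono_b1 = 3.5          # toy first-order ionospheric delay on B1I (m)
p1 = rho + iono_b1
p2 = rho + iono_b1 * (F_B1I / F_B3I) ** 2   # delay scales as 1/f^2
p_if = iono_free(p1, p2, F_B1I, F_B3I)       # recovers the iono-free part
```

Because the delay scales as 1/f², the weighted difference leaves only the geometry and clock terms, which is what makes the combination the workhorse of PPP time transfer.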
Quantum light in coupled interferometers for quantum gravity tests.
Ruo Berchera, I; Degiovanni, I P; Olivares, S; Genovese, M
2013-05-24
In recent years quantum correlations have received a lot of attention as a key ingredient in advanced quantum metrology protocols. In this Letter we show that they provide even larger advantages when considering multiple-interferometer setups. In particular, we demonstrate that the use of quantum correlated light beams in coupled interferometers leads to substantial advantages with respect to classical light, up to a noise-free scenario for the ideal lossless case. On the one hand, our results prompt the possibility of testing quantum gravity in experimental configurations affordable in current quantum optics laboratories and strongly improve the precision in "larger size experiments" such as the Fermilab holometer; on the other hand, they pave the way for future applications to high precision measurements and quantum metrology.
Precise and economic FIB/SEM for CLEM: with 2 nm voxels through mitosis.
Luckner, Manja; Wanner, Gerhard
2018-05-23
A portfolio is presented documenting economic, high-resolution correlative focused ion beam scanning electron microscopy (FIB/SEM) in routine use, comprising: (i) the use of custom-labeled slides and coverslips, (ii) embedding of cells in thin or ultra-thin resin layers for correlative light and electron microscopy (CLEM), and (iii) the aim of reaching the highest resolution possible with FIB/SEM in xyz. Regions of interest (ROIs) defined in the light microscope (LM) can be relocated quickly and precisely in the SEM. As proof of principle, HeLa cells were investigated in 3D context at all stages of the cell cycle, documenting ultrastructural changes during mitosis: nuclear envelope breakdown and reassembly, Golgi degradation and reconstitution, and the formation of the midzone and midbody.
Data and Time Transfer Using SONET Radio
NASA Technical Reports Server (NTRS)
Graceffo, Gary M.
1996-01-01
The need for precise knowledge of time and frequency has become ubiquitous throughout our society. The areas of astronomy, navigation, and high speed wide-area networks are among a few of the many consumers of this type of information. The Global Positioning System (GPS) has the potential to be the most comprehensive source of precise timing information developed to date; however, the introduction of selective availability has made it difficult for many users to recover this information from the GPS system with the precision required for today's systems. The system described in this paper is a 'Synchronous Optical NETwork (SONET) Radio Data and Time Transfer System'. The objective of this system is to provide precise time and frequency information to a variety of end-users using a two-way data and time-transfer system. Although time and frequency transfers have been done for many years, this system is unique in that time and frequency information are embedded into existing communications traffic. This eliminates the need to make the transfer of time and frequency information a dedicated function of the communications system. For this system, SONET has been selected as the transport format from which precise time is derived. SONET has been selected because of its high data rates and its increasing acceptance throughout the industry. This paper details a proof-of-concept initiative to perform embedded time and frequency transfers using SONET Radio.
Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.
Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B
1994-10-01
A small desktop electrochemical H2 analyzer (EC-60 Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI exhaled-Hydrogen monitor). The EC-60 H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). Its response time to completion of measurement is shorter than that of the GMI monitor (37 sec vs 53 sec; p < 0.0001), while reset times are almost identical (54 sec vs 51 sec; n.s.). In a clinical setting, breath H2 concentrations measured with the EC-60 H2 monitor and the GMI exhaled-H2 monitor were in excellent agreement, with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2 concentrations the EC-60 H2 monitor required larger sample volumes to maintain sufficient precision, and sample volumes greater than 200 ml were required at H2 concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60 H2 monitor is a satisfactory, reliable, easy-to-use, and inexpensive desktop breath hydrogen analyzer, although in patients who have difficulty cooperating (children, people with severe pulmonary insufficiency), special care must be taken to obtain sufficiently large breath samples.
Long-term influence of asteroids on planet longitudes and chaotic dynamics of the solar system
NASA Astrophysics Data System (ADS)
Woillez, E.; Bouchet, F.
2017-11-01
Over timescales much longer than an orbital period, the solar system exhibits large-scale chaotic behavior and can thus be viewed as a stochastic dynamical system. The aim of the present paper is to compare different sources of stochasticity in the solar system. More precisely, we studied the importance of the long-term influence of asteroids on the chaotic dynamics of the solar system. We show that the effect of asteroids on the planets is similar to a white noise process when considered on a timescale much larger than the correlation time τφ ≃ 10⁴ yr of asteroid trajectories. We computed the timescale τe after which the effects of the stochastic evolution of the asteroids lead to a loss of information about the initial conditions of the perturbed Laplace-Lagrange secular dynamics. The order of magnitude of this timescale is precisely determined by theoretical argument, and we find that τe ≃ 10⁴ Myr. Although comparable to the full main-sequence lifetime of the sun, this timescale is considerably longer than the Lyapunov time τI ≃ 10 Myr of the solar system without asteroids. This shows that this external source of chaos arises as a small perturbation in the stochastic secular behavior of the solar system, rather than from intrinsic chaos.
Automated coregistration of MTI spectral bands
NASA Astrophysics Data System (ADS)
Theiler, James P.; Galbraith, Amy E.; Pope, Paul A.; Ramsey, Keri A.; Szymanski, John J.
2002-08-01
In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to tweak the precision of the band-to-band registration using correlations in the imagery itself.
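The "dead reckoning" resampling idea can be illustrated with a drastically simplified geolocation model. The sketch below assumes each band's pixel-to-ground mapping is a known affine transform; the real MTI pipeline instead derives this mapping from satellite position and pointing, the focal-plane configuration, and optical distortion. `coregister_band` is a hypothetical helper, not the Level 1B code:

```python
import numpy as np

def coregister_band(band, affine, grid_x, grid_y):
    """Nearest-neighbour resample of one band onto a common ground grid.
    `affine` = (a, b, c, d, e, f) maps pixel (col, row) to ground (x, y):
        x = a*col + b*row + c,   y = d*col + e*row + f
    This toy affine stands in for the full position/pointing/optics model."""
    a, b, c, d, e, f = affine
    # invert the 2x2 linear part to go from ground back to pixel coordinates
    Minv = np.linalg.inv(np.array([[a, b], [d, e]]))
    out = np.full((len(grid_y), len(grid_x)), np.nan)
    for i, y in enumerate(grid_y):
        for j, x in enumerate(grid_x):
            col, row = Minv @ np.array([x - c, y - f])
            r, cpx = int(round(row)), int(round(col))
            if 0 <= r < band.shape[0] and 0 <= cpx < band.shape[1]:
                out[i, j] = band[r, cpx]
    return out
```

Resampling every band onto the same ground grid with its own transform is what yields the coregistered multispectral cube; band-to-band correlation can then refine the small residual offsets.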
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Observation of earthquakes is widely used in tectonic activity monitoring, and also at local scales such as volcano-tectonic and geothermal activity monitoring. Precise hypocenter determination involves finding the hypocenter location that minimizes the error between observed and calculated travel times. When solving this nonlinear inverse problem, simulated annealing can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code by applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and a geothermal field. Travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with geological structure in the study area. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
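The inversion idea can be sketched in a few lines. The toy below minimises the RMS travel-time misfit over (x, y, z, origin time) with plain simulated annealing, using a constant velocity and straight rays in place of the paper's ray-tracing shooting method and adaptive schedule; all names and parameter values are illustrative assumptions:

```python
import math
import random

def locate_hypocenter(stations, t_obs, v=5.0, steps=20000, seed=1):
    """Toy simulated-annealing hypocenter search: constant velocity `v`,
    straight-ray travel times. stations: list of (x, y, z) in km;
    t_obs: observed arrival times in s. Minimises RMS misfit over
    (x, y, z, origin time t0)."""
    rng = random.Random(seed)

    def rms(x, y, z, t0):
        sq = 0.0
        for (sx, sy, sz), t in zip(stations, t_obs):
            tt = math.dist((x, y, z), (sx, sy, sz)) / v + t0
            sq += (tt - t) ** 2
        return math.sqrt(sq / len(t_obs))

    cur = [0.0, 0.0, 10.0, 0.0]          # arbitrary starting model
    cur_e = rms(*cur)
    best, best_e = cur[:], cur_e
    for k in range(steps):
        T = 0.999 ** k                   # geometric cooling schedule
        sigma = max(T, 0.05)             # shrink step size as we cool
        cand = [c + rng.gauss(0.0, sigma) for c in cur]
        e = rms(*cand)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if e < cur_e or rng.random() < math.exp(-(e - cur_e) / max(T, 1e-12)):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand[:], e
    return best, best_e
```

Because acceptance of worse models early on lets the search escape local minima, the result does not depend on the starting model, which is the property the abstract highlights over Geiger's linearized method.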
Research on a dem Coregistration Method Based on the SAR Imaging Geometry
NASA Astrophysics Data System (ADS)
Niu, Y.; Zhao, C.; Zhang, J.; Wang, L.; Li, B.; Fan, L.
2018-04-01
Due to systematic error, especially the horizontal deviation that exists between multi-source, multi-temporal DEMs (Digital Elevation Models), a method for high-precision coregistration is needed. This paper presents a new fast DEM coregistration method based on a given SAR (Synthetic Aperture Radar) imaging geometry to overcome the divergence and time-consumption problems of conventional DEM coregistration methods. First, intensity images are simulated for the two DEMs under the given SAR imaging geometry. 2D (two-dimensional) offsets are estimated in the frequency domain using an intensity cross-correlation implemented with the FFT (Fast Fourier Transform), which greatly accelerates the calculation. Next, the transformation function between the two DEMs is obtained via robust least-squares fitting of a 2D polynomial. Accordingly, the two DEMs can be precisely coregistered. Last, two DEMs, i.e., one high-resolution LiDAR (Light Detection and Ranging) DEM and one low-resolution SRTM (Shuttle Radar Topography Mission) DEM, covering the Yangjiao landslide region of Chongqing, are taken as an example to test the new method. The results indicate that, in most cases, the new method is not only as much as 80 times faster than the minimum elevation difference (least Z-difference, LZD) DEM coregistration method, but also more accurate and more reliable.
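The frequency-domain offset step can be illustrated with the standard FFT cross-correlation trick: correlating two images in the Fourier domain and reading the shift off the correlation peak. The sketch below recovers integer shifts only (subpixel refinement and the SAR intensity simulation are omitted):

```python
import numpy as np

def fft_offset(img1, img2):
    """Estimate the integer (row, col) shift of img2 relative to img1 via
    circular cross-correlation computed in the frequency domain.
    Returns d such that img2 ~ np.roll(img1, d, axis=(0, 1))."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cc = np.fft.ifft2(np.conj(F1) * F2).real   # correlation theorem
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # map the peak index to a signed shift (wrap-around convention)
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, cc.shape))
```

A single pair of FFTs replaces an exhaustive spatial search over all candidate offsets, which is where the large speed-up over spatial-domain matching comes from.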
Thrust and power measurements of Olympic swimmers
NASA Astrophysics Data System (ADS)
Wei, Timothy; Wu, Vicki; Hutchison, Sean; Mark, Russell
2012-11-01
Elite-level swimming is an extremely precise, even choreographed activity. Swimmers not only know the exact number of strokes necessary to take them across the pool, they also plan to be a precise distance from the wall at the end of their last stroke. Too far away and they lose time by drifting into the wall. Too close and their competitor may slide in before their hand comes forward to touch the wall. In this context, it is important to know, in detail, where and how a swimmer propels her/himself through the water. Over the past decade, state-of-the-art flow and thrust measurement diagnostics have been brought to competitive swimming. But the ability to correlate stroke mechanics to thrust production without somehow constraining the swimmer has heretofore not been possible. Using high-speed video, a simple approach to mapping the swimmer's speed, thrust, and net power output in a time-resolved manner has been developed. This methodology has been applied to Megan Jendrick, gold medalist in the 100 m breaststroke and 4 × 100 medley relay events in 2000, and Ariana Kukors, 2009 world champion and continuing world record holder in the 200 individual medley. Implications for training future elite swimmers will be discussed.
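The basic bookkeeping behind such a video-based analysis can be sketched from Newton's second law: differentiating the tracked speed gives the net streamwise force, and multiplying by speed gives net power. This is a generic sketch under simplifying assumptions (no drag model, so it yields net force, i.e., thrust minus drag), not the authors' actual methodology:

```python
import numpy as np

def net_force_and_power(t, v, mass):
    """Time-resolved net streamwise force and power from a speed trace.
    t: times (s), v: speeds (m/s), mass: swimmer mass (kg).
    Note: without a drag model this is net force, not thrust alone."""
    a = np.gradient(v, t)   # acceleration by finite differences
    force = mass * a        # Newton's second law, streamwise component
    power = force * v       # net mechanical power along the swim direction
    return force, power
```

Separating thrust from drag is exactly the hard part the abstract alludes to: it requires either a drag model or the flow diagnostics mentioned above.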
Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.
Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng
2018-03-04
With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
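The correlation-loss idea, rewarding feature agreement for the same object across neighbouring frames, can be sketched outside any deep-learning framework. The toy function below scores precomputed per-object feature vectors by cosine similarity; it illustrates the principle only and is not the paper's actual loss or network:

```python
import numpy as np

def correlation_loss(feat_t, feat_t1):
    """Toy temporal-consistency loss: mean (1 - cosine similarity) between
    feature vectors of the same objects in two neighbouring frames.
    feat_t, feat_t1: arrays of shape (num_objects, feature_dim), row i in
    both arrays belonging to the same track ID."""
    num = np.sum(feat_t * feat_t1, axis=-1)
    den = np.linalg.norm(feat_t, axis=-1) * np.linalg.norm(feat_t1, axis=-1)
    cos = num / np.maximum(den, 1e-12)   # guard against zero vectors
    return float(np.mean(1.0 - cos))
```

Minimising such a term during training pushes the two Siamese branches toward generating consistent features for the same object over time, which is the role the correlation loss plays in the framework above.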
Surface shear inviscidity of soluble surfactants
Zell, Zachary A.; Nowbahar, Arash; Mansard, Vincent; Leal, L. Gary; Deshmukh, Suraj S.; Mecca, Jodi M.; Tucker, Christopher J.; Squires, Todd M.
2014-01-01
Foam and emulsion stability has long been believed to correlate with the surface shear viscosity of the surfactant used to stabilize them. Many subtleties arise in interpreting surface shear viscosity measurements, however, and correlations do not necessarily indicate causation. Using a sensitive technique designed to excite purely surface shear deformations, we make the most sensitive and precise measurements to date of the surface shear viscosity of a variety of soluble surfactants, focusing on SDS in particular. Our measurements reveal the surface shear viscosity of SDS to be below the sensitivity limit of our technique, giving an upper bound of order 0.01 μN·s/m. This conflicts directly with almost all previous studies, which reported values up to 10³–10⁴ times higher. Multiple control and complementary measurements confirm this result, including direct visualization of monolayer deformation, for SDS and a wide variety of soluble polymeric, ionic, and nonionic surfactants of high- and low-foaming character. No soluble, small-molecule surfactant was found to have a measurable surface shear viscosity, which seriously undermines most support for any correlation between foam stability and surface shear rheology of soluble surfactants. PMID:24563383
Estimation of old field ecosystem biomass using low altitude imagery
NASA Technical Reports Server (NTRS)
Nor, S. M.; Safir, G.; Burton, T. M.; Hook, J. E.; Schultink, G.
1977-01-01
Color-infrared photography was used to evaluate the biomass of experimental plots in an old-field ecosystem that was treated with different levels of waste water from a sewage treatment facility. Cibachrome prints at a scale of approximately 1:1,600 produced from 35 mm color infrared slides were used to analyze density patterns using prepared tonal density scales and multicell grids registered to ground panels shown on the photograph. Correlation analyses between tonal density and vegetation biomass obtained from ground samples and harvests were carried out. Correlations between mean tonal density and harvest biomass data gave consistently high coefficients ranging from 0.530 to 0.896 at the 0.001 significance level. Corresponding multiple regression analysis resulted in higher correlation coefficients. The results of this study indicate that aerial infrared photography can be used to estimate standing crop biomass on waste water irrigated old field ecosystems. Combined with minimal ground truth data, this technique could enable managers of wastewater irrigation projects to precisely time harvest of such systems for maximal removal of nutrients in harvested biomass.
Ku, Y.-P.; Chen, C.-H.; Newhall, C.G.; Song, S.-R.; Yang, T.F.; Iizuka, Y.; McGeehin, J.
2008-01-01
The largest known eruption of Mt. Pinatubo in the late Quaternary was the Inararo Tuff Formation (ITF) eruption, roughly estimated as five times larger than the 1991 eruption. The precise age of the ITF eruption has been uncertain. Here, a correlative of the ITF eruption, Layer D, is identified in marine sediments, and an age obtained. Tephras were identified in core MD97-2142 of Leg II of the IMAGES III cruise in the northern offshore of Palawan, southeastern South China Sea (12°41.33′N, 119°27.90′E). On the basis of the geochemical and isotopic fingerprints, Layer D can be correlated with the ITF eruption of the modern Pinatubo eruption sequence. By means of the MD97-2142 SPECMAP chronology, Layer D was dated at around 81 ± 2 ka. This estimated age of the ITF eruption and tephra Layer D coincides with an anomalously high SO₄²⁻ spike occurring within the 5 millennia from 79 to 84 ka in the GISP2 ice core record. © 2007.
Diessel, E; Fuerst, T; Njeh, C F; Hans, D; Cheng, S; Genant, H K
2000-01-01
The purpose of this study was to evaluate a new imaging ultrasound scanner for the heel, the DTU-one (Osteometer MediTech, Denmark), by comparing quantitative ultrasound (QUS) results with bone mineral density (BMD) of the heel and femur from dual X-ray absorptiometry (DXA), and by comparing the DTU-one with another QUS device, the UBA 575+. The regions of interest in the DXA heel scan were matched with the regions evaluated by the two QUS devices. 134 healthy and 16 osteoporotic women aged 30-84 years were enrolled in the study. In vivo short-term precision of the DTU-one for broadband ultrasound attenuation (BUA) and speed of sound (SOS) was 2.9% and 0.1%, respectively, and long-term precision was 3.8% and 0.2%, respectively. The highest correlations (r) between QUS and BMD measurements were achieved when comparing DTU-one results with BMD in matched regions of the DXA heel scan: r = 0.81 for both BUA and SOS. The highest correlations with the UBA 575+ were 0.68 and 0.72, respectively. The correlation of BMD at different femoral sites with BUA and SOS (DTU-one) varied from 0.62 to 0.69 when including the entire study population, while correlations between BMD values at different sites of the femur tended to be higher (r = 0.81 to 0.93). When comparing BUA with BUA and SOS with SOS on the two QUS devices, the absolute QUS values differed significantly; however, correlations were relatively high, at 0.76 for BUA and 0.82 for SOS. In conclusion, the results of the new quantitative ultrasound device, the DTU-one, are highly correlated (r = 0.8) with results obtained using the UBA 575+ and with BMD in the heel. The precision of the DTU-one is comparable to that of other QUS devices for BUA and is high for SOS.
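The short-term precision figures quoted above are coefficients of variation from repeated measurements. A minimal sketch of the calculation for a single subject follows (the pooled root-mean-square CV used across subjects in full precision studies is omitted):

```python
import numpy as np

def short_term_precision_cv(measurements):
    """Short-term precision as a coefficient of variation (%) from repeated
    measurements on the same subject: 100 * SD / mean (sample SD, ddof=1)."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

For example, BUA readings of 100, 102, and 98 dB/MHz on one heel give a CV of 2%, in the range the abstract reports for the DTU-one.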
Newborn Screening in the Era of Precision Medicine.
Yang, Lan; Chen, Jiajia; Shen, Bairong
2017-01-01
As newborn screening success stories gained general confirmation during the past 50 years, scientists quickly discovered diagnostic tests for a host of genetic disorders that could be treated at birth. Outstanding progress in sequencing technologies over the last two decades has made it possible to comprehensively profile newborn screening (NBS) and identify clinically relevant genomic alterations. With the recent rapid developments in whole-genome sequencing (WGS) and whole-exome sequencing (WES), we can screen newborns at the genomic level and direct the appropriate diagnosis to different individuals at the appropriate time, which is also encompassed in the concept of precision medicine. In addition, we can develop novel interventions directed at the molecular characteristics of genetic diseases in newborns. The implementation of genomics in NBS programs would provide an effective basis for identifying the majority of genetic aberrations and would primarily help in accurate treatment guidance and better prediction. However, there is some debate surrounding the widespread application of genome sequencing in NBS, owing to major concerns such as clinical analysis, result interpretation, storage of sequencing data, and communication of clinically relevant mutations to pediatricians and parents, along with the ethical, legal, and social implications (so-called ELSI). This review is focused on these critical issues and concerns about the expanding role of genomics in NBS for precision medicine. If WGS or WES is to be incorporated into NBS practice, these challenges should be carefully considered and tackled properly to meet the requirements of genome sequencing in the era of precision medicine.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
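The precision experiment can be mimicked on ordinary hardware by integrating a Lorenz-type model at two floating-point precisions and comparing trajectories over a short window. The sketch below uses the one-scale Lorenz '96 model and NumPy dtypes as a stand-in for the paper's two-scale Lorenz '95 testbed and the adjustable FPGA number formats:

```python
import numpy as np

def lorenz96_step(x, F=8.0, dt=0.01):
    """One forward-Euler step of the one-scale Lorenz '96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    d = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * d

def run(x0, steps, dtype):
    """Integrate at a chosen floating-point precision to mimic the
    reduced-precision experiments (NumPy dtypes here, not FPGA formats)."""
    x = x0.astype(dtype)
    for _ in range(steps):
        x = lorenz96_step(x).astype(dtype)
    return x
```

Comparing `run(x0, 50, np.float64)` against `run(x0, 50, np.float32)` gives a quick rounding-error estimate from a 50-step simulation, in the spirit of the short-simulation screening approach the abstract describes for choosing a precision level for long runs.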
Two-way sequential time synchronization: Preliminary results from the SIRIO-1 experiment
NASA Technical Reports Server (NTRS)
Detoma, E.; Leschiutta, S.
1981-01-01
A two-way time synchronization experiment performed in the spring of 1979 and 1980 via the Italian SIRIO-1 experimental telecommunications satellite is described. The experiment was designed and implemented to precisely monitor the satellite motion and to evaluate the possibility of performing a high precision, two-way time synchronization using a single communication channel, time-shared between the participating sites. Results show that the precision of the time synchronization is between 1 and 5 ns, while the evaluation and correction of the satellite motion effect was performed with an accuracy of a few nanoseconds or better over a time interval from 1 up to 20 seconds.
Should precise numerical dating overrule glacial geomorphology?
NASA Astrophysics Data System (ADS)
Winkler, Stefan
2016-04-01
Numerical age dating techniques, namely different types of terrestrial cosmogenic nuclide dating (TCND), have achieved impressive progress in both laboratory precision and regional calibration models during the past few decades. It is now possible to apply precise TCND even to young landforms like Late Holocene moraines, a task that seemed hardly achievable only about 15 years ago. An increasing number of studies provide very precise TCND ages for boulders from Late Holocene moraines, enabling reconstruction of glacier chronologies and interpretation of these glacial landforms in a palaeoclimatological context. These studies may also resolve previous controversies about different ages assigned to moraines by different dating techniques, for example relative-age dating techniques or techniques combining relative-age dating with a few fixed points derived from numerical-age dating. There are a few cases, for example Mueller Glacier and a nearby long debris-covered valley glacier in Aoraki/Mt. Cook National Park (Southern Alps, New Zealand), where the apparent "supremacy" of TCND ages seems to overrule glacial geomorphological principles. Enabled by a comparatively high number of individual boulders precisely dated by TCND, moraine ridges on those glacier forelands have been clustered primarily on the basis of these boulder ages rather than on their morphological position. In the extreme, segments of a particular moraine complex that are morphologically and sedimentologically proven to have formed during one event have been split and classified as two separate "moraines" on different parts of the glacier foreland. One ledge of another moraine complex contains two TCND-sampled boulders apparently representing two separate "moraine" clusters with an age difference on the order of 1,500 years.
Although criticism has recently been raised regarding the uncontested use of the arithmetic mean for calculating TCND ages of individual moraines, this problem is still not properly addressed in every case, and significant age differences between individual boulders on moraine ridges create uncertainties in their palaeoclimatic interpretation. Referring to the exemplary case of the glacier forelands mentioned above, it is argued that, prior to any chronological interpretation, the geomorphological correlation of individual moraine ridges and complexes needs to be established and potential uncertainties clearly addressed. After the TCND ages have been obtained from sampled boulders and assigned to the moraines, any discrepancy needs to be carefully investigated to ensure that misleading ages do not affect subsequent chronological reconstructions and palaeoclimatic interpretations. Even if dating precision has recently increased considerably, moraines should not be clustered into synchronous moraine groups based on TCND ages if their morphological position or sedimentology contradicts such a classification. Furthermore, the high precision of TCND ages often does not account for the concept of 'LIA'-type events and the different response times of nearby glaciers to the same mass balance/climate signal, therefore potentially overestimating the true number of glacier advances during a specific period. An alternative interpretation of existing TCND ages reveals fewer advances during the Late Holocene. Summarising, modern TCND ages are possibly "too precise" in some respects and wrongly judged as superior to geomorphological evidence. A more critical evaluation would benefit any subsequent attempt at intra-hemispheric and global correlation of glacier chronologies.
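One standard alternative to the criticised plain arithmetic mean is an inverse-variance weighted mean combined with a reduced chi-square, which flags moraines whose boulder-to-boulder scatter exceeds the stated measurement uncertainties. The sketch below is illustrative and not taken from the study:

```python
import numpy as np

def weighted_age(ages, sigmas):
    """Inverse-variance weighted mean of boulder exposure ages (ka) with a
    reduced chi-square: chi2_red >> 1 indicates scatter beyond measurement
    uncertainty (e.g. inheritance or post-depositional disturbance)."""
    a = np.asarray(ages, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    w = 1.0 / s**2
    mean = np.sum(w * a) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))                 # uncertainty of the mean
    chi2_red = np.sum(w * (a - mean) ** 2) / (len(a) - 1)
    return mean, err, chi2_red
```

A large reduced chi-square for boulders on a single morphologically coherent ridge is a statistical warning consistent with the geomorphological argument above: the spread reflects geological noise, not multiple moraine-building events.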
Menlove, Howard Olsen; Belian, Anthony P.; Geist, William H.; ...
2017-10-07
The purpose of this paper is to provide a solution to a decades-old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g., Gd2O3) added to the fuel rods decreases the active neutron assay for the fuel assemblies. This paper presents a new method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation of the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induce fission reactions (IF) in the 235U that are time-correlated with the SF source neutrons. Thus, the coincidence-gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and that of the 235U IF (2.44) for a fraction of the IF reactions, whereas the 238U SF background has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost, because background neutron counts from the SF are converted to signal doubles. This time correlation in the IF signal increases the signal/background ratio, which provides good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading when a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. Here, we have named the system the fast-neutron passive collar (FNPC).
NASA Astrophysics Data System (ADS)
Menlove, Howard; Belian, Anthony; Geist, William; Rael, Carlos
2018-01-01
The purpose of this paper is to provide a solution to a decades-old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g., Gd2O3) added to the fuel rods decreases the active neutron assay for the fuel assemblies. This paper presents a new method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation of the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induce fission reactions (IF) in the 235U that are time-correlated with the SF source neutrons. Thus, the coincidence-gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and that of the 235U IF (2.44) for a fraction of the IF reactions, whereas the 238U SF background has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost, because background neutron counts from the SF are converted to signal doubles. This time correlation in the IF signal increases the signal/background ratio, which provides good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading when a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. We have named the system the fast-neutron passive collar (FNPC).
Sofia, C; Magno, C; Silipigni, S; Cantisani, V; Mucciardi, G; Sottile, F; Inferrera, A; Mazziotti, S; Ascenti, G
2017-01-01
To evaluate the precision of the centrality index (CI) measurement on three-dimensional (3D) volume rendering technique (VRT) images in patients with renal masses, compared to its standard measurement on axial images. Sixty-five patients with renal lesions underwent contrast-enhanced multidetector (MD) computed tomography (CT) for preoperative imaging. Two readers calculated the CI on two-dimensional axial images and on VRT images, measuring it in the plane containing both the tumour and the centre of the kidney. Correlation and agreement of interobserver measurements and inter-method results were calculated using intraclass correlation coefficients (ICC) and the Bland-Altman method; time saving was also calculated. Interobserver correlation coefficients were r=0.99 (p<0.05) for the CI on both axial and VRT images, with ICCs of 0.99 and 0.99, respectively. Correlation between the two methods of measuring the CI, on VRT and on axial CT images, was r=0.99 (p<0.05), and the two methods showed a mean difference of -0.03 (SD 0.13). Mean time saving per examination with VRT was 45.5%. The present study showed that VRT and axial images produce almost identical values of the CI, with the advantages of greater ease of execution and a time saving of almost 50% for 3D VRT images. In addition, VRT provides an integrated perspective that can better assist surgeons in clinical decision making and in operative planning, suggesting this technique as a possible standard method for CI measurement. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
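The inter-method agreement statistics quoted above (mean difference with its SD) come from a Bland-Altman analysis, which can be sketched in a few lines; the data below are illustrative, not the study's measurements:

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman agreement statistics for paired measurements from two
    methods: mean difference (bias), SD of the differences, and the 95%
    limits of agreement (bias +/- 1.96 * SD)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero with narrow limits of agreement, as reported here (-0.03, SD 0.13), is what justifies treating the VRT-based CI as interchangeable with the axial measurement.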