Sample records for precise time interval

  1. Department of Defense Precise Time and Time Interval program improvement plan

    NASA Technical Reports Server (NTRS)

    Bowser, J. R.

    1981-01-01

    The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.

  2. Method of high precision interval measurement in pulse laser ranging system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging offers high measuring precision, fast measuring speed, operation without cooperative targets, and strong resistance to electromagnetic interference, and the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. This paper introduces the principal structure of a laser ranging system and establishes a method of high precision time interval measurement for a pulsed laser ranging system. Based on an analysis of the factors that affect range measurement precision, a pulse rising-edge discriminator is adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time interval measurement method presented here achieves higher range accuracy. Compared with a traditional time interval measurement system, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.

  3. Time interval measurement device based on surface acoustic wave filter excitation, providing 1 ps precision and stability.

    PubMed

    Panek, Petr; Prochazka, Ivan

    2007-09-01

    This article deals with a time interval measurement device based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle rests on the fact that a transversal SAW filter excited by a short pulse generates a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, which corresponds to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long-term stability is better than ±0.2 ps/h. These are, to our knowledge, the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
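
    The reconstruction-and-compare step described above can be illustrated numerically. The sketch below is a simplified stand-in, assuming a generic narrow-band pulse and a 100 MHz sampling clock (both invented, not the paper's SAW filter parameters): two sampled responses are cross-correlated and the peak is refined with a parabolic fit to recover a sub-sample time interval.

      import numpy as np

      fs = 100e6                       # sampling clock, 100 MHz (assumed)
      t = np.arange(0, 5e-6, 1 / fs)   # observation window

      def response(delay):
          # Narrow-band stand-in for a filter response: Gaussian envelope on a carrier.
          return (np.exp(-(((t - 2e-6 - delay) / 200e-9) ** 2))
                  * np.cos(2 * np.pi * 10e6 * (t - delay)))

      a = response(0.0)      # response to the first excitation
      b = response(3.7e-9)   # second excitation arrives 3.7 ns later

      # Cross-correlate, then refine the peak with a three-point parabolic fit
      # to obtain the delay with sub-sample resolution.
      xc = np.correlate(b, a, mode="full")
      k = int(np.argmax(xc))
      y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
      frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
      delay_est = (k - (len(a) - 1) + frac) / fs
      print(f"estimated interval: {delay_est * 1e9:.2f} ns")   # close to 3.70 ns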

  4. The 26th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard (Editor)

    1995-01-01

    This document is a compilation of technical papers presented at the 26th Annual PTTI Applications and Planning Meeting. Papers are in the following categories: (1) Recent developments in rubidium, cesium, and hydrogen-based frequency standards, and in cryogenic and trapped-ion technology; (2) International and transnational applications of Precise Time and Time Interval technology with emphasis on satellite laser tracking, GLONASS timing, intercomparison of national time scales and international telecommunications; (3) Applications of Precise Time and Time Interval technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; (4) Applications of PTTI technology to evolving military communications and navigation systems; and (5) Dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communications satellites.

  5. Use of precision time and time interval (PTTI)

    NASA Technical Reports Server (NTRS)

    Taylor, J. D.

    1974-01-01

    Range time synchronization methods are reviewed as an important aspect of range operations. The overall capabilities of various missile ranges to determine precise time of day by synchronizing to available references, and to apply this time point to instrumentation for time interval measurements, are described.

  6. Proceedings of the Fourth Precise Time and Time Interval Planning Meeting

    NASA Technical Reports Server (NTRS)

    Acrivos, H. N. (Compiler); Wardrip, S. C. (Compiler)

    1972-01-01

    The proceedings of a conference on Precise Time and Time Interval Planning are presented. The subjects discussed include the following: (1) satellite timing techniques, precision frequency sources, and very long baseline interferometry, (2) frequency stabilities and communications, and (3) very low frequency and ultrahigh frequency propagation and use. Emphasis is placed on the accuracy of time discrimination obtained with time measuring equipment and specific applications of time measurement to military operations and civilian research projects.

  7. Proceedings of the Eleventh Annual Precise Time and Time Interval (PTTI) Application and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Wardrip, S. C. (Editor)

    1979-01-01

    Thirty-eight papers are presented addressing various aspects of precise time and time interval applications. Areas discussed include: past accomplishments; state-of-the-art systems; new and useful applications, procedures, and techniques; and fruitful directions for research efforts.

  8. Optical timing receiver for the NASA laser ranging system. Part 2: High precision time interval digitizer

    NASA Technical Reports Server (NTRS)

    Leskovar, B.; Turko, B.

    1977-01-01

    The development of a high precision time interval digitizer is described. The time digitizer is a 10 psec resolution stopwatch covering a range of up to 340 msec. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start input and the stop input of the digitizer. Employing an interpolation technique and a 50 MHz high precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. Absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as a standard time marker. The start and stop pulses are fast 1 nsec rise time signals, discriminated by means of tunnel diode discriminators. The firing levels of the discriminators define the start and stop points between which the time interval is digitized.
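
    The arithmetic behind such an interpolating counter is compact enough to state directly. The sketch below is a minimal illustration of the start/stop interpolation idea (the classic Nutt method), with made-up numbers rather than values from the digitizer itself.

      # 50 MHz master oscillator: one coarse count = 20 ns.
      T_CLK = 20e-9

      def interval(n_clk, start_frac, stop_frac):
          # start_frac / stop_frac: interpolator readings of the time from the
          # start (stop) event to its following clock edge, in seconds.
          return n_clk * T_CLK + start_frac - stop_frac

      # 17 whole clock periods; start 13.42 ns and stop 3.41 ns before their
      # following clock edges (made-up readings).
      print(interval(17, 13.42e-9, 3.41e-9))   # 3.5001e-07 s, i.e. 350.01 ns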

  9. Proceedings of the 7th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Proceedings contain the papers presented at the Seventh Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting and the edited record of the discussion period following each paper. This meeting provided a forum to promote more effective, efficient, economical and skillful applications of PTTI technology to the many problem areas to which PTTI offers solutions. Specifically the purpose of the meeting is to: disseminate, coordinate, and exchange practical information associated with precise time and frequency; acquaint systems engineers, technicians and managers with precise time and frequency technology and its applications; and review present and future requirements for PTTI.

  10. The 22nd Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard L. (Editor)

    1990-01-01

    Papers presented at the 22nd Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting are compiled. The following subject areas are covered: Rb, Cs, and H-based frequency standards and cryogenic and trapped-ion technology; satellite laser tracking networks, GLONASS timing, intercomparison of national time scales and international telecommunications; telecommunications, power distribution, platform positioning, and geophysical survey industries; military communications and navigation systems; and dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communication satellites.

  11. The 25th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard L. (Editor)

    1994-01-01

    Papers in the following categories are presented: recent developments in rubidium, cesium, and hydrogen-based frequency standards, and in cryogenic and trapped-ion technology; international and transnational applications of precise time and time interval (PTTI) technology with emphasis on satellite laser tracking networks, GLONASS timing, intercomparison of national time scales and international telecommunication; applications of PTTI technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; application of PTTI technology to evolving military communications and navigation systems; and dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communications satellites.

  12. Off-set stabilizer for comparator output

    DOEpatents

    Lunsford, James S.

    1991-01-01

    A stabilized off-set voltage is input as the reference voltage to a comparator. In application to a time-interval meter, the comparator output generates a timing interval which is independent of drift in the initial voltage across the timing capacitor. A precision resistor and operational amplifier charge a capacitor to a voltage which is precisely offset from the initial voltage. The capacitance of the reference capacitor is selected so that substantially no voltage drop is obtained in the reference voltage applied to the comparator during the interval to be measured.

  13. Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice

    PubMed Central

    Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.

    2010-01-01

    In many species, interval timing behavior is accurate—appropriate estimated durations—and scalar—errors vary linearly with estimated durations. While accuracy has been previously examined, scalar timing has not been yet clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Both when timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
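
    A quick way to see what the scalar property claims is to check that the coefficient of variation of timed responses stays constant across criterion durations. The simulation below is an illustrative sketch with invented noise levels, not the mouse data.

      import numpy as np

      rng = np.random.default_rng(0)
      for target in (10.0, 20.0, 30.0):          # criterion durations, s
          # Scalar property: response spread grows in proportion to the duration.
          peaks = rng.normal(target, 0.15 * target, 500)
          cv = peaks.std() / peaks.mean()
          print(f"{target:4.0f} s criterion: CV = {cv:.3f}")   # ~0.15 throughout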

  14. Proceedings of the 8th Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The Proceedings contain the papers presented at the Eighth Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting. The edited record of the discussions following the papers and the panel discussions are also included. This meeting provided a forum for the exchange of information on precise time and frequency technology among members of the scientific community and persons with program applications. The 282 registered attendees came from various U.S. Government agencies, private industry, and universities, and a number of foreign countries were represented. In this meeting, papers were presented that emphasized: (1) definitions and international regulations of precise time sources and users; (2) the scientific foundations of hydrogen maser standards, current developments in this field, and application experience; and (3) how to measure the stability performance properties of precise standards. As in the previous meetings, updated and new papers were presented on system applications, with past, present, and future requirements identified.

  15. Proceedings of the 30th Annual Precise Time and Time Interval (PTTI) Systems and Applications Meeting

    NASA Technical Reports Server (NTRS)

    Breakiron, Lee A. (Editor)

    1999-01-01

    This document is a compilation of technical papers presented at the 30th Annual Precise Time and Time Interval (PTTI) Systems and Applications Meeting held 1-3 December 1998 at the Hyatt Regency Hotel at Reston Town Center, Reston, Virginia. Papers are in the following categories: 1) Recent developments in rubidium, cesium, and hydrogen-based atomic frequency standards, and in trapped-ion and space clock technology; 2) National and international applications of PTTI technology with emphasis on GPS and GLONASS timing, atomic time scales, and telecommunications; 3) Applications of PTTI technology to evolving military navigation and communication systems; geodesy; aviation; and pulsars; and 4) Dissemination of precise time and frequency by means of GPS, geosynchronous communication satellites, computer networks, WAAS, and LORAN.

  16. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
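
    For the no-bias case in the results above, the interval for a new patient takes a simple normal-theory form. The sketch below is a minimal illustration with invented numbers; it is not the simulation code from the study.

      y_new = 42.0   # new patient's measured biomarker value (made-up)
      wsd = 3.1      # within-subject SD from a test-retest precision study (made-up)
      z = 1.96       # 95% normal quantile

      # Under the no-bias assumption the measurement is unbiased for the true
      # value, so the interval is simply y_new +/- z * wSD.
      lo, hi = y_new - z * wsd, y_new + z * wsd
      print(f"95% CI for the true value: ({lo:.1f}, {hi:.1f})")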

  17. Proceedings of the Thirteenth Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Wardrip, S. C.

    1982-01-01

    Proceedings of an annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting are summarized. The meeting offered PTTI managers, systems engineers, and program planners a transparent view of the state of the art, an opportunity to express needs, a view of important future trends, and a review of relevant past accomplishments. Specific aims were to provide PTTI users with new and useful applications, procedures, and techniques, and to allow the PTTI researcher to better assess fruitful directions for research efforts.

  18. Variable-pulse switching circuit accurately controls solenoid-valve actuations

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1967-01-01

    Solid state circuit generating adjustable square wave pulses of sufficient power operates a 28 volt dc solenoid valve at precise time intervals. This circuit is used for precise time control of fluid flow in combustion experiments.

  19. 27th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard L. (Editor)

    1996-01-01

    This document is a compilation of technical papers presented at the 27th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting, held November 29 - December 1, 1995 at San Diego, CA. Papers are in the following categories: Recent developments in rubidium, cesium, and hydrogen-based frequency standards; and in cryogenic and trapped-ion technology; International and transnational applications of PTTI technology with emphasis on satellite laser tracking, GLONASS timing, intercomparison of national time scales and international telecommunications; Applications of PTTI technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; Applications of PTTI technology to evolving military communications and navigation systems; and Dissemination of precise time and frequency by means of Global Positioning System (GPS), Global Satellite Navigation System (GLONASS), MILSTAR, LORAN, and synchronous communications satellites.

  20. Interval timing in genetically modified mice: a simple paradigm

    PubMed Central

    Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.

    2009-01-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
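
    The two summary statistics of this screen are simple to compute from raw switch latencies. The sketch below uses simulated latencies and assumed feed latencies (3 s short, 9 s long), not data from the mice.

      import numpy as np

      rng = np.random.default_rng(1)
      # Simulated switch latencies for a 3 s short / 9 s long pair (assumed).
      switch_latencies = rng.normal(6.0, 0.9, 300)

      median = np.median(switch_latencies)                 # timing accuracy
      q75, q25 = np.percentile(switch_latencies, [75, 25])
      print(f"accuracy (median): {median:.2f} s, precision (IQR): {q75 - q25:.2f} s")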

  1. Interval timing in genetically modified mice: a simple paradigm.

    PubMed

    Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P

    2008-04-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently.

  2. Proceedings of the Sixteenth Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The effects of ionospheric and tropospheric propagation on time and frequency transfer, advances in the generation of precise time and frequency, time transfer techniques, and filtering and modeling were among the topics emphasized. Rubidium and cesium frequency standards, crystal oscillators, masers, Kalman filters, and atomic clocks were discussed.

  3. Picosecond-precision multichannel autonomous time and frequency counter

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kwiatkowski, P.; Różyc, K.; Jachna, Z.; Sondej, T.

    2017-12-01

    This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) control and fast data processing, (3) low-noise power supply, and (4) supplying a stable reference clock (optional rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows us to achieve a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s). Frequency is measured up to 3.0 GHz with the use of the reciprocal method. The counter's wide functionality also includes the evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad and color touch panel, or remotely, via USB, Ethernet, RS232C, or RS485 interfaces.

  4. Picosecond-precision multichannel autonomous time and frequency counter.

    PubMed

    Szplet, R; Kwiatkowski, P; Różyc, K; Jachna, Z; Sondej, T

    2017-12-01

    This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) control and fast data processing, (3) low-noise power supply, and (4) supplying a stable reference clock (optional rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows us to achieve a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s). Frequency is measured up to 3.0 GHz with the use of the reciprocal method. The counter's wide functionality also includes the evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad and color touch panel, or remotely, via USB, Ethernet, RS232C, or RS485 interfaces.
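
    The reciprocal method named above determines frequency from a whole number of input cycles and two precise timestamps, so the resolution is set by the timestamping rather than by a fixed gate. A minimal sketch with made-up values:

      def reciprocal_frequency(n_cycles, t_first, t_last):
          # Whole input cycles divided by the precisely timestamped span
          # between the first and last counted edges.
          return n_cycles / (t_last - t_first)

      # 2_999_987 full cycles between edges timestamped 1.0000000 s apart (made-up).
      print(reciprocal_frequency(2_999_987, 0.0, 1.0))   # 2999987.0 Hz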

  5. IEEE-1588(Trademark) Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems

    DTIC Science & Technology

    2002-12-01

    ... synchronization. 2. Cyclic systems: in cyclic systems, timing is periodic and is usually defined by the characteristics of a cyclic network or bus ... incommensurate, timing schedules for each device are easily implemented. In addition, synchronization accuracy depends on the accuracy of the common ...

  6. Precise Interval Timer for Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Pozhidaev, Aleksey (Inventor)

    2014-01-01

    A precise digital fractional interval timer for software defined radios which vary their waveform on a packet-by-packet basis. The timer allows for variable preamble length in the RF packet and allows the boundaries of the TDMA (Time Division Multiple Access) slots of an SDR receiver to be adjusted based on the reception of the RF packet of interest.

  7. Intact interval timing in circadian CLOCK mutants.

    PubMed

    Cordes, Sara; Gallistel, C R

    2008-08-28

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.

  8. Ultrasonic sensor and method of use

    DOEpatents

    Condreva, Kenneth J.

    2001-01-01

    An ultrasonic sensor system and method of use for measuring transit time through a liquid sample, using one ultrasonic transducer coupled to a precision time interval counter. The timing circuit captures changes in transit time, representing small changes in the velocity of sound transmitted, over necessarily small time intervals (nanoseconds), and uses the transit time changes to identify the presence of non-conforming constituents in the sample.

  9. Single photon detection and timing in the Lunar Laser Ranging Experiment.

    NASA Technical Reports Server (NTRS)

    Poultney, S. K.

    1972-01-01

    The goals of the Lunar Laser Ranging Experiment lead to the need to measure a 2.5 sec time interval to an accuracy of a nanosecond or better. The systems analysis, which included practical retroreflector arrays, available laser systems, and large telescopes, led to the necessity of single photon detection. Operation under all background illumination conditions required auxiliary range gates and extremely narrow spectral and spatial filters in addition to the effective gate provided by the time resolution. Nanosecond timing precision at relatively high detection efficiency was obtained using the RCA C31000F photomultiplier and an Ortec 270 constant-fraction-of-pulse-height timing discriminator. The timing accuracy over the 2.5 sec interval was obtained using a digital interval measurement with analog vernier ends. Both precision and accuracy are currently checked internally using a triggerable nanosecond light pulser. Future measurements using sub-nanosecond laser pulses will be limited by the time resolution of single photon detectors.

  10. A 3.9 ps Time-Interval RMS Precision Time-to-Digital Converter Using a Dual-Sampling Method in an UltraScale FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2016-10-01

    Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGA. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that the TDC precision beyond the cell delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using the bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.
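
    The paper's bin-realignment and dual-sampling steps are specific to the UltraScale fabric, but tapped-delay-line TDCs are commonly calibrated with a code-density test: hits from a source uncorrelated with the clock populate each tap in proportion to its width. The sketch below illustrates that generic test with invented tap widths; it is not the authors' calibration code.

      import numpy as np

      T_CLK = 2e-9   # one delay-line traversal = one clock period (assumed, 500 MHz)
      rng = np.random.default_rng(2)

      n_taps = 128                              # illustrative tap count
      widths = rng.dirichlet(np.ones(n_taps))   # uneven true tap widths
      hits = rng.multinomial(200_000, widths)   # random hits recorded per tap

      bin_width = hits / hits.sum() * T_CLK               # estimated tap widths, s
      bin_center = np.cumsum(bin_width) - bin_width / 2   # calibrated tap times

      def code_to_time(code):
          # Convert a raw tap code into calibrated time within the clock period.
          return bin_center[code]

      print(f"tap 42 -> {code_to_time(42) * 1e12:.1f} ps after the clock edge")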

  11. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  12. Theoretical implications of quantitative properties of interval timing and probability estimation in mouse and rat.

    PubMed

    Kheifets, Aaron; Freestone, David; Gallistel, C R

    2017-07-01

    In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts); 2) the variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies; and 3) the adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. The three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory.
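
    The divergence asymmetry mentioned in the third result has a closed form for normal distributions, which makes it easy to see why the direction of comparison matters. The parameter values below are illustrative scalar-timing spreads, not fitted values from the experiments.

      import math

      def kl_normal(m1, s1, m2, s2):
          # KL( N(m1, s1^2) || N(m2, s2^2) ) in nats.
          return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

      short, long_ = (3.0, 0.45), (9.0, 1.35)   # means and scalar spreads, s (made-up)
      print(kl_normal(*short, *long_))          # KL(short || long)
      print(kl_normal(*long_, *short))          # KL(long || short): much larger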

  13. Time Transfer With the Galileo Precise Timing Facility

    DTIC Science & Technology

    2007-11-01

    ... being designed on the basis of three techniques: TWSTFT, CV, and use of OSPF products. The last technique implies interfacing an external facility ... hydrogen masers (AHM) manufactured by T4S (Switzerland) and the 4 cesiums by Symmetricom. Time Transfer Subsystem: this includes the TWSTFT station ...

  14. A new time calibration method for switched-capacitor-array-based waveform samplers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H.; Chen, C. -T.; Eclov, N.

    2014-08-24

    Here we have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. Ultimately, the new method could be applicable to other switched-capacitor-array technology-based waveform samplers for precise time calibration.
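
    The proportionality the method relies on can be shown in a few lines: on a ramp of known slew rate, the voltage step between adjacent cells divided by the slew rate recovers each cell's sampling interval. The sketch below uses synthetic cell intervals and an assumed slew rate, not DRS4 measurements.

      import numpy as np

      slew_rate = 0.5 / 40e-9   # sawtooth slew rate, V/s (0.5 V per 40 ns, assumed)
      rng = np.random.default_rng(3)

      # Non-uniform cell-to-cell sampling intervals around a nominal 1 ns.
      true_dt = rng.normal(1e-9, 0.05e-9, 1024)
      voltages = np.cumsum(true_dt) * slew_rate   # cell readings on the ramp

      # Differential amplitude of adjacent cells / slew rate = sampling interval.
      dt_est = np.diff(voltages, prepend=0.0) / slew_rate
      print(f"max recovery error: {np.abs(dt_est - true_dt).max():.2e} s")   # ~0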

  15. A new time calibration method for switched-capacitor-array-based waveform samplers

    NASA Astrophysics Data System (ADS)

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-12-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be 2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.

  16. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers.

    PubMed

    Kim, H; Chen, C-T; Eclov, N; Ronzhin, A; Murat, P; Ramberg, E; Los, S; Moses, W; Choong, W-S; Kao, C-M

    2014-12-11

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.

  17. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers

    PubMed Central

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-01-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration. PMID:25506113

  18. Relationship between cutoff frequency and accuracy in time-interval photon statistics applied to oscillating signals

    NASA Astrophysics Data System (ADS)

    Rebolledo, M. A.; Martinez-Betorz, J. A.

    1989-04-01

    In this paper the accuracy in the determination of the period of an oscillating signal, when obtained from the photon statistics time-interval probability, is studied as a function of the precision (the inverse of the cutoff frequency of the photon counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, where the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for values of the frequency of the signal near the cutoff frequency the errors in the period are small.

  19. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

  20. Two-way sequential time synchronization: Preliminary results from the SIRIO-1 experiment

    NASA Technical Reports Server (NTRS)

    Detoma, E.; Leschiutta, S.

    1981-01-01

    A two-way time synchronization experiment performed in the spring of 1979 and 1980 via the Italian SIRIO-1 experimental telecommunications satellite is described. The experiment was designed and implemented to precisely monitor the satellite motion and to evaluate the possibility of performing a high precision, two-way time synchronization using a single communication channel, time-shared between the participating sites. Results show that the precision of the time synchronization is between 1 and 5 ns, while the evaluation and correction of the satellite motion effect was performed with an accuracy of a few nanoseconds or better over a time interval from 1 up to 20 seconds.
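
    The cancellation that makes two-way synchronization precise is visible in the textbook four-timestamp exchange: a symmetric path delay drops out of the offset estimate. The sketch below uses made-up timestamps, not SIRIO-1 data.

      def two_way_offset(t1, t2, t3, t4):
          # t1: site A transmits, t2: site B receives,
          # t3: site B transmits, t4: site A receives.
          # With a symmetric path the delay cancels and the clock offset remains.
          return ((t2 - t1) - (t4 - t3)) / 2.0

      # 250 ms one-way satellite delay; B's clock runs 3 ns ahead of A (made-up).
      t1, t2 = 0.0, 0.250 + 3e-9
      t3, t4 = 1.0 + 3e-9, 1.250
      print(f"offset of B vs. A: {two_way_offset(t1, t2, t3, t4) * 1e9:.2f} ns")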

  1. Precise time technology for selected Air Force systems: Present status and future requirements

    NASA Technical Reports Server (NTRS)

    Yannoni, N. F.

    1981-01-01

    Precise time and time interval (PTTI) technology is becoming increasingly significant to Air Force operations as digital techniques find expanded utility in military missions. Timing has a key role in the communications function as well as in navigation. A survey of the PTTI needs of several Air Force systems is presented. Current technology supporting these needs is reviewed, and new requirements are emphasized for systems as they transfer from initial development to final operational deployment.

  2. Proceedings of the 14th Annual Precise Time and Time Interval (PTTI) Applications Planning Meeting

    NASA Technical Reports Server (NTRS)

    Wardrip, S. C. (Editor)

    1983-01-01

    Developments and applications in the field of frequency and time are addressed. Specific topics include rubidium frequency standards, future timing requirements, noise and atomic standards, hydrogen maser technology, synchronization, and quartz technology.

  3. Proceedings of the Annual Precise Time and Time Interval (PTTI) Planning Meeting (6th). Held at U.S. Naval Research Laboratory, December 3-5, 1974

    DTIC Science & Technology

    1974-01-01

    General agreement seems to be developing that the geophysical system should be defined in terms of a large number of points ... "A Laser-Interferometer System for the Absolute Determination of the Acceleration due to Gravity," in Proc. Int. Conf. on Precision Measurement ... The ratio of the plasmaspheric to the total time delays due to free ...

  4. PTTI applications at the limits of GPS

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Popelar, J.

    1995-01-01

    Canadian plans for precise time and time interval services are examined in the light of GPS capabilities developed for geodesy. We present our experience in establishing and operating a geodetic-type GPS station in a time laboratory setting, and show sub-nanosecond residuals for time transfer between geodetic sites. We present our approach to establishing realistic standard uncertainties for short-term frequency calibration services over time intervals of hours, and for longer-term frequency dissemination at better than the 10⁻¹⁵ level of accuracy.

  5. BDS Precise Point Positioning for Seismic Displacements Monitoring: Benefit from the High-Rate Satellite Clock Corrections

    PubMed Central

    Geng, Tao; Su, Xing; Fang, Rongxin; Xie, Xin; Zhao, Qile; Liu, Jingnan

    2016-01-01

    In order to satisfy the requirements of high-rate, high-precision applications, 1 Hz BeiDou Navigation Satellite System (BDS) satellite clock corrections are generated based on precise orbit products, and the quality of the generated clock products is assessed by comparison with those from other analysis centers. The comparisons show that the root mean square (RMS) of the clock errors of geostationary Earth orbit (GEO) satellites is about 0.63 ns, whereas those of inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are about 0.2–0.3 ns and 0.1 ns, respectively. Then, the 1 Hz clock products are used for BDS precise point positioning (PPP) to retrieve seismic displacements of the 2015 Mw 7.8 Gorkha, Nepal, earthquake. The derived seismic displacements from BDS PPP are consistent with those from Global Positioning System (GPS) PPP, with RMS of 0.29, 0.38, and 1.08 cm in the east, north, and vertical components, respectively. In addition, BDS PPP solutions with clock intervals of 1 s, 5 s, 30 s, and 300 s are processed and compared with each other. The results demonstrate that PPP with 300 s clock intervals is the worst and that with a 1 s clock interval is the best. For the scenario of 5 s clock intervals, the precision of the PPP solutions is almost the same as the 1 s results. Considering the time consumption of clock estimation, we suggest that a 5 s clock interval is adequate for high-rate BDS solutions. PMID:27999384

  6. BDS Precise Point Positioning for Seismic Displacements Monitoring: Benefit from the High-Rate Satellite Clock Corrections.

    PubMed

    Geng, Tao; Su, Xing; Fang, Rongxin; Xie, Xin; Zhao, Qile; Liu, Jingnan

    2016-12-20

    In order to satisfy the requirements of high-rate, high-precision applications, 1 Hz BeiDou Navigation Satellite System (BDS) satellite clock corrections are generated based on precise orbit products, and the quality of the generated clock products is assessed by comparison with those from other analysis centers. The comparisons show that the root mean square (RMS) of the clock errors of geostationary Earth orbit (GEO) satellites is about 0.63 ns, whereas those of inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are about 0.2–0.3 ns and 0.1 ns, respectively. Then, the 1 Hz clock products are used for BDS precise point positioning (PPP) to retrieve seismic displacements of the 2015 Mw 7.8 Gorkha, Nepal, earthquake. The derived seismic displacements from BDS PPP are consistent with those from Global Positioning System (GPS) PPP, with RMS of 0.29, 0.38, and 1.08 cm in the east, north, and vertical components, respectively. In addition, BDS PPP solutions with clock intervals of 1 s, 5 s, 30 s, and 300 s are processed and compared with each other. The results demonstrate that PPP with 300 s clock intervals is the worst and that with a 1 s clock interval is the best. For the scenario of 5 s clock intervals, the precision of the PPP solutions is almost the same as the 1 s results. Considering the time consumption of clock estimation, we suggest that a 5 s clock interval is adequate for high-rate BDS solutions.
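
    The effect of the clock-product interval can be mimicked by interpolating a smooth synthetic clock to 1 Hz epochs from progressively sparser samples. The sketch below is illustrative only; the wander model and magnitudes are invented, not real BDS clock behavior.

      import numpy as np

      def clk(t):
          # Synthetic clock wander (ns-level sinusoid); not real BDS behavior.
          return 1e-9 * np.sin(2 * np.pi * t / 240.0)

      epochs = np.arange(0.0, 601.0, 1.0)      # 1 Hz observation epochs
      for step in (5, 300):
          knots = np.arange(0.0, 601.0, step)  # clock-product sampling times
          interp = np.interp(epochs, knots, clk(knots))
          rms = np.sqrt(np.mean((interp - clk(epochs)) ** 2))
          print(f"{step:3d} s product: interpolation RMS = {rms:.2e} s")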

  7. [Age and time estimation during different types of activity].

    PubMed

    Gareev, E M; Osipova, L G

    1980-01-01

    The study was concerned with the age characteristics of verbal and operative estimation of time intervals filled with different types of mental and physical activity, as well as those free of it. The experiment was conducted on 85 subjects, 7-24 years of age. In all age groups and in both forms of time estimation (except verbal estimation in 10-12 year old children) there was a significant connection between the interval estimation and the type of activity. In adults and in 7-8 year old children, the connection was significantly tighter in operative estimations than in verbal ones. Unlike senior school children and adults, 7-12 year old children showed sharp differences in precision between operative and verbal estimations and a discordance in their changes under the influence of activity. Precision and variability were rather similar in all age groups. It is suggested that the data show heterochrony and differing rates of development of the higher nervous activity mechanisms providing for the reflection of time in the form of verbal and voluntary motor reactions to a given interval.

  8. Time and Frequency Activities at the U.S. Naval Observatory

    DTIC Science & Technology

    2004-12-01


  9. Time and Frequency Activities at the U.S. Naval Observatory

    DTIC Science & Technology

    2005-01-01


  10. Two-Way Time Transfer to Airborne Platforms Using Commercial Satellite Modems

    DTIC Science & Technology

    2002-12-01


  11. Time and Frequency Activities at the U.S. Naval Observatory

    DTIC Science & Technology

    2007-11-01


  12. Steering UTC (AOS) and UTC (PL) by TA (PL)

    DTIC Science & Technology

    2007-01-01

    ... A second time-transfer technique (TWSTFT) will be introduced at AOS ... AOS will ... TWSTFT – Two-Way Satellite Time and Frequency Transfer; UTC – Coordinated Universal Time; UTC(i) – realization of UTC by laboratory i.

  13. Time and Frequency Activities at the U.S. Naval Observatory

    DTIC Science & Technology

    2007-01-01


  14. SHORT COMMUNICATION: Time measurement device with four femtosecond stability

    NASA Astrophysics Data System (ADS)

    Panek, Petr; Prochazka, Ivan; Kodet, Jan

    2010-10-01

    We present the experimental results of extremely precise timing, in the sense of time-of-arrival measurements in a local time scale. The timing device designed and constructed in our laboratory is based on a new concept using a surface acoustic wave filter as a time interpolator. The construction of the device is briefly described. The experiments described were focused on evaluating the timing precision and stability. Low-jitter test pulses with a repetition frequency of 763 Hz were generated synchronously with the local time base, and their times of arrival were measured. The resulting precision of a single measurement was typically 900 fs RMS, and a timing stability TDEV of 4 fs was achieved for time intervals in the range from 300 s to 2 h. To our knowledge this is the best value reported to date for the stability of a timing device. The experimental results are discussed and possible improvements are proposed.
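
    The TDEV statistic quoted above can be estimated from evenly spaced phase (time-error) samples with the standard averaged-second-difference formula. The sketch below, using simulated white timing jitter at the record's 900 fs single-shot level, shows how TDEV falls as the averaging time grows; it is not the authors' analysis code.

      import numpy as np

      def tdev(x, n):
          # Time deviation at tau = n * tau0 from phase samples x (seconds):
          # TDEV = sqrt( mean( n-averaged second differences ^2 ) / 6 ).
          d = x[2 * n:] - 2.0 * x[n:-n] + x[:-2 * n]   # second differences of phase
          if n > 1:
              d = np.convolve(d, np.ones(n) / n, mode="valid")  # n-sample average
          return np.sqrt(np.mean(d ** 2) / 6.0)

      rng = np.random.default_rng(5)
      x = rng.normal(0.0, 900e-15, 200_000)   # 900 fs RMS white timing jitter
      for n in (1, 10, 100, 1000):
          print(f"n = {n:5d}: TDEV = {tdev(x, n):.2e} s")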

  15. The effect of hospital care on early survival after penetrating trauma.

    PubMed

    Clark, David E; Doolittle, Peter C; Winchell, Robert J; Betensky, Rebecca A

    2014-12-01

    The effectiveness of emergency medical interventions can be best evaluated using time-to-event statistical methods with time-varying covariates (TVC), but this approach is complicated by uncertainty about the actual times of death. We therefore sought to evaluate the effect of hospital intervention on mortality after penetrating trauma using a method that allowed for interval censoring of the precise times of death. Data on persons with penetrating trauma due to interpersonal assault were combined from the 2008 to 2010 National Trauma Data Bank (NTDB) and the 2004 to 2010 National Violent Death Reporting System (NVDRS). Cox and Weibull proportional hazards models for survival time (t_SURV) were estimated, with TVC assumed to have constant effects for specified time intervals following hospital arrival. The Weibull model was repeated with t_SURV interval-censored to reflect uncertainty about the precise times of death, using an imputation method to accommodate interval censoring along with TVC. All models showed that mortality was increased by older age, female sex, firearm mechanism, and injuries involving the head/neck or trunk. Uncensored models showed a paradoxical increase in mortality associated with the first hour in a hospital. The interval-censored model showed that mortality was markedly reduced after admission to a hospital, with a hazard ratio (HR) of 0.68 (95% CI 0.63, 0.73) during the first 30 min, declining to a HR of 0.01 after 120 min. Admission to a verified level I trauma center (compared to other hospitals in the NTDB) was associated with a further reduction in mortality, with a HR of 0.93 (95% CI 0.82, 0.97). Time-to-event models with TVC and interval censoring can be used to estimate the effect of hospital care on early mortality after penetrating trauma or other acute medical conditions and could potentially be used for interhospital comparisons.

  16. Next Steps in Network Time Synchronization For Navy Shipboard Applications

    DTIC Science & Technology

    2008-12-01

    ... in a more dynamic manner than in previous designs. This new paradigm creates significant network time synchronization challenges. The Navy has been ... deploying the Network Time Protocol (NTP) in shipboard computing infrastructures to meet the current network time synchronization requirements ...

  17. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical Laboratory Standard Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
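
    Of the three interval methods compared, Satterthwaite's is the most compact to state: the within-device variance is a sum of components, an effective degrees of freedom is formed, and a chi-square interval follows. The component estimates and degrees of freedom below are invented for illustration, not the EP5-A2 example data.

      from scipy.stats import chi2

      # (variance estimate, degrees of freedom) for each nested component:
      # repeatability, between-run, between-day (made-up values).
      components = [(0.040, 80), (0.015, 19), (0.010, 19)]

      var_wd = sum(v for v, _ in components)   # within-device variance
      df_eff = var_wd ** 2 / sum(v ** 2 / df for v, df in components)  # Satterthwaite

      alpha = 0.05
      lo = df_eff * var_wd / chi2.ppf(1 - alpha / 2, df_eff)
      hi = df_eff * var_wd / chi2.ppf(alpha / 2, df_eff)
      print(f"within-device SD {var_wd ** 0.5:.3f}, "
            f"95% CI ({lo ** 0.5:.3f}, {hi ** 0.5:.3f})")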

  18. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    NASA Astrophysics Data System (ADS)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated to the sampling interval of the observational data when they are used in precise point positioning. However, due to the white noise present in atomic clocks, a residual component of this noise will inevitably reside within the observations when clock errors are interpolated, and this noise affects the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used, respectively, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
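
    The weighting idea can be sketched in a few lines. This is only a schematic of the concept, not the paper's wavelet-derived model: clock values tabulated every 300 s are linearly interpolated to 30 s observation epochs, and an assumed interpolation-error term that peaks mid-interval inflates the observation variance used for weighting.

```python
# Schematic only: interpolate 300 s clock samples to 30 s epochs and
# down-weight epochs far from the tabulated points, where interpolation
# error is largest. Noise levels and the error shape are assumptions.
import numpy as np

rng = np.random.default_rng(1)
t_clk = np.arange(0, 3601, 300)                    # tabulated clock epochs (s)
clk = 1e-9 * np.cumsum(rng.normal(size=t_clk.size))  # toy clock random walk (s)

t_obs = np.arange(0, 3601, 30)                     # observation epochs (s)
clk_interp = np.interp(t_obs, t_clk, clk)          # linear interpolation

frac = (t_obs % 300) / 300.0                       # position within the interval
d = np.minimum(frac, 1 - frac)                     # 0 at knots, 0.5 mid-interval

sigma_obs = 0.003                                  # code noise (m), assumed
sigma_itp = 0.02 * d                               # interpolation term (m), assumed
weight = 1.0 / (sigma_obs**2 + sigma_itp**2)       # per-epoch observation weight
print(f"weight range: {weight.min():.0f} to {weight.max():.0f}")
```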

  19. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure, and large error, and cannot provide high-precision time interval data. In order to obtain higher-quality remote sensing images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling the change in capacitor voltage. First, an approximate model of the capacitor voltage curve during the pulse time of flight is fitted from the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
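
    The charging-and-fitting idea translates directly into code. The following sketch uses assumed component values and noise levels: it fits the sampled exponential charging curve V(t) = V0(1 − exp(−t/τ)) and inverts the fit to recover the time at which a threshold voltage was reached, one plausible reading of "obtaining the whole charging time from the fitting function."

```python
# Sketch with assumed RC values: fit the sampled charging curve, then invert
# the fitted function to recover the charging time up to a stop threshold.
import numpy as np
from scipy.optimize import curve_fit

def v_charge(t, v0, tau):
    return v0 * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(2)
tau_true, v0_true = 50e-9, 3.3             # 50 ns time constant, 3.3 V supply
t_s = np.linspace(0, 120e-9, 60)           # high-speed A/D sample times
v_s = v_charge(t_s, v0_true, tau_true) + rng.normal(0, 5e-3, t_s.size)

(v0_fit, tau_fit), _ = curve_fit(v_charge, t_s, v_s, p0=(3.0, 40e-9))

v_stop = 2.0                               # voltage when the stop pulse arrived
t_interval = -tau_fit * np.log(1.0 - v_stop / v0_fit)
print(f"recovered charging time: {t_interval*1e9:.3f} ns")
```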

  20. The Chip-Scale Atomic Clock - Recent Development Progress

    DTIC Science & Technology

    2004-09-01

    35th Annual Precise Time and Time Interval (PTTI) Meeting 467 THE CHIP-SCALE ATOMIC CLOCK – RECENT DEVELOPMENT PROGRESS R. Lutwak ...1] R. Lutwak , et al., 2003, “The Chip-Scale Atomic Clock – Coherent Population Trapping vs. Conventional Interrogation,” in

  1. Time of flight system on a chip

    NASA Technical Reports Server (NTRS)

    Paschalidis, Nicholas P. (Inventor)

    2006-01-01

    A CMOS time-of-flight (TOF) system-on-a-chip (SoC) for precise time interval measurement with low power consumption and high counting rate has been developed. The analog and digital TOF chip may include two Constant Fraction Discriminators (CFDs) and a Time-to-Digital Converter (TDC). The CFDs can interface to start and stop anodes through two preamplifiers and perform signal processing for time walk compensation (110). The TDC digitizes the time difference with reference to an off-chip precise external clock (114). One TOF output is an 11-bit digital word and a valid event trigger output indicating a valid event on the 11-bit output bus (116).

  2. Application of Millisecond Pulsar Timing to the Long-Term Stability of Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Foster, Roger S.; Matsakis, Demetrios N.

    1996-01-01

    We review the application of millisecond pulsars to define a precise long-term standard and positional reference system in a nearly inertial reference frame. We quantify the current timing precision of the best millisecond pulsars and define the precise time and time interval (PTTI) accuracy and stability required to enable time transfer via pulsars. Pulsars may prove useful as independent standards to examine decade-long timing stability and provide an independent natural system within which to calibrate any new, perhaps vastly improved, atomic time scale. Since pulsar stability appears to be related to the lifetime of the pulsar, the new millisecond pulsar J1713+0747 is projected to have a 100-day accuracy equivalent to a single HP5071 cesium standard. Over the last five years, dozens of new millisecond pulsars have been discovered. A few of the new millisecond pulsars may have even better timing properties.

  3. α7 Nicotinic acetylcholine receptors and temporal memory: Synergistic effects of combining prenatal choline and nicotine on reinforcement-induced resetting of an interval clock

    PubMed Central

    Cheng, Ruey-Kuang; Meck, Warren H.; Williams, Christina L.

    2006-01-01

    We previously showed that prenatal choline supplementation could increase the precision of timing and temporal memory and facilitate simultaneous temporal processing in mature and aged rats. In the present study, we investigated the ability of adult rats to selectively control the reinforcement-induced resetting of an internal clock as a function of prenatal drug treatments designed to affect the α7 nicotinic acetylcholine receptor (α7 nAChR). Male Sprague-Dawley rats were exposed to prenatal choline (CHO), nicotine (NIC), methyllycaconitine (MLA), choline + nicotine (CHO + NIC), choline + nicotine + methyllycaconitine (CHO + NIC + MLA), or a control treatment (CON). Beginning at 4-mo-of-age, rats were trained on a peak-interval timing procedure in which food was available at 10-, 30-, and 90-sec criterion durations. At steady-state performance there were no differences in timing accuracy, precision, or resetting among the CON, MLA, and CHO + NIC + MLA treatments. It was observed that the CHO and NIC treatments produced a small, but significant increase in timing precision, but no change in accuracy or resetting. In contrast, the CHO + NIC prenatal treatment produced a dramatic increase in timing precision and selective control of the resetting mechanism with no change in overall timing accuracy. The synergistic effect of combining prenatal CHO and NIC treatments suggests an organizational change in α7 nAChR function that is dependent upon a combination of selective and nonselective nAChR stimulation during early development. PMID:16547161

  4. The String Stability of a Trajectory-Based Interval Management Algorithm in the Midterm Airspace

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.

    2015-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides terminal controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain a precise spacing interval behind a target aircraft. As the percentage of IM-equipped aircraft increases, controllers may provide IM clearances to sequences, or strings, of IM-equipped aircraft, and it is important for these strings to maintain stable performance. This paper describes an analytical assessment of the string stability of the latest version of NASA's IM algorithm and a fast-time simulation designed to characterize the string performance of the algorithm. The analytical assessment showed that the spacing algorithm has stable poles, indicating that a spacing error perturbation will be reduced as a function of string position. The fast-time simulation investigated IM operations at two airports using constraints associated with the midterm airspace, including limited information about the target aircraft's intended speed profile and limited information about the wind forecast on the target aircraft's route. The results of the fast-time simulation demonstrated that the performance of the spacing algorithm is acceptable for strings of moderate length; however, there is some degradation in IM performance as a function of string position.

  5. Precise time and time interval applications to electric power systems

    NASA Technical Reports Server (NTRS)

    Wilson, Robert E.

    1992-01-01

    There are many applications of precise time and time interval (frequency) in operating modern electric power systems. Many generators and customer loads are operated in parallel. The reliable transfer of electrical power to the consumer partly depends on measuring power system frequency consistently in many locations. The internal oscillators in the widely dispersed frequency measuring units must be syntonized. Elaborate protection and control systems guard the high voltage equipment from short and open circuits. For the highest reliability of electric service, engineers need to study all control system operations. Precise timekeeping networks aid in the analysis of power system operations by synchronizing the clocks on recording instruments. Utility engineers want to reproduce events that caused loss of service to customers. Precise timekeeping networks can synchronize protective relay test-sets. For dependable electrical service, all generators and large motors must remain close to speed synchronism. The stable response of a power system to perturbations is critical to continuity of electrical service. Research shows that measurement of the power system state vector can aid in the monitoring and control of system stability. If power system operators know that a lightning storm is approaching a critical transmission line or transformer, they can modify operating strategies. Knowledge of the location of a short circuit fault can speed the re-energizing of a transmission line. One fault location technique requires clocks synchronized to one microsecond. Current research seeks to find out if one microsecond timekeeping can aid and improve power system control and operation.

  6. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (25th) Held in Marina Del Rey, California on 29 November - 2 December 1993

    DTIC Science & Technology

    1993-12-02

    electronics. In other words, while the main driving force of the past has been the desire for greater performance by way of accuracy, the future will demand ...that can match him in terms of number of years in the program; but there are a lot of folks that are brand new to the program. What is precise time...International Telecommunications Union (ITU). The additional development of a digital-filter view of all of these two-sample variances113) has

  7. The NANOGrav Observing Program: High-precision Millisecond Pulsar Timing and the Search for Nanohertz Gravitational Waves

    NASA Astrophysics Data System (ADS)

    Nice, David; NANOGrav

    2018-01-01

    The North American Observatory for Nanohertz Gravitational Waves (NANOGrav) collaboration is thirteen years into a program of long-term, high-precision millisecond pulsar timing, undertaken with the goal of detecting and characterizing nanohertz gravitational waves (i.e., gravitational waves with periods of many years) by measuring their effect on observed pulse arrival times. Our primary instruments are the Arecibo Observatory, used to observe 37 pulsars with declinations between 0 and 39 degrees, and the Green Bank Telescope, used for 24 pulsars, of which 22 are outside the Arecibo range and 2 overlap with the Arecibo source list. Additional observations are made with the VLA and (soon) CHIME. Most pulsars in our program are observed at intervals of three to four weeks, and seven are observed weekly. Observations of each pulsar are made over a wide range of radio frequencies at each epoch in order to measure and mitigate effects of the ionized interstellar medium on the pulse arrival times. Our targets are pulsars for which we can achieve timing precision of 1 microsecond or better at each epoch; we achieve precision better than 100 nanoseconds in the best cases. Observing a large number of pulsars will allow for robust measurements of gravitational waves by analyzing correlations in the timing of pairs of pulsars as a function of their separation on the sky. Our data are pooled with data from telescopes worldwide via the International Pulsar Timing Array (IPTA) collaboration, further increasing our sensitivity to gravitational waves. We release data at regular intervals. We will describe the NANOGrav 5-, 9- and 11-year data sets and give a status report on the NANOGrav 12.5-year data set.

  8. Filling the blanks in temporal intervals: the type of filling influences perceived duration and discrimination performance

    PubMed Central

    Horr, Ninja K.; Di Luca, Massimiliano

    2015-01-01

    In this work we investigate how judgments of perceived duration are influenced by the properties of the signals that define the intervals. Participants compared two auditory intervals that could be any combination of the following four types: intervals filled with continuous tones (filled intervals), intervals filled with regularly-timed short tones (isochronous intervals), intervals filled with irregularly-timed short tones (anisochronous intervals), and intervals demarcated by two short tones (empty intervals). Results indicate that the type of intervals to be compared affects discrimination performance and induces distortions in perceived duration. In particular, we find that duration judgments are most precise when comparing two isochronous and two continuous intervals, while the comparison of two anisochronous intervals leads to the worst performance. Moreover, we determined that the magnitude of the distortions in perceived duration (an effect akin to the filled duration illusion) is higher for tone sequences (no matter whether isochronous or anisochronous) than for continuous tones. Further analysis of how duration distortions depend on the type of filling suggests that distortions are not only due to the perceived duration of the two individual intervals, but they may also be due to the comparison of two different filling types. PMID:25717310

  9. Proceedings of the Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting (22nd) Held in Vienna, Virginia on 4-6 Dec 1990

    DTIC Science & Technology

    1991-05-01

    the problem of the frequency drift is still open. In this context, the cavity pulling has drawn a lot of attention. Today, to our knowledge, 4...term maser frequency drift associated with the cavity pulling is a well-known subject due to the high level of precision obtainable in principle by...microprocessors. The frequency pulling due to microwave ΔM = ±1 transitions (Ramsey pulling) has been analyzed and shown to be important. Status of

  10. A flexible 32-channel time-to-digital converter implemented in a Xilinx Zynq-7000 field programmable gate array

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Kuang, Jie; Liu, Chong; Cao, Qiang; Li, Deng

    2017-03-01

    A high-performance multi-channel time-to-digital converter (TDC) is implemented in a Xilinx Zynq-7000 field programmable gate array (FPGA). It can be flexibly configured as either 32 TDC channels with 9.9 ps time-interval RMS precision, 16 TDC channels with 6.9 ps RMS precision, or 8 TDC channels with 5.8 ps RMS precision. All TDCs have a 380 MSamples/s measurement throughput and a 2.63 ns measurement dead time. The performance consistency and temperature dependence of the TDC channels are also evaluated. Because the Zynq-7000 FPGA family integrates a feature-rich dual-core ARM-based processing system and 28 nm Xilinx programmable logic in a single device, the realization of high-performance TDCs on it will make the platform more widely used in time-measurement applications.

  11. Enhancing quantum sensing sensitivity by a quantum memory

    PubMed Central

    Zaiser, Sebastian; Rendler, Torsten; Jakobi, Ingmar; Wolf, Thomas; Lee, Sang-Yun; Wagner, Samuel; Bergholm, Ville; Schulte-Herbrüggen, Thomas; Neumann, Philipp; Wrachtrup, Jörg

    2016-01-01

    In quantum sensing, precision is typically limited by the maximum time interval over which phase can be accumulated. Memories have been used to enhance this time interval beyond the coherence lifetime and thus gain precision. Here, we demonstrate that by using a quantum memory an increased sensitivity can also be achieved. To this end, we use entanglement in a hybrid spin system comprising a sensing and a memory qubit associated with a single nitrogen-vacancy centre in diamond. With the memory we retain the full quantum state even after coherence decay of the sensor, which enables coherent interaction with distinct weakly coupled nuclear spin qubits. We benchmark the performance of our hybrid quantum system against use of the sensing qubit alone by gradually increasing the entanglement of sensor and memory. We further apply this quantum sensor-memory pair for high-resolution NMR spectroscopy of single 13C nuclear spins. PMID:27506596

  12. DOD Dictionary of Military and Associated Terms

    DTIC Science & Technology

    2017-03-01

    to regionally grouped military and federal customers from commercial distributors using electronic commerce. Also called PV . See also distribution...and magnetosphere, interplanetary space, and the solar atmosphere. (JP 3-59) Terms and Definitions 218 space force application — Combat...precise time and time interval PUK packup kit PV prime vendor PVNTMED preventive medicine PVT positioning, velocity, and timing Abbreviations

  13. An accuracy assessment of realtime GNSS time series toward semi- real time seafloor geodetic observation

    NASA Astrophysics Data System (ADS)

    Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.

    2013-12-01

    Large interplate earthquakes have repeatedly occurred in the Japan Trench. Recently, detailed crustal deformation has been revealed by the nation-wide inland GPS network (GEONET) operated by GSI. However, the region of maximum displacement for an interplate earthquake is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is quite important and useful for understanding the shallower part of interplate coupling between the subducting and overriding plates. We typically conduct GPS/A in a specific ocean area in repeated campaigns using a research vessel or buoy. Therefore, we cannot monitor the temporal variation of seafloor crustal deformation in real time. One technical issue for real-time observation is kinematic GPS analysis, because kinematic GPS analysis is based on reference and rover data. If precise kinematic GPS analysis becomes possible in the offshore region, it should be a promising method for real-time GPS/A with a USV (Unmanned Surface Vehicle) or a moored buoy. We assessed the stability, precision, and accuracy of the StarFire global satellite-based augmentation system. We first tested StarFire in the static condition. In order to assess coordinate precision and accuracy, we compared the 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series computed by the GIPSY-OASIS II processing software Ver. 6.1.2 with three different product types (ultra-rapid, rapid, and final orbits). We also used clock information at different intervals (30 and 300 seconds) for the post-processed PPP processing. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component) based on 1 month of continuous processing. We also assessed the noise spectrum of the time series estimated by StarFire and by the post-processed GIPSY PPP results. We found that the noise spectrum of the StarFire time series follows a pattern similar to the GIPSY-OASIS II result based on JPL rapid orbit products with 300-second-interval clock information. We also report the stability, precision, and accuracy of StarFire under moving conditions.

  14. Common View Time Transfer Using Worldwide GPS and DMA Monitor Stations

    NASA Technical Reports Server (NTRS)

    Reid, Wilson G.; McCaskill, Thomas B.; Oaks, Orville J.; Buisson, James A.; Warren, Hugh E.

    1996-01-01

    Analysis of the on-orbit Navstar clocks and the Global Positioning System (GPS) monitor station reference clocks is performed by the Naval Research Laboratory using both broadcast and postprocessed precise ephemerides. The precise ephemerides are produced by the Defense Mapping Agency (DMA) for each of the GPS space vehicles from pseudo-range measurements collected at five GPS and at five DMA monitor stations spaced around the world. Recently, DMA established an additional site co-located with the US Naval Observatory precise time site. The time reference for the new DMA site is the DoD Master Clock. Now, for the first time, it is possible to transfer time every 15 minutes via common view from the DoD Master Clock to the 11 GPS and DMA monitor stations. The estimated precision of a single common-view time transfer measurement taken over a 15-minute interval was between 1.4 and 2.7 nanoseconds. Using the measurements from all Navstar space vehicles in common view during the 15-minute interval, typically 3-7 space vehicles, improved the estimate of the precision to between 0.65 and 1.13 nanoseconds. The mean phase error obtained from closure of the time transfer around the world using the 11 monitor stations and the 25 space vehicle clocks over a period of 4 months had a magnitude of 31 picoseconds. Analysis of the low-noise time transfer from the DoD Master Clock to each of the monitor stations yields not only the bias in the time of the reference clock, but also focuses attention on structure in the behaviour of the reference clock not previously seen. Furthermore, the time transfer provides a uniformly sampled database of 15-minute measurements that makes possible, for the first time, the direct and exhaustive computation of the frequency stability of the monitor station reference clocks. To lend perspective to the analysis, a summary is given of the discontinuities in phase and frequency that occurred in the reference clock at the Master Control Station during the period covered by the analysis.
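
    The arithmetic behind common-view transfer is simple enough to show directly. In this toy sketch (all numbers assumed), each of two stations measures its clock against the same satellites during the same slot; differencing the tracks cancels the satellite clocks, and averaging over the several satellites in common view tightens the estimate, consistent with the single-measurement versus multi-satellite precision figures quoted above.

```python
# Toy model of common-view time transfer with synthetic numbers.
import numpy as np

rng = np.random.default_rng(3)
n_sv = 5                                     # SVs in common view during the slot
clock_a, clock_b = 120e-9, -40e-9            # station offsets from GPS time (s)
sv_clock = rng.normal(0, 50e-9, n_sv)        # satellite clock errors

meas_a = clock_a - sv_clock + rng.normal(0, 2e-9, n_sv)   # per-SV track, station A
meas_b = clock_b - sv_clock + rng.normal(0, 2e-9, n_sv)   # per-SV track, station B

transfer = meas_a - meas_b                   # satellite clock cancels per SV
print(f"A-B = {transfer.mean()*1e9:.2f} ns "
      f"(truth {1e9*(clock_a - clock_b):.2f} ns), "
      f"per-SV spread {transfer.std(ddof=1)*1e9:.2f} ns")
```

    Averaging the per-satellite differences reduces the error roughly as 1/√N, which is the mechanism behind the improvement from 1.4-2.7 ns (single measurement) to 0.65-1.13 ns (all SVs in view) reported above.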

  15. Implementation of a high precision multi-measurement time-to-digital convertor on a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Kuang, Jie; Wang, Yonggang; Cao, Qiang; Liu, Chong

    2018-05-01

    Time-to-digital convertors (TDCs) based on field programmable gate arrays (FPGAs) are becoming more and more popular. Multi-measurement is an effective method to improve TDC precision beyond the cell-delay limitation. However, the implementation of multi-measurement TDCs on FPGAs manufactured with 28 nm and more advanced processes faces new challenges. Benefiting from the ones-counter encoding scheme developed in our previous work, we implement a ring-oscillator multi-measurement TDC on a Xilinx Kintex-7 FPGA. Using the two TDC channels to measure time intervals in the range 0-30 ns, the average RMS precision can be improved to 5.76 ps, while the logic resource usage remains the same as that of the one-measurement TDC, and the TDC dead time is only 22 ns. The investigation demonstrates that multi-measurement methods remain viable for current mainstream FPGAs. Furthermore, the new implementation in this paper makes the trade-off among time precision, resource usage, and TDC dead time better than ever before.
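
    The benefit of multi-measurement can be seen with a toy statistical model (not the paper's ring-oscillator architecture): if N independent interpolator readings of the same interval are averaged, the RMS error shrinks roughly as 1/√N. The single-shot figure below is an assumption chosen only to make the trend visible.

```python
# Toy model: averaging N independent readings improves RMS as ~1/sqrt(N).
import numpy as np

rng = np.random.default_rng(4)
true_interval = 12.345e-9                    # s
single_shot_rms = 15e-12                     # one reading, assumed 15 ps RMS

for n_meas in (1, 4, 9):
    trials = rng.normal(true_interval, single_shot_rms, size=(100_000, n_meas))
    rms = (trials.mean(axis=1) - true_interval).std() * 1e12
    print(f"N = {n_meas}: RMS ≈ {rms:.1f} ps")   # ~15, ~7.5, ~5 ps
```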

  16. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of its precision intervals.
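
    For the regression half of the method, the classical confidence and prediction intervals can be sketched as follows on synthetic data; the RBF and back-propagation networks of the integrated algorithm are omitted here.

```python
# Straight-line fit with classical 95% confidence and prediction intervals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 25)
y = 2.0 + 0.8 * x + rng.normal(0, 0.5, x.size)

n = x.size
b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
sxx = ((x - x.mean()) ** 2).sum()
t = stats.t.ppf(0.975, n - 2)

x0 = 5.0                                      # point at which to evaluate
y0 = b0 + b1 * x0
se_mean = s * np.sqrt(1 / n + (x0 - x.mean()) ** 2 / sxx)      # confidence
se_pred = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)  # prediction
print(f"fit at x0: {y0:.2f}, 95% CI ±{t*se_mean:.2f}, 95% PI ±{t*se_pred:.2f}")
```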

  17. How to Handle a Satellite Change in an Operational TWSTFT Network?

    DTIC Science & Technology

    2010-11-01

    42nd Annual Precise Time and Time Interval (PTTI) Meeting 285 HOW TO HANDLE A SATELLITE CHANGE IN AN OPERATIONAL TWSTFT NETWORK...way satellite time and frequency transfer (TWSTFT) is a powerful technique because of its real-time capabilities. In principle, the time difference...between remote clocks is almost instantaneously known after a measurement session. Long-term TWSTFT operations have required changes between

  18. Detection of Outliers in TWSTFT Data Used in TAI

    DTIC Science & Technology

    2009-11-01

    41st Annual Precise Time and Time Interval (PTTI) Meeting 421 DETECTION OF OUTLIERS IN TWSTFT DATA USED IN TAI A...data in two-way satellite time and frequency transfer ( TWSTFT ) time links. In the case of TWSTFT data used to calculate International Atomic Time...data; that TWSTFT links can show an underlying slope which renders the standard treatment more difficult. Using phase and frequency filtering

  19. Influence of Time-Pickoff Circuit Parameters on LiDAR Range Precision

    PubMed Central

    Wang, Hongming; Yang, Bingwei; Huyan, Jiayue; Xu, Lijun

    2017-01-01

    A pulsed time-of-flight (TOF) measurement-based Light Detection and Ranging (LiDAR) system is more effective for medium-to-long range distances. As a key ranging unit, a time-pickoff circuit based on automatic gain control (AGC) and a constant fraction discriminator (CFD) is designed to reduce the walk error and the timing jitter in order to obtain an accurate time interval. Compared with the Cramer–Rao lower bound (CRLB) and the estimated timing jitter, Monte Carlo simulations based on four parameters are established to show how the range precision is influenced by pulse amplitude, pulse width, attenuation fraction, and delay time of the CFD. Experiments were carried out to verify the relationship between the range precision and three of the parameters, excluding pulse width. It can be concluded that two parameters of the ranging circuit (attenuation fraction and delay time) should be selected according to the ranging performance at the minimum pulse amplitude. The attenuation fraction should be selected in the range from 0.2 to 0.6 to achieve high range precision. The selection criterion for the time-pickoff circuit parameters is helpful for the ranging circuit design of TOF LiDAR systems. PMID:29039772
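
    A minimal simulation shows why the CFD's zero-crossing time is insensitive to pulse amplitude, which is the walk-error reduction described above. The pulse shape, attenuation fraction, and delay below are assumptions chosen for illustration, not the paper's circuit values.

```python
# CFD sketch: sum an attenuated copy of the pulse with a delayed, inverted
# copy and time the zero crossing, which is ideally amplitude-independent.
import numpy as np

def cfd_crossing(t, pulse, frac=0.4, delay_s=2e-9):
    dt = t[1] - t[0]
    shift = int(round(delay_s / dt))
    delayed = np.roll(pulse, shift)
    delayed[:shift] = 0.0
    bipolar = frac * pulse - delayed
    # first positive-to-negative zero crossing
    idx = np.where((bipolar[:-1] > 0) & (bipolar[1:] <= 0))[0][0]
    f0, f1 = bipolar[idx], bipolar[idx + 1]
    return t[idx] + dt * f0 / (f0 - f1)        # linear interpolation

t = np.arange(0, 40e-9, 10e-12)
gauss = lambda amp: amp * np.exp(-((t - 15e-9) / 3e-9) ** 2)

for amp in (0.2, 1.0, 5.0):                    # 25x amplitude range, same timing
    print(f"amp {amp}: crossing at {cfd_crossing(t, gauss(amp))*1e9:.4f} ns")
```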

  20. Time Transfer from Combined Analysis of GPS and TWSTFT Data

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting 565 TIME TRANSFER FROM COMBINED ANALYSIS OF GPS AND TWSTFT DATA...bipm.org Abstract This paper presents the time transfer results obtained from the combination of GPS data and TWSTFT data. Two different methods...view, constrained by TWSTFT data. Using the Vondrak-Cepek algorithm, the second approach (named PPP+TW) combines the TWSTFT time transfer data with

  1. Timing Precision and Rhythm in Developmental Dyslexia.

    ERIC Educational Resources Information Center

    Wolff, Peter H.

    2002-01-01

    Indicates that during a motor sequencing task, dyslexic students anticipated the signal of an isochronic pacing metronome by intervals that were two or three times as long as those of age-matched normal readers or normal adults. Discusses the implications of the findings for temporal information processing deficits on one hand, and impaired…

  2. Automatic sweep circuit

    DOEpatents

    Keefe, Donald J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.

  3. Precision Laser Spectroscopic Measurement of Helium-4 (1s2s ^3S to 1s2p ^3P) Lamb Shift and Fine Structure

    NASA Astrophysics Data System (ADS)

    Dixson, Ronald Gene

    This thesis presents the results of a precise measurement of the absolute wavelength and fine-structure splitting of the $1s2s\,^3S \to 1s2p\,^3P$ transition of the $^4$He atom. The experiment described in this thesis is the first in which laser spectroscopy has been done on the $2^3S \to 2^3P$ transition in a metastable atomic beam. The energy interval between the $2^3S$ and $2^3P$ states is precisely determined by comparison of the absolute wavelength of the transition with our standard laser (an iodine-stabilized He-Ne laser with an accuracy of 1.6 parts in $10^{10}$) in a Fabry-Perot interferometer. The experimental Lamb shift of the transition is determined by subtracting from the measured frequency the precisely known non-quantum-electrodynamic contributions to the theoretical value of the interval. From our measurements of the absolute wavelength, the following weighted $(2J+1)$ averages for the $2^3S \to 2^3P$ transition frequency and experimental Lamb shift are obtained: $f_{2S\text{-}2P} = 276\,736\,495.59(5)$ MHz and $L[2^3S \to 2^3P] = 5311.26(5)$ MHz. Our value for the Lamb shift is in agreement with the best previous measurement but a factor of 60 more precise. It is also two orders of magnitude more precise than the present theoretical calculation, presenting quite a challenge to theorists. Nevertheless, this work is very timely since it is anticipated (DRA94) (MOR94) that the theory will reach this level in the near future. The measured fine-structure splittings of the $2^3P$ level are: $2^3P_0 \to 2^3P_2$: $31\,908.135(3)$ MHz and $2^3P_1 \to 2^3P_2$: $2291.173(3)$ MHz. These results are more precise than previous microwave measurements and in significant disagreement with them, a situation which is especially timely and interesting since new theoretical calculations of these fine-structure intervals (DRA94) at this level of precision are nearing completion.

  4. Precise time dissemination via portable atomic clocks

    NASA Technical Reports Server (NTRS)

    Putkovich, K.

    1982-01-01

    The most precise operational method of time dissemination over long distances presently available to the Precise Time and Time Interval (PTTI) community of users is by means of portable atomic clocks. The Global Positioning System (GPS), the latest system showing promise of replacing portable clocks for global PTTI dissemination, was evaluated. Although GPS has the technical capability of providing superior world-wide dissemination, the question of present cost and future accessibility may require a continued reliance on portable clocks for a number of years. For these reasons a study of portable clock operations as they are carried out today was made. The portable clock system that was utilized by the U.S. Naval Observatory (NAVOBSY) in the global synchronization of clocks over the past 17 years is described and the concepts on which it is based are explained. Some of its capabilities and limitations are also discussed.

  5. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting 577 CALIBRATING GPS WITH TWSTFT FOR ACCURATE TIME TRANSFER Z. Jiang1 and...primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short). 83% of UTC time links are...

  6. Two-Way Satellite Time and Frequency Transfer (TWSTFT) Calibration Constancy From Closure Sums

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting 587 TWO-WAY SATELLITE TIME AND FREQUENCY TRANSFER (TWSTFT) CALIBRATION...Paris, France Abstract Two-way Satellite Time and Frequency Transfer (TWSTFT) is considered to be the most accurate means of long-distance...explanations for small, but non-zero, biases observed in the closure sums of uncalibrated data are presented. I. INTRODUCTION TWSTFT [1] has

  7. Proceedings of the Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting (15th) Held at Washington, DC on 6-8 December 1983,

    DTIC Science & Technology

    1984-04-02

    clock is an absolute technique with a precision of about 0.1 µs. The results of the portable clock experiment indicate that LF sync...also gains direct access to the U.S. primary frequency standard, NBS-6. Access to NBS-6 makes it possible to set an absolute limit of one part in 10...of the components in these equations are uncorrelated we may take variances of each of these equations and the cross terms will average to zero

  8. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

    This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environmental effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and they are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
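
    The bias/precision combination described above is commonly expressed as U95 = sqrt(B² + (t·S)²); a minimal sketch with placeholder values follows. The actual LaRC error budget is more elaborate, and all numbers here are assumptions.

```python
# Combine a bias limit B and a precision S into an overall 95% uncertainty.
import math
from scipy.stats import t as t_dist

B = 0.010      # bias limit (instrument units), assumed
S = 0.004      # precision (std. dev.) from replicated calibrations, assumed
dof = 30       # degrees of freedom from the replicated design, assumed

t95 = t_dist.ppf(0.975, dof)
U95 = math.sqrt(B**2 + (t95 * S) ** 2)
print(f"U95 = {U95:.4f} (t = {t95:.3f})")
```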

  9. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses, with an exact analytical solution, i.e., beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and its decay rate reduced at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with gradually changing phase good candidates for the implementation of schemes for quantum computation and coherent information processing.

  10. TWSTFT Data Treatment for UTC Time Transfer

    DTIC Science & Technology

    2009-11-01

    41st Annual Precise Time and Time Interval (PTTI) Meeting 409 TWSTFT DATA TREATMENT FOR UTC TIME TRANSFER Z. Jiang, W...Abstract TWSTFT (TW) is the primary technique of time and frequency transfer used at BIPM for the UTC/TAI generation. At present, some 19...

  11. Design of a short nonuniform acquisition protocol for quantitative analysis in dynamic cardiac SPECT imaging - a retrospective 123 I-MIBG animal study.

    PubMed

    Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T

    2017-07-01

    Our previous work found that quantitative analysis of 123I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with 123I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of the B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth, so that the estimated input function approximates its real value and the procedure yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new 30-min protocol was built with a shorter acquisition time that maintained a 5% error in estimating the rate constants of the compartment model; this was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123I-MIBG. Compared to a uniform-interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform intervals achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (Distribution Volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.

  12. Proceedings of the 30th Annual Precise Time and Time Interval (PTTI) Systems and Applications Meeting

    DTIC Science & Technology

    1999-01-01


  13. Physical Layer Ethernet Clock Synchronization

    DTIC Science & Technology

    2010-11-01

    42nd Annual Precise Time and Time Interval (PTTI) Meeting 77 PHYSICAL LAYER ETHERNET CLOCK SYNCHRONIZATION Reinhard Exel, Georg...oeaw.ac.at Nikolaus Kerö Oregano Systems, Mohsgasse 1, 1030 Wien, Austria E-mail: nikolaus.keroe@oregano.at Abstract Clock synchronization ...is a service widely used in distributed networks to coordinate data acquisition and actions. As the requirement to achieve tighter synchronization

  14. About Compass Time and Its Coordination with Other GNSSs

    DTIC Science & Technology

    2007-11-01

    mainly by TWSTFT, and other remote time link techniques will be backups, including the...remote time link between the Compass ground control station and other GNSS time centers, by two-way satellite time and frequency transfer (TWSTFT) and...other GNSS time centers. Comparing the three methods mentioned above, the third is high in accuracy, but also high in cost. If the TWSTFT link is

  15. A 256-channel, high throughput and precision time-to-digital converter with a decomposition encoding scheme in a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Song, Z.; Wang, Y.; Kuang, J.

    2018-05-01

    Field Programmable Gate Arrays (FPGAs) made with 28 nm and more advanced process technology have great potential for the implementation of high-precision time-to-digital convertors (TDCs), because the delay cells in the tapped delay line (TDL) used for time interpolation are getting smaller and smaller. However, the bubble problems in the TDL status are becoming more complicated, which makes it difficult to achieve TDCs with high time precision on these chips. In this paper, we propose a novel decomposition encoding scheme, which not only solves the bubble problem easily but also has a high encoding efficiency. The potential of these chips to realize TDCs can be fully exploited with the scheme. In a Xilinx Kintex-7 FPGA chip, we implemented a TDC system with 256 TDC channels, which doubles the number of TDC channels that our previous technique could achieve. The performance of all these TDC channels is evaluated. The average RMS time precision among them is 10.23 ps in the time-interval measurement range of 0-10 ns, and their measurement throughput reaches 277 M measurements per second.

  16. Evaluation of the Time and Frequency Transfer Capabilities of a Network of GNSS Receivers Located in Timing Laboratories

    DTIC Science & Technology

    2009-11-01

    metrology, different techniques are used for time and frequency transfer, basically TWSTFT (Two-Way Satellite Time and Frequency Transfer), GPS CV (Common...traditional GPS/GLONASS CV/AV receivers and TWSTFT equipment. Time and frequency transfer using GPS code and carrier-phase is an important...or mixing GPS geodetic results with other independent techniques, such as TWSTFT.

  17. Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.

    2014-01-01

    In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382

  18. High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.

    PubMed

    Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo

    2013-11-26

    In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
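
    Two of the claims above (precision from an immobilized particle, and independence of successive displacements) are easy to verify on synthetic data, as in this sketch; the noise and step scales are assumptions.

```python
# (1) Precision from an immobilized particle: apparent displacements are pure
# localization noise, so MSD(lag) plateaus at 2*sigma^2.
import numpy as np

rng = np.random.default_rng(6)
sigma = 13e-9                                   # 13 nm localization noise
x_fixed = rng.normal(0, sigma, 10_000)          # "positions" of a fixed particle
msd1 = np.mean((x_fixed[1:] - x_fixed[:-1]) ** 2)
print(f"estimated precision: {np.sqrt(msd1 / 2) * 1e9:.1f} nm")

# (2) For a freely diffusing particle, successive displacements are
# independent, which is what permits merging trajectories from different
# videos into one long trajectory.
steps = rng.normal(0, 50e-9, 10_000)            # Brownian steps, assumed scale
print(f"step correlation: {np.corrcoef(steps[:-1], steps[1:])[0, 1]:.3f}")
```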

  19. A catalog of atmospheric densities from the drag on five balloon satellites

    NASA Technical Reports Server (NTRS)

    Jacchia, L. G.; Slowey, J. W.

    1975-01-01

    A catalog of atmospheric densities derived from the drag on five balloon satellites is presented. Much of the catalog was based on precisely reduced Baker-Nunn observations and, for that reason, provides much improved time resolution. The effect of direct solar radiation pressure was precisely evaluated, and that of terrestrial radiation pressure was included in every case. The interval covered for each satellite varies between 3.1 and 7.6 years, with the data extending from early 1961 to early 1973.

  20. Improvements in GPS precision: 10 Hz to one day

    NASA Astrophysics Data System (ADS)

    Choi, Kyuhong

    Seeking to understand Global Positioning System (GPS) measurements and positioning solutions over various time intervals, this dissertation improves the consistency of pseudorange measurements from different receiver types, processes 30 s interval data with optimized filtering techniques, and analyzes very-high-rate data with short arc lengths and baseline noise. The first project studies satellite-dependent biases between the C/A and P1 codes. Calibrating these biases reduces the inconsistency of satellite clocks, improving the ambiguity resolution, which allows for higher position precision. Receiver-dependent biases for two receivers are compared with the bias products of the Center for Orbit Determination in Europe (CODE). Baseline lengths ranging up to ~2,100 km are tested with the receiver-specific biases; they resolve 4.3% more phase ambiguities than CODE's products. The second project analyzes 1 s and 30 s interval GPS data from the 2003 Tokachi-Oki earthquake. For 1 Hz positioning, the Iterative Tropospheric Estimation (ITE) method improves vertical precision. While equalized sidereal filtering reduces noise in the multipath-dominated 30-300 s periods, it can cause long-term drifts in the time series. A study of postseismic deformation after the Tokachi-Oki earthquake uses 30 s interval position estimates to test multiple filtering strategies that maximize precision using lower-rate data. On top of residual stacking, estimation of a random walk constraint of σ_Δ = 1.80 cm/hr shows maximum noise reduction capability while retaining the real deformation signal. These techniques enhance our grasp of fault response in the aftermath of great earthquakes. The third project probes the noise floor characteristics of very-high-rate (>1 Hz) GPS data. A hybrid method, designed and tested to resolve phase biases, minimizes the computational burden while keeping the quality of ambiguity-fixed solutions. Noise characteristics are compared in an analysis of 5 and 10 Hz Ashtech MicroZ and ZFX as well as Trimble NetRS receivers. The Trimble NetRS receiver noise has a time series standard deviation double that of the Ashtech MicroZ receivers, and its power spectral density function has a 0.1 Hz peak. Noise power shows white noise for frequencies of 2 Hz and higher. Each research project assesses methods to reduce the noise and/or biases at various time intervals, fulfilling the needs of scientific applications.
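
    Sidereal filtering, one of the techniques evaluated above, can be sketched with synthetic data: because GPS multipath geometry repeats every sidereal day (~86164 s), stacking residuals from quiet days and subtracting the stack suppresses the repeating error. Alignment at the sidereal lag is assumed already done here, and all amplitudes are made up.

```python
# Toy sidereal filter: subtract the stacked quiet-day residuals from the
# event-day series (all series assumed aligned at the sidereal lag).
import numpy as np

rng = np.random.default_rng(7)
n = 3600                                                  # one hour at 1 Hz
epoch = np.arange(n)
multipath = 5e-3 * np.sin(2 * np.pi * epoch / 600.0)      # repeating error, 5 mm

quiet_days = [multipath + rng.normal(0, 1e-3, n) for _ in range(3)]
event_day = multipath + rng.normal(0, 1e-3, n)

filtered = event_day - np.mean(quiet_days, axis=0)        # stack and subtract
print(f"RMS before: {event_day.std()*1e3:.2f} mm, "
      f"after: {filtered.std()*1e3:.2f} mm")
```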

  1. On the use of programmable hardware and reduced numerical precision in earth-system modeling.

    PubMed

    Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N

    2015-09-01

    Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
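
    A CPU-only sketch of the precision experiment: integrate the (one-scale) Lorenz '96 model in float64 and float32 from the same state and watch the chaotic amplification of rounding differences. The paper's two-scale model and FPGA number formats are replaced here by the simplest possible stand-ins.

```python
# Compare float64 and float32 integrations of Lorenz '96 (RK4).
import numpy as np

def l96_rhs(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F (cyclic indices)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, dtype, steps=2000, dt=0.005):
    x = x0.astype(dtype)
    dt = dtype(dt)
    for _ in range(steps):
        k1 = l96_rhs(x)
        k2 = l96_rhs(x + dt / 2 * k1)
        k3 = l96_rhs(x + dt / 2 * k2)
        k4 = l96_rhs(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = 8.0 + 0.01 * np.random.default_rng(8).normal(size=40)
diff = integrate(x0, np.float64) - integrate(x0, np.float32)
print(f"RMS float64/float32 divergence after 10 time units: "
      f"{np.sqrt((diff.astype(np.float64)**2).mean()):.3f}")
```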

  2. The Application of Root Mean Square Electrocardiography (RMS ECG) for the Detection of Acquired and Congenital Long QT Syndrome

    PubMed Central

    Lux, Robert L.; Sower, Christopher Todd; Allen, Nancy; Etheridge, Susan P.; Tristani-Firouzi, Martin; Saarel, Elizabeth V.

    2014-01-01

    Background Precise measurement of the QT interval is often hampered by difficulty determining the end of the low amplitude T wave. Root mean square electrocardiography (RMS ECG) provides a novel alternative measure of ventricular repolarization. Experimental data have shown that the interval between the RMS ECG QRS and T wave peaks (RTPK) closely reflects the mean ventricular action potential duration while the RMS T wave width (TW) tracks the dispersion of repolarization timing. Here, we tested the precision of RMS ECG to assess ventricular repolarization in humans in the setting of drug-induced and congenital Long QT Syndrome (LQTS). Methods RMS ECG signals were derived from high-resolution 24 hour Holter monitor recordings from 68 subjects after receiving placebo and moxifloxacin and from standard 12 lead ECGs obtained in 97 subjects with LQTS and 97 age- and sex-matched controls. RTPK, QTRMS and RMS TW intervals were automatically measured using custom software and compared to traditional QT measures using lead II. Results All measures of repolarization were prolonged during moxifloxacin administration and in LQTS subjects, but the variance of RMS intervals was significantly smaller than traditional lead II measurements. TW was prolonged during moxifloxacin and in subjects with LQT-2, but not LQT-1 or LQT-3. Conclusion These data validate the application of RMS ECG for the detection of drug-induced and congenital LQTS. RMS ECG measurements are more precise than the current standard of care lead II measurements. PMID:24454918
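
    A toy construction of the RMS signal clarifies the definitions used above. The one-beat Gaussian leads and the QRS/T search windows below are assumptions for illustration, not the authors' software.

```python
# Build an RMS ECG from synthetic leads and measure RTPK (QRS-to-T-peak).
import numpy as np

fs = 1000                                          # samples per second
t = np.arange(0, 0.6, 1 / fs)
rng = np.random.default_rng(9)

def lead(qrs_amp, t_amp):
    qrs = qrs_amp * np.exp(-((t - 0.10) / 0.012) ** 2)   # QRS near 100 ms
    twave = t_amp * np.exp(-((t - 0.40) / 0.060) ** 2)   # T wave near 400 ms
    return qrs + twave + rng.normal(0, 0.01, t.size)

leads = np.stack([lead(1.2, 0.25), lead(0.8, 0.18), lead(-0.9, 0.15)])
rms = np.sqrt((leads ** 2).mean(axis=0))           # RMS across leads per sample

split = int(0.25 * fs)                             # assumed QRS/T boundary
qrs_idx = rms[:split].argmax()
t_idx = split + rms[split:].argmax()
print(f"RTPK ≈ {(t_idx - qrs_idx) / fs * 1000:.0f} ms")
```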

  3. The application of root mean square electrocardiography (RMS ECG) for the detection of acquired and congenital long QT syndrome.

    PubMed

    Lux, Robert L; Sower, Christopher Todd; Allen, Nancy; Etheridge, Susan P; Tristani-Firouzi, Martin; Saarel, Elizabeth V

    2014-01-01

    Precise measurement of the QT interval is often hampered by difficulty determining the end of the low amplitude T wave. Root mean square electrocardiography (RMS ECG) provides a novel alternative measure of ventricular repolarization. Experimental data have shown that the interval between the RMS ECG QRS and T wave peaks (RTPK) closely reflects the mean ventricular action potential duration while the RMS T wave width (TW) tracks the dispersion of repolarization timing. Here, we tested the precision of RMS ECG to assess ventricular repolarization in humans in the setting of drug-induced and congenital Long QT Syndrome (LQTS). RMS ECG signals were derived from high-resolution 24 hour Holter monitor recordings from 68 subjects after receiving placebo and moxifloxacin and from standard 12 lead ECGs obtained in 97 subjects with LQTS and 97 age- and sex-matched controls. RTPK, QTRMS and RMS TW intervals were automatically measured using custom software and compared to traditional QT measures using lead II. All measures of repolarization were prolonged during moxifloxacin administration and in LQTS subjects, but the variance of RMS intervals was significantly smaller than traditional lead II measurements. TW was prolonged during moxifloxacin and in subjects with LQT-2, but not LQT-1 or LQT-3. These data validate the application of RMS ECG for the detection of drug-induced and congenital LQTS. RMS ECG measurements are more precise than the current standard of care lead II measurements.

  4. A study of intensity, fatigue and precision in two specific interval trainings in young tennis players: high-intensity interval training versus intermittent interval training

    PubMed Central

    Suárez Rodríguez, David; del Valle Soto, Miguel

    2017-01-01

    Background The aim of this study is to find the differences between two specific interval exercises. We begin with the hypothesis that the use of microintervals of work and rest allow for greater intensity of play and a reduction in fatigue. Methods Thirteen competition-level male tennis players took part in two interval training exercises comprising nine 2 min series, which consisted of hitting the ball with cross-court forehand and backhand shots, behind the service box. One was a high-intensity interval training (HIIT), made up of periods of continuous work lasting 2 min, and the other was intermittent interval training (IIT), this time with intermittent 2 min intervals, alternating periods of work with rest periods. Average heart rate (HR) and lactate levels were registered in order to observe the physiological intensity of the two exercises, along with the Borg Scale results for perceived exertion and the number of shots and errors in order to determine the intensity achieved and the degree of fatigue throughout the exercise. Results There were no significant differences in the average heart rate, lactate or the Borg Scale. Significant differences were registered, on the other hand, with a greater number of shots in the first two HIIT series (series 1 p>0.009; series 2 p>0.056), but not in the third. The number of errors was significantly lower in all the IIT series (series 1 p<0.035; series 2 p<0.010; series 3 p<0.001). Conclusion Our study suggests that high-intensity intermittent training allows for greater intensity of play in relation to the real time spent on the exercise, reduced fatigue levels and the maintaining of greater precision in specific tennis-related exercises. PMID:29021912

  5. Calibration of the BEV GPS Receiver by Using TWSTFT

    DTIC Science & Technology

    2008-12-01

    From the 40th Annual Precise Time and Time Interval (PTTI) Meeting: A. Niessner et al. describe a calibration of the BEV reference GPS time receiver by using Two-Way Satellite Time and Frequency Transfer (TWSTFT). Due to antenna changes, a new calibration of the BEV receiver was necessary. This receiver is the first GPS receiver calibrated through TWSTFT and used for UTC computation.

  6. Calibration of TWSTFT Links Through the Triangle Closure Condition

    DTIC Science & Technology

    2008-12-01

    From the 40th Annual Precise Time and Time Interval (PTTI) Meeting: TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short) is, together with GPS time transfer, the primary technique used for UTC generation...

  7. Precise Point Positioning technique for short and long baselines time transfer

    NASA Astrophysics Data System (ADS)

    Lejba, Pawel; Nawrocki, Jerzy; Lemanski, Dariusz; Foks-Ryznar, Anna; Nogas, Pawel; Dunst, Piotr

    2013-04-01

    In this work, the determination of clock parameters for several timing receivers, TTS-4 (AOS), ASHTECH Z-XII3T (OP, ORB, PTB, USNO) and SEPTENTRIO POLARX4TR (ORB, since February 11, 2012), by use of the Precise Point Positioning (PPP) technique is presented. The clock parameters were determined for several time links based on the data delivered by the time and frequency laboratories mentioned above. The computations cover the period from January 1 to December 31, 2012 and were performed in two modes, with 7-day and one-month solutions for all links. All RINEX data files, which include phase and code GPS data, were recorded at 30-second intervals. All calculations were performed by means of Natural Resources Canada's GPS Precise Point Positioning (GPS-PPP) software, based on the high-quality precise satellite coordinates and satellite clocks delivered by the IGS as final products. The independent PPP technique is a very powerful and simple method which allows for better control of antenna positions at AOS and for verification of other time transfer techniques such as GPS CV, GLONASS CV and TWSTFT. The PPP technique is also a very good alternative for the calibration of the PL-AOS glass-fiber link currently operated by AOS. The PPP technique is now one of the main time transfer methods used at AOS, which considerably improves and strengthens the quality of the Polish time scales UTC(AOS), UTC(PL) and TA(PL). KEY WORDS: Precise Point Positioning, time transfer, IGS products, GNSS, time scales.

  8. Millisecond Pulsar Observation at CRL

    DTIC Science & Technology

    2000-11-01

    From the 32nd Annual Precise Time and Time Interval (PTTI) Meeting: Y. Hanado, Y. Shibuya, M. Hosokawa, M. Sekido, and co-authors report the status of millisecond pulsar timing observation at CRL. Weekly observation of PSR1937+21 using the 34-m antenna at Kashima Space Research Center has been ongoing since November 1997. Recently, systematic trends that were apparent in the data were eliminated, and the pulsar parameters were estimated...

  9. Integrated Doppler Correction to TWSTFT Using Round-Trip Measurement

    DTIC Science & Technology

    2010-11-01

    From the 42nd Annual Precise Time and Time Interval (PTTI) Meeting: an integrated Doppler correction to Two-Way Satellite Time and Frequency Transfer (TWSTFT) data. It is necessary to correct the diurnal variation when comparing the time-scale difference. We focus on the up-/downlink delay difference caused by satellite motion. In this paper, we propose to correct the TWSTFT data by using round-trip delay measurement...

  10. Research on the impact factors of GRACE precise orbit determination by dynamic method

    NASA Astrophysics Data System (ADS)

    Guo, Nan-nan; Zhou, Xu-hua; Li, Kai; Wu, Bin

    2018-07-01

    With the successful use of GPS-only-based POD (precise orbit determination), more and more satellites carry onboard GPS receivers to support their orbit accuracy requirements. Such receivers provide continuous, high-precision GPS observations and have become an indispensable means of obtaining the orbits of LEO satellites. Precise orbit determination of LEO satellites plays an important role in their applications. Numerous factors should be considered in POD processing. In this paper, several factors that impact precise orbit determination are analyzed, namely the satellite altitude, the time-variable part of the Earth's gravity field, the GPS satellite clock error, and accelerometer observations. The GRACE satellites provide an ideal platform for studying these factors using zero-difference GPS data. The factors are quantitatively analyzed for their effect on the accuracy of the dynamic orbit, using GRACE observations from 2005 to 2011 processed with the SHORDE software. The study indicates that: (1) as the altitude of the GRACE satellites was lowered from 480 km to 460 km over the seven years, the 3D (three-dimensional) position accuracy of the GRACE orbit remained about 3-4 cm based on long data spans; (2) the accelerometer data improve the 3D position accuracy of GRACE by about 1 cm; (3) the accuracy of the zero-difference dynamic orbit is about 6 cm with GPS satellite clock error products at a 5 min sampling interval, and can be raised to 4 cm if clock error products with a 30 s sampling interval are adopted; (4) the time-variable part of the Earth gravity field model improves the 3D position accuracy of GRACE by about 0.5-1.5 cm. Based on this study, we quantitatively characterize the factors that affect precise orbit determination of LEO satellites. This study plays an important role in improving the accuracy of LEO satellite orbit determination.

  11. Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting (24th) Held in McLean, VA on December 1-3, 1992

    DTIC Science & Technology

    1993-06-01

    ...additional source. For the past three years VNIIFTRI (Mendeleevo, Moscow Region, Russian Federation) and some other Russian time laboratories have used Russian-built GLONASS navigation receivers for time comparisons. Since June 1991, VNIIFTRI has operated a commercial GPS time receiver on loan from the BIPM. Since February 1992, the BIPM has operated a Russian GLONASS receiver on loan from VNIIFTRI. This provides, for the first time, an...

  12. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    PubMed

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernably decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.
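
    A quick way to see why reported d-based CIs are so wide is the standard large-sample approximation to the variance of Cohen's d for two independent groups. The helper below is a hedged sketch using that approximation, not the exact noncentral-t interval the surveys may have computed; names and the example sample sizes are illustrative.

```python
import math

def cohen_d_ci_width(d: float, n1: int, n2: int, z: float = 1.96) -> float:
    """Approximate 95% CI width for Cohen's d (two independent groups).

    Uses the common large-sample variance approximation
    var(d) ~= (n1 + n2)/(n1*n2) + d**2 / (2*(n1 + n2)).
    """
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return 2 * z * se

# With n = 20 per group and d = 0.5, the width already exceeds 1,
# illustrating why median widths > 1 were observed in the surveys.
print(round(cohen_d_ci_width(0.5, 20, 20), 2))  # ~1.26
```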

  13. Quantum interval-valued probability: Contextuality and the Born rule

    NASA Astrophysics Data System (ADS)

    Tai, Yu-Tsung; Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr

    2018-05-01

    We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.

  14. Precision and accuracy of suggested maxillary and mandibular landmarks with cone-beam computed tomography for regional superimpositions: An in vitro study.

    PubMed

    Lemieux, Genevieve; Carey, Jason P; Flores-Mir, Carlos; Secanell, Marc; Hart, Adam; Lagravère, Manuel O

    2016-01-01

    Our objective was to identify and evaluate the accuracy and precision (intrarater and interrater reliabilities) of various anatomic landmarks for use in 3-dimensional maxillary and mandibular regional superimpositions. We used cone-beam computed tomography reconstructions of 10 human dried skulls to locate 10 landmarks in the maxilla and the mandible. Precision and accuracy were assessed with intrarater and interrater readings. Three examiners located these landmarks in the cone-beam computed tomography images 3 times with readings scheduled at 1-week intervals. Three-dimensional coordinates were determined (x, y, and z coordinates), and the intraclass correlation coefficient was computed to determine intrarater and interrater reliabilities, as well as the mean error difference and confidence intervals for each measurement. Bilateral mental foramina, bilateral infraorbital foramina, anterior nasal spine, incisive canal, and nasion showed the highest precision and accuracy in both intrarater and interrater reliabilities. Subspinale and bilateral lingulae had the lowest precision and accuracy in both intrarater and interrater reliabilities. When choosing the most accurate and precise landmarks for 3-dimensional cephalometric analysis or plane-derived maxillary and mandibular superimpositions, bilateral mental and infraorbital foramina, landmarks in the anterior region of the maxilla, and nasion appeared to be the best options of the analyzed landmarks. Caution is needed when using subspinale and bilateral lingulae because of their higher mean errors in location. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  15. Interval Timing Deficits Assessed by Time Reproduction Dual Tasks as Cognitive Endophenotypes for Attention-Deficit/Hyperactivity Disorder

    PubMed Central

    Hwang-Gu, Shoou-Lian; Gau, Susan Shur-Fen

    2015-01-01

    The literature has suggested timing processing as a potential endophenotype for attention deficit/hyperactivity disorder (ADHD); however, whether the subjective internal clock speed presented by verbal estimation and limited attention capacity presented by time reproduction could be endophenotypes for ADHD is still unknown. We assessed 223 youths with DSM-IV ADHD (age range: 10-17 years), 105 unaffected siblings, and 84 typically developing (TD) youths using psychiatric interviews, intelligence tests, verbal estimation and time reproduction tasks (single task and simple and difficult dual tasks) at 5-second, 12-second, and 17-second intervals. We found that youths with ADHD tended to overestimate time in verbal estimation more than their unaffected siblings and TD youths, implying that fast subjective internal clock speed might be a characteristic of ADHD, rather than an endophenotype for ADHD. Youths with ADHD and their unaffected siblings were less precise in time reproduction dual tasks than TD youths. The magnitude of estimated errors in time reproduction was greater in youths with ADHD and their unaffected siblings than in TD youths, with an increased time interval at the 17-second interval and with increased task demands on both simple and difficult dual tasks versus the single task. Increased impaired time reproduction in dual tasks with increased intervals and task demands were shown in youths with ADHD and their unaffected siblings, suggesting that time reproduction deficits explained by limited attention capacity might be a useful endophenotype of ADHD. PMID:25992899

  16. The cyclic and fractal seismic series preceding an mb 4.8 earthquake on 1980 February 14 near the Virgin Islands

    USGS Publications Warehouse

    Varnes, D.J.; Bufe, C.G.

    1996-01-01

    Seismic activity in the 10 months preceding the 1980 February 14, mb 4.8 earthquake in the Virgin Islands, reported on by Frankel in 1982, consisted of four principal cycles. Each cycle began with a relatively large event or series of closely spaced events, and the duration of the cycles progressively shortened by a factor of about 3/4. Had this regular shortening of the cycles been recognized prior to the earthquake, the time of the next episode of seismicity (the main shock) might have been closely estimated 41 days in advance. That this event could be much larger than the previous events is indicated from time-to-failure analysis of the accelerating rise in released seismic energy, using a non-linear time- and slip-predictable foreshock model. Examination of the timing of all events in the sequence shows an even higher degree of order. Rates of seismicity, measured by consecutive interevent times, when plotted on an iteration diagram of a rate versus the succeeding rate, form a triangular circulating trajectory. The trajectory becomes an ascending helix if extended in a third dimension, time. This construction reveals additional and precise relations among the time intervals between times of relatively high or relatively low rates of seismic activity, including period halving and doubling. The set of 666 time intervals between all possible pairs of the 37 recorded events appears to be a fractal; the set of time points that define the intervals has a finite, non-integer correlation dimension of 0.70. In contrast, the average correlation dimension of 50 random sequences of 37 events is significantly higher, close to 1.0. In a similar analysis, the set of distances between pairs of epicentres has a fractal correlation dimension of 1.52. Well-defined cycles, numerous precise ratios among time intervals, and a non-random temporal fractal dimension suggest that the seismic series is not a random process, but rather the product of a deterministic dynamic system.
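
    The correlation dimension used here can be estimated with the Grassberger-Procaccia procedure: compute the fraction C(r) of distinct event-time pairs closer than r, and fit the slope of log C(r) against log r over a scaling range. The sketch below is a minimal version on random event times; the radii range and the uniform benchmark (which should give a dimension near 1, the value against which the paper's 0.70 was judged non-random) are illustrative assumptions.

```python
import numpy as np

def correlation_dimension(points: np.ndarray, radii: np.ndarray) -> float:
    """Grassberger-Procaccia estimate for a 1-D point set (event times).

    C(r) = fraction of distinct pairs with |t_i - t_j| < r; the
    dimension is the slope of log C(r) vs log r.
    """
    pts = np.sort(points)
    diffs = np.abs(pts[:, None] - pts[None, :])
    iu = np.triu_indices(len(pts), k=1)        # distinct pairs only
    pair_dists = diffs[iu]
    c = np.array([np.mean(pair_dists < r) for r in radii])
    mask = c > 0                               # drop empty radii
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

rng = np.random.default_rng(0)
times = rng.uniform(0.0, 300.0, size=37)       # 37 events -> 666 pairs
radii = np.logspace(-0.5, 1.5, 12)             # illustrative scaling range
print(round(correlation_dimension(times, radii), 2))  # near 1 for random
```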

  17. The Effect of Neural Noise on Spike Time Precision in a Detailed CA3 Neuron Model

    PubMed Central

    Kuriscak, Eduard; Marsalek, Petr; Stroffek, Julius; Wünsch, Zdenek

    2012-01-01

    Experimental and computational studies emphasize the role of the millisecond precision of neuronal spike times as an important coding mechanism for transmitting and representing information in the central nervous system. We investigate the spike time precision of a multicompartmental pyramidal neuron model of the CA3 region of the hippocampus under the influence of various sources of neuronal noise. We describe differences in the contribution to noise originating from voltage-gated ion channels, synaptic vesicle release, and vesicle quantal size. We analyze the effect of interspike intervals and the voltage course preceding the firing of spikes on the spike-timing jitter. The main finding of this study is the ranking of different noise sources according to their contribution to spike time precision. The most influential is synaptic vesicle release noise, causing the spike jitter to vary from 1 ms to 7 ms around a mean value of 2.5 ms. Of second importance was the noise incurred by vesicle quantal size variation, causing the spike time jitter to vary from 0.03 ms to 0.6 ms. Least influential was the voltage-gated channel noise, generating spike jitter from 0.02 ms to 0.15 ms. PMID:22778784

  18. High-precision optical measurement of the 2S hyperfine interval in atomic hydrogen.

    PubMed

    Kolachevsky, N; Fischer, M; Karshenboim, S G; Hänsch, T W

    2004-01-23

    We have applied an optical method to the measurement of the 2S hyperfine interval in atomic hydrogen. The interval has been measured by means of two-photon spectroscopy of the 1S-2S transition on a hydrogen atomic beam shielded from external magnetic fields. The measured value of the 2S hyperfine interval is equal to 177 556 860(16) Hz and represents the most precise measurement of this interval to date. The theoretical evaluation of the specific combination of 1S and 2S hyperfine intervals D21 is in fair agreement (within 1.4 sigma) with the value for D21 deduced from our measurement.

  19. Acquisition of peak responding: what is learned?

    PubMed

    Balci, Fuat; Gallistel, Charles R; Allen, Brian D; Frank, Krystal M; Gibson, Jacqueline M; Brunner, Daniela

    2009-01-01

    We investigated how the common measures of timing performance behaved in the course of training on the peak procedure in C3H mice. Following fixed interval (FI) pre-training, mice received 16 days of training in the peak procedure. The peak time and spread were derived from the average response rates while the start and stop times and their relative variability were derived from a single-trial analysis. Temporal precision (response spread) appeared to improve in the course of training. This apparent improvement in precision was, however, an averaging artifact; it was mediated by the staggered appearance of timed stops, rather than by the delayed occurrence of start times. Trial-by-trial analysis of the stop times for individual subjects revealed that stops appeared abruptly after three to five sessions and their timing did not change as training was prolonged. Start times and the precision of start and stop times were generally stable throughout training. Our results show that subjects do not gradually learn to time their start or stop of responding. Instead, they learn the duration of the FI, with robust temporal control over the start of the response; the control over the stop of response appears abruptly later.

  20. Acquisition of peak responding: What is learned?

    PubMed Central

    Balci, Fuat; Gallistel, Charles R.; Allen, Brian D.; Frank, Krystal M.; Gibson, Jacqueline M.; Brunner, Daniela

    2009-01-01

    We investigated how the common measures of timing performance behaved in the course of training on the peak procedure in C3H mice. Following fixed interval (FI) pre-training, mice received 16 days of training in the peak procedure. The peak time and spread were derived from the average response rates while the start and stop times and their relative variability were derived from a single-trial analysis. Temporal precision (response spread) appeared to improve in the course of training. This apparent improvement in precision was, however, an averaging artifact; it was mediated by the staggered appearance of timed stops, rather than by the delayed occurrence of start times. Trial-by-trial analysis of the stop times for individual subjects revealed that stops appeared abruptly after three to five sessions and their timing did not change as training was prolonged. Start times and the precision of start and stop times were generally stable throughout training. Our results show that subjects do not gradually learn to time their start or stop of responding. Instead, they learn the duration of the FI, with robust temporal control over the start of the response; the control over the stop of response appears abruptly later. PMID:18950695

  1. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE), and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in observed AMLE precision to be significantly worse than the model calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of at least 15-days or shorter, when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high flow sampling coverage resulting in unrepresentative models. Increasing sampling frequency and/or targeted high flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models, than increasing calibration period length.
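
    The regression-model approach can be sketched compactly: fit a linear model to log concentration versus log discharge, back-transform with a bias correction, and sum the predicted daily loads. The snippet below uses ordinary least squares with Duan's smearing correction as a simplified stand-in for AMLE; the function name, synthetic data, and parameter values are illustrative assumptions, not the study's calibration.

```python
import numpy as np

def load_estimate(q_sampled, c_sampled, q_daily):
    """Simplified rating-curve load estimator: regress ln(C) on ln(Q)
    by OLS, then apply Duan's smearing factor to correct the
    back-transformation bias. Returns estimated daily loads (C * Q)."""
    x = np.log(q_sampled)
    y = np.log(c_sampled)
    b, a = np.polyfit(x, y, 1)              # ln C = a + b ln Q
    resid = y - (a + b * x)
    smear = np.mean(np.exp(resid))          # Duan (1983) smearing estimator
    c_pred = smear * np.exp(a + b * np.log(q_daily))
    return c_pred * q_daily

rng = np.random.default_rng(1)
q = rng.lognormal(mean=6.0, sigma=0.8, size=120)      # sampled discharges
c = 0.05 * q ** 0.7 * rng.lognormal(0.0, 0.3, 120)    # noisy C-Q relation
q_year = rng.lognormal(6.0, 0.8, 365)                 # daily discharges
print(f"annual load ~ {load_estimate(q, c, q_year).sum():.3g}")
```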

  2. Proceedings of the 12th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Wardrip, S. C. (Editor)

    1981-01-01

    The meeting gave PTTI managers, systems engineers, and program planners a transparent view of the state-of-the-art, an opportunity to express needs, a view of important future trends, and a review of relevant past accomplishments. The PTTI users were provided with new and useful applications, procedures, and techniques. Emphasis is placed on military applications and avionics.

  3. New Steering Strategies for the USNO Master Clocks

    DTIC Science & Technology

    1999-12-01

    From the 31st Annual Precise Time and Time Interval (PTTI) Meeting: New Steering Strategies for the USNO Master Clocks, Paul A. Koppang, Datum, Inc., Beverly, MA. Reference fragments from the paper: P. Koppang and R. Leland, "Linear quadratic stochastic control of atomic hydrogen masers," IEEE Trans. Ultrason., Ferroelect., Freq. Contr., vol. 46, pp. 517-522, May 1999; P. Koppang and R. Leland, "Steering of frequency standards by the use of linear quadratic Gaussian control theory"...

  4. Precise Time and Time Interval (PTTI) Systems and Applications Meeting (42nd Annual) Held in Reston, Virginia on November 15-18, 2010

    DTIC Science & Technology

    2010-11-01

    Excerpt from the proceedings contents: ...Tokyo University of Science, Japan; K. Watabe, K. Hagimoto, and T. Ikegami, National Metrology Institute of Japan, "Studies on an Improved..."; T. Iwata, K. Machita, T. Matsuzawa, National Institute of Advanced Industrial...; K. Liang, National Institute of Metrology, P. R. China; T. Feldmann, A. Bauch, and D. Piester, Physikalisch...

  5. Research on precise modeling of buildings based on multi-source data fusion of air to ground

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong

    2016-03-01

    Aiming at the accuracy problem of precise building modeling, a test was conducted using multi-source data for buildings in the same test area, including roof data from airborne LiDAR, aerial orthophotos, and facade data from vehicle-borne LiDAR. After the top and bottom outlines of building clusters were accurately extracted, a series of qualitative and quantitative analyses was carried out on the 2D interval between the outlines. The results provide reliable accuracy support for precise building modeling from air-ground multi-source data fusion; solutions to key technical problems are also discussed.

  6. Screening for Learning and Memory Mutations: A New Approach.

    PubMed

    Gallistel, C R; King, A P; Daniel, A M; Freestone, D; Papachristos, E B; Balci, F; Kheifets, A; Zhang, J; Su, X; Schiff, G; Kourtev, H

    2010-01-30

    We describe a fully automated, live-in 24/7 test environment, with experimental protocols that measure the accuracy and precision with which mice match the ratio of their expected visit durations to the ratio of the incomes obtained from two hoppers, the progress of instrumental and classical conditioning (trials-to-acquisition), the accuracy and precision of interval timing, the effect of relative probability on the choice of a timed departure target, and the accuracy and precision of memory for the times of day at which food is available. The system is compact; it obviates the handling of the mice during testing; it requires negligible amounts of experimenter/technician time; and it delivers clear and extensive results from 3 protocols within a total of 7-9 days after the mice are placed in the test environment. Only a single 24-hour period is required for the completion of the first protocol (the matching protocol), which is a strong test of temporal and spatial estimation and memory mechanisms. Thus, the system permits the extensive screening of many mice in a short period of time and in limited space. The software is publicly available.

  7. Influence of the time scale on the construction of financial networks.

    PubMed

    Emmert-Streib, Frank; Dehmer, Matthias

    2010-09-30

    In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That means only if a correlation coefficient is statistically significantly different from zero do we include an edge in the network. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks will be studied in this paper. Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
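
    The construction procedure described here (one unweighted, undirected network per non-overlapping interval, with edges only for correlations significantly different from zero) can be sketched as follows. The Fisher z-transform test, the window length, and the synthetic prices are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def correlation_network(prices: np.ndarray, z_crit: float = 1.96):
    """Build one network from one price interval.

    prices: (n_days, n_stocks) closing prices for a single interval.
    An edge i--j is included only when the correlation of daily
    log-returns differs significantly from zero (Fisher z-transform,
    two-sided normal test). Returns a boolean adjacency matrix."""
    returns = np.diff(np.log(prices), axis=0)
    n = returns.shape[0]
    r = np.corrcoef(returns, rowvar=False)
    z = np.arctanh(np.clip(r, -0.999999, 0.999999)) * np.sqrt(n - 3)
    adj = np.abs(z) > z_crit
    np.fill_diagonal(adj, False)
    return adj

# One network per non-overlapping window; the window length sets the
# time scale under study.
rng = np.random.default_rng(2)
prices = np.cumprod(1 + 0.01 * rng.standard_normal((250, 30)), axis=0)
scale = 50  # days per interval
nets = [correlation_network(prices[i:i + scale])
        for i in range(0, 250, scale)]
print([int(a.sum()) // 2 for a in nets])  # edge count per interval
```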

  8. An improved grey model for the prediction of real-time GPS satellite clock bias

    NASA Astrophysics Data System (ADS)

    Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.

    2008-07-01

    In real-time GPS precise point positioning (PPP), real-time and reliable satellite clock bias (SCB) prediction is a key requirement. The behavior of a space-borne GPS atomic clock is difficult to describe deterministically because of its high frequency, sensitivity, and susceptibility to disturbance; this accords with the character of grey model (GM) theory, i.e., the variation of SCB can be treated as a grey system. Firstly, considering the limitations of quadratic polynomial (QP) models and the traditional GM for predicting SCB, a modified GM(1,1) is put forward in this paper; then, taking GPS SCB data as an example, we analyze clock bias prediction with different sample intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with QP, and summarize the general relationship between SCB type and GM exponent; finally, to test the reliability and validity of the proposed modified GM, taking the IGS clock bias ephemeris product as reference, we analyze its prediction precision. The results show that the modified GM is reliable and valid for predicting GPS SCB and can offer high-precision SCB prediction for real-time GPS PPP.
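
    For reference, the traditional GM(1,1) that the authors modify works by accumulating the series, fitting the grey differential equation by least squares, and differencing the fitted accumulation back into forecasts. The sketch below implements only that classic model; the paper's specific modification is not reproduced, and the toy clock-bias series is an assumption for demonstration.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Classic GM(1,1) grey-model forecast of `steps` future values.

    x0: observed series (e.g., satellite clock bias at equal intervals).
    """
    n = len(x0)
    x1 = np.cumsum(x0)                         # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, y, rcond=None)   # x0(k) = -a z1(k) + b
    k = np.arange(1, n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return x0_hat[-steps:]

# Toy clock-bias series with a slow exponential-like drift (ns).
t = np.arange(12)
rng = np.random.default_rng(3)
bias = 5.0 * np.exp(0.05 * t) + 0.01 * rng.standard_normal(12)
print(gm11_forecast(bias, steps=3))
```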

  9. TWSTFT Network Status in the Pacific Rim Region and Development of a New Time Transfer Modem for TWSTFT

    DTIC Science & Technology

    2000-01-01

    32nd Annual Precise T ime and Time Interval ( P T T I ) Meeting TWSTFT NETWORK STATUS IN THE PACIFIC RIM REGION AND DEVELOPMENT OF A NEW TIME...TRANSFER MODEM FOR TWSTFT M. Imael, M. Hosokawal, Y . Hanadol, 2. Li2, P. Fisk3, Y . Nakadan4, and C. S. Liao5 ’Communications Research Laboratory...Metrology (NRLM), Japan 5Telecommunication Laboratories (TL) , Taipei, Taiwan Abstract Iko-Way Satellite Time and Frequency Transfer ( TWSTFT ) is one

  10. Precision Luminosity of LHC Proton-Proton Collisions at 13 TeV Using Hit Counting With TPX Pixel Devices

    NASA Astrophysics Data System (ADS)

    Sopczak, André; Ali, Babar; Asawatavonvanich, Thanawat; Begera, Jakub; Bergmann, Benedikt; Billoud, Thomas; Burian, Petr; Caicedo, Ivan; Caforio, Davide; Heijne, Erik; Janeček, Josef; Leroy, Claude; Mánek, Petr; Mochizuki, Kazuya; Mora, Yesid; Pacík, Josef; Papadatos, Costa; Platkevič, Michal; Polanský, Štěpán; Pospíšil, Stanislav; Suk, Michal; Svoboda, Zdeněk

    2017-03-01

    A network of Timepix (TPX) devices installed in the ATLAS cavern measures the LHC luminosity as a function of time as a stand-alone system. The data were recorded from 13-TeV proton-proton collisions in 2015. Using two TPX devices, the number of hits created by particles passing the pixel matrices was counted. A van der Meer scan of the LHC beams was analyzed using bunch-integrated luminosity averages over the different bunch profiles for an approximate absolute luminosity normalization. It is demonstrated that the TPX network has the capability to measure the reduction of LHC luminosity with precision. Comparative studies were performed among four sensors (two sensors in each TPX device) and the relative short-term precision of the luminosity measurement was determined to be 0.1% for 10-s time intervals. The internal long-term time stability of the measurements was below 0.5% for the data-taking period.

  11. Note: Tandem Kirkpatrick-Baez microscope with sixteen channels for high-resolution laser-plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Wang, Zhanshan; Wei, Lai; Liu, Dongxiao; Cao, Leifeng; Gu, Yuqiu

    2018-03-01

    Multi-channel Kirkpatrick-Baez (KB) microscopes, which have better resolution and collection efficiency than pinhole cameras, have been widely used in laser inertial confinement fusion to diagnose time evolution of the target implosion. In this study, a tandem multi-channel KB microscope was developed to have sixteen imaging channels with the precise control of spatial resolution and image intervals. This precise control was created using a coarse assembly of mirror pairs with high-accuracy optical prisms, followed by precise adjustment in real-time x-ray imaging experiments. The multilayers coated on the KB mirrors were designed to have substantially the same reflectivity to obtain a uniform brightness of different images for laser-plasma temperature analysis. The study provides a practicable method to achieve the optimum performance of the microscope for future high-resolution applications in inertial confinement fusion experiments.

  12. Volcano monitoring using GPS: Developing data analysis strategies based on the June 2007 Kīlauea Volcano intrusion and eruption

    USGS Publications Warehouse

    Larson, Kristine M.; Poland, Michael; Miklius, Asta

    2010-01-01

    The global positioning system (GPS) is one of the most common techniques, and the current state of the art, used to monitor volcano deformation. In addition to slow (several centimeters per year) displacement rates, GPS can be used to study eruptions and intrusions that result in much larger (tens of centimeters over hours-days) displacements. It is challenging to resolve precise positions using GPS at subdaily time intervals because of error sources such as multipath and atmospheric refraction. In this paper, the impact of errors due to multipath and atmospheric refraction at subdaily periods is examined using data from the GPS network on Kīlauea Volcano, Hawai'i. Methods for filtering position estimates to enhance precision are both simulated and tested on data collected during the June 2007 intrusion and eruption. Comparisons with tiltmeter records show that GPS instruments can precisely recover the timing of the activity.
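
    One widely used filtering strategy in this setting exploits the fact that the GPS satellite geometry, and therefore the multipath error, repeats every sidereal day (about 23 h 56 min 4 s): subtracting the previous day's residuals shifted by that period suppresses multipath in subdaily position series. The sketch below is a generic sidereal filter on synthetic data, an assumption for illustration rather than the paper's exact processing chain.

```python
import numpy as np

def sidereal_filter(pos: np.ndarray, sample_s: float = 30.0) -> np.ndarray:
    """Subtract the previous sidereal day's residuals from a position
    component series sampled every `sample_s` seconds."""
    shift = int(round(86164.0 / sample_s))     # samples per sidereal day
    out = pos.copy()
    corr = pos[:-shift] - np.mean(pos[:-shift])
    out[shift:] = pos[shift:] - corr
    return out

rng = np.random.default_rng(8)
n = 2 * 2880                                   # two days of 30-s epochs
t = np.arange(n) * 30.0
multipath = 5.0 * np.sin(2 * np.pi * 14 * t / 86164.0)  # repeats sidereally
pos = multipath + rng.standard_normal(n)       # mm, with white noise
filt = sidereal_filter(pos)
# Day-2 scatter drops once the repeating multipath is removed (the
# residual noise grows by sqrt(2) because day-1 noise is subtracted).
print(f"before {pos[2872:].std():.2f} mm, after {filt[2872:].std():.2f} mm")
```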

  13. Radiocarbon content in the annual tree rings during last 150 years and time variation of cosmic rays

    NASA Technical Reports Server (NTRS)

    Kocharov, G. E.; Metskvarishvili, R. Y.; Tsereteli, S. L.

    1985-01-01

    The results of high-accuracy measurements of radiocarbon abundance in precisely dated tree rings over the interval 1800 to 1950 are discussed. A variation of radiocarbon content caused by solar activity is established. The temporal dependence of cosmic rays is reconstructed by use of the radiocarbon abundance data.

  14. Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model

    NASA Astrophysics Data System (ADS)

    Berglund, Nils; Landon, Damien

    2012-08-01

    We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of neuronal action potentials in parameter regimes characterized by mixed-mode oscillations. The interspike time interval is related to the random number of small-amplitude oscillations separating consecutive spikes. We prove that this number has an asymptotically geometric distribution, whose parameter is related to the principal eigenvalue of a substochastic Markov chain. We provide rigorous bounds on this eigenvalue in the small-noise regime and derive an approximation of its dependence on the system's parameters for a large range of noise intensities. This yields a precise description of the probability distribution of observed mixed-mode patterns and interspike intervals.
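
    The regime studied here can be explored numerically with an Euler-Maruyama discretization of the stochastic FitzHugh-Nagumo equations, collecting interspike intervals from threshold crossings. The parameter values, thresholds, and noise level in the sketch are illustrative of a noise-driven excitable regime, not the mixed-mode parameter set analyzed in the paper.

```python
import numpy as np

def stochastic_fhn_isi(T=2000.0, dt=0.01, sigma=0.3, seed=4):
    """Euler-Maruyama simulation of stochastic FitzHugh-Nagumo:
        dv = (v - v**3/3 - w) dt + sigma dW
        dw = eps * (v + a - b*w) dt
    Returns interspike intervals from upward threshold crossings."""
    eps, a, b = 0.08, 0.7, 0.8
    rng = np.random.default_rng(seed)
    v, w = -1.2, -0.6                  # near the stable rest point
    spike_times, above = [], False
    for i in range(int(T / dt)):
        dW = rng.standard_normal() * np.sqrt(dt)
        v, w = (v + (v - v ** 3 / 3 - w) * dt + sigma * dW,
                w + eps * (v + a - b * w) * dt)
        if v > 1.0 and not above:      # spike onset
            spike_times.append(i * dt)
            above = True
        elif v < 0.0:                  # reset the crossing detector
            above = False
    return np.diff(spike_times)

isi = stochastic_fhn_isi()
print(f"{len(isi)} intervals, mean ISI = {isi.mean():.1f}")
```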

  15. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  16. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  17. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
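
    The core interval-censoring idea is easiest to see in a stripped-down form: if times to first detection are exponential with rate lambda, a detection recorded only as falling in (a, b] contributes exp(-lambda*a) - exp(-lambda*b) to the likelihood, and a survey of length T with no detection contributes exp(-lambda*T). The sketch below fits this simplified model by maximum likelihood, without the paper's covariates, occupancy layer, or Bayesian machinery; all names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_detection_rate(detected_intervals, censor_times):
    """MLE of a constant detection rate from interval-censored
    times-to-first-detection.

    detected_intervals: list of (a, b) with detection in (a, b].
    censor_times: survey lengths at sites with no detection."""
    def negloglik(lam):
        ll = sum(np.log(np.exp(-lam * a) - np.exp(-lam * b))
                 for a, b in detected_intervals)
        ll += sum(-lam * t for t in censor_times)
        return -ll

    res = minimize_scalar(negloglik, bounds=(1e-6, 10.0), method="bounded")
    return res.x

# Detections recorded only as "within the k-th one-minute pass".
intervals = [(0.0, 1.0), (1.0, 2.0), (0.0, 1.0), (2.0, 3.0)]
no_detect = [3.0, 3.0]   # fished 3 minutes, species never seen
print(f"lambda_hat = {fit_detection_rate(intervals, no_detect):.3f} per min")
```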

  18. A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1993-01-01

    A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case, the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N x N matrix the eigenvalues can be determined in O(log-squared N) time with N-squared processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N-squared) time on a single processor, O(N) time with N processors, and O(log N) time with N-squared processors.
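
    The zero-counting machinery underlying both polysection and ordinary bisection is the Sturm (negcount) recurrence: for a shift x, the number of negative pivots in the LDL^T factorization of T - xI equals the number of eigenvalues below x, so different processors can safely isolate different zeros. The sketch below uses that recurrence in a plain serial bisection; the parallel binary-tree polynomial construction of polysection is not reproduced.

```python
import numpy as np

def eig_count_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix
    (diagonal d, off-diagonal e) that are < x, via Sturm/negcount."""
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 / q if i > 0 else 0.0
        q = d[i] - x - off
        if q == 0.0:
            q = 1e-300        # avoid division by an exact zero pivot
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eig_count_below(d, e, mid) >= k + 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 1-D Laplacian: eigenvalues are 2 - 2*cos(j*pi/(n+1)), a handy check.
n = 8
d = np.full(n, 2.0)
e = np.full(n - 1, -1.0)
print(kth_eigenvalue(d, e, 0, 0.0, 4.0))   # smallest eigenvalue
print(2 - 2 * np.cos(np.pi / (n + 1)))     # analytic value
```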

  19. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  20. Precision microwave measurement of the 2(3)P(1)-2(3)P(0) interval in atomic helium: a determination of the fine-structure constant.

    PubMed

    George, M C; Lombardi, L D; Hessels, E A

    2001-10-22

    The 2(3)P(1)-to-2(3)P(0) interval in atomic helium is measured using a thermal beam of metastable helium atoms excited to the 2(3)P state using a 1.08-microm diode laser. The 2(3)P(1)-to-2(3)P(0) transition is driven by 29.6-GHz microwaves in a rectangular waveguide cavity. Our result of 29,616,950.9+/-0.9 kHz is the most precise measurement of helium 2(3)P fine structure. When compared to precise theory for this interval, this measurement leads to a determination of the fine-structure constant of 1/137.0359864(31).

  1. The Chip-Scale Atomic Clock - Prototype Evaluation

    DTIC Science & Technology

    2007-11-01

    From the 39th Annual Precise Time and Time Interval (PTTI) Meeting: The Chip-Scale Atomic Clock – Prototype Evaluation, R. Lutwak, A. Rashed, et al. The work has been supported by the Defense Advanced Research Projects Agency, Contract # NBCHC020050. Reference fragments: [1] R. Lutwak, D. Emmons, W. Riley, and... (Washington, D.C.), pp. 539-550; [2] R. Lutwak, D. Emmons, T. English, W. Riley, A. Duwel, M. Varghese, D. K. Serkland, and G. M. Peake, 2004, "The Chip-Scale...

  2. Cost and Precision of Brownian Clocks

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Seifert, Udo

    2016-10-01

    Brownian clocks are biomolecular networks that can count time. A paradigmatic example are proteins that go through a cycle, thus regulating some oscillatory behavior in a living system. Typically, such a cycle requires free energy often provided by ATP hydrolysis. We investigate the relation between the precision of such a clock and its thermodynamic costs. For clocks driven by a constant thermodynamic force, a given precision requires a minimal cost that diverges as the uncertainty of the clock vanishes. In marked contrast, we show that a clock driven by a periodic variation of an external protocol can achieve arbitrary precision at arbitrarily low cost. This result constitutes a fundamental difference between processes driven by a fixed thermodynamic force and those driven periodically. As a main technical tool, we map a periodically driven system with a deterministic protocol to one subject to an external protocol that changes in stochastic time intervals, which simplifies calculations significantly. In the nonequilibrium steady state of the resulting bipartite Markov process, the uncertainty of the clock can be deduced from the calculable dispersion of a corresponding current.

  3. Reporting of Uncertainty at the 2013 Annual Meeting of the American Society for Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, W. Robert, E-mail: w.robert.lee@duke.edu

    Purpose: The annual meeting of the American Society for Radiation Oncology (ASTRO) is designed to disseminate new scientific findings and technical advances to professionals. Best practices of scientific dissemination require that some level of uncertainty (or imprecision) is provided. Methods and Materials: A total of 279 scientific abstracts were selected for oral presentation in a clinical session at the 2013 ASTRO Annual Meeting. A random sample of these abstracts was reviewed to determine whether a 95% confidence interval (95% CI) or analogous measure of precision was provided for time-to-event analyses. Results: A sample of 140 abstracts was reviewed. Of the 65 abstracts with Kaplan-Meier or cumulative incidence analyses, 6 included some measure of precision (6 of 65 = 9%; 95% CI, 2-16). Of the 43 abstracts reporting ratios for time-to-event analyses (eg, hazard ratio, risk ratio), 22 included some measure of precision (22 of 43 = 51%; 95% CI, 36-66). Conclusions: Measures of precision are not provided in a significant percentage of abstracts selected for oral presentation at the Annual Meeting of ASTRO.

  4. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure: comparison of proportions and variance across sample intervals, using bootstrap sampling to achieve equal n; each trial was sampled at n=100, 10,000 times, and averaged per sample interval; dashed lines represent values from the one-minute dataset.]
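
    The equal-n comparison across sampling intervals described above (see the figure note) can be emulated with a simple bootstrap: subsample the 1-minute series at each coarser interval, then repeatedly resample a fixed n and compare the spread of the scaled totals. The counts, scaling, and bootstrap settings below are synthetic and illustrative, not the study's data.

```python
import numpy as np

def bootstrap_total(counts_1min, step, n_boot=10_000, n=100, seed=5):
    """Mean and SD of a bootstrapped total from coarse-interval
    subsamples: take every `step`-th minute, resample n observations
    with replacement, and scale the mean up to the full record."""
    sub = counts_1min[::step]
    rng = np.random.default_rng(seed)
    totals = np.array([
        rng.choice(sub, size=n, replace=True).mean() * len(counts_1min)
        for _ in range(n_boot)
    ])
    return totals.mean(), totals.std()

rng = np.random.default_rng(6)
counts = rng.poisson(lam=2.0, size=1440)   # synthetic 1-min counts, 1 day
for step in (1, 5, 10, 15):
    m, s = bootstrap_total(counts, step)
    print(f"{step:>2}-min sampling: total ~ {m:.0f} +/- {s:.0f}")
```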

  5. Robotic fish tracking method based on suboptimal interval Kalman filter

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. The robotic fish, a type of AUV, has become a hot application in intelligent education, civil, and military domains. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the resulting intervals are wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimized suboptimal interval Kalman filter. The suboptimal scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter does, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte-Carlo simulation results show that the trajectory obtained with the suboptimal interval Kalman filter is better than those of the interval Kalman filter and the standard filter.

  6. Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan

    2017-03-01

    Localization of mobile station (MS) has now gained considerable attention due to its wide applications in military, environmental, health and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to obtain in general. In order to match the actual situation, we consider the condition that the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross-correlation algorithm applied over the whole interval.
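
    The cross-correlation half of the proposed method rests on a standard result: the lag that maximizes the cross-correlation between the received signal and the known transmitted waveform estimates the propagation delay, hence the TOA. The sketch below shows only that stage on synthetic data; the MUSIC stage and the interval-selection logic of the paper are omitted, and the waveform and sampling rate are assumptions.

```python
import numpy as np

def toa_by_crosscorr(rx: np.ndarray, tx: np.ndarray, fs: float) -> float:
    """Estimate time of arrival as the lag (in seconds) of the peak of
    the cross-correlation between received and transmitted signals."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(np.argmax(corr)) - (len(tx) - 1)
    return lag / fs

fs = 1e6                                   # 1 MHz sampling (assumed)
tx = np.sin(2 * np.pi * 50e3 * np.arange(200) / fs)   # known burst
delay = 137                                # true delay in samples
rx = np.zeros(1000)
rx[delay:delay + len(tx)] = tx
rx += 0.1 * np.random.default_rng(7).standard_normal(1000)
print(f"estimated TOA = {toa_by_crosscorr(rx, tx, fs) * 1e6:.1f} us")  # ~137
```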

  7. Screening for Learning and Memory Mutations: A New Approach

    PubMed Central

    Gallistel, C. R.; King, A. P.; Daniel, A. M.; Freestone, D.; Papachristos, E. B.; Balci, F.; Kheifets, A.; Zhang, J.; Su, X.; Schiff, G.; Kourtev, H.

    2010-01-01

    We describe a fully automated, live-in 24/7 test environment, with experimental protocols that measure the accuracy and precision with which mice match the ratio of their expected visit durations to the ratio of the incomes obtained from two hoppers, the progress of instrumental and classical conditioning (trials-to-acquisition), the accuracy and precision of interval timing, the effect of relative probability on the choice of a timed departure target, and the accuracy and precision of memory for the times of day at which food is available. The system is compact; it obviates the handling of the mice during testing; it requires negligible amounts of experimenter/technician time; and it delivers clear and extensive results from 3 protocols within a total of 7–9 days after the mice are placed in the test environment. Only a single 24-hour period is required for the completion of the first protocol (the matching protocol), which is a strong test of temporal and spatial estimation and memory mechanisms. Thus, the system permits the extensive screening of many mice in a short period of time and in limited space. The software is publicly available. PMID:20352069

  8. Influence of the Time Scale on the Construction of Financial Networks

    PubMed Central

    Emmert-Streib, Frank; Dehmer, Matthias

    2010-01-01

    Background In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. Methodology/Principal Findings For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That means only if a correlation coefficient is statistically significantly different from zero do we include an edge in the network. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks will be studied in this paper. Conclusions/Significance Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis. PMID:20949124

  9. Sensori-motor synchronisation variability decreases as the number of metrical levels in the stimulus signal increases.

    PubMed

    Madison, Guy

    2014-03-01

    Timing performance becomes less precise for longer intervals, which makes it difficult to achieve simultaneity in synchronisation with a rhythm. The metrical structure of music, characterised by hierarchical levels of binary or ternary subdivisions of time, may function to increase precision by providing additional timing information when the subdivisions are explicit. This hypothesis was tested by comparing synchronisation performance across different numbers of metrical levels conveyed by loudness of sounds, such that the slowest level was loudest and the fastest was softest. Fifteen participants moved their hand with one of 9 inter-beat intervals (IBIs) ranging from 524 to 3,125 ms in 4 metrical level (ML) conditions ranging from 1 (one movement for each sound) to 4 (one movement for every 8th sound). The lowest relative variability (SD/IBI<1.5%) was obtained for the 3 longest IBIs (1600-3,125 ms) and MLs 3-4, significantly less than the smallest value (4-5% at 524-1024 ms) for any ML 1 condition in which all sounds are identical. Asynchronies were also more negative with higher ML. In conclusion, metrical subdivision provides information that facilitates temporal performance, which suggests an underlying neural multi-level mechanism capable of integrating information across levels. © 2013.

  10. Hypothesis testing for the validation of the kinetic spectrophotometric methods for the determination of lansoprazole in bulk and drug formulations via Fe(III) and Zn(II) chelates.

    PubMed

    Rahman, Nafisur; Kashif, Mohammad

    2010-03-01

    Point and interval hypothesis tests performed to validate two simple and economical, kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of chelate complex of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratio of lansoprazole to Fe(III) and Zn(II) complexes were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the range 50-200 µg ml⁻¹ (initial-rate method), 20-180 µg ml⁻¹ (fixed-time method) for lansoprazole-Fe(III) complex and 120-300 (initial-rate method), and 90-210 µg ml⁻¹ (fixed-time method) for lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.

  11. Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals

    PubMed Central

    Baker, Christa A.; Ma, Lisa; Casareale, Chelsea R.

    2016-01-01

    In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8–12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. SIGNIFICANCE STATEMENT The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. PMID:27559179

  12. Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals.

    PubMed

    Baker, Christa A; Ma, Lisa; Casareale, Chelsea R; Carlson, Bruce A

    2016-08-24

    In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8-12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. Copyright © 2016 the authors 0270-6474/16/368985-16$15.00/0.

  13. Precision, time, and cost: a comparison of three sampling designs in an emergency setting.

    PubMed

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-05-02

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 x 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 x 30 cluster survey with two alternative sampling designs: a 33 x 6 cluster design (33 clusters, 6 observations per cluster) and a 67 x 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 x 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 x 6 and 67 x 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 x 6 and 67 x 3 designs provide wider confidence intervals than the 30 x 30 design for child anthropometric indicators, the 33 x 6 and 67 x 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 x 30 design does not. For the household-level indicators tested in this study, the 67 x 3 design provides the most precise results. However, our results show that neither the 33 x 6 nor the 67 x 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 x 6 and 67 x 3 designs required substantially less time and cost than that required for the 30 x 30 design. The findings of this study suggest the 33 x 6 and 67 x 3 designs can provide useful time- and resource-saving alternatives to the 30 x 30 method of data collection in emergency settings.

  14. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    PubMed Central

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings. PMID:18454866

  15. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  16. Comparing neuronal spike trains with inhomogeneous Poisson distribution: evaluation procedure and experimental application in cases of cyclic activity.

    PubMed

    Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino

    2005-11-30

    A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding in terms of computation time.
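    As a rough illustration of the comparison baseline, the following Python sketch (not the authors' code) generates surrogate spike trains from an inhomogeneous Poisson process by thinning, using a hypothetical sinusoidal rate profile in place of the experimentally estimated firing probability density, and pools the interspike intervals into the IPP-predicted distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_ipp(rate, t_max, rate_max):
        """One inhomogeneous Poisson spike train on [0, t_max) by thinning:
        candidates drawn at the maximal rate are kept with prob. rate(t)/rate_max."""
        t, spikes = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t >= t_max:
                return np.array(spikes)
            if rng.random() < rate(t) / rate_max:
                spikes.append(t)

    # Hypothetical cyclic firing-rate profile (spikes/s), standing in for the
    # experimentally estimated firing probability density over one cycle.
    rate = lambda t: 40.0 + 30.0 * np.sin(2.0 * np.pi * t)

    # Monte Carlo surrogate: pool interspike intervals over many simulated cycles.
    isis = np.concatenate([np.diff(sample_ipp(rate, 1.0, 70.0)) for _ in range(2000)])
    pred, edges = np.histogram(isis, bins=np.arange(0.0, 0.2, 0.005), density=True)
    print("IPP-predicted ISI density, first bins:", np.round(pred[:4], 1))
    ```

    An experimental ISI histogram computed with the same bins can then be compared against this prediction to expose the refractory-period deficit at short intervals and the excess at intermediate ones.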

  17. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer having 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by the on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty of the digitizer below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.
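    As a behavioral illustration of the underlying idea (a coarse clock counter plus interpolation of the start and stop residues, the classic Nutt method), here is a Python sketch; the 250 MHz clock and 45 ps resolution are taken from the record, while the ideal clock, uniform quantization, and absence of nonlinearity are simplifying assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T_CLK = 4e-9   # 250 MHz reference clock period
    LSB = 45e-12   # interpolator resolution (45 ps)

    def measure(interval):
        """Nutt method: the main counter counts whole clock periods between the
        first clock edges after START and STOP; two interpolators digitize the
        fractional parts, here modeled as ideal quantizers with a 45 ps LSB."""
        start = rng.uniform(0.0, T_CLK)  # START arrives at a random clock phase
        stop = start + interval
        n = np.floor(stop / T_CLK) - np.floor(start / T_CLK)  # coarse count
        frac_start = np.ceil(start / T_CLK) * T_CLK - start   # START-to-edge residue
        frac_stop = np.ceil(stop / T_CLK) * T_CLK - stop      # STOP-to-edge residue
        q = lambda x: np.round(x / LSB) * LSB                 # interpolator quantization
        return n * T_CLK + q(frac_start) - q(frac_stop)

    errors = np.array([measure(1e-6) - 1e-6 for _ in range(10_000)])
    print(f"rms quantization error: {errors.std() * 1e12:.1f} ps")
    ```

    With two independent quantizers the expected rms error is roughly LSB/sqrt(6), about 18 ps, comfortably below the 70 ps RMS uncertainty reported for the full device.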

  18. The Chip-Scale Atomic Clock - Low-Power Physics Package

    DTIC Science & Technology

    2004-12-01

    36th Annual Precise Time and Time Interval (PTTI) Meeting, 339: "The Chip-Scale Atomic Clock – Low-Power Physics Package," R. Lutwak ...pdf/documents/ds-x72.pdf. [2] R. Lutwak, D. Emmons, W. Riley, and R. M. Garvey, 2003, "The Chip-Scale Atomic Clock – Coherent Population Trapping vs...," 2002, Reston, Virginia, USA (U.S. Naval Observatory, Washington, D.C.), pp. 539-550. [3] R. Lutwak, D. Emmons, T. English, and W. Riley, 2004

  19. Novel Tiltmeter for Monitoring Angle Shift In Incident Waves

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting, 559: "Novel Tiltmeter for Monitoring Angle Shift in Incident Waves," S... In the described set-up, any angle change Δθ of the incident beam results in a change of the intensity transmission of the resonator.

  20. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, alone and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (ν) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimator tested was free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
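    The headline comparison can be reproduced in spirit with a toy simulation. In the sketch below (illustrative only; the patch geometry and densities are invented), a one-start aligned systematic grid is compared with simple random allocation of the same number of transect cells over a patchy density surface:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical clustered density surface on a 100 x 100 grid of cells:
    # twelve 10 x 10 high-density habitat patches on a low background.
    density = np.full((100, 100), 0.2)
    for _ in range(12):
        r, c = rng.integers(0, 90, size=2)
        density[r:r + 10, c:c + 10] += 3.0

    def survey_mean(systematic):
        counts = rng.poisson(density)  # one realized count per cell
        if systematic:                 # one-start aligned 10 x 10 systematic grid
            r0, c0 = rng.integers(0, 10, size=2)
            sample = counts[r0::10, c0::10]
        else:                          # 100 simple-random cells
            rows, cols = rng.integers(0, 100, size=(2, 100))
            sample = counts[rows, cols]
        return sample.mean()

    for systematic in (False, True):
        means = [survey_mean(systematic) for _ in range(2000)]
        label = "systematic" if systematic else "random"
        print(f"{label:10s} variance of survey means: {np.var(means):.4f}")
    ```

    Because the aligned grid intersects every 10 x 10 patch exactly once, patch coverage is nearly constant across replicates, which is what drives the smaller variance of the systematic survey mean.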

  1. Seismic displacements monitoring for 2015 Mw 7.8 Nepal earthquake with GNSS data

    NASA Astrophysics Data System (ADS)

    Geng, T.; Su, X.; Xie, X.

    2017-12-01

    The high-rate Global Navigation Satellite System (GNSS) has been recognized as a powerful tool for monitoring ground motions generated by seismic events. The high-rate GPS and BDS data collected during the 2015 Mw 7.8 Nepal earthquake have been analyzed using two methods: the variometric approach and precise point positioning (PPP). The variometric approach is based on a time-differencing technique using only GNSS broadcast products to estimate velocity time series from tracking observations in real time, followed by an integration procedure on the velocities to derive the displacements induced by the seismic event. PPP is a positioning method that calculates precise positions at centimeter- or even millimeter-level accuracy with a single GNSS receiver using precise satellite orbit and clock products. Displacement motions were recovered with an accuracy of 2 cm at far-field stations and 5 cm at near-field stations, which experienced great ground motions and static offsets of up to 1-2 m. Multi-GNSS (GPS + BDS) processing provides higher-accuracy displacements thanks to the increased number of satellites and the improved Position Dilution of Precision (PDOP) values. Considering the time consumed by clock estimation and the precision of PPP solutions, a 5 s GNSS satellite clock interval is suggested. In addition, the GNSS-derived displacements are in good agreement with those from strong-motion data. These studies demonstrate the feasibility of capturing seismic waves in real time with multi-GNSS observations, which is of great promise for earthquake early warning and rapid hazard assessment.
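    The integration step of the variometric approach can be sketched in a few lines. Everything here is an assumption for illustration: the velocity pulse shape, the noise level, and the linear detrend (a common way to suppress drift accumulated from small velocity biases, at the cost of distorting any true static offset):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs = 1.0                              # hypothetical 1 Hz GNSS solution rate
    t = np.arange(0.0, 300.0, 1.0 / fs)

    # Stand-in for the variometric velocity output (m/s): a transient seismic
    # pulse centered at t = 150 s plus white measurement noise.
    vel = 0.02 * np.exp(-((t - 150.0) / 10.0) ** 2) + rng.normal(0.0, 1e-3, t.size)

    # Integrate velocities to displacement, then remove a linear trend to
    # suppress the drift that accumulates from small velocity biases.
    disp = np.cumsum(vel) / fs
    disp -= np.polyval(np.polyfit(t, disp, 1), t)
    print(f"peak displacement: {100.0 * disp.max():.1f} cm")
    ```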

  2. The NANOGrav Eleven-Year Data Set: High-precision timing of 48 Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Nice, David J.; NANOGrav

    2017-01-01

    Gravitational waves from sources such as supermassive black hole binary systems perturb the times-of-flight of signals traveling from pulsars to the Earth. The NANOGrav collaboration aims to measure these perturbations in high-precision millisecond pulsar timing data and thus to directly detect gravitational waves and characterize the gravitational wave sources. By observing pulsars over time spans of many years, we are most sensitive to gravitational waves at nanohertz frequencies. This work is complementary to ground-based detectors such as LIGO, which are sensitive to gravitational waves with frequencies 10 orders of magnitude higher. In this presentation we describe the NANOGrav eleven-year data set. This includes pulsar time-of-arrival measurements from 48 millisecond pulsars made with the Arecibo Observatory (for pulsars with declinations between -1 and 39 degrees) and the Green Bank Telescope (for other pulsars, with two pulsars overlapping with Arecibo). The data set consists of more than 300,000 pulse time-of-arrival measurements made in nearly 7000 unique observations (a given pulsar observed with a given telescope receiver on a given day). In the best cases, measurement precision is better than 100 nanoseconds, and in nearly all cases it is better than 1 microsecond. All pulsars in our program are observed at intervals of 3 to 4 weeks. Observations use wideband data acquisition systems and are made with two receivers at widely separated frequencies at each epoch, allowing for characterization and mitigation of the effects of the interstellar medium on signal propagation. Observation of a large number of pulsars allows for searches for correlated perturbations among the pulsar signals, which is crucial for achieving high-significance detection of gravitational waves in the face of uncorrelated noise (from gravitational waves and rotation noise) in the individual pulsars. In addition, seven pulsars are observed at weekly intervals. This increases our sensitivity to individual gravitational wave sources.

  3. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

    The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually, however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study, than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.

  4. An interval precise integration method for transient unbalance response analysis of rotor system with uncertainty

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun

    2018-07-01

    A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic on the expansion coefficients. A brief theoretical error analysis of the proposed method is provided, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM keeps good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.

  5. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  6. When Non-Dominant Is Better than Dominant: Kinesiotape Modulates Asymmetries in Timed Performance during a Synchronization-Continuation Task

    PubMed Central

    Bravi, Riccardo; Cohen, Erez J.; Martinelli, Alessio; Gottard, Anna; Minciacchi, Diego

    2017-01-01

    There is a growing consensus regarding the specialization of the non-dominant limb (NDL)/hemisphere system to employ proprioceptive feedback when executing motor actions. In a wide variety of rhythmic tasks the dominant limb (DL) has advantages in speed and timing consistency over the NDL. Recently, we demonstrated that the application of Kinesio® Tex (KT) tape, an elastic therapeutic device used for treating athletic injuries, improves significantly the timing consistency of isochronous wrist’s flexion-extensions (IWFEs) of the DL. We argued that the augmented precision of IWFEs is determined by a more efficient motor control during movements due to the extra-proprioceptive effect provided by KT. In this study, we tested the effect of KT on timing precision of IWFEs performed with the DL and the NDL, and we evaluated the efficacy of KT to counteract possible timing precision difference between limbs. Young healthy subjects performed with and without KT (NKT) a synchronization-continuation task in which they first entrained IWFEs to paced auditory stimuli (synchronization phase), and subsequently continued to produce motor responses with the same temporal interval in the absence of the auditory stimulus (continuation phase). Two inter-onset intervals (IOIs) of 550-ms and 800-ms, one within and the other beyond the boundaries of the spontaneous motor tempo, were tested. Kinematics was recorded and temporal parameters were extracted and analyzed. Our results show that limb advantages in performing proficiently rhythmic movements are not side-locked but depend also on speed of movement. The application of KT significantly reduces the timing variability of IWFEs performed at 550-ms IOI. KT not only cancels the disadvantages of the NDL but also makes it even more precise than the DL without KT. The superior sensitivity of the NDL to use the extra-sensory information provided by KT is attributed to a greater competence of the NDL/hemisphere system to rely on sensory input. The findings in this study add a new piece of information to the context of motor timing literature. The performance asymmetries here demonstrated as preferred temporal environments could reflect limb differences in the choice of sensorimotor control strategies for the production of human movement. PMID:28943842

  7. Chirped frequency transfer: a tool for synchronization and time transfer.

    PubMed

    Raupach, Sebastian M F; Grosche, Gesine

    2014-06-01

    We propose and demonstrate the phase-stabilized transfer of a chirped frequency as a tool for synchronization and time transfer. Technically, this is done by evaluating remote measurements of the transferred, chirped frequency. The gates of the frequency counters, here driven by a 10-MHz oscillation derived from a hydrogen maser, play a role analogous to the 1-pulse per second (PPS) signals usually employed for time transfer. In general, for time transfer, the gates consequently must be related to the external clock. Synchronizing observations based on frequency measurements, on the other hand, only requires a stable oscillator driving the frequency counters. In a proof of principle, we demonstrate the suppression of symmetrical delays, such as the geometrical path delay. We transfer an optical frequency chirped by around 240 kHz/s over a fiber link of around 149 km. We observe an accuracy and simultaneity, as well as a precision (Allan deviation, 18,000 s averaging interval) of the transferred frequency of around 2 × 10⁻¹⁹. We apply chirped frequency transfer to remote measurements of the synchronization between two counters' gate intervals. Here, we find a precision of around 200 ps at an estimated overall uncertainty of around 500 ps. The measurement results agree with those obtained from reference measurements, being well within the uncertainty. In the present setup, timing offsets up to 4 min can be measured unambiguously. We indicate how this range can be extended further.
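    The arithmetic that turns a chirped frequency into a timing measurement is compact. In this minimal sketch the 240 kHz/s chirp rate comes from the record, while the frequencies are expressed as offsets from an arbitrary nominal value and both counters are assumed ideal: a gate misalignment dt shifts the frequency seen at the remote end by (df/dt)*dt.

    ```python
    # Minimal sketch of gate-offset recovery from a chirped frequency.
    chirp_rate = 240e3        # Hz/s, chirp slope as in the experiment
    true_offset = 3.2e-9      # s, hypothetical misalignment between counter gates

    f_local = 12_345.678                            # Hz, offset from nominal at the local gate
    f_remote = f_local + chirp_rate * true_offset   # remote counter gates slightly later

    recovered = (f_remote - f_local) / chirp_rate
    print(f"recovered gate offset: {recovered * 1e9:.2f} ns")
    ```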

  8. Development Of Frequency Transfer Via Optical Fiber Link at NICT

    DTIC Science & Technology

    2008-12-01

    ...et al., 2006, "Comparison between frequency standards in Europe and the USA at the 10⁻¹⁵ uncertainty level," Metrologia, 43, 109-120. [4] H. Kiuchi, T... M. Hosokawa, 2008, "Evaluation of caesium atomic fountain NICT-CsF1," Metrologia, 45, 139-148. [12] M. Kumagai, H. Ito, G. Santarelli, C. Locke, J... (Figure: overlapping Allan deviation vs. averaging time [sec], free-run.) 40th Annual Precise Time and Time Interval (PTTI) Meeting

  9. Proper motion and secular variations of Keplerian orbital elements

    NASA Astrophysics Data System (ADS)

    Butkevich, Alexey G.

    2018-05-01

    High-precision observations require accurate modelling of secular changes in the orbital elements in order to extrapolate measurements over long time intervals, and to detect deviation from pure Keplerian motion caused, for example, by other bodies or relativistic effects. We consider the evolution of the Keplerian elements resulting from the gradual change of the apparent orbit orientation due to proper motion. We present rigorous formulae for the transformation of the orbit inclination, longitude of the ascending node and argument of the pericenter from one epoch to another, assuming uniform stellar motion and taking radial velocity into account. An approximate treatment, accurate to the second-order terms in time, is also given. The proper motion effects may be significant for long-period transiting planets. These theoretical results are applicable to the modelling of planetary transits and precise Doppler measurements as well as analysis of pulsar and eclipsing binary timing observations.

  10. Current Fluctuations in Stochastic Lattice Gases

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2005-01-01

    We study current fluctuations in lattice gases in the macroscopic limit, extending the dynamic approach for density fluctuations developed in previous articles. More precisely, we establish a large deviation theory for the space-time fluctuations of the empirical current which includes the previous results. We then estimate the probability of a fluctuation of the average current over a large time interval. It turns out that recent results by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)] in certain cases underestimate this probability due to the occurrence of dynamical phase transitions.

  11. Comparing the cohort design and the nested case–control design in the presence of both time-invariant and time-dependent treatment and competing risks: bias and precision

    PubMed Central

    Austin, Peter C; Anderson, Geoffrey M; Cigsar, Candemir; Gruneir, Andrea

    2012-01-01

    Purpose Observational studies using electronic administrative healthcare databases are often used to estimate the effects of treatments and exposures. Traditionally, a cohort design has been used to estimate these effects, but increasingly, studies are using a nested case–control (NCC) design. The relative statistical efficiency of these two designs has not been examined in detail. Methods We used Monte Carlo simulations to compare these two designs in terms of the bias and precision of effect estimates. We examined three different settings: (A) treatment occurred at baseline, and there was a single outcome of interest; (B) treatment was time varying, and there was a single outcome; and (C) treatment occurred at baseline, and there was a secondary event that competed with the primary event of interest. Comparisons were made of percentage bias, length of 95% confidence interval, and mean squared error (MSE) as a combined measure of bias and precision. Results In Setting A, bias was similar between designs, but the cohort design was more precise and had a lower MSE in all scenarios. In Settings B and C, the cohort design was more precise and had a lower MSE in all scenarios. In both Settings B and C, the NCC design tended to result in estimates with greater bias compared with the cohort design. Conclusions We conclude that in a range of settings and scenarios, the cohort design is superior in terms of precision and MSE. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22653805

  12. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    PubMed

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
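    The Bayesian account invoked above reduces to a standard Gaussian prior-times-likelihood combination. The sketch below uses invented interval sets and noise levels: the reproduced interval is pulled toward the mean of the current distribution in proportion to the subject's sensory noise relative to the spread of intervals, so a noisier observer ("non-musician") regresses more:

    ```python
    import numpy as np

    def bayes_reproduction(sensed_ms, prior_mean_ms, prior_var, sensory_var):
        """Posterior mean under a Gaussian prior over intervals and Gaussian
        sensory noise: a precision-weighted average of sensation and prior."""
        w = prior_var / (prior_var + sensory_var)  # weight on the sensed interval
        return w * sensed_ms + (1.0 - w) * prior_mean_ms

    intervals = np.array([528.0, 550.0, 572.0])    # hypothetical 550-ms interval set
    for label, sensory_sd in [("musician", 10.0), ("non-musician", 40.0)]:
        est = bayes_reproduction(intervals, intervals.mean(), intervals.var(),
                                 sensory_sd ** 2)
        print(f"{label:12s} reproduces {np.round(est, 1)} ms")
    ```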

  13. Balloon borne in-situ detection of OH in the stratosphere from 37 to 23 km

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpfle, R.M.; Lapson, L.B., Wennberg, P.O.; Anderson, J.G.

    1989-12-01

    The OH number density in the stratosphere has been measured over the altitude interval of 37 to 23 km at midday via balloon-borne gondola launched from Palestine, Texas on July 6, 1988. OH radicals are detected with a laser induced fluorescence instrument employing a 17 kHz repetition rate copper vapor laser pumped dye laser optically coupled to an enclosed flow, in-situ sampling chamber. OH abundances ranged from 88 ± 31 pptv (1.1 ± 0.4 × 10⁷ molec cm⁻³) in the 36 to 35 km interval to 0.9 ± 0.8 pptv (8.7 ± 7.7 × 10⁵ molec cm⁻³) in the 24 to 23 km interval. The stated uncertainty (± 1σ) includes that from both measurement precision and accuracy. Simultaneous detection of ozone and water vapor densities was carried out with separate on-board instruments.

  14. Alpha7 Nicotinic Acetylcholine Receptors and Temporal Memory: Synergistic Effects of Combining Prenatal Choline and Nicotine on Reinforcement-Induced Resetting of an Interval Clock

    ERIC Educational Resources Information Center

    Cheng, Ruey-Kuang; Meck, Warren H.; Williams, Christina L.

    2006-01-01

    We previously showed that prenatal choline supplementation could increase the precision of timing and temporal memory and facilitate simultaneous temporal processing in mature and aged rats. In the present study, we investigated the ability of adult rats to selectively control the reinforcement-induced resetting of an internal clock as a function…

  15. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266
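    A minimal sketch of gait-adaptive zero-velocity detection is given below. It is not the SPWVD-RMFI algorithm of the paper: a plain FFT peak stands in for the smoothed pseudo Wigner-Ville frequency extraction, and the linear threshold-versus-frequency law is an invented placeholder for the fitted threshold-gait-frequency relationship:

    ```python
    import numpy as np

    def detect_zvi(gyro_mag, fs, base_thresh=0.6):
        """Flag zero-velocity samples where the angular-rate magnitude stays
        below a threshold that adapts to the estimated gait frequency."""
        spectrum = np.abs(np.fft.rfft(gyro_mag - gyro_mag.mean()))
        freqs = np.fft.rfftfreq(gyro_mag.size, d=1.0 / fs)
        f_gait = freqs[spectrum.argmax()]            # dominant gait frequency (Hz)
        thresh = base_thresh * (1.0 + 0.5 * f_gait)  # assumed linear threshold law
        return gyro_mag < thresh

    fs = 100.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    # Synthetic angular-rate magnitude (rad/s): rectified swing-phase bursts.
    gyro_mag = 3.0 * np.abs(np.sin(2.0 * np.pi * 1.0 * t))
    gyro_mag += np.abs(np.random.default_rng(4).normal(0.0, 0.05, t.size))

    zvi = detect_zvi(gyro_mag, fs)
    print(f"{100.0 * zvi.mean():.0f}% of samples flagged as zero-velocity")
    ```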

  16. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
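    The two main approaches compared in this record are easy to state concretely. The following sketch, run on simulated right-skewed data standing in for analyte values, computes a non-parametric reference interval from percentiles and a parametric one after a log transform, echoing the paper's advice to transform toward Gaussianity whenever possible:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.lognormal(mean=3.0, sigma=0.25, size=240)  # skewed analyte values

    # Non-parametric: the central 95% via the 2.5th and 97.5th percentiles.
    nonpar = np.percentile(data, [2.5, 97.5])

    # Parametric: mean +/- 1.96 SD is valid only under (approximate) Gaussianity,
    # so it is applied on the log scale and back-transformed.
    logd = np.log(data)
    par = np.exp(logd.mean() + 1.96 * logd.std(ddof=1) * np.array([-1.0, 1.0]))

    print("non-parametric reference interval:", np.round(nonpar, 1))
    print("parametric (log-transformed) interval:", np.round(par, 1))
    ```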

  17. Estimating climate sensitivity from paleo-data.

    NASA Astrophysics Data System (ADS)

    Crowley, T. J.; Hegerl, G. C.

    2003-12-01

    For twenty years, estimates of climate sensitivity from the instrumental record have been between about 1.5-4.5° C for a doubling of CO2. Various efforts, most notably by J. Hansen and by M. Hoffert and C. Covey, have been made to test this range against paleo-data for the ice age and Cretaceous, yielding approximately the same range with a "best guess" sensitivity of about 2.0-3.0° C. Here we re-examine this issue with new paleo-data and also include information for the time period 1000-present. For this latter interval, formal pdfs can for the first time be calculated for paleo-data. Regardless of the time interval examined, we generally find that paleo-sensitivities still fall within the range of about 1.5-4.5° C. The primary impediments to more precise determinations involve not only uncertainties in forcings but also the paleo reconstructions. Barring a dramatic breakthrough in the reconciliation of some long-standing differences in the magnitude of paleotemperature estimates for different proxies, the range of paleo-sensitivities will continue to have this uncertainty. This range can be considered either unsatisfactory or satisfactory. It is unsatisfactory because some may consider it insufficiently precise. It is satisfactory in the sense that the range is both robust and entirely consistent with the range independently estimated from the instrumental record.

  18. Timing and Causality in the Generation of Learned Eyelid Responses

    PubMed Central

    Sánchez-Campusano, Raudel; Gruart, Agnès; Delgado-García, José M.

    2011-01-01

    The cerebellum-red nucleus-facial motoneuron (Mn) pathway has been reported as being involved in the proper timing of classically conditioned eyelid responses. This special type of associative learning serves as a model of event timing for studying the role of the cerebellum in dynamic motor control. Here, we have re-analyzed the firing activities of cerebellar posterior interpositus (IP) neurons and orbicularis oculi (OO) Mns in alert behaving cats during classical eyeblink conditioning, using a delay paradigm. The aim was to revisit the hypothesis that the IP neurons (IPns) can be considered a neuronal phase-modulating device supporting OO Mns firing with an emergent timing mechanism and an explicit correlation code during learned eyelid movements. Optimized experimental and computational tools allowed us to determine the different causal relationships (temporal order and correlation code) during and between trials. These intra- and inter-trial timing strategies expanding from sub-second range (millisecond timing) to longer-lasting ranges (interval timing) expanded the functional domain of cerebellar timing beyond motor control. Interestingly, the results supported the above-mentioned hypothesis. The causal inferences were influenced by the precise motor and pre-motor spike timing in the cause-effect interval, and, in addition, the timing of the learned responses depended on cerebellar–Mn network causality. Furthermore, the timing of CRs depended upon the probability of simulated causal conditions in the cause-effect interval and not the mere duration of the inter-stimulus interval. In this work, the close relation between timing and causality was verified. It could thus be concluded that the firing activities of IPns may be related more to the proper performance of ongoing CRs (i.e., the proper timing as a consequence of the pertinent causality) than to their generation and/or initiation. PMID:21941469

  19. On the reliability of computed chaotic solutions of non-linear differential equations

    NASA Astrophysics Data System (ADS)

    Liao, Shijun

    2009-08-01

    A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtful when t > Tc. This provides a strategy for extracting the reliable part of a given computed result. In this way, computational phenomena such as computational chaos (CC), computational periodicity (CP) and computational prediction uncertainty, which are mainly based on long-term properties of computed time-series, can be completely avoided. Using this concept, the famous conclusion 'accurate long-term prediction of chaos is impossible' should be replaced by the more precise conclusion that 'accurate prediction of chaos beyond the critical predictable time Tc is impossible'. The concept also provides a timescale for determining whether or not a particular time is long enough for a given non-linear dynamic system. Besides, the influence of data inaccuracy and of various numerical schemes on the critical predictable time is investigated in detail using symbolic computation software as a tool. A reliable chaotic solution of the Lorenz equation in the rather large interval 0 <= t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and of the computed data at each time step, which is mathematically necessary to obtain such a reliable chaotic solution over such a long time, is so high that it is physically unattainable due to the Heisenberg uncertainty principle of quantum physics. This, however, presents a so-called 'precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even macroscopic phenomena might be essentially stochastic and thus could be described more economically by probability.

  20. Winnowing sequences from a database search.

    PubMed

    Berman, P; Zhang, Z; Wolf, Y I; Koonin, E V; Miller, W

    2000-01-01

    In database searches for sequence similarity, matches to a distinct sequence region (e.g., protein domain) are frequently obscured by numerous matches to another region of the same sequence. In order to cope with this problem, algorithms are developed to discard redundant matches. One model for this problem begins with a list of intervals, each with an associated score; each interval gives the range of positions in the query sequence that align to a database sequence, and the score is that of the alignment. If interval I is contained in interval J, and I's score is less than J's, then I is said to be dominated by J. The problem is then to identify each interval that is dominated by at least K other intervals, where K is a given level of "tolerable redundancy." An algorithm is developed to solve the problem in O(N log N) time and O(N*) space, where N is the number of intervals and N* is a precisely defined value that never exceeds N and is frequently much smaller. This criterion for discarding database hits has been implemented in the Blast program, as illustrated herein with examples. Several variations and extensions of this approach are also described.
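    The domination criterion is compact enough to show directly. This sketch is a quadratic-time reference implementation of the stated definition, not the O(N log N) algorithm of the paper:

    ```python
    def dominated(intervals, k):
        """Return indices of intervals dominated by at least k others.
        Interval i = (start, end, score) is dominated by j when j contains i
        and j's score is strictly higher."""
        flagged = []
        for i, (si, ei, sci) in enumerate(intervals):
            n_dom = sum(1 for j, (sj, ej, scj) in enumerate(intervals)
                        if j != i and sj <= si and ei <= ej and sci < scj)
            if n_dom >= k:
                flagged.append(i)
        return flagged

    # Three nested hits to one region of the query plus one separate hit.
    hits = [(10, 90, 300), (15, 80, 120), (20, 70, 50), (200, 260, 95)]
    print(dominated(hits, 1))  # [1, 2]: both lie inside a higher-scoring hit
    print(dominated(hits, 2))  # [2]: only the weakest is dominated twice
    ```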

  1. Estimating time-varying RSA to examine psychophysiological linkage of marital dyads.

    PubMed

    Gates, Kathleen M; Gatzke-Kopp, Lisa M; Sandsten, Maria; Blandon, Alysia Y

    2015-08-01

    One of the primary tenets of polyvagal theory dictates that parasympathetic influence on heart rate, often estimated by respiratory sinus arrhythmia (RSA), shifts rapidly in response to changing environmental demands. The current standard analytic approach of aggregating RSA estimates across time to arrive at one value fails to capture this dynamic property within individuals. By utilizing recent methodological developments that enable precise RSA estimates at smaller time intervals, we demonstrate the utility of computing time-varying RSA for assessing psychophysiological linkage (or synchrony) in husband-wife dyads using time-locked data collected in a naturalistic setting. © 2015 Society for Psychophysiological Research.

  2. Carrier-phase time transfer.

    PubMed

    Larson, K M; Levine, J

    1999-01-01

    We have conducted several time-transfer experiments using the phase of the GPS carrier rather than the code, as is done in current GPS-based time-transfer systems. Atomic clocks were connected to geodetic GPS receivers; we then used the GPS carrier-phase observations to estimate relative clock behavior at 6-minute intervals. GPS carrier-phase time transfer is more than an order of magnitude more precise than GPS common-view time transfer and agrees, within the experimental uncertainty, with two-way satellite time-transfer measurements for a 2400 km baseline. GPS carrier-phase time transfer has a stability of 100 ps, which translates into a frequency uncertainty of about 2 × 10⁻¹⁵ for an averaging time of 1 day.

  3. Some new results on irradiation characteristics of synthetic quartz crystals and their application to radiation hardening

    NASA Technical Reports Server (NTRS)

    Bahadur, H.; Parshad, R.

    1983-01-01

    The paper reports some new results on irradiation characteristics of synthetic quartz crystals and their application to radiation hardening. The present results show how the frequency shift in quartz crystals can be influenced by heat processing prior to irradiation and how this procedure can lead to radiation hardening for obtaining precise frequencies and time intervals from quartz oscillators in space.

  4. Stochastic simulation and analysis of biomolecular reaction networks

    PubMed Central

    Frazier, John M; Chushak, Yaroslav; Foy, Brent

    2009-01-01

    Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time-averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
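    As a concrete reference point for the issues listed above, here is a minimal Gillespie exact SSA for a toy one-gene birth-death model of mRNA (the rate constants are invented), including the time-weighted averaging of molecule numbers that issue (1) turns on:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def gillespie(x, t_max, k1=2.0, k2=0.1):
        """Gillespie's exact SSA for a birth-death model: transcription at
        rate k1 and first-order mRNA decay at rate k2 * x."""
        t, traj = 0.0, [(0.0, x)]
        while t < t_max:
            propensities = np.array([k1, k2 * x])
            total = propensities.sum()
            t += rng.exponential(1.0 / total)  # time to next reaction event
            x += 1 if rng.random() < propensities[0] / total else -1
            traj.append((t, x))
        return traj

    times, counts = zip(*gillespie(x=0, t_max=500.0))
    # Time-weighted average: each state is weighted by how long it persisted,
    # rather than naively averaging over reaction events.
    tw_mean = np.sum(np.array(counts[:-1]) * np.diff(times)) / (times[-1] - times[0])
    print(f"time-weighted mean copy number: {tw_mean:.1f} (analytic k1/k2 = 20.0)")
    ```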

  5. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (20th) Held in Vienna, Virginia on 29 November-1 December 1988

    DTIC Science & Technology

    1988-12-01

    PERFORMANCE IN REAL TIME* Dr. James A. Barnes, Austron, Boulder, Co. Abstract: Kalman filters and ARIMA models provide optimum control and evaluation tech... estimates of the model parameters (e.g., the phis and thetas for an ARIMA model). These model parameters are often evaluated in a batch mode on a... random walk FM, and linear frequency drift. In ARIMA models, this is equivalent to an ARIMA(0,2,2) with a non-zero average second difference. Using
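    To make the ARIMA(0,2,2) remark concrete, the sketch below simulates a clock phase record containing white FM, random walk FM, and linear frequency drift, and fits an ARIMA(0,2,2) with statsmodels. The noise magnitudes are invented and the fit is only illustrative of the model class, not of Barnes' real-time procedure:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(7)
    n = 2000  # phase samples at a fixed sampling interval

    # Clock phase (time error, ns): white FM integrates to a random walk in
    # phase, random walk FM to an integrated random walk, and linear frequency
    # drift to a quadratic. The drift gives the second difference a non-zero mean.
    white_fm = np.cumsum(rng.normal(0.0, 1.0, n))
    rw_fm = np.cumsum(np.cumsum(rng.normal(0.0, 0.001, n)))
    drift = 1e-4 * np.arange(n) ** 2
    phase = white_fm + rw_fm + drift

    # Second differencing renders the series stationary, matching ARIMA(0,2,2).
    fit = ARIMA(phase, order=(0, 2, 2)).fit()
    print(fit.params)  # two MA coefficients plus the innovation variance
    ```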

  6. Proceedings of the Annual NASA and Department of Defense Precise Time and Time Interval (PITI) Planning Meeting (5th), Held at Goddard Space Flight Center on December 4-6, 1973

    DTIC Science & Technology

    1972-01-01

    and police stations in Washington, and since 1877 to Western Union for nation-wide distribution. In 1904 the first operational radio time signals were...to do the job with the accuracy and low cost demanded in these days of tight operating budgets. In closing, I would like to acknowledge the fine...signal received from a celestial source is recorded at each antenna on magnetic tape, and the tapes are cross-correlated by matching the streams of

  7. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of Satellite Clock Bias (SCB) prediction accuracy is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-processed based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to carry out short-term prediction experiments, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction, and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
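    For scale, the quadratic polynomial baseline mentioned in the comparison fits in a few lines; the coefficients, noise level, and 15-minute sampling below are invented rather than taken from IGS products:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.arange(96) * 900.0  # one day of clock bias sampled every 15 min (s)
    bias = 120.0 + 2e-3 * t + 1e-8 * t ** 2 + rng.normal(0.0, 0.05, t.size)  # ns

    # Fit the past day with a quadratic, then predict the next six hours.
    coeffs = np.polyfit(t, bias, deg=2)
    t_future = t[-1] + np.arange(1, 25) * 900.0
    predicted = np.polyval(coeffs, t_future)
    truth = 120.0 + 2e-3 * t_future + 1e-8 * t_future ** 2
    rmse = np.sqrt(np.mean((predicted - truth) ** 2))
    print(f"6-hour prediction RMS error: {rmse:.3f} ns")
    ```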

  8. Proceedings of Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting (23rd) held in Pasadena, California on December 3-5, 1991

    DTIC Science & Technology

    1991-12-05

    second overshoot. Automatic steering was turned off for 9 days following the initial undershoot (48120 to 48129) and turned off from 48160 to the end...

  9. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (21st), Held in Redondo Beach, California on November 28-30, 1989

    DTIC Science & Technology

    1989-01-01

    Duke University; Mr. Philip E. Talley, Aerospace. NOTE: NON-GOVERNMENT OFFICERS OF THE PTTI ARE AUTOMATICALLY MEMBERS OF THE PTTI ADVISORY BOARD FOR THE... 2763 Hughes Aircraft Space and Communications; Mr. Philip E. Talley, S12/W322, P.O. Box 92919, The Aerospace Corporation, Los Angeles, California 90009; 550... and Mr. G.J. Trudeau for the mechanical construction of the masers, Mr. M. Kotler for assistance with the mechanical design, and Mr. W. Cazemier and

  10. A comparative evaluation of the marginal adaptation of a thermoplastic resin, a light cured wax and an inlay casting wax on stone dies: An in vitro study.

    PubMed

    Gopalan, Reji P; Nair, Vivek V; Harshakumar, K; Ravichandran, R; Lylajam, S; Viswambaran, Prasanth

    2018-01-01

    Different pattern materials do not produce copings with satisfactory marginal accuracy when used on stone dies at varying time intervals. The purpose of this study was to evaluate and compare the vertical marginal accuracy of patterns formed from three materials, namely thermoplastic resin, light-cured wax, and inlay casting wax, at three time intervals of 1, 12, and 24 h. A master die (zirconia abutment mimicking a prepared permanent maxillary central incisor) and a metal sleeve (direct metal laser sintering crown #11) were fabricated. A total of 30 stone dies were obtained from the master die. Ten patterns were made from each of the three materials and stored off the die at room temperature. The vertical marginal gaps were measured using a digital microscope at 1, 12, and 24 h after reseating with gentle finger pressure. The results revealed a statistically significant difference in the marginal adaptation of the three materials at all three time intervals. Light-cured wax was found to be the most accurate at all time intervals, followed by thermoplastic resin and inlay casting wax. Furthermore, there was a significant difference between all pairs of materials. The change in vertical marginal gap from 1 to 24 h between thermoplastic resin and light-cured wax was not statistically significant. The marginal adaptation of all three materials was well within the acceptable range of 25-70 μm. The resin pattern materials studied showed significantly less dimensional change than inlay casting wax on storage at the 1, 12, and 24 h time intervals. They may be employed in situations where high precision and delayed investing are expected.

  11. A microcomputer system for on-line study of atrioventricular node accommodation.

    PubMed

    Jenkins, J R; Clemo, H F; Belardinelli, L

    1987-11-01

    An automated on-line programmable stimulator and interval measurement system was developed to study atrioventricular node (AVN) accommodation. This dedicated microcomputer system measures and stores the stimulus-to-His bundle (S-H) interval from His bundle electrogram (HBE) recordings. Interval measurements for each beat are accurate to within 500 microseconds. This user-controlled system has been used to stimulate at any rate up to 6.5 Hz and to measure intervals up to 125 ms in isolated perfused guinea pig hearts. A built-in timer-reset mechanism prevents failure of the system in the absence of a His potential (i.e., 2:1 AV block). It may be modified for use in clinical studies or other experimental systems and has the ability to measure other physiological intervals. The system provides the precision in pacing and accuracy in the measurement of AVN conduction time that is necessary for meaningful analysis of AVN accommodation, with a simplicity of design and use that is not available in previously described systems. Furthermore, this computer system can be used not only in studies involving AV conduction, but also in any setting where programmed stimulation and interval measurement and recording need to be performed simultaneously.

  12. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
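
    The effect the authors describe, a moderately informative prior restoring precision when trial data are sparse, can be seen in a one-parameter toy model. The Python sketch below (not the authors' MTC model) computes a grid posterior for a between-trial standard deviation under a flat prior versus an assumed half-normal prior and compares the resulting 95% credible intervals; the data and prior scale are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        tau_true, v = 0.2, 0.05  # between-trial SD and within-trial variance
        y = rng.normal(0.0, np.sqrt(tau_true**2 + v), size=4)  # four sparse trials

        tau = np.linspace(1e-3, 2.0, 2000)  # grid over the between-trial SD
        dtau = tau[1] - tau[0]
        # log-likelihood of the trial effects under N(0, tau^2 + v)
        ll = sum(-0.5 * np.log(tau**2 + v) - 0.5 * yi**2 / (tau**2 + v) for yi in y)

        def cri(prior):
            """95% credible interval for tau under the given prior weights."""
            post = np.exp(ll - ll.max()) * prior
            post /= post.sum() * dtau
            cdf = np.cumsum(post) * dtau
            return np.interp([0.025, 0.975], cdf, tau)

        vague = np.ones_like(tau)                      # flat prior on tau
        informative = np.exp(-0.5 * (tau / 0.3) ** 2)  # half-normal(0.3), assumed
        print("vague prior 95% CrI:      ", cri(vague).round(3))
        print("informative prior 95% CrI:", cri(informative).round(3))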

  13. Neutron Star Spin Measurements and Dense Matter with LOFT

    NASA Technical Reports Server (NTRS)

    Strohmayer, Tod

    2011-01-01

    Observations over the last decade with RXTE have begun to reveal the X-ray binary progenitors of the fastest spinning neutron stars presently known. Detection and study of the spin rates of binary neutron stars has important implications for constraining the nature of dense matter in neutron star interiors, as both the maximum spin rate and the maximum mass of neutron stars are set by the equation of state. Precision pulse timing of accreting neutron star binaries can enable mass constraints. Particularly promising is the combination of pulse and eclipse timing, as, for example, in systems like Swift J1749.4-2807. With its greater sensitivity, LOFT will enable deeper searches for the spin periods of neutron stars, both during persistent outburst intervals and thermonuclear X-ray bursts, and enable more precise modeling of detected pulsations. I will explore the anticipated impact of LOFT on spin measurements and its potential for constraining dense matter in neutron stars.

  14. Precision Spectroscopy in Cold Molecules: The Lowest Rotational Interval of He2 + and Metastable He2

    NASA Astrophysics Data System (ADS)

    Jansen, Paul; Semeria, Luca; Hofer, Laura Esteban; Scheidegger, Simon; Agner, Josef A.; Schmutz, Hansjürg; Merkt, Frédéric

    2015-09-01

    Multistage Zeeman deceleration was used to generate a slow, dense beam of translationally cold He2 molecules in the metastable a 3Σu+ state. Precision measurements of the Rydberg spectrum of these molecules at high values of the principal quantum number n have been carried out. The spin-rotational state selectivity of the Zeeman-deceleration process was exploited to reduce the spectral congestion, minimize residual Doppler shifts, resolve the Rydberg series around n = 200, and assign their fine structure. The ionization energy of metastable He2 and the lowest rotational interval of the X+ 2Σu+ (ν+ = 0) ground state of 4He2+ have been determined with unprecedented precision and accuracy by Rydberg-series extrapolation. Comparison with ab initio predictions of the rotational energy level structure of 4He2+ [W.-C. Tung, M. Pavanello, and L. Adamowicz, J. Chem. Phys. 136, 104309 (2012)] enabled us to quantify the magnitude of relativistic and quantum-electrodynamics contributions to the fundamental rotational interval of He2+.

  15. Aircraft Configuration and Flight Crew Compliance with Procedures While Conducting Flight Deck Based Interval Management (FIM) Operations

    NASA Technical Reports Server (NTRS)

    Shay, Rick; Swieringa, Kurt A.; Baxley, Brian T.

    2012-01-01

    Flight deck based Interval Management (FIM) applications using ADS-B are being developed to improve both the safety and capacity of the National Airspace System (NAS). FIM is expected to improve the safety and efficiency of the NAS by giving pilots the technology and procedures to precisely achieve an interval behind the preceding aircraft by a specific point. Concurrently but independently, Optimized Profile Descents (OPD) are being developed to help reduce fuel consumption and noise; however, the range of speeds available when flying an OPD results in a decrease in the delivery precision of aircraft to the runway. This requires the addition of a spacing buffer between aircraft, reducing system throughput. FIM addresses this problem by providing pilots with speed guidance to achieve a precise interval behind another aircraft, even while flying optimized descents. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) human-in-the-loop experiment employed 24 commercial pilots to explore the use of FIM equipment to conduct spacing operations behind two aircraft arriving to parallel runways, while flying an OPD during high-density operations. This paper describes the impact of variations in pilot operations, in particular configuring the aircraft, complying with FIM operating procedures, and responding to changes of the FIM speed. An example of the displayed FIM speeds being used incorrectly by a pilot is also discussed. Finally, this paper examines the relationship between achieving airline operational goals for individual aircraft and the need for ATC to deliver aircraft to the runway with greater precision. The results show that aircraft can fly an OPD and conduct FIM operations to dependent parallel runways, enabling operational goals to be achieved efficiently while maintaining system throughput.

  16. An Inverse Method to Estimate the Root Water Uptake Source-Sink Term in Soil Water Transport Equation under the Effect of Superabsorbent Polymer

    PubMed Central

    Liao, Renkuan; Yang, Peiling; Wu, Wenyong; Ren, Shumei

    2016-01-01

    The widespread use of superabsorbent polymers (SAPs) in arid regions improves the efficiency of local land and water use. However, SAPs’ repeated absorption and release of water has periodic and unstable effects on both soil’s physical and chemical properties and on the growth of plant roots, which complicates modeling of water movement in SAP-treated soils. In this paper, we propose a model of soil water movement for SAP-treated soils. The residence time of SAP in the soil and the duration of the experiment were considered as the same parameter t. This simplifies previously proposed models in which the residence time of SAP in the soil and the experiment’s duration were considered as two independent parameters. Numerical testing was carried out on the inverse method of estimating the source/sink term of root water uptake in the model of soil water movement under the effect of SAP. The test results show that time interval, hydraulic parameters, test error, and instrument precision had a significant influence on the stability of the inverse method, while time step, layering of soil, and boundary conditions had relatively smaller effects. A comprehensive analysis of the method’s stability, calculation, and accuracy suggests that the proposed inverse method applies if the following conditions are satisfied: the time interval is between 5 d and 17 d; the time step is between 1000 and 10000; the test error is ≥ 0.9; the instrument precision is ≤ 0.03; and the rate of soil surface evaporation is ≤ 0.6 mm/d. PMID:27505000

  17. Cognitive assessment of mice strains heterozygous for cell-adhesion genes reveals strain-specific alterations in timing.

    PubMed

    Gallistel, C R; Tucci, Valter; Nolan, Patrick M; Schachner, Melitta; Jakovcevski, Igor; Kheifets, Aaron; Barboza, Luendro

    2014-03-05

    We used a fully automated system for the behavioural measurement of physiologically meaningful properties of basic mechanisms of cognition to test two strains of heterozygous mutant mice, Bfc (batface) and L1, and their wild-type littermate controls. Both of the target genes are involved in the establishment and maintenance of synapses. We find that the Bfc heterozygotes show reduced precision in their representation of interval duration, whereas the L1 heterozygotes show increased precision. These effects are functionally specific, because many other measures made on the same mice are unaffected, namely: the accuracy of matching temporal investment ratios to income ratios in a matching protocol, the rate of instrumental and classical conditioning, the latency to initiate a cued instrumental response, the trials on task and the impulsivity in a switch paradigm, the accuracy with which mice adjust timed switches to changes in the temporal constraints, the days to acquisition, and mean onset time and onset variability in the circadian anticipation of food availability.

  18. Cognitive assessment of mice strains heterozygous for cell-adhesion genes reveals strain-specific alterations in timing

    PubMed Central

    Gallistel, C. R.; Tucci, Valter; Nolan, Patrick M.; Schachner, Melitta; Jakovcevski, Igor; Kheifets, Aaron; Barboza, Luendro

    2014-01-01

    We used a fully automated system for the behavioural measurement of physiologically meaningful properties of basic mechanisms of cognition to test two strains of heterozygous mutant mice, Bfc (batface) and L1, and their wild-type littermate controls. Both of the target genes are involved in the establishment and maintenance of synapses. We find that the Bfc heterozygotes show reduced precision in their representation of interval duration, whereas the L1 heterozygotes show increased precision. These effects are functionally specific, because many other measures made on the same mice are unaffected, namely: the accuracy of matching temporal investment ratios to income ratios in a matching protocol, the rate of instrumental and classical conditioning, the latency to initiate a cued instrumental response, the trials on task and the impulsivity in a switch paradigm, the accuracy with which mice adjust timed switches to changes in the temporal constraints, the days to acquisition, and mean onset time and onset variability in the circadian anticipation of food availability. PMID:24446498

  19. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
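
    For readers new to the technique, the generic percentile bootstrap is short to implement. The sketch below shows the idea in Python (the article itself works in R), using an illustrative skewed sample and the median as the statistic of interest.

        import numpy as np

        rng = np.random.default_rng(42)
        sample = rng.exponential(scale=2.0, size=50)  # skewed toy sample

        B = 10_000  # bootstrap replicates
        boot_medians = np.array([
            np.median(rng.choice(sample, size=sample.size, replace=True))
            for _ in range(B)
        ])
        lo, hi = np.percentile(boot_medians, [2.5, 97.5])
        print(f"median = {np.median(sample):.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")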

  20. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  1. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  2. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  3. Highly accurate pulse-per-second timing distribution over optical fibre network using VCSEL side-mode injection

    NASA Astrophysics Data System (ADS)

    Wassin, Shukree; Isoe, George M.; Gamatham, Romeo R. G.; Leitch, Andrew W. R.; Gibbon, Tim B.

    2017-01-01

    Precise and accurate timing signals distributed between a centralized location and several end-users are widely used in both metro-access and speciality networks for Coordinated Universal Time (UTC), GPS satellite systems, banking, very long baseline interferometry, and science projects such as the SKA radio telescope. Such systems utilize time and frequency technology to ensure phase coherence among data signals distributed across an optical fibre network. For accurate timing requirements, precise time intervals should be measured between successive pulses. In this paper we describe a novel, all-optical method for quantifying one-way propagation times and phase perturbations in the fibre length, using pulse-per-second (PPS) signals. The approach utilizes side-mode injection of a 1550 nm 10 Gbps vertical cavity surface emitting laser (VCSEL) at the remote end. A 125 μs one-way time of flight was accurately measured for 25 km of G655 fibre. Since the approach is all-optical, it avoids measurement inaccuracies introduced by electro-optical conversion phase delays. Furthermore, the implementation uses cost-effective VCSEL technology and is suited to a flexible range of network architectures, supporting a number of end-users conducting measurements at the remote end.
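
    The quoted one-way flight time can be sanity-checked from first principles, since light in fibre travels at c/n. The snippet below assumes a typical group index of about 1.468 for single-mode fibre at 1550 nm (a value not given in the abstract) and yields roughly 122 μs for 25 km, consistent with the reported 125 μs to within a few percent.

        c = 299_792_458.0  # speed of light in vacuum, m/s
        n_group = 1.468    # assumed group index of the fibre at 1550 nm
        L = 25e3           # 25 km span
        print(f"one-way delay ~ {L * n_group / c * 1e6:.1f} us")  # ~122 us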

  4. Enhanced ionization of the Martian nightside ionosphere during solar energetic particle events

    NASA Astrophysics Data System (ADS)

    Nemec, F.; Morgan, D. D.; Dieval, C.; Gurnett, D. A.; Futaana, Y.

    2013-12-01

    The nightside ionosphere of Mars is highly variable and very irregular, controlled to a great extent by the configuration of the crustal magnetic fields. The ionospheric reflections observed by the MARSIS radar sounder on board the Mars Express spacecraft in this region are typically oblique (reflection by a distant feature), so that they cannot be used to determine the peak altitude precisely. Nevertheless, the peak electron density can be in principle readily determined. However, in more than 90% of measurements the peak electron densities are too low to be detected. We focus on the time intervals of solar energetic particle (SEP) events. One may expect high energy particle precipitation into the nightside ionosphere to increase the electron density there. Thus, comparison of characteristics between SEP/no-SEP time intervals is important to understand the formation mechanism of the nightside ionosphere. The time intervals of SEP events are determined using the increase in the background counts recorded by the ion sensor (IMA) of the ASPERA-3 particle instrument on board Mars Express. Then we use MARSIS measurements to determine how much the nightside ionosphere is enhanced during these time intervals. We show that the peak electron densities during these periods are large enough to be detected in more than 30% of measurements, while the reflections from the ground almost entirely disappear, indicating that the nightside electron densities are tremendously increased as compared to the normal nightside conditions. The influence of various parameters on the formation of the nightside ionosphere is thoroughly discussed.

  5. An experimental search strategy retrieves more precise results than PubMed and Google for questions about medical interventions

    PubMed Central

    Dylla, Daniel P.; Megison, Susan D.

    2015-01-01

    Objective. We compared the precision of a search strategy designed specifically to retrieve randomized controlled trials (RCTs) and systematic reviews of RCTs with search strategies designed for broader purposes. Methods. We designed an experimental search strategy that automatically revised searches up to five times by using increasingly restrictive queries as long as at least 50 citations were retrieved. We compared the ability of the experimental and alternative strategies to retrieve studies relevant to 312 test questions. The primary outcome, search precision, was defined for each strategy as the proportion of relevant, high quality citations among the first 50 citations retrieved. Results. The experimental strategy had the highest median precision (5.5%; interquartile range [IQR]: 0%–12%) followed by the narrow strategy of the PubMed Clinical Queries (4.0%; IQR: 0%–10%). The experimental strategy found the most high quality citations (median 2; IQR: 0–6) and was the strategy most likely to find at least one high quality citation (73% of searches; 95% confidence interval 68%–78%). All comparisons were statistically significant. Conclusions. The experimental strategy performed the best in all outcomes, although all strategies had low precision. PMID:25922798

  6. Pulse rate variability compared with Heart Rate Variability in children with and without sleep disordered breathing.

    PubMed

    Dehkordi, Parastoo; Garde, Ainara; Karlen, Walter; Wensley, David; Ansermino, J Mark; Dumont, Guy A

    2013-01-01

    Heart Rate Variability (HRV), the variation of time intervals between heartbeats, is one of the most promising and widely used quantitative markers of autonomic activity. Traditionally, HRV is measured as the series of instantaneous cycle intervals obtained from the electrocardiogram (ECG). In this study, we investigated the estimation of variation in heart rate from a photoplethysmography (PPG) signal, called pulse rate variability (PRV), and assessed its accuracy as an estimate of HRV in children with and without sleep disordered breathing (SDB). We recorded raw PPGs from 72 children using the Phone Oximeter, an oximeter connected to a mobile phone. Full polysomnography including ECG was simultaneously recorded for each subject. We used correlation and Bland-Altman analysis to compare the parameters of HRV and PRV between the two groups of children. Significant correlation (r > 0.90, p < 0.05) and close agreement were found between HRV and PRV for mean intervals, the standard deviation of intervals (SDNN), and the root mean square of successive interval differences (RMSSD). However, Bland-Altman analysis showed a large divergence for the LF/HF ratio parameter. In addition, children with SDB had depressed SDNN and RMSSD and elevated LF/HF in comparison to children without SDB. In conclusion, PRV provides an accurate estimate of HRV in time-domain analysis but does not provide a precise estimate of the frequency-domain parameters.
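
    SDNN and RMSSD, the two time-domain measures compared in the study, are direct to compute from an inter-beat interval series. The sketch below uses a synthetic interval series as a stand-in for the ECG- or PPG-derived beat-to-beat intervals (in ms).

        import numpy as np

        def sdnn(ibi_ms):
            """Standard deviation of the inter-beat intervals."""
            return np.std(ibi_ms, ddof=1)

        def rmssd(ibi_ms):
            """Root mean square of successive interval differences."""
            return np.sqrt(np.mean(np.diff(ibi_ms) ** 2))

        rng = np.random.default_rng(7)
        ibi = 800 + 40 * rng.standard_normal(300)  # ~75 bpm with variability
        print(f"SDNN = {sdnn(ibi):.1f} ms, RMSSD = {rmssd(ibi):.1f} ms")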

  7. The posterior parietal cortex (PPC) mediates anticipatory motor control.

    PubMed

    Krause, Vanessa; Weber, Juliane; Pollok, Bettina

    2014-01-01

    Flexible and precisely timed motor control is based on functional interaction within a cortico-subcortical network. The left posterior parietal cortex (PPC) is thought to be crucial for anticipatory motor control by sensorimotor feedback matching. The intention of the present study was to disentangle the specific relevance of the left PPC for anticipatory motor control using transcranial direct current stimulation (tDCS), since a causal link remains to be established. Anodal vs. cathodal tDCS was applied for 10 min over the left PPC in 16 right-handed subjects in separate sessions. Left primary motor cortex (M1) tDCS served as a control condition and was applied in an additional 15 subjects. Prior to and immediately after tDCS, subjects performed three tasks demanding temporal motor precision with respect to an auditory stimulus: sensorimotor synchronization as a measure of anticipatory motor control, interval reproduction, and simple reaction. Left PPC tDCS affected right-hand synchronization but not simple reaction times. Motor anticipation was deteriorated by anodal tDCS, while cathodal tDCS yielded the reverse effect. The variability of interval reproduction was increased by anodal left M1 tDCS, whereas it was reduced by cathodal tDCS. No significant effects on simple reaction times were found. The present data support the hypothesis that the left PPC is causally involved in right-hand anticipatory motor control, exceeding pure motor implementation as processed by M1 and possibly indicating subjective timing. Since M1 tDCS particularly affects motor implementation, the observed PPC effects are not likely to be explained by alterations of motor-cortical excitability.

  8. Dissociating movement from movement timing in the rat primary motor cortex.

    PubMed

    Knudsen, Eric B; Powers, Marissa E; Moxon, Karen A

    2014-11-19

    Neural encoding of the passage of time to produce temporally precise movements remains an open question. Neurons in several brain regions across different experimental contexts encode estimates of temporal intervals by scaling their activity in proportion to the interval duration. In motor cortex the degree to which this scaled activity relies upon afferent feedback and is guided by motor output remains unclear. Using a neural reward paradigm to dissociate neural activity from motor output before and after complete spinal transection, we show that temporally scaled activity occurs in the rat hindlimb motor cortex in the absence of motor output and after transection. Context-dependent changes in the encoding are plastic, reversible, and re-established following injury. Therefore, in the absence of motor output and despite a loss of afferent feedback, thought necessary for timed movements, the rat motor cortex displays scaled activity during a broad range of temporally demanding tasks similar to that identified in other brain regions.

  9. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
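
    The paper's interval procedure is analytic; as a rough, assumption-light companion for the same estimand, the sketch below computes a percentile-bootstrap interval for a standardized mean difference whose standardizer (the root mean of the two group variances) tolerates unequal variances. The data are simulated, and this is not the author's method.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(0.5, 1.0, size=40)  # group 1
        y = rng.normal(0.0, 2.0, size=25)  # group 2, unequal variance

        def smd(a, b):
            """Standardized mean difference with a heteroscedasticity-friendly
            standardizer: the root mean of the two group variances."""
            return (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

        B = 10_000
        boot = np.array([
            smd(rng.choice(x, x.size, replace=True), rng.choice(y, y.size, replace=True))
            for _ in range(B)
        ])
        print(f"SMD = {smd(x, y):.3f}, 95% CI = {np.percentile(boot, [2.5, 97.5]).round(3)}")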

  10. Evidence for impulsivity in the Spontaneously Hypertensive Rat drawn from complementary response-withholding tasks

    PubMed Central

    Sanabria, Federico; Killeen, Peter R

    2008-01-01

    Background The inability to inhibit reinforced responses is a defining feature of ADHD associated with impulsivity. The Spontaneously Hypertensive Rat (SHR) has been extolled as an animal model of ADHD, but there is no clear experimental evidence of inhibition deficits in SHR. Attempts to demonstrate these deficits may have suffered from methodological and analytical limitations. Methods We provide a rationale for using two complementary response-withholding tasks to doubly dissociate impulsivity from motivational and motor processes. In the lever-holding task (LHT), continual lever depression was required for a minimum interval. Under a differential reinforcement of low rates schedule (DRL), a minimum interval was required between lever presses. Both tasks were studied using SHR and two normotensive control strains, Wistar-Kyoto (WKY) and Long Evans (LE), over an overlapping range of intervals (1 – 5 s for LHT and 5 – 60 s for DRL). Lever-holding and DRL performance was characterized as the output of a mixture of two processes, timing and iterative random responding; we call this account of response inhibition the Temporal Regulation (TR) model. In the context of TR, impulsivity was defined as a bias toward premature termination of the timed intervals. Results The TR model provided an accurate description of LHT and DRL performance. On the basis of TR parameter estimates, SHRs were more impulsive than LE rats across tasks and target times. WKY rats produced substantially shorter timed responses in the lever-holding task than in DRL, suggesting a motivational or motor deficit. The precision of timing by SHR, as measured by the variance of their timed intervals, was excellent, flouting expectations from ADHD research. Conclusion This research validates the TR model of response inhibition and supports SHR as an animal model of ADHD-related impulsivity. It indicates, however, that SHR's impulse-control deficit is not caused by imprecise timing. The use of ad hoc impulsivity metrics and of WKY as control strain for SHR impulsivity are called into question. PMID:18261220

  11. Interval Management with Spacing to Parallel Dependent Runways (IMSPIDR) Experiment and Results

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Swieringa, Kurt A.; Capron, William R.

    2012-01-01

    An area in aviation operations that may offer an increase in efficiency is the use of continuous descent arrivals (CDA), especially during dependent parallel runway operations. However, variations in aircraft descent angle and speed can cause inaccuracies in estimated time of arrival calculations, requiring an increase in the size of the buffer between aircraft. This in turn reduces airport throughput and limits the use of CDAs during high-density operations, particularly to dependent parallel runways. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) concept uses a trajectory-based spacing tool onboard the aircraft to achieve, by the runway, an air traffic control assigned spacing interval behind the previous aircraft. This paper describes the first experiment on this concept at NASA Langley and its results. Pilots flew CDAs to the Dallas-Fort Worth airport using airspeed calculations from the spacing tool to achieve either a Required Time of Arrival (RTA) or an Interval Management (IM) spacing interval at the runway threshold. Results indicate flight crews were able to land aircraft on the runway within a mean of 2 seconds, and less than 4 seconds standard deviation, of the air traffic control assigned time, even in the presence of forecast wind error and large time delay. Statistically significant differences in delivery precision and number of speed changes as a function of stream position were observed; however, there was no trend to the difference and the error did not increase during the operation. Two areas the flight crews indicated as not acceptable were the additional number of speed changes required during the wind shear event and issuing an IM clearance via data link while at low altitude. A number of refinements and future spacing algorithm capabilities were also identified.

  12. Temporal precision and the capacity of auditory-verbal short-term memory.

    PubMed

    Gilbert, Rebecca A; Hitch, Graham J; Hartley, Tom

    2017-12-01

    The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel "rehearsal-probe" task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval. After an unpredictable delay, a tone prompted report of the item being rehearsed at that moment. An initial experiment showed cyclic distributions of item responses over time, with peaks preserving serial order and broad, overlapping tails. The spread of the response distributions increased with additional memory load and correlated negatively with participants' auditory digit spans. A second study replicated the negative correlation and demonstrated its specificity to AVSTM by controlling for differences in visuo-spatial STM and nonverbal IQ. The results are consistent with the idea that a common resource underpins both the temporal precision and capacity of AVSTM. The rehearsal-probe task may provide a valuable tool for investigating links between temporal processing and AVSTM capacity in the context of speech and language abilities.

  13. Computed-tomography modeled polyether ether ketone (PEEK) implants in revision cranioplasty.

    PubMed

    O'Reilly, Eamon B; Barnett, Sam; Madden, Christopher; Welch, Babu; Mickey, Bruce; Rozen, Shai

    2015-03-01

    Traditional cranioplasty methods focus on pre-operative or intraoperative hand molding. Recently, CT-guided polyether ether ketone (PEEK) plate reconstruction has enabled precise, time-saving reconstruction. This case series aims to show a single-institution experience with PEEK cranioplasty as an effective, safe, precise, reusable, and time-saving technique in large, complex cranial defects. We performed a 6-year retrospective review of cranioplasty procedures performed at our affiliated hospitals using PEEK implants. A total of nineteen patients underwent twenty-two cranioplasty procedures; pre-operative, intra-operative, and post-operative data were collected. Time interval from injury to loss of primary cranioplasty averaged 57.7 months (0-336 mo); 4.0 months (n=10, range 0-19) in cases of trauma. Time interval from primary cranioplasty loss to PEEK cranioplasty was 11.8 months for infection (n=11, range 6-25 mo), 12.2 months for trauma (n=5, range 2-27 mo), and 0.3 months for cosmetic or functional reconstructions (n=3, range 0-1). Similar surgical techniques were used in all patients. Drains were placed in 11/22 procedures. Varying techniques were used in skin closure, including adjacent tissue transfer (4/22) and free tissue transfer (1/22). The PEEK plate required modification in four procedures. Three patients had reoperation following PEEK plate reconstruction. Cranioplasty utilizing a CT-guided PEEK plate allows easy inset, anatomic accuracy, mirror-image aesthetics, simplification of complex 3D defects, and potential time savings. Additionally, it is easily manipulated in the operating room and can be re-utilized in cases of intraoperative course changes or infection.

  14. Utility of a point-of-care device for rapid determination of prothrombin time in trauma patients: a preliminary study.

    PubMed

    David, Jean-Stéphane; Levrat, Albrice; Inaba, Kenji; Macabeo, Caroline; Rugeri, Lucia; Fontaine, Oriane; Cheron, Aurélie; Piriou, Vincent

    2012-03-01

    Rapid and accurate determination of prothrombin time in trauma patients may help achieve faster control of bleeding-induced coagulopathy. The goal of this prospective observational study was to investigate the accuracy of bedside measurement of prothrombin time by means of a point-of-care device (INRatio) in trauma patients. Fifty blood samples were drawn at admission and during the acute care phase for standard coagulation assays (prothrombin time, International Normalized Ratio [INR], and fibrinogen) and INRatio testing (INR(A)) from 48 trauma patients. Standard coagulation assays were available after a mean of 66 minutes. Median Injury Severity Score was 18, and 16 patients (33%) had a coagulopathy. Significant correlation was found between INR and INR(A) (r: 0.93, 95% confidence interval: 0.87-0.96). The mean difference (bias) for INR was 0.00, and the standard deviation (precision) of the difference was 0.78. However, in cases of decreased hemoglobin (<10 gr · L(-1)) and fibrinogen (<1.5 gr · L(-1)), bias and precision were increased. To predict the need for fresh frozen plasma transfusion (INR > 1.5), an INR(A) cutoff value of 1.3 resulted in a sensitivity of 92% and a specificity of 79%. The area under the receiver operating characteristic curve was 0.946 (95% confidence interval: 0.845-0.982). INRatio may be a useful device in the management of trauma patients with ongoing or suspected coagulopathy that may help to save at least 60 minutes in the process of obtaining a prothrombin time result. It may allow earlier detection of coagulopathy and, together with vital signs and hemoglobin, may help to guide fresh frozen plasma transfusion.

  15. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for a custom test or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) for particle physics instrumentation has been validated by the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with comparably fine delay steps are therefore desirable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its structure is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range up to 8 s.
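
    The hybrid idea, a coarse counter spanning a wide range in whole clock periods plus a tapped-delay fine stage supplying sub-period steps, can be sketched numerically. In the Python model below the clock frequency is invented; only the 11 ps step and 8 s range echo the abstract.

        T_CLK = 5e-9    # assumed 200 MHz coarse clock period, s
        T_TAP = 11e-12  # fine delay step from the tapped delay line, s

        def program_interval(target_s):
            """Split a requested interval into coarse clock counts plus fine taps."""
            coarse = int(target_s // T_CLK)                    # whole clock periods
            fine = round((target_s - coarse * T_CLK) / T_TAP)  # sub-period taps
            generated = coarse * T_CLK + fine * T_TAP
            return coarse, fine, generated

        for target in (3.3e-9, 1.234567e-3, 7.999):
            c, f, g = program_interval(target)
            print(f"target {target:.9g} s -> {c} clocks + {f} taps, error {g - target:+.2e} s")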

  16. GLONASS orbit/clock combination in VNIIFTRI

    NASA Astrophysics Data System (ADS)

    Bezmenov, I.; Pasynok, S.

    2015-08-01

    An algorithm and a program for GLONASS satellite orbit/clock combination based on daily precise orbits submitted by several Analytic Centers were developed. Some theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, provided the RMS values of the satellite orbits supplied by the Analytic Centers over a long time interval are commensurable, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any single Analytic Center.

  17. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.

  18. An Intrinsic Role of Beta Oscillations in Memory for Time Estimation.

    PubMed

    Wiener, Martin; Parikh, Alomi; Krakow, Arielle; Coslett, H Branch

    2018-05-22

    The neural mechanisms underlying time perception are of vital importance to a comprehensive understanding of behavior and cognition. Recent work has suggested a supramodal role for beta oscillations in measuring temporal intervals. However, the precise function of beta oscillations and whether their manipulation alters timing has yet to be determined. To accomplish this, we first re-analyzed two, separate EEG datasets and demonstrate that beta oscillations are associated with the retention and comparison of a memory standard for duration. We next conducted a study of 20 human participants using transcranial alternating current stimulation (tACS), over frontocentral cortex, at alpha and beta frequencies, during a visual temporal bisection task, finding that beta stimulation exclusively shifts the perception of time such that stimuli are reported as longer in duration. Finally, we decomposed trialwise choice data with a drift diffusion model of timing, revealing that the shift in timing is caused by a change in the starting point of accumulation, rather than the drift rate or threshold. Our results provide evidence for the intrinsic involvement of beta oscillations in the perception of time, and point to a specific role for beta oscillations in the encoding and retention of memory for temporal intervals.
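
    The reported mechanism, a shifted starting point of accumulation rather than a changed drift rate, has a simple signature in a toy accumulator model of temporal bisection: raising the start point moves the whole psychometric function toward "long" responses. The simulation below is illustrative only, not the authors' fitted drift diffusion model, and all parameters are invented.

        import numpy as np

        rng = np.random.default_rng(11)

        def p_long(duration, z, v=1.0, sigma=0.3, n=5000):
            """P('long'): evidence drifts at rate v for the stimulus duration,
            starting at z, with diffusion noise; respond 'long' if the end
            state exceeds the expected evidence at the 0.5 s midpoint."""
            end = z + v * duration + sigma * np.sqrt(duration) * rng.standard_normal(n)
            return np.mean(end > v * 0.5)

        durations = np.linspace(0.2, 0.8, 7)  # anchors at 0.2 s and 0.8 s
        for z in (0.0, 0.1):                  # baseline vs raised starting point
            probs = [p_long(d, z) for d in durations]
            print(f"z = {z}: p(long) =", np.round(probs, 2))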

  19. Short-Term Depression, Temporal Summation, and Onset Inhibition Shape Interval Tuning in Midbrain Neurons

    PubMed Central

    Baker, Christa A.

    2014-01-01

    A variety of synaptic mechanisms can contribute to single-neuron selectivity for temporal intervals in sensory stimuli. However, it remains unknown how these mechanisms interact to establish single-neuron sensitivity to temporal patterns of sensory stimulation in vivo. Here we address this question in a circuit that allows us to control the precise temporal patterns of synaptic input to interval-tuned neurons in behaviorally relevant ways. We obtained in vivo intracellular recordings under multiple levels of current clamp from midbrain neurons in the mormyrid weakly electric fish Brienomyrus brachyistius during stimulation with electrosensory pulse trains. To reveal the excitatory and inhibitory inputs onto interval-tuned neurons, we then estimated the synaptic conductances underlying responses. We found short-term depression in excitatory and inhibitory pathways onto all interval-tuned neurons. Short-interval selectivity was associated with excitation that depressed less than inhibition at short intervals, as well as temporally summating excitation. Long-interval selectivity was associated with long-lasting onset inhibition. We investigated tuning after separately nullifying the contributions of temporal summation and depression, and found the greatest diversity of interval selectivity among neurons when both mechanisms were at play. Furthermore, eliminating the effects of depression decreased sensitivity to directional changes in interval. These findings demonstrate that variation in depression and summation of excitation and inhibition helps to establish tuning to behaviorally relevant intervals in communication signals, and that depression contributes to neural coding of interval sequences. This work reveals for the first time how the interplay between short-term plasticity and temporal summation mediates the decoding of temporal sequences in awake, behaving animals. PMID:25339741

  20. ASDTIC: A feedback control innovation

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Schoenfeld, A. D.

    1972-01-01

    The ASDTIC (Analog Signal to Discrete Time Interval Converter) control subsystem provides precise output control of high performance aerospace power supplies. The key to ASDTIC operation is that it stably controls output by sensing output energy change as well as output magnitude. The ASDTIC control subsystem and control module were developed to improve power supply performance during static and dynamic input voltage and output load variations, to reduce output voltage or current regulation due to component variations or aging, to maintain a stable feedback control with variations in the loop gain or loop time constants, and to standardize the feedback control subsystem for power conditioning equipment.

  1. Sampling and Control Circuit Board for an Inertial Measurement Unit

    NASA Technical Reports Server (NTRS)

    Chelmins, David T (Inventor); Sands, Obed (Inventor); Powis, Richard T., Jr. (Inventor)

    2016-01-01

    A circuit board that serves as a control and sampling interface to an inertial measurement unit ("IMU") is provided. The circuit board is also configured to interface with a local oscillator and an external trigger pulse. The circuit board is further configured to receive the external trigger pulse from an external source that time aligns the local oscillator and initiates sampling of the inertial measurement device for data at precise time intervals based on pulses from the local oscillator. The sampled data may be synchronized by the circuit board with other sensors of a navigation system via the trigger pulse.

  2. ASDTIC - A feedback control innovation.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Schoenfeld, A. D.

    1972-01-01

    The ASDTIC (analog signal to discrete time interval converter) control subsystem provides precise output control of high performance aerospace power supplies. The key to ASDTIC operation is that it stably controls output by sensing output energy change as well as output magnitude. The ASDTIC control subsystem and control module were developed to improve power supply performance during static and dynamic input voltage and output load variations, to reduce output voltage or current regulation due to component variations or aging, to maintain a stable feedback control with variations in the loop gain or loop time constants, and to standardize the feedback control subsystem for power conditioning equipment.

  3. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (9th), Held at NASA Goddard Space Flight Center, November 29 - December 1, 1977

    DTIC Science & Technology

    1978-03-01

    receiver. The principal characteristics of such a device are its short, medium, and long term stability. The spectral purity of the laser is...imperfection of a plastic, inhomogeneous, poorly-understood Earth, then problems begin to arise. The rotation axis of the crust is no longer fixed with...at NRL, the sample was manipulated with cleaned tweezers and placed on fresh, clean aluminum foil; plastic gloves were used also in the handling of

  4. An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.

    PubMed

    Camien, M N; Warner, R C

    1984-05-01

    A method employing high-speed infrasonic mixing for obtaining timed samples for following the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliter each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.

  5. Application of a temperature-dependent fluorescent dye (Rhodamine B) to the measurement of radiofrequency radiation-induced temperature changes in biological samples.

    PubMed

    Chen, Yuen Y; Wood, Andrew W

    2009-10-01

    We have applied a non-contact method for studying the temperature changes produced by radiofrequency (RF) radiation specifically to small biological samples. A temperature-dependent fluorescent dye, Rhodamine B, imaged by laser scanning confocal microscopy (LSCM), was used to do this. The results were calibrated against real-time temperature measurements from fiber optic probes, with a calibration factor of 3.4% intensity change per degree C and a reproducibility of ±6%. This non-contact method provided two-dimensional and three-dimensional images of temperature change and distribution in biological samples, at a spatial resolution of a few micrometers, with an estimated absolute precision of around 1.5 degrees C and a differential precision of 0.4 degrees C. Temperature rise within tissue was found to be non-uniform. Estimates of specific absorption rate (SAR) from absorbed power measurements were greater than those estimated from the rate of temperature rise, measured at 1 min intervals, probably because this interval is too long to permit accurate estimation of the initial temperature rise following the start of RF exposure. Future experiments will aim to explore this.

  6. Measurement of the D/H, 18O/16O, and 17O/16O Isotope Ratios in Water by Laser Absorption Spectroscopy at 2.73 μm

    PubMed Central

    Wu, Tao; Chen, Weidong; Fertein, Eric; Masselin, Pascal; Gao, Xiaoming; Zhang, Weijun; Wang, Yingjian; Koeth, Johannes; Brückner, Daniela; He, Xingdao

    2014-01-01

    A compact isotope ratio laser spectrometry (IRLS) instrument was developed for simultaneous measurement of the D/H, 18O/16O, and 17O/16O isotope ratios in water by laser absorption spectroscopy at 2.73 μm. Special attention is paid to the spectral data processing and to the implementation of adaptive Kalman filtering to improve the measurement precision. A reduction of up to 3-fold in the standard deviation of the isotope ratio determination was obtained by using Fourier filtering to remove undulation structure from the spectrum baseline. Application of Kalman filtering enables isotope ratio measurement at 1 s time intervals with a precision (<1‰) better than that obtained by conventional 30 s averaging, while maintaining a fast system response. The implementation of the filter is described in detail and its effects on the accuracy and precision of the isotope ratio measurements are investigated. PMID:24854363
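
    A minimal random-walk-plus-noise Kalman filter of the general kind described (the paper's adaptive implementation is more elaborate) shows how filtering 1 s readings reduces scatter while tracking the state. The noise variances and data below are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        true_ratio = 0.002
        obs = true_ratio + 2e-5 * rng.standard_normal(600)  # simulated 1 s readings

        q, r = 1e-12, (2e-5) ** 2  # assumed process and measurement variances
        x, p = obs[0], r           # initial state estimate and its variance
        est = []
        for z in obs:
            p += q                 # predict: the true ratio drifts as a random walk
            k = p / (p + r)        # Kalman gain
            x += k * (z - x)       # update with the new reading
            p *= 1 - k
            est.append(x)

        print(f"raw std = {obs.std():.2e}, filtered std = {np.std(est[100:]):.2e}")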

  7. Using Aoristic Analysis to Link Remote and Ground-Level Phenological Observations

    NASA Astrophysics Data System (ADS)

    Henebry, G. M.

    2013-12-01

    Phenology is about observing events in time and space. With the advent of publicly accessible geospatial datastreams and easy-to-use mapping software, specifying where an event occurs is much less of a challenge than it was just two decades ago. In contrast, specifying when an event occurs remains a nontrivial function of a population of organismal responses, sampling interval, compositing period, and reporting precision. I explore how aoristic analysis can be used to analyze spatiotemporal events for which the location is known to acceptable levels of precision but the temporal coordinates are poorly specified or only partially bounded. Aoristic analysis was developed in the late 1990s in the field of quantitative criminology to leverage temporally imprecise geospatial data in crime reports. Here I demonstrate how aoristic analysis can be used to link remotely sensed observations of land surface phenology to ground-level observations of organismal phenophase transitions. Explicit representation of the windows of temporal uncertainty with aoristic weights enables cross-validation exercises and forecasting efforts to avoid false precision.
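
    The core of aoristic analysis is compact: an event known only to lie within a time window contributes fractional weight to every bin the window overlaps, rather than one falsely precise timestamp. The sketch below applies this weighting to a few invented day-of-year phenophase windows.

        import numpy as np

        bins = np.arange(0, 365 + 7, 7)                # weekly bins over a year
        events = [(100, 116), (105, 106), (140, 168)]  # (start, end) day-of-year windows

        weights = np.zeros(bins.size - 1)
        for t0, t1 in events:
            # overlap of [t0, t1] with each bin; every event contributes weight 1 in total
            overlap = np.clip(np.minimum(bins[1:], t1) - np.maximum(bins[:-1], t0), 0, None)
            weights += overlap / (t1 - t0)

        peak = np.argmax(weights)
        print(f"peak aoristic weight falls in days {bins[peak]}-{bins[peak + 1]}")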

  8. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
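
    The estimation step at the heart of such experiments can be sketched simply: under the usual model, a mutant's log frequency ratio against the reference grows linearly in time with slope equal to the selection coefficient. The code below simulates time-sampled counts, fits the slope by least squares, and attaches a rough parametric-resampling interval; it is a schematic of the general approach, not the paper's estimator or its confidence-interval formula.

        import numpy as np

        rng = np.random.default_rng(9)
        s_true, depth = 0.08, 50_000
        times = np.array([0.0, 2.0, 4.0, 6.0, 8.0])  # sampled generations
        p0 = 0.01
        freq = p0 * np.exp(s_true * times)           # mutant lineage grows at rate s
        freq /= freq + (1 - p0)                      # convert to relative frequency
        counts = rng.binomial(depth, freq)           # sequencing sampling noise

        log_ratio = np.log(counts / (depth - counts))
        s_hat = np.polyfit(times, log_ratio, 1)[0]   # slope = selection coefficient

        # parametric resampling of the counts for a rough 95% interval
        boot = [np.polyfit(times, np.log(c / (depth - c)), 1)[0]
                for c in rng.binomial(depth, counts / depth, size=(2000, times.size))]
        print(f"s_hat = {s_hat:.3f}, 95% CI = {np.percentile(boot, [2.5, 97.5]).round(3)}")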

  9. Possibility of New Precise Measurements of Muonic Helium Atom HFS at J-PARC MUSE

    NASA Astrophysics Data System (ADS)

    Strasser, P.; Shimomura, K.; Torii, H. A.

    We propose the next generation of precision microwave spectroscopy measurements of the ground-state hyperfine structure (HFS) of the muonic helium atom. The HFS interval is a sensitive tool to test three-body atomic systems and bound-state QED theory, as well as for a precise direct determination of the negative muon magnetic moment and hence its mass. Previous measurements performed in the 1980s at PSI and LAMPF had uncertainties dominated by statistical errors. The new high-intensity pulsed negative muon beam at J-PARC MUSE gives an opportunity to improve these measurements by nearly two orders of magnitude for the HFS interval, and almost tenfold for the negative muon mass, thus providing a more precise test of CPT invariance and a determination of the negative counterpart of the anomalous g-factor for the existing BNL muon g-2 experiment. Both measurements at zero field and at high magnetic field are considered. An overview of the different aspects of these new muonic helium HFS measurements is presented.

  10. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; ...

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. The minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  11. Examining Exposure Assessment in Shift Work Research: A Study on Depression Among Nurses.

    PubMed

    Hall, Amy L; Franche, Renée-Louise; Koehoorn, Mieke

    2018-02-13

    Coarse exposure assessment and assignment is a common issue facing epidemiological studies of shift work. Such measures ignore a number of exposure characteristics that may impact on health, increasing the likelihood of biased effect estimates and masked exposure-response relationships. To demonstrate the impacts of exposure assessment precision in shift work research, this study investigated relationships between work schedule and depression in a large survey of Canadian nurses. The Canadian 2005 National Survey of the Work and Health of Nurses provided the analytic sample (n = 11450). Relationships between work schedule and depression were assessed using logistic regression models with high, moderate, and low-precision exposure groupings. The high-precision grouping described shift timing and rotation frequency, the moderate-precision grouping described shift timing, and the low-precision grouping described the presence/absence of shift work. Final model estimates were adjusted for the potential confounding effects of demographic and work variables, and bootstrap weights were used to generate sampling variances that accounted for the survey sample design. The high-precision exposure grouping model showed the strongest relationships between work schedule and depression, with increased odds ratios [ORs] for rapidly rotating (OR = 1.51, 95% confidence interval [CI] = 0.91-2.51) and undefined rotating (OR = 1.67, 95% CI = 0.92-3.02) shift workers, and a decreased OR for depression in slow rotating (OR = 0.79, 95% CI = 0.57-1.08) shift workers. For the low- and moderate-precision exposure grouping models, weak relationships were observed for all work schedule categories (OR range 0.95 to 0.99). Findings from this study support the need to consider and collect the data required for precise and conceptually driven exposure assessment and assignment in future studies of shift work and health. Further research into the effects of shift rotation frequency on depression is also recommended. © The Author(s) 2018. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
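
    To make the exposure-precision point concrete, the sketch below simulates an outcome that depends only on rapid shift rotation, then fits logistic models with a collapsed (shift/no shift) coding and a detailed schedule coding; the dilution of the odds ratio under coarse coding mirrors the masking described above. Data, risk levels, and variable names are invented, not drawn from the survey.

```python
# Simulated illustration of exposure-precision masking; all data and
# variable names are invented, not from the survey.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
schedule = rng.choice(["day", "slow_rot", "rapid_rot"], size=n)
# Simulated truth: only rapid rotation raises depression risk
p = np.where(schedule == "rapid_rot", 0.12, 0.08)
df = pd.DataFrame({
    "depressed": rng.binomial(1, p),
    "high_prec": schedule,                                    # rotation kept
    "low_prec": np.where(schedule == "day", "day", "shift"),  # collapsed
})
for col in ("low_prec", "high_prec"):
    model = smf.logit(f"depressed ~ C({col})", data=df).fit(disp=0)
    print(col, np.exp(model.params).round(2).to_dict())       # odds ratios
```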

  12. Quantifying the statistical complexity of low-frequency fluctuations in semiconductor lasers with optical feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.

    Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon's entropy and the Martin, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser's pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser's dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
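
    A minimal sketch of the ordinal-pattern step described above: interdropout intervals are mapped to rank-order patterns of length D, and the normalized Shannon entropy of the pattern distribution is computed (the Martin-Plastino-Rosso complexity measure would be built on this same distribution). The interval data here are synthetic, not the experimental series.

```python
# Sketch of the ordinal-pattern method: normalized permutation entropy
# of a series of interdropout intervals (synthetic data here).
import numpy as np
from collections import Counter
from math import log, factorial

def permutation_entropy(series, D=3):
    patterns = [tuple(np.argsort(series[i:i + D]))
                for i in range(len(series) - D + 1)]
    counts = Counter(patterns)
    n = sum(counts.values())
    H = -sum((c / n) * log(c / n) for c in counts.values())
    return H / log(factorial(D))   # normalized to [0, 1]

rng = np.random.default_rng(0)
intervals = rng.exponential(1.0, size=2000)   # synthetic interdropout times
print(f"normalized permutation entropy: {permutation_entropy(intervals):.3f}")
```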

  13. Precision measurement of the three 2(3)P(J) helium fine structure intervals.

    PubMed

    Zelevinsky, T; Farkas, D; Gabrielse, G

    2005-11-11

    The three 2(3)P fine structure intervals of 4He are measured at an improved accuracy that is sufficient to test two-electron QED theory and to determine the fine structure constant alpha to 14 parts in 10^9. The more accurate determination of alpha, to a precision higher than attained with the quantum Hall and Josephson effects, awaits the reconciliation of two inconsistent theoretical calculations now being compared term by term. A low-pressure helium discharge presents experimental uncertainties quite different from those of earlier measurements and allows direct measurements of light pressure shifts.

  14. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessments of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units less frequently at equally spaced time points are favored due to their ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for common scenarios.

  15. Choosing a reliability inspection plan for interval censored data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lu; Anderson-Cook, Christine Michaela

    Reliability test plans are important for producing precise and accurate assessments of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units less frequently at equally spaced time points are favored due to their ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for common scenarios.

  16. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and of stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance rather than indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years (at power = 0.80) was not possible, regardless of technique, without high sampling effort (48% of the study site). Detecting a 25% decline was possible after 15 years, but still required high sampling effort. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments, and they highlight the difficulties of using abundance measures for monitoring bull trout populations.
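
    The flavor of such a power analysis can be captured in a few lines: simulate noisy abundance indices under a fixed 25% decline and count how often a trend test detects it over 5 versus 15 years. The observation-error level, the log-linear trend test, and alpha below are illustrative assumptions, not the authors' protocol.

```python
# Illustrative Monte Carlo power analysis: chance of detecting a 25%
# decline with a log-linear trend test. CV, alpha, and the test itself
# are assumptions, not the authors' protocol.
import numpy as np
from scipy import stats

def power_to_detect(decline=0.25, years=5, cv=0.30, n_sims=2000,
                    alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    t = np.arange(years + 1)
    true = 1000.0 * (1.0 - decline) ** (t / years)   # smooth 25% decline
    hits = 0
    for _ in range(n_sims):
        obs = rng.normal(true, cv * true)            # noisy abundance index
        res = stats.linregress(t, np.log(np.clip(obs, 1.0, None)))
        hits += (res.pvalue < alpha) and (res.slope < 0)
    return hits / n_sims

for yrs in (5, 15):
    print(f"{yrs} years: power ≈ {power_to_detect(years=yrs):.2f}")
```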

  17. Electrochemical Impedance Spectrometer with an Environmental Chamber for Rapid Screening of New Precise Copolymers

    DTIC Science & Technology

    2017-10-07

    polymerization to make linear polyethylenes with carboxylic acid groups at precise intervals along the polymer. Precise acid-containing polymers provide...acid polyethylene and polymerized ionic liquids based on cyclopropenium. The instrument is also used to study polymer segmental dynamics...Advances in batteries, fuel cells, and permselective membranes are materials limited. New acid- and ion-containing polymers must be designed and

  18. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus.

    PubMed

    Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest.

  19. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus

    PubMed Central

    Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest. PMID:26536231

  20. Demonstration of Johnson noise thermometry with all-superconducting quantum voltage noise source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp; Urano, Chiharu; Maezawa, Masaaki

    We present a Johnson noise thermometry (JNT) system based on an integrated quantum voltage noise source (IQVNS) that has been fully implemented using superconducting circuit technology. To enable precise measurement of Boltzmann's constant, an IQVNS chip was designed to produce intrinsically calculable pseudo-white noise to calibrate the JNT system. On-chip real-time generation of pseudo-random codes via simple circuits produced pseudo-voltage noise with a harmonic tone interval of less than 1 Hz, which was one order of magnitude finer than the harmonic tone interval of conventional quantum voltage noise sources. We estimated a value for Boltzmann's constant experimentally by performing JNT measurements at the temperature of the triple point of water using the IQVNS chip.

  1. Studies on time of death estimation in the early post mortem period -- application of a method based on eyeball temperature measurement to human bodies.

    PubMed

    Kaliszan, Michał

    2013-09-01

    This paper presents a verification of the thermodynamic model allowing estimation of the time of death (TOD) by calculating the post mortem interval (PMI) from a single eyeball temperature measurement at the death scene. The study was performed on 30 cases with known PMI, ranging from 1 h 35 min to 5 h 15 min, using pin probes connected to a high-precision electronic thermometer (Dostmann-electronic). The measured eye temperatures ranged from 20.2 to 33.1°C. Rectal temperature was measured at the same time and ranged from 32.8 to 37.4°C. Ambient temperatures, which ranged from -1 to 24°C, environmental conditions (still air to light wind), and the amount of hair on the head were also recorded in every case. PMI was calculated using a formula based on Newton's law of cooling, previously derived and successfully tested in comprehensive studies on pigs and a few human cases. Thanks to the significantly faster post mortem decrease of eye temperature, the residual or nonexistent plateau effect in the eye, and the practically negligible influence of body mass, TOD in the human death cases could be estimated with good accuracy. The highest TOD estimation errors for post mortem intervals up to around 5 h were 1 h 16 min, 1 h 14 min, and 1 h 03 min in three of the 30 cases; for the remaining 27 cases the error was no more than 47 min. The mean error for all 30 cases was ±31 min. This indicates that the proposed method offers good precision in the early post mortem period, with an accuracy of ±1 h at a 95% confidence interval. On the basis of the presented method, TOD can also be calculated at the death scene with the use of a proposed portable electronic device (TOD-meter). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
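
    A sketch of the single-measurement cooling calculation that underlies the method: Newton's law of cooling, T(t) = Ta + (T0 - Ta)·exp(-k·t), inverted for t. The rate constant k and the initial eye temperature T0 used here are placeholders for illustration, not the calibrated values from the study.

```python
# Newton's-law-of-cooling sketch; k and T0 are illustrative placeholders,
# not the calibrated values from the study.
from math import log

def pmi_hours(T_eye, T_ambient, T0=36.5, k=0.35):
    """Post mortem interval (h) from one eyeball temperature reading,
    from T(t) = Ta + (T0 - Ta) * exp(-k * t) solved for t."""
    if not (T_ambient < T_eye <= T0):
        raise ValueError("temperature outside the usable cooling range")
    return -log((T_eye - T_ambient) / (T0 - T_ambient)) / k

# Example: eye at 28.0 °C, ambient 18.0 °C
print(f"estimated PMI: {pmi_hours(28.0, 18.0):.1f} h")
```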

  2. Gaussian signal relaxation around spin echoes: Implications for precise reversible transverse relaxation quantification of pulmonary tissue at 1.5 and 3 Tesla.

    PubMed

    Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R

    2017-05-01

    To characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic exponential (Gaussian) model over the commonly used linear exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit, quantified by the reduced chi-squared χ²red, and the precision of the fitted relaxation time, by means of its confidence interval, were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation, with lower χ²red by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration, with the precision of reversible transverse relaxation time measurements likewise increased by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
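
    The model comparison described above can be sketched as two nonlinear fits to the signal around a spin echo: a linear exponential S0·exp(-|t|/T2') versus a Gaussian S0·exp(-(t/T2,G')²). The synthetic data and parameter values below are illustrative only, not from the paper.

```python
# Two-model comparison on synthetic echo data; parameters illustrative.
import numpy as np
from scipy.optimize import curve_fit

def lin_exp(t, S0, T2p):
    return S0 * np.exp(-np.abs(t) / T2p)

def gauss(t, S0, T2g):
    return S0 * np.exp(-(t / T2g) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(-3e-3, 3e-3, 61)            # seconds, centered on the echo
data = gauss(t, 1.0, 1.0e-3) + rng.normal(0, 0.01, t.size)   # Gaussian truth

for model, name in ((lin_exp, "linear exp"), (gauss, "Gaussian")):
    p, _ = curve_fit(model, t, data, p0=(1.0, 1e-3))
    rss = np.sum((data - model(t, *p)) ** 2)
    print(f"{name:10s}: T2' = {p[1] * 1e3:.2f} ms, RSS = {rss:.4f}")
```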

  3. Construction and testing of a simple and economical soil greenhouse gas automatic sampler

    USGS Publications Warehouse

    Ginting, D.; Arnold, S.L.; Arnold, N.S.; Tubbs, R.S.

    2007-01-01

    Quantification of soil greenhouse gas emissions requires considerable sampling to account for spatial and/or temporal variation. With manual sampling, additional personnel are often not available to sample multiple sites within a narrow time interval. The objectives were to construct an automatic gas sampler and to compare the accuracy and precision of automatic versus manual sampling. The automatic sampler was tested with carbon dioxide (CO2) fluxes that mimicked the range of CO2 fluxes during a typical corn-growing season in eastern Nebraska. Gas samples were drawn from the chamber at 0, 5, and 10 min manually and with the automatic sampler. The three samples drawn with the automatic sampler were transferred to pre-vacuumed vials after 1 h; thus the samples in the syringe barrels stayed connected with the increasing CO2 concentration in the chamber. The automatic sampler sustains accuracy and precision in greenhouse gas sampling while improving time efficiency and reducing labor stress. Copyright © Taylor & Francis Group, LLC.
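
    The flux computation behind such chamber measurements reduces to the slope of concentration versus time scaled by chamber volume over footprint area. The sketch below mirrors the 0/5/10-min sampling mentioned in the text; the chamber dimensions and unit handling are illustrative assumptions.

```python
# Chamber flux sketch: slope of CO2 concentration vs. time, scaled by
# chamber volume over footprint area. Dimensions are assumptions.
import numpy as np

def co2_flux(times_min, conc_ppm, volume_m3, area_m2):
    """Flux in ppm·m/min; convert to mass units with the gas density."""
    slope, _ = np.polyfit(np.asarray(times_min, float),
                          np.asarray(conc_ppm, float), 1)
    return slope * volume_m3 / area_m2

# Hypothetical chamber: 0.02 m3 over 0.07 m2, samples at 0, 5, 10 min
print(f"{co2_flux([0, 5, 10], [400.0, 460.0, 523.0], 0.02, 0.07):.2f} ppm·m/min")
```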

  4. Offset-frequency locking of extended-cavity diode lasers for precision spectroscopy of water at 1.38 μm.

    PubMed

    Gianfrani, Livio; Castrillo, Antonio; Fasci, Eugenio; Galzerano, Gianluca; Casa, Giovanni; Laporta, Paolo

    2010-10-11

    We describe a continuous-wave diode laser spectrometer for water-vapour precision spectroscopy at 1.38 μm. The spectrometer is based upon a simple scheme for offset-frequency locking of a pair of extended-cavity diode lasers that makes it possible to achieve unprecedented accuracy and reproducibility in measuring molecular absorption. When locked to the master laser with an offset frequency of 1.5 GHz, the slave laser exhibits residual frequency fluctuations of 1 kHz over a time interval of 25 minutes, for a 1-s integration time. The slave laser could be continuously tuned up to 3 GHz, the scan showing relative deviations from linearity below the 10^-6 level. Simultaneously, a capture range of the order of 1 GHz was obtained. Quantitative spectroscopy was also demonstrated by accurately determining relevant spectroscopic parameters for the 2(2,1)→2(2,0) line of the H2(18)O ν1+ν3 band at 1384.6008 nm.

  5. Levelling Profiles and a GPS Network to Monitor the Active Folding and Faulting Deformation in the Campo de Dalias (Betic Cordillera, Southeastern Spain)

    PubMed Central

    Marín-Lechado, Carlos; Galindo-Zaldívar, Jesús; Gil, Antonio José; Borque, María Jesús; de Lacy, María Clara; Pedrera, Antonio; López-Garrido, Angel Carlos; Alfaro, Pedro; García-Tortosa, Francisco; Ramos, Maria Isabel; Rodríguez-Caderot, Gracia; Rodríguez-Fernández, José; Ruiz-Constán, Ana; de Galdeano-Equiza, Carlos Sanz

    2010-01-01

    The Campo de Dalias is an area of significant seismicity associated with the active tectonic deformation of the southern boundary of the Betic Cordillera. A non-permanent GPS network was installed to monitor, for the first time, the fault- and fold-related activity. In addition, two high-precision levelling profiles were measured twice over a one-year period across the Balanegra Fault, one of the most active faults recognized in the area. The absence of significant movement of the main fault surface suggests seismogenic behaviour. The possible recurrence interval may be between 100 and 300 years. Repeated GPS and high-precision levelling monitoring of the fault surface over a long time period may help to determine future fault behaviour with regard to the existence (or not) of a creep component, the accumulation of elastic deformation before faulting, and the implications of the fold-fault relationship. PMID:22319309

  6. Novel method for high-throughput phenotyping of sleep in mice.

    PubMed

    Pack, Allan I; Galante, Raymond J; Maislin, Greg; Cater, Jacqueline; Metaxas, Dimitris; Lu, Shan; Zhang, Lin; Von Smith, Randy; Kay, Timothy; Lian, Jie; Svenson, Karen; Peters, Luanne L

    2007-01-17

    Assessment of sleep in mice currently requires initial implantation of chronic electrodes for assessment of the electroencephalogram (EEG) and electromyogram (EMG), followed by time to recover from surgery. Hence, it is not ideal for high-throughput screening. To address this deficiency, a method of assessment of sleep and wakefulness in mice has been developed based on assessment of activity/inactivity, either by digital video analysis or by the breaking of infrared beams in the mouse cage. It is based on the rule that any episode of continuous inactivity of ≥40 s is predicted to be sleep. The method gives excellent agreement in C57BL/6J male mice with simultaneous assessment of sleep by EEG/EMG recording. The average agreement over 8,640 10-s epochs in 24 h is 92% (n = 7 mice), with agreement in individual mice being 88-94%. Average EEG/EMG-determined sleep per 2-h interval across the day was 59.4 min. The estimated mean difference (bias) per 2-h interval between inactivity-defined sleep and EEG/EMG-defined sleep was only 1.0 min (95% confidence interval for mean bias -0.06 to +2.6 min). The standard deviation of differences (precision) was 7.5 min per 2-h interval, with 95% limits of agreement ranging from -13.7 to +15.7 min. Although bias varied significantly by time of day (P = 0.0007), the magnitude of time-of-day differences was not large (average bias during lights on and lights off was +5.0 and -3.0 min per 2-h interval, respectively). This method has applications in chemical mutagenesis and for studies of molecular changes in brain with sleep/wakefulness.
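
    The stated scoring rule is simple enough to express directly: any run of continuous inactivity lasting at least 40 s is scored as sleep. The sketch below assumes per-10-s-epoch activity counts as input, which is an encoding choice for illustration rather than the authors' data format.

```python
# Direct implementation of the stated rule: runs of continuous
# inactivity >= 40 s are scored as sleep. Input encoding (one activity
# count per 10-s epoch, 0 = inactive) is an assumption.
def score_sleep(activity, epoch_s=10, threshold_s=40):
    """Return a per-epoch list: True where the epoch is scored as sleep."""
    n = len(activity)
    sleep = [False] * n
    i = 0
    while i < n:
        if activity[i] == 0:
            j = i
            while j < n and activity[j] == 0:   # find the inactive run
                j += 1
            if (j - i) * epoch_s >= threshold_s:
                sleep[i:j] = [True] * (j - i)
            i = j
        else:
            i += 1
    return sleep

trace = [1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0]   # hypothetical 2-min trace
print(score_sleep(trace))   # only the 40-s run (epochs 1-4) scores as sleep
```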

  7. Four eyes match better than two: Sharing of precise patch-use time among socially foraging domestic chicks.

    PubMed

    Xin, Qiuhong; Ogura, Yukiko; Matsushima, Toshiya

    2017-07-01

    To examine how resource competition contributes to patch-use behaviour, we studied domestic chicks foraging in an I-shaped maze equipped with two terminal feeders. In a variable interval schedule, one feeder supplied grains three times more frequently than the other, and the sides were reversed midway through the experiment. The maze was partitioned into two lanes by a transparent wall, so that chicks fictitiously competed without actual interference. Stay time at the feeders was compared among three groups: the "single" group contained control chicks; the "pair" group comprised pairs of chicks tested in the fictitious competition; the "mirror" group comprised single chicks accompanied by their respective mirror images. Both "pair" and "mirror" chicks showed facilitated running. In terms of the patch-use ratio, "pair" chicks showed precise matching at approximately 3:1 with significant mutual dependence, whereas "single" and "mirror" chicks showed comparable under-matching. The facilitated running increased visits to feeders but failed to predict the patch-use ratio of the subject. At the reversal, quick switching occurred similarly in all groups, but the "pair" chicks revealed a stronger memory-based matching. Perceived competition therefore contributes to precise matching and a lasting memory of the better feeder, in a manner dissociated from socially facilitated food search. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Lv, Sheng-Li; Zhang, Wei

    2018-03-01

    After shot peening, 7050 aluminum alloy has good anti-fatigue and anti-stress-corrosion properties. In the shot peening process, pellets collide with the target material randomly and generate a residual stress distribution on the target surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, collision position, and collision time interval on the residual stress from shot peening was studied through simulations in the ANSYS/LS-DYNA software. The analysis results show that velocity, position, and time interval all have a great influence on the residual stress after shot peening. Comparison with numerical simulation results based on a Kriging model verified the accuracy of the simulation results in this paper. This study provides a reference for the optimization of the shot peening process and an effective exploration toward precise numerical simulation of shot peening.

  9. Quantum clocks and the foundations of relativity

    NASA Astrophysics Data System (ADS)

    Davies, Paul C. W.

    2004-05-01

    The conceptual foundations of the special and general theories of relativity differ greatly from those of quantum mechanics. Yet in all cases investigated so far, quantum mechanics seems to be consistent with the principles of relativity theory, when interpreted carefully. In this paper I report on a new investigation of this consistency using a model of a quantum clock to measure time intervals; a topic central to all metric theories of gravitation, and to cosmology. Results are presented for two important scenarios related to the foundations of relativity theory: the speed of light as a limiting velocity and the weak equivalence principle (WEP). These topics are investigated in the light of claims of superluminal propagation in quantum tunnelling and possible violations of WEP. Special attention is given to the role of highly non-classical states. I find that by using a definition of time intervals based on a precise model of a quantum clock, ambiguities are avoided and, at least in the scenarios investigated, there is consistency with the theory of relativity, albeit with some subtleties.

  10. Non-localization of eigenfunctions for Sturm-Liouville operators and applications

    NASA Astrophysics Data System (ADS)

    Liard, Thibault; Lissy, Pierre; Privat, Yannick

    2018-02-01

    In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = -∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L²-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.
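
    For readers who want to experiment, a minimal numerical sketch (not from the paper) computes finite-difference eigenfunctions of A_a with Dirichlet conditions for the simplest admissible potential a ≡ 0, then evaluates the L² energy of the ground eigenfunction on a subset ω of prescribed measure.

```python
# Finite-difference sketch (a(x) = 0 for illustration): eigenfunctions of
# A_a = -d^2/dx^2 + a(x) with Dirichlet conditions on (0, L), then the
# L2 energy of the ground eigenfunction on a subset omega.
import numpy as np

L, N = 1.0, 400
x = np.linspace(0.0, L, N + 2)[1:-1]          # interior grid points
h = x[1] - x[0]
a = np.zeros_like(x)                          # bounded nonnegative potential
A = (np.diag(2.0 / h**2 + a)
     - np.diag(np.ones(N - 1) / h**2, 1)
     - np.diag(np.ones(N - 1) / h**2, -1))
vals, vecs = np.linalg.eigh(A)
e1 = vecs[:, 0] / np.sqrt(np.sum(vecs[:, 0] ** 2) * h)   # L2-normalized
omega = x < 0.3 * L                           # subset of measure 0.3 L
print(f"||e1||^2 on omega: {np.sum(e1[omega] ** 2) * h:.3f}")
```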

  11. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.

  12. Calibrating Late Cretaceous Terrestrial Cyclostratigraphy with High-precision U-Pb Zircon Geochronology: Qingshankou Formation of the Songliao Basin, China

    NASA Astrophysics Data System (ADS)

    Wang, T.; Ramezani, J.; Wang, C.

    2015-12-01

    A continuous succession of Late Cretaceous lacustrine strata has been recovered from the SK-I south (SK-Is) and SK-I north (SK-In) boreholes in the long-lived Cretaceous Songliao Basin in Northeast China. Establishing a high-resolution chronostratigraphic framework is a prerequisite for integrating the Songliao record with the global marine Cretaceous. We present high-precision U-Pb zircon geochronology by the chemical abrasion isotope dilution thermal-ionization mass spectrometry method on multiple bentonite core samples from the Late Cretaceous Qingshankou Formation in order to assess the astrochronological model for the Songliao Basin cyclostratigraphy. Our results from the SK-Is core present major improvements in precision and accuracy over the previously published geochronology and allow a cycle-level calibration of the cyclostratigraphy. The resulting chronostratigraphy suggests a good first-order agreement between the radioisotope geochronology and the established astrochronological time scale over the corresponding interval. The dated bentonite beds near 1780 m depth straddle a prominent oil shale layer of the Qingshankou Formation that records a basin-wide lake anoxic event (LAE1), providing a direct age constraint for LAE1. The latter appears to coincide in time with the Late Cretaceous (Turonian) global sea level change event Tu4, presently constrained at 91.8 Ma.

  13. Seabird nest counts: A test of monitoring metrics using Red-tailed Tropicbirds

    USGS Publications Warehouse

    Seavy, N.E.; Reynolds, M.H.

    2009-01-01

    Counts of nesting birds are often used to monitor the abundance of breeding pairs at colonies. Mean incubation counts (MICs) are counts of nests with eggs at intervals that correspond to the mean incubation period of a species. The sum of all counts during the nesting season (MICtotal) and the highest single count during the season (MICmax) are metrics that can be generated from this method. However, the utility of these metrics as measures of the number of breeding pairs has not been well tested. We used two approaches to evaluate the bias and precision of MIC metrics for quantifying annual variation in the number of breeding Red-tailed Tropicbirds (Phaethon rubricauda) nesting on two islands in the Papahānaumokuākea Marine National Monument in the northwest Hawaiian Islands. First, we used data from nest plots with individually marked birds to generate simulated MIC metrics that we compared to the known number of nesting individuals. The MICtotal overestimated the number of pairs by about 5%, whereas the MICmax underestimated the number of pairs by about 60%. However, both metrics exhibited similar precision. Second, we used a 12-yr time series of island-wide MICs to compare estimates of temporal trend and annual variation using the MICmax and MICtotal. The 95% confidence intervals for the trend estimates were overlapping and the residual standard errors for the two metrics were similar. Our results suggest that both metrics offered similar precision for indices of breeding pairs of Red-tailed Tropicbirds, but that MICtotal was more accurate. © 2009 Association of Field Ornithologists.

  14. Evaluation of Flight Deck-Based Interval Management Crew Procedure Feasibility

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Murdoch, Jennifer L.; Hubbs, Clay E.; Swieringa, Kurt A.

    2013-01-01

    Air traffic demand is predicted to increase over the next 20 years, creating a need for new technologies and procedures to support this growth in a safe and efficient manner. The National Aeronautics and Space Administration's (NASA) Air Traffic Management Technology Demonstration - 1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The integration of these technologies will increase throughput, reduce delay, conserve fuel, and minimize environmental impacts. The ground-based tools include Traffic Management Advisor with Terminal Metering for precise time-based scheduling and Controller Managed Spacing decision support tools for better managing aircraft delay with speed control. The core airborne technology in ATD-1 is Flight deck-based Interval Management (FIM). FIM tools provide pilots with speed commands calculated using information from Automatic Dependent Surveillance - Broadcast. The precise merging and spacing enabled by FIM avionics and flight crew procedures will reduce excess spacing buffers and result in higher terminal throughput. This paper describes a human-in-the-loop experiment designed to assess the acceptability and feasibility of the ATD-1 procedures used in a voice communications environment. This experiment utilized the ATD-1 integrated system of ground-based and airborne technologies. Pilot participants flew a high-fidelity fixed-base simulator equipped with an airborne spacing algorithm and a FIM crew interface. Experiment scenarios involved multiple air traffic flows into the Dallas-Fort Worth Terminal Radar Control airspace. Results indicate that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/- five seconds and the delivery precision was less than five seconds. Furthermore, FIM speed commands occurred at a rate of less than one per minute, and pilots found the frequency of the speed commands to be acceptable at all times throughout the experiment scenarios.

  15. Testing Astronomical and 40Ar/39Ar Timescales for the K/Pg Boundary Interval Using High-Resolution Magnetostratigraphy and U-Pb Geochronology in the Denver Basin of Colorado

    NASA Astrophysics Data System (ADS)

    Clyde, W.; Bowring, S. A.; Johnson, K. R.; Ramezani, J.; Jones, M. M.

    2015-12-01

    Accurate and precise calibration of the Geomagnetic Polarity Timescale (GPTS) in absolute time is critical for resolving rates of geological and biological processes, which in turn help constrain the underlying causes of those processes. Numerical calibration of the GPTS was traditionally carried out by interpolation between a limited number of 40Ar/39Ar-dated volcanic ash deposits from superpositional sequences with well-defined magnetostratigraphies. More recently, the Neogene part of the GPTS has been calibrated using high-resolution astrochronological methods; however, the application of these approaches to pre-Neogene parts of the timescale is controversial, given the uncertainties in relevant orbital parameters this far back in time and differing interpretations of local cyclostratigraphic records. The Cretaceous-Paleogene (K/Pg) boundary interval is a good example, where various astronomical and 40Ar/39Ar calibrations have been proposed with varying degrees of agreement. The Denver Basin (Colorado, USA) contains one of the most complete stratigraphic sequences across the K/Pg boundary in the world, preserving evidence of bolide impact as well as biotic extinction and recovery in a thick stratigraphic package that is accessible by both core and outcrop. We present a series of high-precision U-Pb age determinations from interbedded volcanic ash deposits within a tightly constrained magnetobiostratigraphic framework across the K/Pg boundary in the Denver Basin. This new timeline provides a precise absolute age for the K/Pg boundary, constrains the ages of magnetic polarity Chrons C28 to C30, and provides a direct and independent test of early Paleogene astronomical and 40Ar/39Ar based timescales. Temporal calibration of fossil pollen evidence of the "fern spike" in the Denver Basin shows that plant extinctions peaked within ~50-500 years of the bolide impact and primary productivity recovered ~500-5000 years after the impact.

  16. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process.

    PubMed

    Wilson, Lorna R M; Hopcraft, Keith I

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
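
    A numerical sketch of the setup studied here: synthesize a stationary Gaussian process whose power spectrum combines a broadband component with a narrow line (i.e., a periodic term in the autocorrelation), then collect statistics of its zero-crossing intervals. The spectrum shape and all parameters are assumptions for illustration, not those of the paper.

```python
# Spectral synthesis of a stationary Gaussian process whose spectrum has
# a broadband part plus a narrow line at 1 Hz (a periodic autocorrelation
# component); zero-crossing interval statistics are then collected.
# Spectrum shape and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, dt = 2 ** 18, 0.01
f = np.fft.rfftfreq(n, dt)
S = np.exp(-(f / 2.0) ** 2) + 50.0 * np.exp(-((f - 1.0) / 0.01) ** 2)
amp = np.sqrt(S) * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
x = np.fft.irfft(amp, n)                       # Gaussian sample path

sign_change = np.diff(np.signbit(x).astype(np.int8)) != 0
crossings = np.where(sign_change)[0] * dt
intervals = np.diff(crossings)
print(f"mean interval {intervals.mean():.3f} s, "
      f"std {intervals.std():.3f} s, n = {intervals.size}")
```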

  17. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process

    NASA Astrophysics Data System (ADS)

    Wilson, Lorna R. M.; Hopcraft, Keith I.

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  18. A stretch/compress scheme for a high temporal resolution detector for the magnetic recoil spectrometer time (MRSt)

    DOE PAGES

    Hilsabeck, T. J.; Frenje, J. A.; Hares, J. D.; ...

    2016-08-02

    Here we present a time-resolved detector concept for the magnetic recoil spectrometer for time-resolved measurements of the NIF neutron spectrum. The measurement is challenging due to the time spreading of the recoil protons (or deuterons) as they transit an energy dispersing magnet system. Ions arrive at the focal plane of the magnetic spectrometer over an interval of tens of nanoseconds. We seek to measure the time-resolved neutron spectrum with 20 ps precision by manipulating an electron signal derived from the ions. A stretch-compress scheme is employed to remove transit time skewing while simultaneously reducing the bandwidth requirements for signal recording. Simulation results are presented along with design concepts for structures capable of establishing the required electromagnetic fields.

  19. A stretch/compress scheme for a high temporal resolution detector for the magnetic recoil spectrometer time (MRSt)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilsabeck, T. J.; Frenje, J. A.; Hares, J. D.

    Here we present a time-resolved detector concept for the magnetic recoil spectrometer for time-resolved measurements of the NIF neutron spectrum. The measurement is challenging due to the time spreading of the recoil protons (or deuterons) as they transit an energy dispersing magnet system. Ions arrive at the focal plane of the magnetic spectrometer over an interval of tens of nanoseconds. We seek to measure the time-resolved neutron spectrum with 20 ps precision by manipulating an electron signal derived from the ions. A stretch-compress scheme is employed to remove transit time skewing while simultaneously reducing the bandwidth requirements for signal recording. Simulation results are presented along with design concepts for structures capable of establishing the required electromagnetic fields.

  20. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  1. Application of Geodetic Techniques for Antenna Positioning in a Ground Penetrating Radar Method

    NASA Astrophysics Data System (ADS)

    Mazurkiewicz, Ewelina; Ortyl, Łukasz; Karczewski, Jerzy

    2018-03-01

    The accuracy of determining the location of detectable subsurface objects is related to the accuracy of the position of georadar traces in a given profile, which in turn depends on the precise assessment of the distance covered by the antenna. During georadar measurements the distance covered by an antenna can be determined with a variety of methods. Recording traces at fixed time intervals is the simplest of them. A method which allows for more precise location of georadar traces is recording them at fixed distance intervals, which can be performed with the use of distance triggers (such as a measuring wheel or a hip chain). Methods for eliminating the remaining inaccuracies can be based on the measurement of the spatial coordinates of georadar traces with modern geodetic techniques for 3-D location. These techniques include, above all, GNSS satellite systems and electronic tachymeters. Application of the above-mentioned methods increases the accuracy of the spatial location of georadar traces. The article presents the results of georadar measurements performed with the use of geodetic techniques in the test area of Mydlniki in Krakow. A Leica System 1200 satellite receiver and a Leica 1102 TCRA electronic tachymeter were integrated with the georadar equipment. The accuracy of locating chosen subsurface structures was compared.

  2. Integrated stratigraphy and astronomical tuning of Smirra cores, lower Eocene, Umbria-Marche basin, Italy.

    NASA Astrophysics Data System (ADS)

    Lauretano, Vittoria; Turtù, Antonio; Hilgen, Frits; Galeotti, Simone; Catanzariti, Rita; Reichart, Gert Jan; Lourens, Lucas J.

    2016-04-01

    The early Eocene represents an ideal case study for analysing the impact of increased global warming on the ocean-atmosphere system. During this time interval, the Earth's surface experienced a long-term warming trend that culminated in a period of sustained high temperatures called the Early Eocene Climatic Optimum (EECO). These perturbations of the ocean-atmosphere system involved the global carbon cycle and global temperatures and have been linked to orbital forcing. Unravelling this complex climatic system depends strictly on the availability of high-quality geological records and accurate age models. However, discrepancies between astrochronological and radioisotopic dating techniques complicate the development of a robust time scale for the early Eocene (49-54 Ma). Here we present the first magneto-, bio-, chemo- and cyclostratigraphic results from the drilling of the land-based Smirra section, in the Umbria-Marche Basin. The sediments recovered at Smirra provide a remarkably well-preserved and undisturbed succession of the early Palaeogene pelagic stratigraphy. Bulk stable carbon isotope and X-Ray Fluorescence (XRF) scanning records are employed in the construction of an astronomically tuned age model for the time interval between ~49 and ~54 Ma, based on tuning to long eccentricity. These results are then compared to the astronomical tuning of the benthic carbon isotope record of ODP Site 1263 to evaluate the different age model options and improve the time scale of the early Eocene by assessing the precise number of eccentricity-related cycles comprised in this critical interval.

  3. A possible simplification for the estimation of area under the curve (AUC₀₋₁₂) of enteric-coated mycophenolate sodium in renal transplant patients receiving tacrolimus.

    PubMed

    Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T

    2011-04-01

    Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. Because of its delayed absorption profile, it has not been possible to develop limited sampling strategies for estimating the area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂) that use few time points and are completed within 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months were used for validation. We followed two approaches. In the first, the AUC was calculated using the trapezoidal rule with fewer time points, followed by an extrapolation. In the second, models with different time points were identified by stepwise multiple regression analysis and linear regression analysis was performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with the total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872, with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2, 4, 6, and 8 hours after the EC-MPS dose. On validation, the six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). For the six, seven, and eight time points, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89), and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual AUC and can be advocated based on the priorities of the user.
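
    The first approach reduces to a trapezoidal sum over the sampled window plus a tail extrapolation to 12 h. The sketch below uses a crude carry-forward tail as a placeholder; the paper's fitted extrapolation equations are not reproduced here, and the concentrations are invented.

```python
# Trapezoidal AUC over the sampled window plus a crude carry-forward
# tail to 12 h; the paper's fitted extrapolation equations are not
# reproduced, and the concentrations are invented.
import numpy as np

def auc_0_12(t_hours, conc, end=12.0):
    t = np.asarray(t_hours, float)
    c = np.asarray(conc, float)
    auc = np.trapz(c, t)                    # trapezoidal rule over samples
    return auc + c[-1] * (end - t[-1])      # placeholder tail extrapolation

t = [0, 1, 2, 4, 6, 8]                      # six time points to 8 h
c = [1.2, 6.5, 4.1, 2.3, 1.6, 1.1]          # hypothetical MPA levels, mg/L
print(f"AUC0-12 ≈ {auc_0_12(t, c):.1f} mg·h/L")
```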

  4. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  5. Longitudinal DXA Studies: Minimum scanning interval for pediatric assessment of body fat

    USDA-ARS?s Scientific Manuscript database

    The increased prevalence of obesity in the United States has led to the increased use of dual-energy X-ray absorptiometry (DXA) for the assessment of body fat mass (TBF) in pediatric populations. We examined DXA precision in order to determine suitable scanning intervals for the measurement of change...

  6. Evaluating the temporal link between Siberian Traps magmatism and the end-Permian mass extinction (Invited)

    NASA Astrophysics Data System (ADS)

    Burgess, S. D.; Bowring, S. A.

    2013-12-01

    Interest in Large Igneous Provinces as agents of massive climatic and biological change is steadily increasing, though the temporal constraints on both are seldom precise enough to allow detailed testing of a causal relationship. The end-Permian mass extinction is one of the most biologically important and intensely studied events in Earth history and has been linked to many possible trigger mechanisms, from voluminous volcanism to bolide impact. Proposed kill mechanisms range from acidic and/or anoxic oceans to a cocktail of toxic gases, although the link between trigger and kill mechanisms is unconstrained owing to the lack of a high-precision timeline. Critical to assessing the plausibility of different trigger and kill mechanisms is an accurate age model for the biotic crisis and the perturbations to the global carbon cycle and ocean chemistry. Recent work using the EARTHTIME U/Pb tracer solution has refined the timing of the onset and duration of the marine mass extinction event and the earliest Triassic recovery at the GSSP for the Permian-Triassic boundary in Meishan, China. This work constrains the mass extinction duration to less than 100 kyr and provides an accurate and precise time point for the onset of extinction, against which the timing of potential trigger mechanisms may be compared. For more than two decades, eruption and emplacement of the Siberian Traps have been implicated as a potential trigger of the end-Permian extinction. In this scenario, magmatism drives the biotic crisis through mobilization of volatiles from the sedimentary rock with which intruding and erupting magmas interact. Massive volatile release is believed to trigger major changes in atmospheric chemistry and temperature, both of which have been proposed as kill mechanisms. Current temporal constraints on the timing and duration of the Siberian magmatism are an order of magnitude less precise than those for the mass extinction event and associated environmental perturbations, limiting detailed testing of a causal relationship. We present new high-precision U/Pb geochronology on zircon crystals isolated from a suite of shallowly intruded dolerites in the Noril'sk region and two welded tuffs in the Maymecha river valley. These two sections are the most extensively studied in the magmatic province, and although there are thick exposures of lava and volcaniclastic rock elsewhere, the Noril'sk and Maymecha-Kotuy sections are thought to be representative of the entire extrusive stratigraphy. Our dates suggest that intrusive and extrusive magmatism began within analytical uncertainty of the onset of mass extinction, permitting a causal connection with age precision at the ~0.06 Ma level. The new dates also allow projection of the extinction interval and associated chemostratigraphy onto the Siberian Traps stratigraphy, which suggests that ~300 m of volcaniclastic rocks and ~1800 m of lavas in the Maymecha-Kotuy section were erupted just prior to the onset of mass extinction. Comparison of a detailed eruption history to biological and chemical records over the extinction and recovery intervals allows for better evaluation of plausible kill mechanisms.

  7. A New Zenith Tropospheric Delay Grid Product for Real-Time PPP Applications over China.

    PubMed

    Lou, Yidong; Huang, Jinfang; Zhang, Weixing; Liang, Hong; Zheng, Fu; Liu, Jingnan

    2017-12-27

    Tropospheric delay is one of the major factors affecting the accuracy of electromagnetic distance measurements. To provide wide-area, real-time, high-precision zenith tropospheric delay (ZTD), the temporal and spatial variations of ZTD with altitude were analyzed on the basis of the latest meteorological reanalysis product (ERA-Interim) provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). An inverse scale height model at given locations, taking latitude, longitude and day of year as inputs, was then developed and used to convert real-time ZTD at GPS stations in the Crustal Movement Observation Network of China (CMONOC) from station height to mean sea level (MSL). The real-time ZTD grid product (RtZTD) over China was then generated with a time interval of 5 min. Compared with ZTD estimated in post-processing mode, the bias and error RMS of ZTD at test GPS stations derived from RtZTD are 0.39 and 1.56 cm, respectively, making RtZTD significantly more accurate than commonly used empirical models. In addition, simulated real-time kinematic Precise Point Positioning (PPP) tests show that using RtZTD can accelerate BDS-PPP convergence by up to 32% and 65% in the horizontal and vertical components, respectively (with coordinate error thresholds set to 0.4 m). For GPS-PPP, convergence using RtZTD can be accelerated by up to 29% in the vertical component (threshold 0.2 m).
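
    The station-to-MSL reduction described here can be sketched with a simple exponential height dependence; a minimal illustration, assuming a hypothetical constant inverse scale height rather than the paper's fitted, location-dependent model:

        import math

        def ztd_to_msl(ztd_station_m, station_height_m, beta_per_m):
            # Reduce a station ZTD to mean sea level, assuming ZTD decays
            # exponentially with height: ZTD(h) = ZTD(MSL) * exp(-beta * h).
            return ztd_station_m * math.exp(beta_per_m * station_height_m)

        def ztd_at_height(ztd_msl_m, height_m, beta_per_m):
            # Interpolate a gridded MSL ZTD back up to a user's height.
            return ztd_msl_m * math.exp(-beta_per_m * height_m)

        beta = 1.0 / 8000.0                       # invented inverse scale height, 1/m
        ztd_msl = ztd_to_msl(2.30, 1500.0, beta)  # station at 1500 m with ZTD = 2.30 m
        print(round(ztd_msl, 3), round(ztd_at_height(ztd_msl, 200.0, beta), 3))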

  8. Testing gravity with Lunar Laser Ranging: An update on the APOLLO experiment

    NASA Astrophysics Data System (ADS)

    Battat, James; Colmenares, Nick; Davis, Rodney; Ruixue, Louisa Huang; Murphy, Thomas W., Jr.; Apollo Collaboration

    2017-01-01

    The mystery of dark energy and the incompatibility of quantum mechanics and General Relativity indicate the need for precision experimental probes of gravitational physics. The Earth-Moon-Sun system is a fertile laboratory for such tests. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) makes optical range measurements to retro-reflectors on the Moon with one-millimeter precision. These measurements of the lunar orbit enable incisive constraints on gravitational phenomena such as the Strong Equivalence Principle and dG/dt (among others). Until now, the APOLLO team had not been able to assess the accuracy of our data, in large part because known limitations of lunar range models ensure data-model residuals at the centimeter scale. To directly measure the APOLLO system timing accuracy, we have built an Absolute timing Calibration System (ACS) that delivers photons to our detector at known, stable time intervals using a pulsed fiber laser locked to a cesium frequency standard. This scheme provides real-time calibration of the APOLLO system timing, synchronous with the range measurements. We installed the calibration system in August 2016. In this talk, we will describe the ACS design and present preliminary results from the ACS calibration campaign. We acknowledge the support of both NSF and NASA.

  9. A parts-per-billion measurement of the antiproton magnetic moment

    NASA Astrophysics Data System (ADS)

    Smorra, C.; Sellner, S.; Borchert, M. J.; Harrington, J. A.; Higuchi, T.; Nagahama, H.; Tanaka, T.; Mooser, A.; Schneider, G.; Bohman, M.; Blaum, K.; Matsuda, Y.; Ospelkaus, C.; Quint, W.; Walz, J.; Yamazaki, Y.; Ulmer, S.

    2017-10-01

    Precise comparisons of the fundamental properties of matter-antimatter conjugates provide sensitive tests of charge-parity-time (CPT) invariance, which is an important symmetry that rests on basic assumptions of the standard model of particle physics. Experiments on mesons, leptons and baryons have compared different properties of matter-antimatter conjugates with fractional uncertainties at the parts-per-billion level or better. One specific quantity, however, has so far only been known to a fractional uncertainty at the parts-per-million level: the magnetic moment of the antiproton, μp̄. The extraordinary difficulty in measuring μp̄ with high precision is caused by its intrinsic smallness; for example, it is 660 times smaller than the magnetic moment of the positron. Here we report a high-precision measurement of μp̄ in units of the nuclear magneton μN with a fractional precision of 1.5 parts per billion (68% confidence level). We use a two-particle spectroscopy method in an advanced cryogenic multi-Penning trap system. Our result μp̄ = -2.7928473441(42)μN (where the number in parentheses represents the 68% confidence interval on the last digits of the value) improves the precision of the previous best measurement by a factor of approximately 350. The measured value is consistent with the proton magnetic moment, μp = 2.792847350(9)μN, and is in agreement with CPT invariance. Consequently, this measurement constrains the magnitude of certain CPT-violating effects to below 1.8 × 10⁻²⁴ gigaelectronvolts, and a possible splitting of the proton-antiproton magnetic moments by CPT-odd dimension-five interactions to below 6 × 10⁻¹² Bohr magnetons.
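
    As a quick sanity check on the quoted fractional precision, the value(uncertainty) notation reduces to simple arithmetic (a worked example using only the numbers above):

        value = 2.7928473441       # |measured moment| in units of the nuclear magneton
        uncertainty = 42e-10       # "(42)" applies to the last two decimal digits
        print(uncertainty / value) # ~1.5e-9, i.e. 1.5 parts per billion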

  10. A parts-per-billion measurement of the antiproton magnetic moment.

    PubMed

    Smorra, C; Sellner, S; Borchert, M J; Harrington, J A; Higuchi, T; Nagahama, H; Tanaka, T; Mooser, A; Schneider, G; Bohman, M; Blaum, K; Matsuda, Y; Ospelkaus, C; Quint, W; Walz, J; Yamazaki, Y; Ulmer, S

    2017-10-18

    Precise comparisons of the fundamental properties of matter-antimatter conjugates provide sensitive tests of charge-parity-time (CPT) invariance, which is an important symmetry that rests on basic assumptions of the standard model of particle physics. Experiments on mesons, leptons and baryons have compared different properties of matter-antimatter conjugates with fractional uncertainties at the parts-per-billion level or better. One specific quantity, however, has so far only been known to a fractional uncertainty at the parts-per-million level: the magnetic moment of the antiproton, μp̄. The extraordinary difficulty in measuring μp̄ with high precision is caused by its intrinsic smallness; for example, it is 660 times smaller than the magnetic moment of the positron. Here we report a high-precision measurement of μp̄ in units of the nuclear magneton μN with a fractional precision of 1.5 parts per billion (68% confidence level). We use a two-particle spectroscopy method in an advanced cryogenic multi-Penning trap system. Our result μp̄ = -2.7928473441(42)μN (where the number in parentheses represents the 68% confidence interval on the last digits of the value) improves the precision of the previous best measurement by a factor of approximately 350. The measured value is consistent with the proton magnetic moment, μp = 2.792847350(9)μN, and is in agreement with CPT invariance. Consequently, this measurement constrains the magnitude of certain CPT-violating effects to below 1.8 × 10⁻²⁴ gigaelectronvolts, and a possible splitting of the proton-antiproton magnetic moments by CPT-odd dimension-five interactions to below 6 × 10⁻¹² Bohr magnetons.

  11. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a distinctive feature of problems of calculating time-average characteristics of sets of optimal nuclear reactor controls. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning, whose level is limited, restricts the variety of admissible statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls. This raises the problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and both estimates are plotted as functions of the xenon limitation. The boundaries of the averaging interval are thereby defined more precisely.

  12. Precision Experiments with Ultraslow Muons

    NASA Astrophysics Data System (ADS)

    Mills, Allen P.

    A source of ~10⁵ ultraslow muons (USM) per second (~0.2 eV energy spread and 40 mm source diameter) reported by Miyake et al., and the demonstration of 100 K thermal muonium in vacuum by Antognini et al., suggest possibilities for substantial improvements in the experimental precision of the muonium 1S-2S interval and the muon g-2 measurements.

  13. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
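
    A minimal sketch of the ingredients described here: the 4PL curve, its inverse for back-calculating concentration, and a simulated precision profile. All parameter and noise values are invented for illustration; this is not the authors' simulation:

        import numpy as np

        def logistic4(x, a, b, c, d):
            # Four-parameter logistic curve: a = response at zero dose,
            # d = response at infinite dose, c = EC50, b = slope factor.
            return d + (a - d) / (1.0 + (x / c) ** b)

        def inverse4(y, a, b, c, d):
            # Back-calculate analyte concentration from a measured response.
            return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

        rng = np.random.default_rng(1)
        a, b, c, d = 0.05, 1.2, 10.0, 2.0            # invented curve parameters
        for x in (0.5, 10.0, 200.0):
            y = logistic4(x, a, b, c, d) + rng.normal(0.0, 0.01, 10000)
            with np.errstate(invalid="ignore"):      # noise can push y past an asymptote
                xhat = inverse4(y, a, b, c, d)
            print(x, np.nanstd(xhat) / np.nanmean(xhat))  # CV inflates at the extremes

    Running this reproduces the qualitative behaviour described above: the coefficient of variation of the recovered concentration is small near the EC50 and grows sharply at both ends of the concentration range.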

  14. Drift in Neural Population Activity Causes Working Memory to Deteriorate Over Time.

    PubMed

    Schneegans, Sebastian; Bays, Paul M

    2018-05-23

    Short-term memories are thought to be maintained in the form of sustained spiking activity in neural populations. Decreases in recall precision observed with increasing number of memorized items can be accounted for by a limit on total spiking activity, resulting in fewer spikes contributing to the representation of each individual item. Longer retention intervals likewise reduce recall precision, but it is unknown what changes in population activity produce this effect. One possibility is that spiking activity becomes attenuated over time, such that the same mechanism accounts for both effects of set size and retention duration. Alternatively, reduced performance may be caused by drift in the encoded value over time, without a decrease in overall spiking activity. Human participants of either sex performed a variable-delay cued recall task with a saccadic response, providing a precise measure of recall latency. Based on a spike integration model of decision making, if the effects of set size and retention duration are both caused by decreased spiking activity, we would predict a fixed relationship between recall precision and response latency across conditions. In contrast, the drift hypothesis predicts no systematic changes in latency with increasing delays. Our results show both an increase in latency with set size, and a decrease in response precision with longer delays within each set size, but no systematic increase in latency for increasing delay durations. These results were quantitatively reproduced by a model based on a limited neural resource in which working memories drift rather than decay with time. SIGNIFICANCE STATEMENT Rapid deterioration over seconds is a defining feature of short-term memory, but what mechanism drives this degradation of internal representations? Here, we extend a successful population coding model of working memory by introducing possible mechanisms of delay effects. We show that a decay in neural signal over time predicts that the time required for memory retrieval will increase with delay, whereas a random drift in the stored value predicts no effect of delay on retrieval time. Testing these predictions in a multi-item memory task with an eye movement response, we identified drift as a key mechanism of memory decline. These results provide evidence for a dynamic spiking basis for working memory, in contrast to recent proposals of activity-silent storage. Copyright © 2018 Schneegans and Bays.
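
    The drift hypothesis favoured by the data can be illustrated with a toy simulation (assumed diffusion constant; this is not the authors' population coding model): the stored value performs a random walk during the delay, so recall error grows with delay while signal strength, and hence response latency, stays constant.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 100000
        drift_sd = 2.0                  # assumed diffusion, degrees per sqrt(second)
        for delay_s in (1.0, 2.0, 4.0):
            # Random-walk drift: error accumulates with delay, but the overall
            # activity level (and so retrieval latency) is unchanged.
            error = rng.normal(0.0, drift_sd * np.sqrt(delay_s), n_trials)
            print(delay_s, round(error.std(), 2))    # SD grows as sqrt(delay)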

  15. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0, T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  16. Evaluating Protocol Lifecycle Time Intervals in HIV/AIDS Clinical Trials

    PubMed Central

    Schouten, Jeffrey T.; Dixon, Dennis; Varghese, Suresh; Cope, Marie T.; Marci, Joe; Kagan, Jonathan M.

    2014-01-01

    Background Identifying efficacious interventions for the prevention and treatment of human diseases depends on the efficient development and implementation of controlled clinical trials. Essential to reducing the time and burden of completing the clinical trial lifecycle is determining which aspects take the longest, delay other stages, and may lead to better resource utilization without diminishing scientific quality, safety, or the protection of human subjects. Purpose In this study we modeled time-to-event data to explore relationships between clinical trial protocol development and implementation times, as well as identify potential correlates of prolonged development and implementation. Methods We obtained time interval and participant accrual data from 111 interventional clinical trials initiated between 2006 and 2011 by NIH’s HIV/AIDS Clinical Trials Networks. We determined the time (in days) required to complete defined phases of clinical trial protocol development and implementation. Kaplan-Meier estimates were used to assess the rates at which protocols reached specified terminal events, stratified by study purpose (therapeutic, prevention) and phase group (pilot/phase I, phase II, and phase III/IV). We also examined several potential correlates of prolonged development and implementation intervals. Results Even though phase grouping did not determine development or implementation times of either therapeutic or prevention studies, overall we observed wide variation in protocol development times. Moreover, we detected a trend toward phase III/IV therapeutic protocols exhibiting longer developmental (median 2.5 years) and implementation times (>3 years). We also found that protocols exceeding the median number of days for completing the development interval had significantly longer implementation. Limitations The use of a relatively small set of protocols may have limited our ability to detect differences across phase groupings. Some timing effects present for a specific study phase may have been masked by combining protocols into phase groupings. Presence of informative censoring, such as withdrawal of some protocols from development if they began showing signs of lost interest among investigators, complicates interpretation of Kaplan-Meier estimates. Because this study constitutes a retrospective examination over an extended period of time, it does not allow for the precise identification of relative factors impacting timing. Conclusions Delays not only increase the time and cost to complete clinical trials, but they also diminish their usefulness by failing to answer research questions in time. We believe that research analyzing the time spent traversing defined intervals across the clinical trial protocol development and implementation continuum can stimulate business process analyses and reengineering efforts that could lead to reductions in the time from clinical trial concept to results, thereby accelerating progress in clinical research. PMID:24980279
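
    The Kaplan-Meier machinery used above can be written in a few lines; a minimal product-limit estimator with invented protocol times (real analyses would also handle tied times and compute variance):

        import numpy as np

        def kaplan_meier(times, events):
            # Product-limit estimate of S(t); events: 1 = observed, 0 = censored.
            order = np.argsort(times)
            times = np.asarray(times, float)[order]
            events = np.asarray(events)[order]
            at_risk, surv, curve = len(times), 1.0, []
            for t, d in zip(times, events):
                if d:
                    surv *= (at_risk - 1) / at_risk   # step down at each event
                at_risk -= 1                          # leave risk set either way
                curve.append((t, surv))
            return curve

        # Protocol development times in days; 0 marks protocols still in development.
        print(kaplan_meier([120, 200, 340, 400, 550], [1, 1, 0, 1, 0]))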

  17. Dating young geomorphic surfaces using age of colonizing Douglas fir in southwestern Washington and northwestern Oregon, USA

    USGS Publications Warehouse

    Pierson, T.C.

    2007-01-01

    Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where the timing of such geomorphic changes can be critical to emergency planning. The earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless fewer early colonizers were present. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach has a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction has an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
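
    The corrections reduce to simple arithmetic; a small helper capturing the rules stated above (the numeric corrections come from the abstract, but the function itself is only illustrative):

        GLT_SINGLE, CTG_SINGLE = 4, 10   # corrections for the single oldest tree
        GLT_FIVE, CTG_FIVE = 5, 11       # corrections for the mean of the five largest

        def surface_age(ring_count, breast_height, five_tree_mean):
            # Years since surface stabilization = ring count + GLT (ground-level
            # cores) or + CTG (breast-height cores).
            if five_tree_mean:
                return ring_count + (CTG_FIVE if breast_height else GLT_FIVE)
            return ring_count + (CTG_SINGLE if breast_height else GLT_SINGLE)

        print(surface_age(87, breast_height=True, five_tree_mean=True))  # -> 98 years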

  18. Astrochronology of the Pliensbachian-Toarcian transition in the Foum Tillicht section (central High Atlas, Morocco)

    NASA Astrophysics Data System (ADS)

    Martinez, Mathieu; Bodin, Stéphane; Krencker, François-Nicolas

    2015-04-01

    The Pliensbachian and Toarcian stages (Early Jurassic) are marked by a series of carbon cycle disturbances, major climatic changes and severe faunal turnovers. Accurate knowledge of the timing of the Pliensbachian-Toarcian interval is key to quantifying the fluxes and rhythms of faunal and geochemical processes during these major environmental perturbations. Although many studies have provided astrochronological frameworks for the Toarcian Stage and the Toarcian oceanic anoxic event, no precise time frame exists for the Pliensbachian-Toarcian transition, which is often condensed in the previously studied sections. Here, we provide an astrochronology of the Pliensbachian-Toarcian transition in the Foum Tillicht section (central High Atlas, Morocco). The section is composed of decimetric hemipelagic marl-limestone alternations accompanied by cyclic fluctuations in δ13Cmicrite. In this section, the marl-limestone alternations reflect cyclic sea-level/climatic changes, which trigger rhythmic migrations of the surrounding carbonate platforms and modulate the amount of carbonate exported to the basin. The studied interval encompasses 142.15 m of the section, from the base of the series to a hiatus in the Early Toarcian marked by an erosional surface. The Pliensbachian-Toarcian (P-To) Event, a negative excursion in carbonate δ13Cmicrite, is observed pro parte in this studied interval. δ13Cmicrite measurements were performed every ~2 m at the base of the section and every 0.20 m within the P-To Event interval. Spectral analyses were performed using the multi-taper method and the evolutive Fast Fourier Transform to obtain an accurate assessment of the main significant periods and their evolution throughout the studied interval. Two main cycles are observed in the series: the 405-kyr eccentricity cycle is observed throughout the series, while obliquity cycles are observed within the P-To Event, in the most densely sampled interval. The studied interval covers 3.6 Myr. The duration of the part of the P-To Event covered by this analysis is assessed at 0.70 Myr. In addition, the interval from the base of the Toarcian to the first occurrence of the calcareous nannofossil C. superbus has a duration assessed at 0.47 to 0.55 Myr. This duration is significantly higher than most assessments obtained by previous cyclostratigraphic analyses, showing that earlier studies underestimated the duration of this interval, which is often condensed in the Western Tethys. This study shows the potential of the Foum Tillicht section to provide a refined time frame for the Pliensbachian-Toarcian boundary, which could be integrated into the next Geological Time Scale.
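
    The multi-taper spectral step can be sketched as follows, using scipy's Slepian (DPSS) tapers on a synthetic depth series; this is an illustration of the method, not the authors' processing chain:

        import numpy as np
        from scipy.signal import periodogram
        from scipy.signal.windows import dpss

        def multitaper(x, dx, nw=3.0, k=5):
            # Average the periodograms of the series windowed by k Slepian
            # tapers; this trades variance against spectral leakage.
            tapers = dpss(len(x), nw, Kmax=k)
            f, p = periodogram(x * tapers[0], fs=1.0 / dx)
            for taper in tapers[1:]:
                p = p + periodogram(x * taper, fs=1.0 / dx)[1]
            return f, p / k

        depth = np.arange(0.0, 142.0, 0.2)        # synthetic section, metres
        rng = np.random.default_rng(2)
        series = np.sin(2 * np.pi * depth / 20.0) + rng.normal(0, 0.5, depth.size)
        f, p = multitaper(series - series.mean(), dx=0.2)
        print(f[np.argmax(p)])                    # ~0.05 cycles/m (the 20 m cycle)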

  19. NASA hydrogen maser accuracy and stability in relation to world standards

    NASA Technical Reports Server (NTRS)

    Peters, H. E.; Percival, D. B.

    1973-01-01

    Frequency comparisons were made among five NASA hydrogen masers in 1969 and again in 1972 to a precision of one part in 10 to the 13th power. Frequency comparisons were also made between these masers and the cesium-beam ensembles of several international standards laboratories. The hydrogen maser frequency stabilities as related to IAT were comparable to the frequency stabilities of individual time scales with respect to IAT. The relative frequency variations among the NASA masers, measured after the three-year interval, were 2 + or - 2 parts in 10 to the 13th power. Thus time scales based on hydrogen masers would have excellent long-term stability and uniformity.

  20. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    NASA Astrophysics Data System (ADS)

    Veronesi, F.; Grassi, S.

    2016-09-01

    Wind resource assessment is a key aspect of wind farm planning, since it allows estimation of the long-term electricity production. Moreover, wind speed time-series at high resolution are helpful for estimating temporal changes in electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based on a machine learning model and demonstrates that statistical wind resource assessment can successfully be used to estimate wind speed time-series. In fact, this method obtains reliable wind speed estimates and propagates all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only accurate estimation, but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed but also its confidence interval, which are crucial data for planners.
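
    One common way to obtain such per-location confidence intervals from a machine-learning model is quantile regression; a minimal sketch with invented features and synthetic data (not the authors' exact model):

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 1, (500, 3))              # e.g. elevation, slope, roughness
        y = 4 + 3 * X[:, 0] + rng.normal(0, 1, 500)  # synthetic mean wind speed, m/s

        # One model per quantile: the 5% and 95% models bracket a 90% interval.
        models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
                  for q in (0.05, 0.5, 0.95)}
        x_new = rng.uniform(0, 1, (1, 3))            # an unsampled location
        lo, med, hi = (models[q].predict(x_new)[0] for q in (0.05, 0.5, 0.95))
        print(f"median {med:.1f} m/s, 90% interval [{lo:.1f}, {hi:.1f}] m/s")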

  1. Automated identification of ERP peaks through Dynamic Time Warping: an application to developmental dyslexia.

    PubMed

    Assecondi, Sara; Bianchi, A M; Hallez, H; Staelens, S; Casarotto, S; Lemahieu, I; Chiarenza, G A

    2009-10-01

    This article proposes a method to automatically identify and label event-related potential (ERP) components with high accuracy and precision. We present a framework, referred to as peak-picking Dynamic Time Warping (ppDTW), where a priori knowledge about the ERPs under investigation is used to define a reference signal. We developed a combination of peak-picking and Dynamic Time Warping (DTW) that makes the temporal intervals for peak-picking adaptive on the basis of the morphology of the data. We tested the procedure on experimental data recorded from a control group and from children diagnosed with developmental dyslexia. We compared our results with the traditional peak-picking. We demonstrated that our method achieves better performance than peak-picking, with an overall precision, recall and F-score of 93%, 86% and 89%, respectively, versus 93%, 80% and 85% achieved by peak-picking. We showed that our hybrid method outperforms peak-picking, when dealing with data involving several peaks of interest. The proposed method can reliably identify and label ERP components in challenging event-related recordings, thus assisting the clinician in an objective assessment of amplitudes and latencies of peaks of clinical interest.
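
    For reference, the core DTW computation at the heart of ppDTW can be written compactly; a textbook implementation (the peak-picking layer and the a priori reference templates described above are not included):

        import numpy as np

        def dtw_distance(a, b):
            # Classic dynamic time warping distance between two 1-D sequences,
            # O(len(a)*len(b)), unit step pattern, absolute-difference cost.
            n, m = len(a), len(b)
            d = np.full((n + 1, m + 1), np.inf)
            d[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
            return d[n, m]

        # A template ERP-like waveform and a time-shifted copy still align well:
        t = np.linspace(0, 1, 200)
        template = np.exp(-((t - 0.30) / 0.05) ** 2)  # "peak" at 300 ms
        shifted = np.exp(-((t - 0.36) / 0.05) ** 2)   # same peak, delayed
        print(dtw_distance(template, shifted))        # small despite the latency shift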

  2. Precise chronology of differentiation of developing human primary dentition.

    PubMed

    Hu, Xuefeng; Xu, Shan; Lin, Chensheng; Zhang, Lishan; Chen, YiPing; Zhang, Yanding

    2014-02-01

    While the correlation of developmental stage with embryonic age in the human primary dentition has been well documented, the available information regarding the differentiation timing of the primary teeth was largely based on observation of initial mineralization and varies significantly. In this study, we aimed to document the precise differentiation timing of the developing human primary dentition. We systematically examined the expression of odontogenic differentiation markers along with the formation of mineralized tissue in each developing maxillary and mandibular tooth from human embryos of well-defined embryonic age. We show that, although all primary teeth initiate development at the same time, odontogenic differentiation begins in the maxillary incisors at the 15th week and in the mandibular incisors at the 16th week of gestation, followed sequentially at one-week intervals by the canine, the first primary premolar, and the second primary premolar. Although the mandibular primary incisors erupt earlier than the maxillary incisors, this distal-to-proximal sequential differentiation of the human primary dentition coincides in general with the sequence of tooth eruption. Our results provide an accurate chronology of odontogenic differentiation of the developing human primary dentition, which could be used as a reference for future studies of human tooth development.

  3. Picosecond Resolution Time-to-Digital Converter Using Gm-C Integrator and SAR-ADC

    NASA Astrophysics Data System (ADS)

    Xu, Zule; Miyahara, Masaya; Matsuzawa, Akira

    2014-04-01

    A picosecond resolution time-to-digital converter (TDC) is presented. The resolution of a conventional delay chain TDC is limited by the delay of a logic buffer. Various types of recent TDCs are successful in breaking this limitation, but they require a significant calibration effort to achieve picosecond resolution with a sufficient linear range. To address these issues, we propose a simple method to break the resolution limitation without any calibration: a Gm-C integrator followed by a successive approximation register analog-to-digital converter (SAR-ADC). This translates the time interval into charge, and then the charge is quantized. A prototype chip was fabricated in 90 nm CMOS. The measurement results reveal a 1 ps resolution, a -0.6/0.7 LSB differential nonlinearity (DNL), a -1.1/2.3 LSB integral nonlinearity (INL), and a 9-bit range. The measured 11.74 ps single-shot precision is caused by the noise of the integrator. We analyze the noise of the integrator and propose an improved front-end circuit to reduce this noise. The proposal is verified by simulations showing the maximum single-shot precision is less than 1 ps. The proposed front-end circuit can also diminish the mismatch effects.
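
    The interval-to-charge principle is easy to model behaviorally; a sketch with invented component values, chosen only so the arithmetic lands near the 1 ps LSB and 9-bit range quoted above:

        import numpy as np

        I_INT = 200e-6    # integration current, A (invented)
        C_INT = 0.1e-12   # integration capacitor, F (invented)
        VREF, BITS = 1.0, 9

        def tdc_code(interval_s):
            # Interval -> charge -> voltage, then ideal SAR quantization.
            v = I_INT * interval_s / C_INT
            return int(np.clip(round(v / VREF * (2 ** BITS - 1)), 0, 2 ** BITS - 1))

        lsb = VREF * C_INT / I_INT / (2 ** BITS - 1)  # time represented by one code
        print(lsb, tdc_code(250e-12))                 # ~1 ps LSB; 250 ps -> code ~256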

  4. Wind Information Uplink to Aircraft Performing Interval Management Operations

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Barmore, Bryan E.; Swieringa, Kurt A.

    2016-01-01

    Interval Management (IM) is an ADS-B-enabled suite of applications that use ground and flight deck capabilities and procedures designed to support the relative spacing of aircraft (Barmore et al. 2004; Murdoch et al. 2009; Barmore 2009; Swieringa et al. 2011; Weitz et al. 2012). Relative spacing refers to managing the position of one aircraft to a time or distance relative to another aircraft, as opposed to a static reference point such as a point over the ground or clock time. This results in improved inter-aircraft spacing precision and is expected to allow aircraft to be spaced closer to the applicable separation standard than current operations. Consequently, if the reduced spacing is used in scheduling, IM can reduce the time interval between the first and last aircraft in an overall arrival flow, resulting in increased throughput. Because IM relies on speed changes to achieve precise spacing, it can reduce costly low-altitude vectoring, which increases both efficiency and throughput in capacity-constrained airspace without negatively impacting controller workload and task complexity. This is expected to increase overall system efficiency. The Flight Deck Interval Management (FIM) equipment provides speeds to the flight crew that will deliver them to the achieve-by point at the controller-specified time, i.e., the assigned spacing goal, after the target aircraft crosses the achieve-by point (Figure 1.1). Since the IM and target aircraft may not be on the same arrival procedure, the FIM equipment predicts the estimated times of arrival (ETA) for both the IM and target aircraft to the achieve-by point. This involves generating an approximate four-dimensional trajectory for each aircraft. The accuracy of the wind data used to generate those trajectories is critical to the success of the IM operation. There are two main forms of uncertainty in the wind information used by the FIM equipment. The first is the accuracy of the forecast modeling done by the weather provider. This is generally a global environmental prediction obtained from a weather model such as the Rapid Refresh (RAP) from the National Centers for Environmental Prediction (NCEP). The weather forecast data will have errors relative to the actual, or truth, winds that the aircraft will encounter. The second source of uncertainty is that only a small subset of the forecast data can be uplinked to the aircraft for use by the FIM equipment. This results in additional loss of information. The Federal Aviation Administration (FAA) and RTCA are currently developing standards for the communication of wind and atmospheric data to the aircraft for use in NextGen operations. This study examines the impact of various wind forecast sampling methods on IM performance metrics to inform the standards development.
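
    To make the sampling question concrete, here is a toy calculation of how uplinking only a few forecast levels perturbs an ETA; every number is invented, and this is not the FIM algorithm itself:

        import numpy as np

        alts_ft = np.linspace(10000, 35000, 26)              # points along a descent
        truth_kt = 20 + 30 * np.sin(alts_ft / 9000.0)        # "truth" along-track wind
        uplink_alts = np.array([10000.0, 24000.0, 35000.0])  # sparse uplinked levels
        uplink_kt = np.interp(uplink_alts, alts_ft, truth_kt)
        est_kt = np.interp(alts_ft, uplink_alts, uplink_kt)  # what the avionics would use

        tas_kt, seg_nm = 280.0, 4.0                          # per-segment distance
        eta_truth_s = np.sum(seg_nm / (tas_kt + truth_kt)) * 3600
        eta_est_s = np.sum(seg_nm / (tas_kt + est_kt)) * 3600
        print(eta_est_s - eta_truth_s)                       # ETA error from sparse winds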

  5. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    USGS Publications Warehouse

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
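
    The relative-calibration step, transferring a known sensitivity from one sensor to a co-located one, reduces to a least-squares ratio; a toy sketch with synthetic data (sensitivities and noise levels invented):

        import numpy as np

        rng = np.random.default_rng(4)
        motion = rng.normal(0, 1, 50000)        # common ground motion seen by both
        s_ref, s_unknown = 2.0, 1.5             # counts per (m/s^2), assumed
        ref_out = s_ref * motion + rng.normal(0, 0.01, motion.size)
        test_out = s_unknown * motion + rng.normal(0, 0.01, motion.size)

        # Least-squares slope of test output against reference output gives the
        # sensitivity ratio; multiply by the known reference sensitivity.
        ratio = np.dot(ref_out, test_out) / np.dot(ref_out, ref_out)
        print(s_ref * ratio)                    # recovered sensitivity, ~1.5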

  6. Using confidence intervals to evaluate the focus alignment of spectrograph detector arrays.

    PubMed

    Sawyer, Travis W; Hawkins, Kyle S; Damento, Michael

    2017-06-20

    High-resolution spectrographs extract detailed spectral information of a sample and are frequently used in astronomy, laser-induced breakdown spectroscopy, and Raman spectroscopy. These instruments employ dispersive elements such as prisms and diffraction gratings to spatially separate different wavelengths of light, which are then detected by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) detector array. Precise alignment along the optical axis (focus position) of the detector array is critical to maximize the instrumental resolution; however, traditional approaches of scanning the detector through focus lack a quantitative measure of precision, limiting the repeatability and relying on one's experience. Here we propose a method to evaluate the focus alignment of spectrograph detector arrays by establishing confidence intervals to measure the alignment precision. We show that propagation of uncertainty can be used to estimate the variance in an alignment, thus providing a quantitative and repeatable means to evaluate the precision and confidence of an alignment. We test the approach by aligning the detector array of a prototype miniature echelle spectrograph. The results indicate that the procedure effectively quantifies alignment precision, enabling one to objectively determine when an alignment has reached an acceptable level. This quantitative approach also provides a foundation for further optimization, including automated alignment. Furthermore, the procedure introduced here can be extended to other alignment techniques that rely on numerically fitting data to a model, providing a general framework for evaluating the precision of alignment methods.
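
    A minimal sketch of the general idea: fit a model to a focus metric measured at several detector positions, then propagate the fit covariance to a confidence interval on the optimum. The parabolic model and the data are invented; the paper's own metric and model may differ:

        import numpy as np

        x = np.linspace(-0.5, 0.5, 11)          # detector shifts along the axis, mm
        rng = np.random.default_rng(5)
        y = 1.0 + 4.0 * (x - 0.07) ** 2 + rng.normal(0, 0.05, x.size)  # spot size

        (a, b, c), cov = np.polyfit(x, y, 2, cov=True)
        x_best = -b / (2 * a)                   # vertex of the fitted parabola
        # First-order propagation: dx*/da = b/(2a^2), dx*/db = -1/(2a), dx*/dc = 0.
        grad = np.array([b / (2 * a ** 2), -1 / (2 * a), 0.0])
        sigma = np.sqrt(grad @ cov @ grad)
        print(f"best focus {x_best:.3f} mm, 95% CI +/- {1.96 * sigma:.3f} mm")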

  7. Trajectory of asteroid 2017 SB20 within the CRTBP

    NASA Astrophysics Data System (ADS)

    Tiwary, Rishikesh Dutta; Kushvah, Badam Singh; Ishwar, Bhola

    2018-06-01

    Regular monitoring of asteroid trajectories into the future is a necessity, because the number of known, potentially hazardous near-Earth asteroids is increasing. The analysis is performed to determine whether such bodies pose a future threat to the Earth. Recently, a new near-Earth asteroid (2017 SB20) was observed to cross the Earth's orbit. In view of this, we obtain the trajectory of the asteroid in the circular restricted three-body problem with radiation pressure and oblateness. We examine the nature of the asteroid's orbit with Lyapunov Characteristic Exponents (LCEs) over finite intervals of time. The LCE of the system confirms that the motion of the asteroid is chaotic. Under the effects of radiation pressure and oblateness, the arc length of the trajectory varies in both planes; the oblateness factor is found to be more perturbative than radiation pressure. To assess the precision of the results obtained from numerical integration, we show the error propagation; numerical stability around the singularity is assured by applying regularized equations of motion for a precise long-term study.
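
    The LCE diagnostic can be illustrated with the standard two-trajectory method on a toy chaotic map (not the paper's restricted three-body integration):

        import numpy as np

        def max_lyapunov(step, x0, d0=1e-8, n=20000, renorm=5):
            # Two-trajectory method: evolve a reference and a perturbed state,
            # rescaling their separation back to d0 at fixed intervals and
            # accumulating the logarithmic growth.
            x = np.array(x0, float)
            y = x.copy()
            y[0] += d0
            total = 0.0
            for i in range(1, n + 1):
                x, y = step(x), step(y)
                if i % renorm == 0:
                    d = np.linalg.norm(y - x)
                    total += np.log(d / d0)
                    y = x + (y - x) * (d0 / d)
            return total / n                 # mean exponent per iteration

        # Demo on the chaotic logistic map x -> 4x(1-x); the exact LCE is ln 2.
        print(max_lyapunov(lambda s: 4 * s * (1 - s), [0.3]))  # ~0.693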

  8. Airborne Evaluation and Demonstration of a Time-Based Airborne Inter-Arrival Spacing Tool

    NASA Technical Reports Server (NTRS)

    Lohr, Gary W.; Oseguera-Lohr, Rosa M.; Abbott, Terence S.; Capron, William R.; Howell, Charles T.

    2005-01-01

    An airborne tool has been developed that allows an aircraft to obtain a precise inter-arrival time-based spacing interval from the preceding aircraft. The Advanced Terminal Area Approach Spacing (ATAAS) tool uses Automatic Dependent Surveillance-Broadcast (ADS-B) data to compute speed commands for the ATAAS-equipped aircraft to obtain this inter-arrival spacing behind another aircraft. The tool was evaluated in an operational environment at the Chicago O'Hare International Airport and in the surrounding terminal area with three participating aircraft flying fixed route area navigation (RNAV) paths and vector scenarios. Both manual and autothrottle speed management were included in the scenarios to demonstrate the ability to use ATAAS with either method of speed management. The results on the overall delivery precision of the tool, based on a target spacing of 90 seconds, were a mean of 90.8 seconds with a standard deviation of 7.7 seconds. The results for the RNAV and vector cases were, respectively, M=89.3, SD=4.9 and M=91.7, SD=9.0.

  9. Pharmacokinetics of 13-cis-retinoic acid in patients with advanced cancer.

    PubMed

    Goodman, G E; Einspahr, J G; Alberts, D S; Davis, T P; Leigh, S A; Chen, H S; Meyskens, F L

    1982-05-01

    13-cis-Retinoic acid (13-CRA) is a synthetic analog of vitamin A effective in reversing preneoplastic lesions in both humans and animals. To study its physiochemical properties and disposition kinetics, we developed a rapid, sensitive, and precise high-performance liquid chromatography assay for 13-CRA in biological samples. This assay system resulted in a clear separation of 13-CRA from all-trans-retinoic acid and retinol and had a detection limit of 20 ng/ml plasma. Recovery was 89 +/- 6% (S.D.) at equivalent physiological concentrations, with a precision of 8%. To study the disposition kinetics in humans, 13 patients received a p.o. bolus of 13-CRA and had blood samples collected at timed intervals. For the 10 patients studied on the first day of 13-CRA administration, the mean time to peak plasma concentration was 222 +/- 102 min. Interpatient peak 13-CRA plasma concentrations were found to be variable, suggesting irregular gastrointestinal absorption. The beta-phase t1/2 was approximately 25 hr. The prolonged terminal-phase plasma half-life may represent biliary excretion and enterohepatic circulation.
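
    A terminal-phase half-life like the one quoted is conventionally obtained from a log-linear fit to the elimination-phase concentrations; a minimal sketch with synthetic data (not the study's measurements):

        import numpy as np

        t_hr = np.array([24, 36, 48, 72, 96], float)   # sampling times, hours
        conc = 400 * np.exp(-0.0277 * t_hr)            # synthetic beta phase, ng/ml
        k = -np.polyfit(t_hr, np.log(conc), 1)[0]      # elimination rate constant
        print(np.log(2) / k)                           # t1/2 = ln(2)/k, ~25 hr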

  10. Fast-Time Evaluations of Airborne Merging and Spacing in Terminal Arrival Operations

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Karthik; Barmore, Bryan; Bussink, Frank; Weitz, Lesley; Dahlene, Laura

    2005-01-01

    NASA researchers are developing new airborne technologies and procedures to increase runway throughput at capacity-constrained airports by improving the precision of inter-arrival spacing at the runway threshold. In this new operational concept, pilots of equipped aircraft are cleared to adjust aircraft speed to achieve a designated spacing interval at the runway threshold, relative to a designated lead aircraft. A new airborne toolset, prototypes of which are being developed at the NASA Langley Research Center, assists pilots in achieving this objective. The current prototype allows precision spacing operations to commence even when the aircraft and its lead are not yet in-trail, but are on merging arrival routes to the runway. A series of fast-time evaluations of the new toolset were conducted at the Langley Research Center during the summer of 2004. The study assessed toolset performance in a mixed fleet of aircraft on three merging arrival streams under a range of operating conditions. The results of the study indicate that the prototype possesses a high degree of robustness to moderate variations in operating conditions.

  11. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
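
    Of the three implementations named, weighting is the easiest to sketch; a toy inverse-probability-weighting example with synthetic data (the Bayesian layer, i.e. priors on the propensity and outcome equations, is omitted):

        import numpy as np

        rng = np.random.default_rng(6)
        x = rng.normal(0, 1, 5000)                   # a confounder
        p = 1 / (1 + np.exp(-x))                     # true propensity score
        z = rng.binomial(1, p)                       # treatment assignment
        y = 2.0 * z + x + rng.normal(0, 1, 5000)     # outcome; true effect = 2

        # The true score is used here for brevity; in practice p is estimated
        # (e.g. by logistic regression, with priors in the Bayesian version).
        w = z / p + (1 - z) / (1 - p)
        effect = (np.average(y[z == 1], weights=w[z == 1])
                  - np.average(y[z == 0], weights=w[z == 0]))
        print(effect)                                # ~2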

  12. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
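
    A toy version of one of the plans evaluated above (simple random sampling paired with a ratio estimator using sampling-unit area as the auxiliary variable; the clustered counts are synthetic):

        import numpy as np

        rng = np.random.default_rng(7)
        n_units = 120
        area = rng.uniform(5, 15, n_units)                  # km^2 per sampling unit
        # Clumped counts: a lognormal factor mimics the aggregated distribution.
        counts = rng.poisson(0.4 * area * rng.lognormal(0, 1.0, n_units))

        sample = rng.choice(n_units, size=int(0.33 * n_units), replace=False)
        ratio = counts[sample].sum() / area[sample].sum()   # animals per km^2
        est_total = ratio * area.sum()
        print(est_total, counts.sum())                      # estimate vs. truth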

  13. The Quality of Reporting of Measures of Precision in Animal Experiments in Implant Dentistry: A Methodological Study.

    PubMed

    Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio

    2016-01-01

    Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (2%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.

  14. Time discrimination deficits in schizophrenia patients with first-rank (passivity) symptoms.

    PubMed

    Waters, Flavie; Jablensky, Assen

    2009-05-15

    Schizophrenia patients with first-rank (passivity) symptoms (FRS) report a loss of clear boundaries between the self and others and that their thoughts and actions are controlled by external forces. One of the more widely accepted explanatory models of FRS suggests a dysfunction in the 'forward model' system, whose role consists in predicting the sensory consequences of actions [Frith, C., 2006. The neural basis of hallucinations and delusions. Comptes Rendus Biologies 328, 169-175.]. There has been recent interest in the importance of timing precision underlying both the functioning of the forward model, and in processes contributing to the mechanisms of self-recognition [Haggard, P., Martin, F., Taylor-Clarke, M., Jeannerod, M., Franck, N., 2003. Awareness of action in schizophrenia. Neuroreport 14, 1081-1085.]. In the current study, we examined whether schizophrenia patients with FRS have a time perception impairment, using an auditory discrimination task requiring judgments of temporal intervals. Thirty-five schizophrenia patients (15 with, and 20 without, FRS), and 16 non-clinical controls completed the task. The results showed that patients with FRS experienced time differently by underestimating the duration of time intervals. Given the role of timing in shaping sensory awareness and in the formation of causal mental associations, a breakdown in timing mechanisms may affect the processes relating to the perceived control of actions and mental events, leading to disturbances of self-recognition in FRS.

  15. Fidelity of the ensemble code for visual motion in primate retina.

    PubMed

    Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J

    2005-07-01

    Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.

  16. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.

  17. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's estimated time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM-generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  18. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    PubMed

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    Effect sizes and confidence intervals are underreported in the current biomedical literature. The objective of this article is to discuss the recent paradigm shift encouraging the reporting of effect sizes and confidence intervals. Although P values help to inform us about whether an observed effect could be due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.
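
    The trio of quantities discussed (P value, effect size, confidence interval) can be computed side by side in a few lines; a sketch with invented two-group data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        a = rng.normal(5.0, 2.0, 40)                   # control group
        b = rng.normal(6.0, 2.0, 40)                   # treatment group

        t_stat, p = stats.ttest_ind(a, b)              # does an effect exist?
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        cohens_d = (b.mean() - a.mean()) / pooled_sd   # how big is it?
        diff = b.mean() - a.mean()
        se_diff = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        lo, hi = diff - 1.96 * se_diff, diff + 1.96 * se_diff  # how precise?
        print(f"P={p:.3f}, d={cohens_d:.2f}, 95% CI {lo:.2f} to {hi:.2f}")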

  19. Accurate determination of the fine-structure intervals in the 3P ground states of C-13 and C-12 by far-infrared laser magnetic resonance

    NASA Technical Reports Server (NTRS)

    Cooksy, A. L.; Saykally, R. J.; Brown, J. M.; Evenson, K. M.

    1986-01-01

    Accurate values are presented for the fine-structure intervals in the 3P ground state of neutral atomic C-12 and C-13 as obtained from laser magnetic resonance spectroscopy. The rigorous analysis of C-13 hyperfine structure, the measurement of resonant fields for C-12 transitions at several additional far-infrared laser frequencies, and the increased precision of the C-12 measurements, permit significant improvement in the evaluation of these energies relative to earlier work. These results will expedite the direct and precise measurement of these transitions in interstellar sources and should assist in the determination of the interstellar C-12/C-13 abundance ratio.

  20. Measurement of baseline and orientation between distributed aerospace platforms.

    PubMed

    Wang, Wen-Qin

    2013-01-01

    Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement for highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting platform and the receiving platform and the orientation of the platforms towards each other must be measured in real time during data recording. In this paper, we propose an improved pulsed duplex microwave ranging approach, which allows determining the spatial baseline and orientation between distributed aerospace platforms by the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors caused by the separate oscillators used on the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.

  1. Dual-comb spectroscopy of molecular electronic transitions in condensed phases

    NASA Astrophysics Data System (ADS)

    Cho, Byungmoon; Yoon, Tai Hyun; Cho, Minhaeng

    2018-03-01

    Dual-comb spectroscopy (DCS) utilizes two phase-locked optical frequency combs to allow scanless acquisition of spectra using only a single point detector. Although recent DCS measurements demonstrate rapid acquisition of absolutely calibrated spectral lines with unprecedented precision and accuracy, complex phase-locking schemes and multiple coherent averaging present significant challenges for the widespread adoption of DCS. Here, we demonstrate Global Positioning System (GPS)-disciplined DCS of a molecular electronic transition in solution at around 800 nm, where the absorption spectrum is recovered from a single time-domain interferogram. We anticipate that this simplified dual-comb technique, with absolute time-interval measurement and ultrabroad bandwidth, will allow DCS to be adopted for investigating molecular dynamics through its implementation in time-resolved nonlinear spectroscopic studies and coherent multidimensional spectroscopy of coupled chromophore systems.

  2. Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination

    NASA Astrophysics Data System (ADS)

    Zhong, Xin; Wang, Xinwei; Zhou, Yan

    2018-01-01

    A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with a Video Spectral Comparator (VSC), which can examine fluorescence intensity images only, TFLI can detect alterations to questioned documents such as falsification. The TFLI system enhances weak signals by an accumulation method. Two fluorescence intensity images separated by a gate delay time tg are acquired by an ICCD and fitted into a fluorescence lifetime image. The lifetimes of the fluorescent substances are represented by different colors, making it easy to detect the substances used and the sequence of handwriting. This proves that TFLI is a powerful tool for forensic document examination. Furthermore, the advantages of the TFLI system are ns-scaled precision and powerful capture capability.
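
    For a single-exponential decay, the two-gate scheme reduces to the standard rapid lifetime determination formula tau = tg / ln(I1/I2), where I1 and I2 are the gated intensities and tg the delay between gates. The sketch below is a minimal per-pixel implementation under that assumption (hypothetical image arrays, not the authors' code).

      import numpy as np

      def two_gate_lifetime(I1, I2, t_gate):
          """Per-pixel rapid lifetime determination for a single-exponential
          decay: tau = t_gate / ln(I1 / I2)."""
          I1, I2 = np.asarray(I1, float), np.asarray(I2, float)
          tau = np.full(I1.shape, np.nan)
          valid = (I2 > 0) & (I1 > I2)      # avoid log of non-positive ratios
          tau[valid] = t_gate / np.log(I1[valid] / I2[valid])
          return tau

      # Hypothetical 2x2 gated intensity images, gates opened 5 ns apart
      I1 = np.array([[100.0, 80.0], [120.0, 90.0]])
      I2 = np.array([[36.8, 40.0], [60.0, 30.0]])
      print(two_gate_lifetime(I1, I2, t_gate=5e-9))   # lifetimes in seconds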

  3. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated that both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules but differed in sensitivity to response-independent reinforcement. The precision of the interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and both showed similar sensitivity to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  4. An absolute calibration system for millimeter-accuracy APOLLO measurements

    NASA Astrophysics Data System (ADS)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

    Lunar laser ranging provides a number of leading experimental tests of gravitation, important in our quest to unify general relativity and the standard model of physics. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has for years achieved median range precision at the ∼2 mm level. Yet residuals in model-measurement comparisons are an order of magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or whether the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short (< 10 ps) pulses that are locked to a cesium clock. In essence, the ACS delivers photons to the APOLLO detector at exquisitely well-defined time intervals as a 'truth' input against which APOLLO's timing performance may be judged and corrected. Preliminary analysis indicates no inaccuracies in APOLLO data beyond the ∼3 mm level, suggesting that historical APOLLO data are of high quality and motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.
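
    The link between the quoted timing and range figures is the two-way light path: range = c·Δt/2, so 2 mm of one-way range corresponds to roughly 13 ps of round-trip time. A back-of-the-envelope check in Python (assuming vacuum light speed and no atmospheric correction):

      C = 299_792_458.0   # speed of light in vacuum, m/s

      def range_precision(dt_roundtrip_s):
          """One-way range precision implied by a round-trip timing precision."""
          return C * dt_roundtrip_s / 2.0

      for dt_ps in (1, 13, 100):
          mm = range_precision(dt_ps * 1e-12) * 1e3
          print(f"{dt_ps:>4} ps of timing -> {mm:.2f} mm of range")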

  5. Optically Stimulated Luminescence Analysis Method for High Dose Rate Using an Optical Fiber Type Dosimeter

    NASA Astrophysics Data System (ADS)

    Ueno, Katsunori; Tominaga, Kazuo; Tadokoro, Takahiro; Ishizawa, Koji; Takahashi, Yoshinori; Kuwabara, Hitoshi

    2016-08-01

    The investigation of air dose rates at locations in the Fukushima Dai-ichi Nuclear Power Station is necessary for safe removal of the molten nuclear fuel. The target performance for the investigation is to analyze dose rates in the range of 10⁻³ Gy/h to 10² Gy/h with a measurement precision of ±4.0% full scale (F.S.) at a measurement interval of 60 s. In order to achieve this target, the authors proposed an optically stimulated luminescence (OSL) analysis method using prompt OSL for a wide dynamic range of dose rates; the OSL is generated using BaFBr:Eu, which has a fast decay time constant. The prompt-OSL luminescence intensity was formulated in terms of the electron concentration of the trapping state during gamma-ray and stimulation-light irradiation. A prototype OSL monitor using BaFBr:Eu was manufactured for investigation of prompt OSL and evaluation of the measurement precision. The time dependence of the prompt-OSL luminescence intensity was analyzed by irradiating the OSL sensor in a ⁶⁰Co irradiation facility. The measured dose rates were obtained in a prompt mode and an accumulating mode with a precision of ±3.3% F.S. over the dose rate range of 9.5 × 10⁻⁴ Gy/h to 1.2 × 10² Gy/h.

  6. The effect of methylphenidate and rearing environment on behavioral inhibition in adult male rats.

    PubMed

    Hill, Jade C; Covarrubias, Pablo; Terry, Joel; Sanabria, Federico

    2012-01-01

    The ability to withhold reinforced responses (behavioral inhibition) is impaired in various psychiatric conditions, including Attention Deficit Hyperactivity Disorder (ADHD). Methodological and analytical limitations have constrained our understanding of the effects of pharmacological and environmental factors on behavioral inhibition. Our objective was to determine the effects of acute methylphenidate (MPH) administration and rearing conditions (isolated vs. pair-housed) on behavioral inhibition in adult rats. Inhibitory capacity was evaluated using two response-withholding tasks: differential reinforcement of low rates (DRL) and fixed minimum interval (FMI) schedules of reinforcement. Both tasks made sugar pellets contingent on intervals longer than 6 s between consecutive responses. Inferences on inhibitory and timing capacities were drawn from the distribution of withholding times (interresponse times, or IRTs). MPH increased the number of intervals produced in both tasks. Estimates of behavioral inhibition increased with MPH dose in FMI and with social isolation in DRL. Nonetheless, burst responding in DRL and the divergence of DRL data relative to past studies, among other limitations, undermined the reliability of DRL data as a basis for inferences on behavioral inhibition. Inhibitory capacity was more precisely estimated from FMI than from DRL performance. Based on FMI data, MPH, but not a socially enriched environment, appears to improve inhibitory capacity. The highest dose of MPH tested, 8 mg/kg, did not reduce inhibitory capacity but reduced responsiveness to waiting contingencies. These results support the use of the FMI schedule, complemented with appropriate analytic techniques, for the assessment of behavioral inhibition in animal models.

  7. High-Precision Pulse Generator

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2011-01-01

    A document discusses a pulse generator with subnanosecond resolution implemented with a low-cost field-programmable gate array (FPGA) at low power levels. The method used exploits the fast carry chains of certain FPGAs. Prototypes have been built and tested in both Actel AX and Xilinx Virtex 4 technologies. Because the delays through the fast carry chains vary as a result of manufacturing variances as well as environmental conditions (voltage, aging, temperature, and radiation), in-flight calibration or control can be performed by using a similar, related technique as a time-interval measurement circuit to measure a period of a stable oscillator.
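
    The calibration idea, counting how many carry-chain taps span one period of a stable reference oscillator, can be modeled numerically. The sketch below is a hypothetical model (assumed 10 MHz reference, invented tap delays), not flight code:

      # Hypothetical model of carry-chain calibration against a stable oscillator.
      REF_PERIOD_S = 1e-7     # one period of an assumed 10 MHz reference

      def calibrate_tap_delay(taps_in_one_period):
          """Per-tap delay inferred from the tap count spanning one period."""
          return REF_PERIOD_S / taps_in_one_period

      def pulse_width(n_taps, tap_delay_s):
          """Convert a programmed tap count into a pulse width."""
          return n_taps * tap_delay_s

      # Suppose environmental drift changed the true tap delay to 26 ps:
      taps_counted = round(REF_PERIOD_S / 26e-12)   # what the counter reads
      tap_delay = calibrate_tap_delay(taps_counted)
      print(f"calibrated tap delay: {tap_delay * 1e12:.2f} ps")
      print(f"200-tap pulse width:  {pulse_width(200, tap_delay) * 1e9:.3f} ns")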

  8. A new technique to determine the correlation between the QT interval and heart-rate for control and SIDS babies

    NASA Technical Reports Server (NTRS)

    Sadeh, D.; Shannon, D. C.; Abboud, S.; Akselrod, S.; Cohen, R. J.

    1987-01-01

    The ability of the autonomic nervous system to alter the QT interval in response to heart rate changes is essential to cardiovascular control. An accurate way to determine the relation between QT intervals and their corresponding RR intervals is described. A computer algorithm measures the RR intervals by digitally filtering the ECG and cross-correlating the QRS sections of consecutive waveforms. The QT interval is calculated by choosing a section of the ECG that includes the T wave and cross-correlating it with all the consecutive T waves. At least 4000 pairs of QT-RR intervals are computed for each subject, and a best-fit correlation function determines the relation between the QT and RR intervals. This technique makes it possible to establish a precise correlation between RR and QT in order to distinguish between control and SIDS babies.
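
    As a rough illustration of the RR step, the following hedged numpy sketch cross-correlates a QRS-like template with a synthetic signal and takes the spacing of the correlation peaks as RR intervals; the real algorithm also applies digital filtering and per-beat template matching.

      import numpy as np

      FS = 500.0   # assumed sampling rate, Hz

      def rr_intervals_by_xcorr(signal, template):
          """Find beats as peaks of the cross-correlation with a QRS template,
          then return the intervals between consecutive beats in seconds."""
          corr = np.correlate(signal - signal.mean(),
                              template - template.mean(), mode="valid")
          above = corr > 0.6 * corr.max()
          starts = np.flatnonzero(above[1:] & ~above[:-1])   # upward crossings
          peaks = np.array([s + corr[s:s + int(0.2 * FS)].argmax()
                            for s in starts])
          return np.diff(peaks) / FS

      # Synthetic signal: a smooth "QRS" bump every 0.8 s plus noise
      t = np.arange(0, 10, 1 / FS)
      sig = np.zeros_like(t)
      sig[np.arange(12) * int(0.8 * FS)] = 1.0
      sig = np.convolve(sig, np.hanning(25), mode="same")
      sig += 0.02 * np.random.default_rng(1).normal(size=t.size)
      print(rr_intervals_by_xcorr(sig, np.hanning(25)))   # ~0.8 s intervals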

  9. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date for a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa; boundaries between these units therefore correspond to dramatic faunal and/or floral turnovers and are primarily defined using first or last occurrences of index species or, ideally, by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China.

    a) Dense sampling of ashes across the critical time interval, and a sufficiently large number of analysed zircons per ash sample, can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even when chemical abrasion is used) with ii) age dispersion from the prolonged residence of earlier-crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons may be both apparently younger and older than the depositional age of the ash, masking the true age of deposition. Trace element ratios such as Th/U and Yb/Gd, as well as Hf isotope analysis of dated zircon, can be used to decipher the temporal evolution of the magmatic system before the eruption and deposition of the studied ashes, and to resolve the complex system behaviour of the zircons.

    b) Changes in the source of the magma may happen between the deposition of two stratigraphically consecutive ash beds. They result in the modification of the trace element signature not only of zircon but also of apatite (Ca5(PO4)3(F,Cl,OH)). Trace element characteristics in apatite (e.g. Mg, Mn, Fe, F, Cl, Ce, and Y) are a reliable tool for distinguishing chemically similar groups of apatite crystals and thus for unravelling the geochemical fingerprint of a single ash bed. By establishing this fingerprint, ash beds of geographically separated geological sections can be correlated even if they have not all been dated by U-Pb techniques.

    c) The ultimate goal of quantitative stratigraphy is to establish an age model that predicts the age of a synchronous time line, with an associated 95% confidence interval, for any such line within a stratigraphic sequence. We show how a Bayesian, non-parametric interpolation approach can be applied to very complex data sets and leads to a well-defined age solution, possibly identifying changes in sedimentation rate. The age of a geological time boundary bracketed by dated samples in such an age model can then be defined with an associated uncertainty.

  10. Mechanism-based pharmacokinetic-pharmacodynamic modeling of the antinociceptive effect of buprenorphine in healthy volunteers.

    PubMed

    Yassen, Ashraf; Olofsen, Erik; Romberg, Raymonda; Sarton, Elise; Danhof, Meindert; Dahan, Albert

    2006-06-01

    The objective of this investigation was to characterize the pharmacokinetic-pharmacodynamic relation of buprenorphine's antinociceptive effect in healthy volunteers. Data on the time course of the antinociceptive effect after intravenous administration of 0.05-0.6 mg/70 kg buprenorphine in healthy volunteers were analyzed in conjunction with plasma concentrations by nonlinear mixed-effects analysis. A three-compartment pharmacokinetic model best described the concentration time course. Four structurally different pharmacokinetic-pharmacodynamic models were evaluated for their appropriateness to describe the time course of buprenorphine's antinociceptive effect: (1) an Emax model with an effect compartment model, (2) a "power" model with an effect compartment model, (3) a receptor association-dissociation model with a linear transduction function, and (4) a combined biophase equilibration/receptor association-dissociation model with a linear transduction function. The last of these described the time course of effect best and was used to explain time dependencies in buprenorphine's pharmacodynamics. The model converged, yielding precise estimates of the parameters characterizing hysteresis and the relation between relative receptor occupancy and antinociceptive effect. The rate constant describing biophase equilibration (keo) was 0.00447 min⁻¹ (95% confidence interval, 0.00299-0.00595 min⁻¹). The receptor dissociation rate constant (koff) was 0.0785 min⁻¹ (95% confidence interval, 0.0352-0.122 min⁻¹), and kon was 0.0631 ml·ng⁻¹·min⁻¹ (95% confidence interval, 0.0390-0.0872 ml·ng⁻¹·min⁻¹). This is consistent with observations in rats, suggesting that the rate-limiting step in the onset and offset of the antinociceptive effect is biophase distribution rather than slow receptor association-dissociation. In the dose range studied, no saturation of receptor occupancy occurred, explaining the lack of a ceiling effect for antinociception.
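
    The biophase/receptor portion of the final model can be reproduced schematically from the reported constants. The sketch below is illustrative, not the authors' code: plasma concentration is a hypothetical mono-exponential stand-in for the three-compartment model, the biophase concentration Ce equilibrates at rate keo, and receptor occupancy rho follows d(rho)/dt = kon·Ce·(1 - rho) - koff·rho.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Rate constants reported in the abstract (per-minute units)
      K_EO = 0.00447    # biophase equilibration rate, 1/min
      K_OFF = 0.0785    # receptor dissociation rate, 1/min
      K_ON = 0.0631     # receptor association rate, ml/(ng*min)

      # Hypothetical mono-exponential plasma profile (ng/ml) standing in for
      # the three-compartment pharmacokinetic model
      cp = lambda t: 5.0 * np.exp(-0.01 * t)

      def rhs(t, y):
          ce, rho = y
          dce = K_EO * (cp(t) - ce)                      # biophase equilibration
          drho = K_ON * ce * (1.0 - rho) - K_OFF * rho   # receptor kinetics
          return [dce, drho]

      sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0], dense_output=True)
      for t in (60, 300, 600):
          ce, rho = sol.sol(t)
          print(f"t = {t:3d} min: Ce = {ce:.3f} ng/ml, occupancy = {rho:.3f}")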

  11. Boundary implications for frequency response of interval FIR and IIR filters

    NASA Technical Reports Server (NTRS)

    Bose, N. K.; Kim, K. D.

    1991-01-01

    It is shown that vertex implication results in parameter space apply to interval trigonometric polynomials. Subsequently, it is shown that the frequency responses of both interval FIR and IIR filters are bounded by the frequency responses of certain extreme filters. The results apply directly in the evaluation of properties of designed filters, especially because it is more realistic to bound the filter coefficients from above and below than to assume they are known with infinite precision, given finite-arithmetic effects. Illustrative examples are provided to show how the extreme filters may easily be derived in any specific interval FIR or IIR filter design problem.
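
    For the FIR case, the vertex idea is easy to demonstrate numerically: with each tap confined to an interval, the magnitude response of a filter with taps inside the intervals lies within the envelope formed by the extreme (vertex) filters. The paper derives which extreme filters suffice in general; the sketch below simply sweeps all vertices of a short symmetric example.

      import itertools
      import numpy as np
      from scipy.signal import freqz

      # Interval FIR filter: each tap h[k] lies in [lo[k], hi[k]]
      lo = np.array([0.18, 0.45, 0.18])
      hi = np.array([0.22, 0.55, 0.22])

      w = np.linspace(0, np.pi, 256)
      mags = []
      for vertex in itertools.product(*zip(lo, hi)):   # all 2**3 extreme filters
          _, h = freqz(vertex, worN=w)
          mags.append(np.abs(h))
      mags = np.array(mags)
      lower, upper = mags.min(axis=0), mags.max(axis=0)

      # A filter with taps strictly inside the intervals stays in the envelope
      _, h_mid = freqz((lo + hi) / 2, worN=w)
      inside = np.all((np.abs(h_mid) >= lower - 1e-12) &
                      (np.abs(h_mid) <= upper + 1e-12))
      print("midpoint filter lies within the vertex envelope:", inside)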

  12. Apparatus for Ultrahigh Precision Measurement of the 1³S₁ - 2³S₁ Interval in Positronium

    NASA Astrophysics Data System (ADS)

    Goldman, Harris J.

    Positronium (Ps) is a purely leptonic atom comprising an electron and its antimatter equivalent, the positron, in a quasi-stable bound state. Due to its fundamental nature, Ps is an ideal test bed for bound-state QED. Recent high-precision spectroscopic experiments reveal a discrepancy in the measurement of the proton charge radius rp, known as the Proton Charge Radius Puzzle. Spectroscopic measurements carried out on hydrogen and on muonic hydrogen, the bound state of a muon and a proton, differ from other scattering and spectroscopic experiments by 3.3σ. The measurement of rp comes from fitting the measurement of either the 1S-2S interval of hydrogen or the Lamb shift in muonic hydrogen to theory. Neither of these atoms is governed by quantum electrodynamics (QED) alone, as nuclear structure has a role to play. The ratio of the mass of the orbiting particle m to that of the nucleus M appears as a coefficient in a number of QED corrections to the energy levels of hydrogen (m/M = 1/1836) and muonic hydrogen (m/M = 207/1836), which reveals the importance of performing a complementary spectroscopic measurement in Ps, where m/M = 1. The last measurement of the 1S-2S interval in Ps was carried out by Fee, Chu, Mills, et al. in 1993 to a precision of 3.2 ppb. The state-of-the-art measurement on hydrogen now has a fractional uncertainty of 4.2 × 10⁻¹⁵. While the simplicity of Ps makes it appealing for testing bound-state QED, its particle-antiparticle nature makes it difficult to work with: the ground-state lifetime of the triplet state is 142 ns, and whereas the 2S lifetime in Ps is 1.14 μs, the 2S lifetime in hydrogen is 10⁵ times longer. We have designed and constructed an apparatus and experiment to measure the 1S-2S interval in Ps at precision levels that we expect to immediately improve upon the previous measurement by a factor of 2 and pave the way for ultimate comparison to the hydrogenic measurements. The apparatus also opens the door to a new frontier in high-precision spectroscopy: the sub-μs regime.

  13. Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.

    PubMed

    Prochazka, Ivan; Panek, Petr

    2009-07-01

    A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present experiments and an analysis of the nonlinear effects in such a time interpolator. The analysis shows that nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on the time of the measured events can be expressed as a sparse Fourier series; thus it usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.

  14. Continuous Blood Pressure Monitoring in Daily Life

    NASA Astrophysics Data System (ADS)

    Lopez, Guillaume; Shuzo, Masaki; Ushida, Hiroyuki; Hidaka, Keita; Yanagimoto, Shintaro; Imai, Yasushi; Kosaka, Akio; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of blood pressure in daily life could improve early detection of cardiovascular disorders, as well as promoting healthcare. Conventional ambulatory blood pressure monitoring (ABPM) equipment can measure blood pressure at regular intervals for 24 hours, but is limited by long measuring times, a low sampling rate, and a constrained measuring posture. In this paper, we demonstrate a new method for continuous real-time measurement of blood pressure during daily activities. Our method is based on blood pressure estimation from pulse wave velocity (PWV) calculation, whose formula we improved to take into account changes in the inner diameter of blood vessels. Blood pressure estimates obtained using the new method showed greater precision during exercise, and better accuracy, than the conventional PWV method.

  15. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well-established concept of Taylor Models is introduced, which offers highly accurate C⁰ enclosures of functional dependencies by combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations that yield exact values for round-off errors. An experimental High Precision Interval data type is developed and implemented, algorithms for the verified computation of intrinsic functions based on this datatype are developed and described in detail, and the application of these operations in the implementation of High Precision Taylor Models is discussed.

    An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods, namely double precision Taylor Models, High Precision Intervals, and High Precision Taylor Models, and the results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented.

    Previous work by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for the automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is then introduced. Using the methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. By integrating the resulting enclosures with the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these are the largest verified enclosures of manifolds in the Lorenz system in existence.
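
    The flavor of a high-precision interval datatype can be illustrated with mpmath's interval context in Python; this demonstrates verified interval arithmetic in general and is not the COSY INFINITY implementation described above.

      from mpmath import iv

      iv.dps = 50                  # work with ~50 significant digits

      # 0.1 is not exactly representable in binary; the interval encloses it
      x = iv.mpf('0.1')
      print(x.delta)               # width of the enclosure (tiny but nonzero)

      # Interval arithmetic propagates rigorous bounds; the result below
      # encloses the true range [-1, 0] (conservatively, due to dependency)
      y = iv.mpf([1, 2])           # the interval [1, 2]
      print(y**2 - 2*y)

      # Verified enclosure of an intrinsic function over an interval
      print(iv.exp(iv.mpf([0, 1])))   # encloses [1, e]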

  16. High-resolution ⁴⁰Ar/³⁹Ar chronology of Oligocene volcanic rocks, San Juan Mountains, Colorado

    USGS Publications Warehouse

    Lanphere, M.A.

    1988-01-01

    The central San Juan caldera complex consists of seven calderas from which eight major ash-flow tuffs were erupted during a period of intense volcanic activity that lasted for approximately 2 m.y., about 26-28 Ma. The analytical precision of conventional K-Ar dating in this time interval is not sufficient to unambiguously resolve this complex history. However, ⁴⁰Ar/³⁹Ar incremental-heating experiments provide data for a high-resolution chronology that is consistent with stratigraphic relations. Weighted-mean age-spectrum plateau ages of biotite and sanidine are the most precise, with standard deviations ranging from 0.08 to 0.21 m.y. The pooled estimate of standard deviation for the plateau ages of 12 minerals is about 0.5 percent, or about 125,000 to 135,000 years. Age measurements on coexisting minerals from one tuff, and on two samples of each of two other tuffs, indicate that a precision in the age of a tuff of better than 100,000 years can be achieved at 27 Ma. New data indicate that the San Luis caldera, not the Creede caldera as previously thought, is the youngest caldera in the central complex.

  17. Preliminary results for a higher-precision measurement of the helium n=2 triplet P fine structure

    NASA Astrophysics Data System (ADS)

    Kato, K.; Skinner, T. D. G.; George, M. C.; Fitzakerley, D. W.; Vutha, A. C.; Storry, C. H.; Bezginov, N.; Valdez, T.; Hessels, E. A.

    2017-04-01

    Preliminary results for a higher-precision measurement of the n=2 triplet P, J=1 to J=2 fine-structure interval in atomic helium are presented. A beam of metastable helium atoms is created in a liquid-nitrogen-cooled dc-discharge source and is intensified using a 2D MOT. These atoms are excited to the 2 triplet P state and undergo a frequency-offset separated-oscillatory-field (FOSOF) microwave experiment. Only atoms that undergo a microwave transition in the time-separated microwave fields are laser-excited to a Rydberg state and then Stark-ionized and counted. Our new experimental design has eliminated the major systematic effects of previous experiments and has led to a substantial improvement in the signal-to-noise ratio of the collected data. Our final improved measurement (with an expected uncertainty of less than 100 Hz) will allow for a test of two-electron QED theory in the helium n=2 triplet P system and will be an important step towards obtaining a precise determination of the fine-structure constant. This research is supported by NSERC, CRC, CFI and NIST.

  18. The precise temporal calibration of dinosaur origins.

    PubMed

    Marsicano, Claudia A; Irmis, Randall B; Mancuso, Adriana C; Mundil, Roland; Chemale, Farid

    2016-01-19

    Dinosaurs have been major components of ecosystems for over 200 million years. Although different macroevolutionary scenarios exist to explain the Triassic origin and subsequent rise to dominance of dinosaurs and their closest relatives (dinosauromorphs), all lack critical support from a precise, biostratigraphically independent temporal framework. The absence of robust geochronologic age control for comparing alternative scenarios makes it impossible to determine whether observed faunal differences vary across time, space, or a combination of both. To better constrain the origin of dinosaurs, we produced radioisotopic ages for the Argentinian Chañares Formation, which preserves a quintessential assemblage of dinosaurian precursors (early dinosauromorphs) just before the first dinosaurs. Our new high-precision chemical abrasion thermal ionization mass spectrometry (CA-TIMS) U-Pb zircon ages reveal that the assemblage is early Carnian (early Late Triassic), 5 to 10 Ma younger than previously thought. Combined with other geochronologic data from the same basin, we constrain the rate of dinosaur origins, demonstrating their relatively rapid origin in an interval of less than 5 Ma, thus halving the temporal gap between assemblages containing only dinosaur precursors and those with early dinosaurs. After their origin, dinosaurs only gradually came to dominate mid- to high-latitude terrestrial ecosystems millions of years later, closer to the Triassic-Jurassic boundary.

  19. Late Devonian conodonts and event stratigraphy in northwestern Algerian Sahara

    NASA Astrophysics Data System (ADS)

    Mahboubi, Abdessamed; Gatovsky, Yury

    2015-01-01

    Conodonts recovered from the Late Devonian South Marhouma section comprise 5 genera with 31 species (3 undetermined). The fauna establishes the presence of MN Zones 5, undifferentiated 6/7, and 8/10 for the Middle Frasnian and MN Zones 11, 12, and 13 for the Upper Frasnian, as well as the Early through Late triangularis Zones in the basal Famennian. The outcropping lithological succession consists mostly of nodular calcilutites alternating with numerous marly and shaly deposits, which, in the lower and upper parts, include several dysoxic dark shale intervals. Among these, the Upper Kellwasser horizon can be precisely dated, and as such the presence of the terminal Frasnian Kellwasser Event is recognized for the first time in Algeria. The Middlesex and Rhinestreet Events presumably occur among the dark shale horizons in the lower part of the section, but their assignment to a precise level has not yet been established. Though poor in conodont abundance, the South Marhouma section provides the first evidence of several Montagne Noire conodont zones within the so far largely unstudied Frasnian of the Ougarta Chain. As such, it is considered representative of the northwestern Algerian Saoura region.

  20. Evaluating abundance estimate precision and the assumptions of a count-based index for small mammals

    USGS Publications Warehouse

    Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.

    2009-01-01

    Conservation and management of small mammals requires reliable knowledge of population size. We investigated the precision of mark-recapture and removal abundance estimates generated from live-trapping and snap-trapping data collected at sites on Guam (n = 7), Rota (n = 4), Saipan (n = 5), and Tinian (n = 3), in the Mariana Islands. We also evaluated a common index, captures per unit effort (CPUE), as a predictor of abundance. In addition, we evaluated the cost and time associated with implementing live-trapping and snap-trapping and compared species-specific capture rates of selected live- and snap-traps. For all species, mark-recapture estimates were consistently more precise than removal estimates based on coefficients of variation and 95% confidence intervals. The predictive utility of CPUE was poor but improved with increasing sampling duration. Nonetheless, modeling of sampling data revealed that underlying assumptions critical to application of an index of abundance, such as constant capture probability across space, time, and individuals, were not met. Although snap-trapping was cheaper and faster than live-trapping, the time difference was negligible when site preparation time was considered. Rattus diardii spp. captures were greatest in Haguruma live-traps (Standard Trading Co., Honolulu, HI) and Victor snap-traps (Woodstream Corporation, Lititz, PA), whereas Suncus murinus and Mus musculus captures were greatest in Sherman live-traps (H. B. Sherman Traps, Inc., Tallahassee, FL) and Museum Special snap-traps (Woodstream Corporation). Although snap-trapping and CPUE may have utility after validation against more rigorous methods, validation should occur across the full range of study conditions. The resources required for this level of validation would likely be better allocated towards implementing rigorous and robust methods.

  1. Analysis of the low molecular weight fraction of serum by LC-dual ESI-FT-ICR mass spectrometry: precision of retention time, mass, and ion abundance.

    PubMed

    Johnson, Kenneth L; Mason, Christopher J; Muddiman, David C; Eckel, Jeanette E

    2004-09-01

    This study quantifies the experimental uncertainty in LC retention time, mass measurement precision, and ion abundance obtained from replicate nLC-dual ESI-FT-ICR analyses of the low molecular weight fraction of serum. We used ultrafiltration to separate the <10-kDa fraction of components from the high-abundance proteins in a pooled serum sample derived from ovarian cancer patients. The THRASH algorithm for isotope cluster detection was applied to five replicate nLC-dual ESI-FT-ICR chromatograms. A simple two-level grouping algorithm was applied to the more than 7000 isotope clusters found in each replicate and identified 497 molecular species that appeared in at least four of the replicates. In addition, a representative set of 231 isotope clusters, corresponding to 188 unique molecular species, was manually interpreted to verify the automated algorithm and to set its tolerances. For nLC retention time reproducibility, 95% of the 497 species had a 95% confidence interval of the mean of ±0.9 min or less without the use of chromatographic alignment procedures. Furthermore, 95% of the 497 species had a mass measurement precision of ≤3.2 ppm for internally calibrated spectra and ≤6.3 ppm for externally calibrated spectra. Moreover, 95% of replicate ion abundance measurements, covering an ion abundance range of approximately 3 orders of magnitude, had a coefficient of variation of less than 62% without the use of any normalization functions. The variability of ion abundance was independent of LC retention time, mass, and ion abundance quartile. These measures of analytical reproducibility establish a statistical rationale for differentiating healthy and diseased patient populations for the elucidation of biomarkers in the low molecular weight fraction of serum. Copyright 2004 American Chemical Society.

  2. Calculation of Flight Deck Interval Management Assigned Spacing Goals Subject to Multiple Scheduling Constraints

    NASA Technical Reports Server (NTRS)

    Robinson, John E.

    2014-01-01

    The Federal Aviation Administration's Next Generation Air Transportation System will combine advanced air traffic management technologies, performance-based procedures, and state-of-the-art avionics to maintain efficient operations throughout the entire arrival phase of flight. Flight deck Interval Management (FIM) operations are expected to use sophisticated airborne spacing capabilities to meet precise in-trail spacing from top-of-descent to touchdown. Recent human-in-the-loop simulations by the National Aeronautics and Space Administration have found that selection of the assigned spacing goal using the runway schedule can lead to premature interruptions of the FIM operation during periods of high traffic demand. This study compares three methods for calculating the assigned spacing goal for a FIM operation that is also subject to time-based metering constraints. The particular paradigms investigated include: one based upon the desired runway spacing interval, one based upon the desired meter fix spacing interval, and a composite method that combines both intervals. These three paradigms are evaluated for the primary arrival procedures to Phoenix Sky Harbor International Airport using the entire set of Rapid Update Cycle wind forecasts from 2011. For typical meter fix and runway spacing intervals, the runway- and meter fix-based paradigms exhibit moderate FIM interruption rates due to their inability to consider multiple metering constraints. The addition of larger separation buffers decreases the FIM interruption rate but also significantly reduces the achievable runway throughput. The composite paradigm causes no FIM interruptions, and maintains higher runway throughput more often than the other paradigms. A key implication of the results with respect to time-based metering is that FIM operations using a single assigned spacing goal will not allow reduction of the arrival schedule's excess spacing buffer. Alternative solutions for conducting the FIM operation in a manner more compatible with the arrival schedule are discussed in detail.
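
    As a loose illustration of why the composite paradigm avoids interruptions, one can compute a single assigned spacing goal that respects both schedule constraints by taking the more restrictive of the two implied spacings. The sketch below is a toy reading of that idea, with invented names and numbers, not NASA's scheduler logic.

      # Toy sketch of a composite assigned-spacing-goal calculation. All names
      # and numbers are hypothetical; the operational logic is far richer.

      def composite_spacing_goal(meter_fix_interval_s, runway_interval_s,
                                 descent_compression_s):
          """Assigned spacing goal (seconds at the final approach fix) that
          satisfies both the meter-fix and runway schedule constraints."""
          # Spacing needed at the runway to still honor the meter-fix interval,
          # given how much the in-trail gap compresses during the descent
          from_meter_fix = meter_fix_interval_s - descent_compression_s
          return max(from_meter_fix, runway_interval_s)

      # 120 s required at the meter fix, 90 s at the runway, 25 s compression
      print(composite_spacing_goal(120.0, 90.0, 25.0))   # -> 95.0 s governs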

  3. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
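
    Although the abstract is truncated, the A statistic itself is easy to state: A = P(X > Y) + 0.5·P(X = Y), which equals the area under the ROC curve for two independent groups. A hedged sketch with a percentile-bootstrap confidence interval (illustrative, not the authors' procedure):

      import numpy as np

      def prob_superiority(x, y):
          """A = P(X > Y) + 0.5 * P(X = Y) over all pairs (equals the AUC)."""
          diff = np.asarray(x, float)[:, None] - np.asarray(y, float)[None, :]
          return (diff > 0).mean() + 0.5 * (diff == 0).mean()

      def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
          rng = np.random.default_rng(seed)
          stats = [prob_superiority(rng.choice(x, len(x)),
                                    rng.choice(y, len(y)))
                   for _ in range(n_boot)]
          return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

      rng = np.random.default_rng(1)
      treated = rng.normal(0.8, 1.0, 30)   # hypothetical group scores
      control = rng.normal(0.0, 1.0, 30)
      print(f"A = {prob_superiority(treated, control):.3f}")
      print("95% CI:", np.round(bootstrap_ci(treated, control), 3))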

  4. Novel Screening Tool for Stroke Using Artificial Neural Network.

    PubMed

    Abedi, Vida; Goyal, Nitin; Tsivgoulis, Georgios; Hosseinichimeh, Niyousha; Hontecillas, Raquel; Bassaganya-Riera, Josep; Elijovich, Lucas; Metter, Jeffrey E; Alexandrov, Anne W; Liebeskind, David S; Alexandrov, Andrei V; Zand, Ramin

    2017-06-01

    The timely diagnosis of stroke at the initial examination is extremely important given the disease morbidity and the narrow time window for intervention. The goal of this study was to develop a supervised learning method to recognize acute cerebral ischemia (ACI) and differentiate it from stroke mimics in an emergency setting. Consecutive patients presenting to the emergency department with stroke-like symptoms, within 4.5 hours of symptom onset, at 2 tertiary care stroke centers were randomized for inclusion in the model. We developed an artificial neural network (ANN) model whose learning algorithm was based on backpropagation. To validate the model, we used a 10-fold cross-validation method. A total of 260 patients (equal numbers of stroke mimics and ACIs) were enrolled for the development and validation of our ANN model. Our analysis indicated that the average sensitivity and specificity of the ANN for the diagnosis of ACI, based on the 10-fold cross-validation analysis, were 80.0% (95% confidence interval, 71.8-86.3) and 86.2% (95% confidence interval, 78.7-91.4), respectively. The median precision of the ANN for the diagnosis of ACI was 92% (95% confidence interval, 88.7-95.3). Our results show that an ANN can be an effective tool for the recognition of ACI and its differentiation from stroke mimics at the initial examination. © 2017 American Heart Association, Inc.
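
    A schematic analogue of the evaluation design (a backpropagation-trained network assessed with 10-fold cross-validation on sensitivity and specificity) can be written with scikit-learn; the feature matrix below is random stand-in data, not the clinical variables used in the study.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import StratifiedKFold, cross_validate
      from sklearn.metrics import make_scorer, recall_score

      # Stand-in data: 260 patients, balanced classes (1 = ACI, 0 = mimic)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(260, 20))
      y = np.repeat([0, 1], 130)
      X[y == 1] += 0.6                     # give the classes some separation

      scoring = {
          "sensitivity": make_scorer(recall_score, pos_label=1),
          "specificity": make_scorer(recall_score, pos_label=0),
      }
      model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0)
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      scores = cross_validate(model, X, y, cv=cv, scoring=scoring)
      print(f"sensitivity: {scores['test_sensitivity'].mean():.3f}")
      print(f"specificity: {scores['test_specificity'].mean():.3f}")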

  5. A Variable Oscillator Underlies the Measurement of Time Intervals in the Rostral Medial Prefrontal Cortex during Classical Eyeblink Conditioning in Rabbits.

    PubMed

    Caro-Martín, C Rocío; Leal-Campanario, Rocío; Sánchez-Campusano, Raudel; Delgado-García, José M; Gruart, Agnès

    2015-11-04

    We were interested in determining whether rostral medial prefrontal cortex (rmPFC) neurons participate in the measurement of conditioned stimulus-unconditioned stimulus (CS-US) time intervals during classical eyeblink conditioning. Rabbits were conditioned with a delay paradigm consisting of a tone as the CS. The CS started 50, 250, 500, 1000, or 2000 ms before, and coterminated with, an air puff (100 ms) directed at the cornea as the US. Eyelid movements were recorded with the magnetic search coil technique and the EMG activity of the orbicularis oculi muscle. The firing activities of rmPFC neurons were recorded across conditioning sessions. Reflex and conditioned eyelid responses presented a dominant oscillatory frequency of ≈12 Hz. The firing rate of each recorded neuron presented a single peak of activity with a frequency dependent on the CS-US interval (i.e., ≈12 Hz for 250 ms, ≈6 Hz for 500 ms, and ≈3 Hz for 1000 ms). Interestingly, rmPFC neurons presented their dominant firing peaks at three precise times evenly distributed with respect to CS onset, also depending on the duration of the CS-US interval (only for intervals of 250, 500, and 1000 ms). No significant neural responses were recorded at very short (50 ms) or long (2000 ms) CS-US intervals. rmPFC neurons seem not to encode the oscillatory properties characterizing conditioned eyelid responses in rabbits, but are probably involved in the determination of CS-US intervals in an intermediate range (250-1000 ms). We propose that a variable oscillator underlies the generation of working memories in rabbits. The way in which brains generate working memories (those used for the transient processing and storage of newly acquired information) is still an intriguing question. Here, we report that the firing activities of neurons located in the rostromedial prefrontal cortex, recorded in alert behaving rabbits, are controlled by a dynamic oscillator. This oscillator generated firing frequencies in a variable band of 3-12 Hz depending on the conditioned stimulus-unconditioned stimulus interval (1 s, 500 ms, 250 ms) selected for classical eyeblink conditioning. Shorter (50 ms) and longer (2 s) intervals failed to activate the oscillator and prevented the acquisition of conditioned eyelid responses. This is an unexpected mechanism for generating sustained firing activities in the neural circuits underlying working memories. Copyright © 2015 the authors.

  6. A high precision method for length-based separation of carbon nanotubes using bio-conjugation, SDS-PAGE and silver staining.

    PubMed

    Borzooeian, Zahra; Taslim, Mohammad E; Ghasemi, Omid; Rezvani, Saina; Borzooeian, Giti; Nourbakhsh, Amirhasan

    2018-01-01

    Parametric separation of carbon nanotubes, especially by length, is a challenge for many nanotechnology researchers. We demonstrate a method combining bio-conjugation, SDS-PAGE, and silver staining to separate carbon nanotubes on the basis of length. Egg-white lysozyme was conjugated covalently onto single-walled carbon nanotube surfaces using the carbodiimide method. The proposed conjugation of a biomolecule onto the carbon nanotube surface is a novel idea and a significant step forward in creating an indicator for length-based carbon nanotube separation. The conjugation step was followed by SDS-PAGE, and the nanotube fragments were precisely visualized using silver staining. This high-precision, inexpensive, rapid, and simple separation method obviates the need for centrifugation, additional chemical analyses, and expensive spectroscopic techniques such as Raman spectroscopy to visualize carbon nanotube bands. In this method, we measured the length of the nanotubes using different image analysis techniques based on a simplified hydrodynamic model. The method has high precision and resolution and is effective in separating nanotubes by length, which would make it a valuable quality-control tool for the manufacture of carbon nanotubes of specific lengths in bulk quantities. To this end, we were also able to measure carbon nanotubes of different lengths produced from different sonication time intervals.

  7. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations, in which aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed before implementing IM procedures in real-world operations.

  8. Novel artefact removal algorithms for co-registered EEG/fMRI based on selective averaging and subtraction.

    PubMed

    de Munck, Jan C; van Houdt, Petra J; Gonçalves, Sónia I; van Wegen, Erwin; Ossenblok, Pauly P W

    2013-01-01

    Co-registered EEG and functional MRI (EEG/fMRI) is a potential clinical tool for planning invasive EEG in patients with epilepsy. In addition, the analysis of EEG/fMRI data provides fundamental insight into the precise physiological meaning of both fMRI and EEG data. Routine application of EEG/fMRI for localization of epileptic sources is hampered by large artefacts in the EEG caused by the switching of scanner gradients and by heartbeat effects. Residual ballistocardiogram (BCG) artefacts are shaped similarly to epileptic spikes and may therefore cause false identification of spikes. In this study, new ideas and methods are presented to remove gradient artefacts and to reduce BCG artefacts of different shapes that mutually overlap in time. Gradient artefacts can be removed efficiently by subtracting an average artefact template when the EEG sampling frequency and EEG low-pass filtering are sufficient in relation to the MR gradient switching (Gonçalves et al., 2007). When this is not the case, the gradient artefacts repeat themselves at time intervals that depend on the remainder between the fMRI repetition time and the closest multiple of the EEG acquisition time. These repetitions are deterministic but difficult to predict, owing to the limited precision with which these timings are known. Therefore, we propose to estimate gradient artefact repetitions using a clustering algorithm combined with selective averaging. Clustering of the gradient artefacts yields cleaner EEG for data recorded during scanning on a 3T scanner when using a sampling frequency of 2048 Hz. It even gives clean EEG when the EEG is sampled at only 256 Hz. Current BCG artefact-reduction algorithms based on average template subtraction have the intrinsic limitation that they fail to deal properly with artefacts that overlap in time. To eliminate this constraint, the precise timings of artefact overlaps were modelled and represented in a sparse matrix, and the artefacts were then disentangled with a least squares procedure. The relevance of this approach is illustrated by determining the BCG artefacts in a data set consisting of 29 healthy subjects recorded on a 1.5T scanner and 15 patients with epilepsy recorded on a 3T scanner. Analysis of the relationship between artefact amplitude, duration, and heartbeat interval (HBI) shows that in 22% (1.5T data) to 30% (3T data) of cases, BCG artefacts overlap. The BCG artefacts of the EEG/fMRI data recorded on the 1.5T scanner show a small negative correlation between HBI and BCG amplitude. In conclusion, the proposed methodology provides a substantial improvement in the quality of the EEG signal without requiring excessive computing power or hardware beyond standard EEG-compatible equipment. Copyright © 2012 Elsevier Inc. All rights reserved.
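
    The overlap-handling idea, modeling each heartbeat artefact as a template placed at its own onset and solving for all amplitudes jointly, reduces to least squares on a design matrix of shifted templates. The toy dense version below uses a hypothetical template and onsets; a sparse matrix, as in the paper, is needed for realistic record lengths.

      import numpy as np

      def fit_overlapping_templates(signal, template, onsets):
          """Place one copy of the template at each onset, solve for all
          amplitudes jointly, and subtract the fitted artefacts."""
          n, m = len(signal), len(template)
          X = np.zeros((n, len(onsets)))
          for j, t0 in enumerate(onsets):
              stop = min(t0 + m, n)
              X[t0:stop, j] = template[:stop - t0]
          amps, *_ = np.linalg.lstsq(X, signal, rcond=None)
          return signal - X @ amps, amps

      # Toy example: two artefacts whose tails overlap in time
      template = np.hanning(60)
      onsets = [20, 55]                    # second starts before first ends
      clean = 0.1 * np.random.default_rng(2).normal(size=120)
      contaminated = clean.copy()
      for t0, a in zip(onsets, (1.0, 0.7)):
          contaminated[t0:t0 + 60] += a * template
      cleaned, amps = fit_overlapping_templates(contaminated, template, onsets)
      print("recovered amplitudes:", np.round(amps, 2))   # ~[1.0, 0.7]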

  9. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle metrology capabilities, acquired using four different high-precision angle encoder/interpolator/rotary table combinations, are presented. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty, by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). The results are discussed in conjunction with the equipment used.

  10. High-precision measurements of cementless acetabular components using model-based RSA: an experimental study.

    PubMed

    Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld

    2007-08-01

    In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.

  11. Automated semantic indexing of figure captions to improve radiology image retrieval.

    PubMed

    Kahn, Charles E; Rubin, Daniel L

    2009-01-01

    We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.

  12. Imaging spectrometer measurement of water vapor in the 400 to 2500 nm spectral region

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Roberts, Dar A.; Conel, James E.; Dozier, Jeff

    1995-01-01

    The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) measures the total upwelling spectral radiance from 400 to 2500 nm, sampled at 10 nm intervals. The instrument acquires spectral data at an altitude of 20 km above sea level, as images of 11 km by up to 100 km at 17 × 17 m spatial sampling. We have developed a nonlinear spectral fitting algorithm coupled with a radiative transfer code to derive the total path water vapor from the spectrum measured for each spatial element in an AVIRIS image. The algorithm compensates for variation in the surface spectral reflectance and atmospheric aerosols. It uses water vapor absorption bands centered at 940 nm, 1040 nm, and 1380 nm. We analyze data sets with water vapor abundances ranging from 1 to 40 precipitable millimeters. In one data set, the total path water vapor varies from 7 to 21 mm over a distance of less than 10 km. We have analyzed a time series of five images acquired at 12 minute intervals; these show spatially heterogeneous changes in advected water vapor of 25 percent over 1 hour. The algorithm determines water vapor for images with a range of ground covers, including bare rock and soil, sparse to dense vegetation, snow and ice, open water, and clouds. The precision of the water vapor determination approaches one percent; however, the precision is sensitive to the absolute abundance and the absorption strength of the atmospheric water vapor band analyzed. We have evaluated the accuracy of the algorithm by comparison with several surface-based determinations of water vapor at the time of the AVIRIS data acquisition. The agreement between the AVIRIS-measured water vapor and the water vapor measured by the in situ surface radiometer and surface interferometer is 5 to 10 percent.

  13. Market-based control strategy for long-span structures considering the multi-time delay issue

    NASA Astrophysics Data System (ADS)

    Li, Hongnan; Song, Jianzhu; Li, Gang

    2017-01-01

    To address the different time delays that exist in control devices installed on spatial structures, discrete analysis using a 2^N precise integration algorithm was selected in this study to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from the computational structural mechanics and optimal control research areas; it translates the design of the MBC multi-time-delay controller into the solution of a segment matrix. This approach transforms a serial algorithm in time into parallel computing in space, greatly improving solution efficiency and numerical stability. The designed controller is able to handle time delay with a linear control force combination and is especially effective under large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.

  14. IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.

    PubMed

    Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho

    2016-02-05

    Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation is proportional to the RF signal bandwidth, using broad bandwidth is the most fundamental approach for achieving higher accuracy. Hence, ultra-wide-band (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from the disadvantages of high complexity and high power consumption. Therefore, it is difficult to employ such systems in various WSN applications. In this paper, we present a precise time-of-arrival (TOA) estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. In order to overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB as compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
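
    The key idea above, recovering a fraction of a sampling interval from a band-limited signal, can be illustrated with a standard sub-sample peak interpolator. The sketch below fits a parabola through the correlation peak and its two neighbours; this is a generic technique chosen for illustration, not necessarily the estimator used in the paper.

    import numpy as np

    def fractional_toa(corr: np.ndarray, fs: float) -> float:
        """Estimate TOA with sub-sample resolution from a correlation
        magnitude profile by fitting a parabola through the peak and its
        two neighbours (illustrative interpolator only)."""
        k = int(np.argmax(corr))
        if k == 0 or k == len(corr) - 1:
            return k / fs  # peak at the edge: no interpolation possible
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # in (-0.5, 0.5)
        return (k + delta) / fs

    fs = 4e6                    # e.g. 4 MHz sampling of a 2 MHz signal
    n = np.arange(64)
    corr = np.exp(-0.5 * ((n - 21.37) / 3.0) ** 2)  # synthetic peak
    print(fractional_toa(corr, fs) * fs)            # approx 21.37 samples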

  15. Uncertainty in LiDAR derived Canopy Height Models in three unique forest ecosystems

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Leisso, N.; Scholl, V.; Hass, B.

    2016-12-01

    The National Ecological Observatory Network (NEON) is a continental-scale ecological observation platform designed to collect and disseminate data that contributes to understanding and forecasting the impacts of climate change, land use change, and invasive species on ecology. NEON will collect in-situ and airborne data over 81 sites across the US, including Alaska, Hawaii, and Puerto Rico. The Airborne Observation Platform (AOP) group within the NEON project operates a payload suite that includes a waveform / discrete LiDAR, imaging spectrometer (NIS) and high resolution RGB camera. One of the products derived from the discrete LiDAR is a canopy height model (CHM) raster developed at 1 m spatial resolution. Currently, it is hypothesized that differencing annually acquired CHM products allows identification of tree growth at in-situ distributed plots throughout the NEON sites. To test this hypothesis, the precision of the CHM product was determined through a specialized flight plan that independently repeated up to 20 observations of the same area with varying view geometries. The flight plan was flown at three NEON sites, each with a unique forest type: 1) San Joaquin Experimental Range (SJER, open woodland dominated by oaks), 2) Soaproot Saddle (SOAP, mixed conifer deciduous forest), and 3) Oak Ridge National Laboratory (ORNL, oak hickory and pine forest). A CHM was developed for each flight line at each site and the overlap area was used to empirically estimate a site-specific precision of the CHM. The average cell-by-cell CHM precision at SJER, SOAP and ORNL was 1.34 m, 4.24 m and 0.72 m respectively. Given the average growth rate of the dominant species at each site and the average CHM uncertainty, the minimum time interval required between LiDAR acquisitions to confidently conclude growth had occurred at the plot scale was estimated to be between one and four years. The minimum time interval was shown to depend primarily on the CHM uncertainty and the number of cells within a plot that contained vegetation. This indicates that users of NEON data should not expect that changes in canopy height can be confidently identified between annual AOP acquisitions for all areas of NEON sites.
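
    The closing estimate of one to four years follows from propagating the cell-level CHM precision to the plot scale and asking when expected growth exceeds the uncertainty of a CHM difference. The sketch below makes that reasoning concrete; the growth rate, cell count, and confidence level are hypothetical inputs, not values from the study.

    import math

    def min_detection_interval(sigma_chm: float, n_cells: int,
                               growth_rate: float, z: float = 1.96) -> float:
        """Years until growth exceeds the uncertainty of a differenced,
        plot-averaged CHM. Differencing two CHMs inflates the variance by
        a factor of 2; averaging n vegetated cells shrinks it by n.
        Illustrative reasoning only."""
        sigma_diff = z * math.sqrt(2.0) * sigma_chm / math.sqrt(n_cells)
        return sigma_diff / growth_rate

    # Hypothetical plot: SOAP-like cell precision, 400 vegetated 1-m
    # cells, 0.3 m/yr dominant-species growth.
    print(round(min_detection_interval(4.24, 400, 0.3), 2), "years")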

  16. The relative contributions of processing speed and cognitive load to working memory accuracy in multiple sclerosis.

    PubMed

    Leavitt, Victoria M; Lengenfelder, Jean; Moore, Nancy B; Chiaravalloti, Nancy D; DeLuca, John

    2011-06-01

    Cognitive symptoms of multiple sclerosis (MS) include processing-speed deficits and working memory impairment. The precise manner in which these deficits interact in individuals with MS remains to be explicated. We hypothesized that providing more time on a complex working memory task would result in performance benefits for individuals with MS relative to healthy controls. Fifty-three individuals with clinically definite MS and 36 matched healthy controls performed a computerized task that systematically manipulated cognitive load. The interval between stimuli presentations was manipulated to provide increasing processing time. The results confirmed that individuals with MS who have processing-speed deficits significantly improve in performance accuracy when given additional time to process the information in working memory. Implications of these findings for developing appropriate cognitive rehabilitation interventions are discussed.

  17. Conditioned [corrected] stimulus informativeness governs conditioned stimulus-unconditioned stimulus associability.

    PubMed

    Ward, Ryan D; Gallistel, C R; Jensen, Greg; Richards, Vanessa L; Fairhurst, Stephen; Balsam, Peter D

    2012-07-01

    In a conditioning protocol, the onset of the conditioned stimulus (CS) provides information about when to expect reinforcement (unconditioned stimulus [US]). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we showed that associability (the inverse of trials to acquisition) increased in proportion to informativeness. In Experiment 2, we showed that fixing the duration of the US-US interval or the CS-US interval or both had no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C/T ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges.

  18. CS Informativeness Governs CS-US Associability

    PubMed Central

    Ward, Ryan D.; Gallistel, C. R.; Jensen, Greg; Richards, Vanessa L.; Fairhurst, Stephen; Balsam, Peter D

    2012-01-01

    In a conditioning protocol, the onset of the conditioned stimulus (CS) provides information about when to expect reinforcement (the US). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we show that associability (the inverse of trials to acquisition) increases in proportion to informativeness. In Experiment 2, we show that fixing the duration of the US-US interval or the CS-US interval or both has no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C̅/T̅ ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges. PMID:22468633

  19. Purely temporal figure-ground segregation.

    PubMed

    Kandil, F I; Fahle, M

    2001-05-01

    Visual figure-ground segregation is achieved by exploiting differences in features such as luminance, colour, motion or presentation time between a figure and its surround. Here we determine the shortest delay times required for figure-ground segregation based on purely temporal features. Previous studies usually employed stimulus onset asynchronies between figure and ground, containing possible artefacts based on apparent motion cues or on luminance differences. Our stimuli systematically avoid these artefacts by constantly showing 20 x 20 'colons' that flip by 90 degrees around their midpoints at constant time intervals. Colons constituting the background flip in phase whereas those constituting the target flip with a phase delay. We tested the impact of frequency modulation and phase reduction on target detection. Younger subjects performed well above chance even at temporal delays as short as 13 ms, whilst older subjects required up to three times longer delays in some conditions. Figure-ground segregation can rely on purely temporal delays down to around 10 ms even in the absence of luminance and motion artefacts, indicating a temporal precision of cortical information processing almost an order of magnitude coarser than that required by some models of feature binding in the visual cortex [e.g. Singer, W. (1999), Curr. Opin. Neurobiol., 9, 189-194]. Hence, in our experiment, observers are unable to use temporal stimulus features with the precision required by these models.

  20. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent error of the mobile phone application and of the oculoplastic surgeons' estimates was calculated relative to computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process one chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.

  1. Is Perruchet's dissociation between eyeblink conditioned responding and outcome expectancy evidence for two learning systems?

    PubMed

    Weidemann, Gabrielle; Tangen, Jason M; Lovibond, Peter F; Mitchell, Christopher J

    2009-04-01

    P. Perruchet (1985b) showed a double dissociation of conditioned responses (CRs) and expectancy for an airpuff unconditioned stimulus (US) in a 50% partial reinforcement schedule in human eyeblink conditioning. In the Perruchet effect, participants show an increase in CRs and a concurrent decrease in expectancy for the airpuff across runs of reinforced trials; conversely, participants show a decrease in CRs and a concurrent increase in expectancy for the airpuff across runs of nonreinforced trials. Three eyeblink conditioning experiments investigated whether the linear trend in eyeblink CRs in the Perruchet effect is a result of changes in associative strength of the conditioned stimulus (CS), US sensitization, or learning the precise timing of the US. Experiments 1 and 2 demonstrated that the linear trend in eyeblink CRs is not the result of US sensitization. Experiment 3 showed that the linear trend in eyeblink CRs is present with both a fixed and a variable CS-US interval and so is not the result of learning the precise timing of the US. The results are difficult to reconcile with a single learning process model of associative learning in which expectancy mediates CRs. Copyright (c) 2009 APA, all rights reserved.

  2. Warning: This keyboard will deconstruct--the role of the keyboard in skilled typewriting.

    PubMed

    Crump, Matthew J C; Logan, Gordon D

    2010-06-01

    Skilled actions are commonly assumed to be controlled by precise internal schemas or cognitive maps. We challenge these ideas in the context of skilled typing, where prominent theories assume that typing is controlled by a well-learned cognitive map that plans finger movements without feedback. In two experiments, we demonstrate that online physical interaction with the keyboard critically mediates typing skill. Typists performed single-word and paragraph typing tasks on a regular keyboard, a laser-projection keyboard, and two deconstructed keyboards, made by removing successive layers of a regular keyboard. Averaged over the laser and deconstructed keyboards, response times for the first keystroke increased by 37%, the interval between keystrokes increased by 120%, and error rate increased by 177%, relative to those of the regular keyboard. A schema view predicts no influence of external motor feedback, because actions could be planned internally with high precision. We argue that the expert knowledge mediating action control emerges during online interaction with the physical environment.

  3. Precise Measurement of the CP Violation Parameter sin2φ₁ in B⁰→(cc̄)K⁰ Decays

    DOE PAGES

    Adachi, I.; Aihara, H.; Asner, D. M.; ...

    2012-04-23

    We present a precise measurement of the CP violation parameter sin2φ₁ and the direct CP violation parameter A_f using the final data sample of 772×10⁶ BB̄ pairs collected at the Υ(4S) resonance with the Belle detector at the KEKB asymmetric-energy e⁺e⁻ collider. One neutral B meson is reconstructed in a J/ψK⁰_S, ψ(2S)K⁰_S, χ_c1K⁰_S, or J/ψK⁰_L CP eigenstate and its flavor is identified from the decay products of the accompanying B meson. From the distribution of proper-time intervals between the two B decays, we obtain the following CP violation parameters: sin2φ₁ = 0.667±0.023(stat)±0.012(syst) and A_f = 0.006±0.016(stat)±0.012(syst).

  4. Time perception and time perspective differences between adolescents and adults.

    PubMed

    Siu, Nicolson Y F; Lam, Heidi H Y; Le, Jacqueline J Y; Przepiorka, Aneta M

    2014-09-01

    The present experiment aimed to investigate the differences in time perception and time perspective between subjects representing two developmental stages, namely adolescence and middle adulthood. Twenty Chinese adolescents aged 15-25 and twenty Chinese adults aged 35-55 participated in the study. A time discrimination task and a time reproduction task were implemented to measure the accuracy of their time perception. The Zimbardo Time Perspective Inventory (Short-Form) was adopted to assess their time orientation. It was found that adolescents performed better than adults in both the time discrimination task and the time reproduction task. Adolescents were able to differentiate different time intervals with greater accuracy and reproduce the target duration more precisely. For the time reproduction task, it was also found that adults tended to overestimate the duration of the target stimuli while adolescents were more likely to underestimate it. As regards time perspective, adults were more future-oriented than adolescents, whereas adolescents were more present-oriented than adults. No significant relationship was found between time perspective and time perception. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
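
    As a flavour of what such a comparison involves, the sketch below computes the half-width of the confidence band for a straight-line calibration from the prediction variance sigma^2 * x'(X'X)^-1 x, and contrasts a uniform design with the two-extreme design that is D-optimal for a straight line. It is a minimal stand-in for the program described above, not its actual algorithm.

    import numpy as np

    def ci_halfwidth(levels, x_grid, sigma=1.0, t=2.0):
        """Half-width of the confidence band for a straight-line
        calibration fitted to standards at `levels` (replicates allowed)."""
        X = np.column_stack([np.ones(len(levels)), np.asarray(levels)])
        XtX_inv = np.linalg.inv(X.T @ X)
        G = np.column_stack([np.ones(len(x_grid)), x_grid])
        var = np.einsum('ij,jk,ik->i', G, XtX_inv, G)  # x'(X'X)^-1 x
        return t * sigma * np.sqrt(var)

    x = np.linspace(0, 10, 101)
    uniform = np.repeat(np.linspace(0, 10, 6), 2)   # 6 levels, duplicated
    extreme = np.array([0.0] * 6 + [10.0] * 6)      # D-optimal for a line
    print(ci_halfwidth(uniform, x).mean(), ci_halfwidth(extreme, x).mean())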

  6. High-precision GPS autonomous platforms for sea ice dynamics and physical oceanography

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Wilkinson, J.; Olsson, M.; Rodwell, S.; James, A.; Hagan, B.; Hwang, B.; Forsberg, R.; Gerdes, R.; Johannessen, J.; Wadhams, P.; Nettles, M.; Padman, L.

    2012-12-01

    Project "Arctic Ocean sea ice and ocean circulation using satellite methods" (SATICE), is the first high-rate, high-precision, continuous GPS positioning experiment on sea ice in the Arctic Ocean. The SATICE systems collect continuous, dual-frequency carrier-phase GPS data while drifting on sea ice. Additional geophysical measurements also collected include ocean water pressure, ocean surface salinity, atmospheric pressure, snow-depth, air-ice-ocean temperature profiles, photographic imagery, and others, enabling sea ice drift, freeboard, weather, ice mass balance, and sea-level height determination. Relatively large volumes of data from each buoy are streamed over a satellite link to a central computer on the Internet in near real time, where they are processed to estimate the time-varying buoy positions. SATICE system obtains continuous GPS data at sub-minute intervals with a positioning precision of a few centimetres in all three dimensions. Although monitoring of sea ice motions goes back to the early days of satellite observations, these autonomous platforms bring out a level of spatio-temporal detail that has never been seen before, especially in the vertical axis. These high-resolution data allows us to address new polar science questions and challenge our present understanding of both sea ice dynamics and Arctic oceanography. We will describe the technology behind this new autonomous platform, which could also be adapted to other applications that require high resolution positioning information with sustained operations and observations in the polar marine environment, and present results pertaining to sea ice dynamics and physical oceanography.

  7. Distribution and Characteristics of Repeating Earthquakes in Northern California

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.; Zechar, J. D.; Shaw, B. E.

    2012-12-01

    Repeating earthquakes are playing an increasingly important role in the study of fault processes and behavior, and have the potential to improve hazard assessment, earthquake forecasting, and seismic monitoring capabilities. These events rupture the same fault patch repeatedly, generating virtually identical seismograms. In California, repeating earthquakes have been found predominantly along the creeping section of the central San Andreas Fault, where they are believed to represent failing asperities on an otherwise creeping fault. Here, we use the northern California double-difference catalog of 450,000 precisely located events (1984-2009) and the associated database of 2 billion waveform cross-correlation measurements to systematically search for repeating earthquakes across various tectonic regions. An initial search for pairs of earthquakes with high correlation coefficients and similar magnitudes resulted in 4,610 clusters including a total of over 26,000 earthquakes. A subsequent double-difference re-analysis of these clusters resulted in 1,879 sequences (8,640 events) where a common rupture area can be resolved to a precision of a few tens of meters or less. These repeating earthquake sequences (RES) include between 3 and 24 events with magnitudes up to ML=4. We compute precise relative magnitudes between events in each sequence from differential amplitude measurements. Differences between these and standard coda-duration magnitudes have a standard deviation of 0.09. The RES occur throughout northern California, but RES with 10 or more events (6%) only occur along the central San Andreas and Calaveras faults. We are establishing baseline characteristics for each sequence, such as recurrence intervals and their coefficient of variation (CV), in order to compare them across tectonic regions. CVs for these clusters range from 0.002 to 2.6, indicating a range of behavior between periodic occurrence (CV~0), random occurrence, and temporal clustering. 10% of the RES show burst-like behavior with mean recurrence times smaller than one month. 5% of the RES have mean recurrence times greater than one year and include more than 10 earthquakes. Earthquakes in the 50 most periodic sequences (CV<0.2) do not appear to be predictable by either time- or slip-predictable models, consistent with previous findings. We demonstrate that changes in recurrence intervals of repeating earthquakes can be routinely monitored. This is especially important for sequences with CV~0, as they may indicate changes in the loading rate. We also present results from retrospective forecast experiments based on near-real time hazard functions.
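
    For concreteness, the baseline statistic mentioned above (the coefficient of variation of recurrence intervals) is computed in the short sketch below; the event times are invented to mimic a near-periodic repeater (CV near 0).

    import numpy as np

    def recurrence_cv(event_times: np.ndarray):
        """Mean recurrence interval and coefficient of variation for one
        repeating-earthquake sequence (times in decimal years)."""
        dt = np.diff(np.sort(event_times))
        return dt.mean(), dt.std(ddof=1) / dt.mean()

    # Hypothetical sequence: near-periodic repeater, ~1.1-yr cycle.
    times = np.array([1984.2, 1985.3, 1986.5, 1987.5, 1988.7, 1989.8])
    mean_dt, cv = recurrence_cv(times)
    print(f"mean recurrence = {mean_dt:.2f} yr, CV = {cv:.2f}")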

  8. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies in directions not previously discussed in the literature.

  9. The influence of prior experience and expected timing on vibrotactile discrimination

    PubMed Central

    Karim, Muhsin; Harris, Justin A.; Langdon, Angela; Breakspear, Michael

    2013-01-01

    Vibrotactile discrimination tasks involve perceptual judgements on stimulus pairs separated by a brief interstimulus interval (ISI). Despite their apparent simplicity, decision making during these tasks is biased by prior experience in a manner that is not well understood. A striking example is when participants perform well on trials where the first stimulus is closer to the mean of the stimulus-set than the second stimulus, and perform comparatively poorly when the first stimulus is further from the stimulus mean. This “time-order effect” suggests that participants implicitly encode the mean of the stimulus-set and use this internal standard to bias decisions on any given trial. For relatively short ISIs, the magnitude of the time-order effect typically increases with the distance of the first stimulus from the global mean. Working from the premise that the time-order effect reflects the loss of precision in working memory representations, we predicted that the influence of the time-order effect, and this superimposed “distance” effect, would monotonically increase for trials with longer ISIs. However, by varying the ISI across four intervals (300, 600, 1200, and 2400 ms) we instead found a complex, non-linear dependence of the time-order effect on both the ISI and the distance, with the time-order effect being paradoxically stronger at short ISIs. We also found that this relationship depended strongly on participants' prior experience of the ISI (from previous task titration). The time-order effect not only depends on participants' expectations concerning the distribution of stimuli, but also on the expected timing of the trials. PMID:24399927

  10. Effects of integration time on in-water radiometric profiles.

    PubMed

    D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito

    2018-03-05

    This work investigates the effects of integration time on in-water downward irradiance E_d, upward irradiance E_u, and upwelling radiance L_u profile data acquired with free-fall hyperspectral systems. Analyzed quantities are the subsurface value and the diffuse attenuation coefficient derived by applying linear and non-linear regression schemes. Case studies include oligotrophic waters (Case-1), as well as waters dominated by Colored Dissolved Organic Matter (CDOM) and Non-Algal Particles (NAP). Assuming a 24-bit digitization, measurements resulting from the accumulation of photons over integration times varying between 8 and 2048 ms are evaluated at depths corresponding to: 1) the beginning of each integration interval (Fst); 2) the end of each integration interval (Lst); 3) the average of the Fst and Lst values (Avg); and finally 4) the values weighted accounting for the diffuse attenuation coefficient of water (Wgt). Statistical figures show that the effects of integration time can bias results well above 5% as a function of the depth definition. Results indicate the validity of the Wgt depth definition and the fair applicability of the Avg one. Instead, both the Fst and Lst depths should not be adopted since they may introduce pronounced biases in E_u and L_u regression products for highly absorbing waters. Finally, the study reconfirms the relevance of combining multiple radiometric casts into a single profile to increase the precision of regression products.
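
    The four depth definitions can be written down in a few lines. In the sketch below, a sample accumulated while the sensor falls from z_first to z_last is assigned each of the four depths; the Wgt value weights depths by an exp(-Kd*z) signal decay, which is one plausible reading of "weighted accounting for the diffuse attenuation coefficient", not necessarily the paper's exact formula.

    import numpy as np

    def assign_depth(z_first: float, z_last: float, kd: float, n: int = 64):
        """Depth to assign to a radiometric sample accumulated while the
        profiler free-falls from z_first to z_last (metres). Returns the
        four definitions discussed above; Wgt weights each depth by the
        exponentially decaying signal it contributes. Illustrative only."""
        z = np.linspace(z_first, z_last, n)
        w = np.exp(-kd * z)          # relative signal reaching each depth
        return {"Fst": z_first,
                "Lst": z_last,
                "Avg": 0.5 * (z_first + z_last),
                "Wgt": float(np.sum(w * z) / np.sum(w))}

    print(assign_depth(5.0, 6.0, kd=0.5))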

  11. The MIT OSO-7 X-ray experiment. A five color survey of the positions and time variations of cosmic X-ray sources

    NASA Technical Reports Server (NTRS)

    Taylor, R. S.; Clark, G. W.

    1971-01-01

    The all-sky X-ray measurements are made in five broad energy bands from 0.5 to 60 keV with X-ray collimators of one and three degree FWHM response. Working with the onboard star sensor, source locations may be determined to a precision of plus or minus 0.1 deg. The experiment is located in wheel compartment number three of the spacecraft. A time division logic system divides each wheel rotation into 256 data bins, in each of which X-ray counts are accumulated over a 190 second interval. Measurement chain circuits include provision for both geometric and risetime anticoincidence. A detailed description of the instrument is included, as is pertinent operating information.

  12. End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change

    NASA Astrophysics Data System (ADS)

    Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro

    This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
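
    To make the division of labour concrete, the sketch below combines a classic packet-pair bandwidth estimate (using the small, periodic haptic packets as probes, an assumption on my part; the record does not say how traffic conditions are estimated) with a simple rule that caps the visual sending rate under the estimated bandwidth.

    def packet_pair_bandwidth(pkt_bytes: int, gap_s: float) -> float:
        """Packet-pair estimate of bottleneck bandwidth: two back-to-back
        probe packets leave the bottleneck spaced by their serialization
        time, so bandwidth ~ size / arrival gap. One way the periodic
        haptic stream could double as a probe (assumption)."""
        return 8 * pkt_bytes / gap_s  # bits per second

    def visual_rate(bw_bps: float, haptic_bps: float, margin=0.9) -> float:
        """Keep visual + haptic traffic under the estimated bandwidth."""
        return max(0.0, margin * bw_bps - haptic_bps)

    # Hypothetical numbers: 64-byte haptic packets arriving 50 us apart.
    bw = packet_pair_bandwidth(64, 50e-6)        # ~10.2 Mb/s
    print(visual_rate(bw, haptic_bps=512_000))   # budget for visual data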

  13. Advanced system on a chip microelectronics for spacecraft and science instruments

    NASA Astrophysics Data System (ADS)

    Paschalidis, Nikolaos P.

    2003-01-01

    The explosive growth of the modern microelectronics field opens new horizons for the development of lightweight, low-power, smart spacecraft and science instrumentation systems for new millennium explorations. Although this growth is mostly driven by the commercial need for low-power, portable and computationally intensive products, the applicability to the space sector is obvious. The additional difficulties to be overcome for applicability in space include radiation hardness, for both total ionizing dose and single event effects (SEE), and reliability. Additionally, this new capability introduces a whole new philosophy of design and R&D, with strong implications for organizational and inter-agency program management. One key component specifically developed for low-power, small-size, highly autonomous spacecraft systems is the smart sensor remote input/output (TRIO) chip. TRIO can interface to 32 transducers with current sources/sinks and voltage sensing. It includes front-end analog signal processing, a 10-bit ADC, memory, and standard serial and parallel I/Os. These functions are very useful for spacecraft and subsystem health and status monitoring and for control actions. The key contributions of the TRIO are the feasibility of modular architectures, the elimination of several miles of wire harnessing, and power savings by orders of magnitude. TRIO operates from a single 2.5-5.5 V power supply with power dissipation <10 mW. This system-on-a-chip device is rapidly becoming a NASA and commercial space standard, having already been selected by the thousands for several new millennium missions, including Europa Orbiter, Mars Surveyor Program, Solar Probe, Pluto Express, Stereo, Contour, Messenger, etc. In the science instrumentation field, common instruments that can greatly take advantage of the new technologies are energetic-particle/plasma and wave instruments, imagers, mass spectrometers, X-ray and UV spectrographs, magnetometers, laser rangefinding instruments, etc. Common measurements that apply to many of these instruments are precise time interval measurement and high-resolution readout of solid state detectors. A precise time interval measurement chip was specially developed that achieves ~100 ps time resolution (a 10x improvement) at a power dissipation of ~20 mW (a 50x improvement) and a dead time of ~1.5 μs (a 20x improvement), with a chip die size of 5 mm x 5 mm versus two 20 cm x 20 cm double-sided boards. This device has been selected as a key enabling technology for several NASA particle, delay-line imaging, and laser rangefinding instruments (onboard the NASA IMAGE, Messenger, etc. missions). Another device with universal application is radiation energy readout from solid state detectors. Multi-channel, low-power, end-to-end sensor-input-to-digital-output operation is key for the new generation of instruments. The readout channel comprises a charge-sensitive preamplifier with a target sensitivity of ~1 keV FWHM at 20 pF detector capacitance, a shaper amplifier with programmable time constant/gain, and an ADC. The readout chip together with the precise time interval chip comprises the essential elements of a common particle spectroscopy instrument. Among further applications, fast signal acquisition and digitization is a very useful function for a category of instruments such as mass spectroscopy and profile laser rangefinding. The single-chip approach includes a high-bandwidth preamplifier, fast sampling (~5 ns), analog memory (~10K locations), a 12-bit ADC, and serial/parallel I/Os. The wealth of applications establishes advanced microelectronics as a key enabling technology for new millennium space exploration.

  14. DORIS-based point mascons for the long term stability of precise orbit solutions

    NASA Astrophysics Data System (ADS)

    Cerri, L.; Lemoine, J. M.; Mercier, F.; Zelensky, N. P.; Lemoine, F. G.

    2013-08-01

    In recent years non-tidal Time Varying Gravity (TVG) has emerged as the most important contributor to the error budget of Precision Orbit Determination (POD) solutions for altimeter satellites' orbits. The Gravity Recovery And Climate Experiment (GRACE) mission has provided POD analysts with static and time-varying gravity models that are very accurate over the 2002-2012 time interval, but whose linear rates cannot be safely extrapolated before or after the GRACE lifespan. One such model, based on a combination of data from GRACE and Lageos from 2002-2010, is used in the dynamic POD solutions developed for the Geophysical Data Records (GDRs) of the Jason series of altimeter missions and the equivalent products from lower altitude missions such as Envisat, Cryosat-2, and HY-2A. In order to accommodate long-term time-variable gravity variations not included in the background geopotential model, we assess the feasibility of using DORIS data to observe local mass variations using point mascons. In particular, we show that the point-mascon approach can stabilize the geographically correlated orbit errors which are of fundamental interest for the analysis of regional Mean Sea Level trends based on altimeter data, and can therefore provide an interim solution in the event of GRACE data loss. The time series of point-mass solutions for Greenland and Antarctica show good agreement with independent series derived from GRACE data, indicating mass loss at rates of 210 Gt/year and 110 Gt/year, respectively.

  15. Global gray-level thresholding based on object size.

    PubMed

    Ranefall, Petter; Wählby, Carolina

    2016-04-01

    In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
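
    A brute-force version of the idea, sweeping candidate thresholds and scoring each by the fraction of connected components whose sizes fall in the given interval, is sketched below. It conveys the objective only; the paper's contribution is achieving this in quasi-linear time via the component tree, which this naive loop does not attempt.

    import numpy as np
    from scipy import ndimage

    def size_based_threshold(img: np.ndarray, size_min: int, size_max: int):
        """Pick the global gray level whose connected components fall best
        inside [size_min, size_max]. Naive sweep over the (assumed 8-bit)
        gray levels, for illustration only."""
        best_t, best_score = None, -1.0
        for t in np.unique(img)[:-1]:
            labels, n = ndimage.label(img > t)
            if n == 0:
                continue
            sizes = np.bincount(labels.ravel())[1:]   # skip background
            in_range = np.sum((sizes >= size_min) & (sizes <= size_max))
            score = in_range / n                      # precision proxy
            if score > best_score:
                best_t, best_score = t, score
        return best_t, best_score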

  16. Flight Test Evaluation of the ATD-1 Interval Management Application

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.; Roper, Roy D.; Abbott, Terence S.; Levitt, Ian; Scharl, Julien

    2017-01-01

    Interval Management (IM) is a concept designed to be used by air traffic controllers and flight crews to more efficiently and precisely manage inter-aircraft spacing. Both government and industry have been working together to develop the IM concept and standards for both ground automation and supporting avionics. NASA contracted with Boeing, Honeywell, and United Airlines to build an avionics prototype based on NASA's spacing algorithm and to conduct a flight test. The flight test investigated four different types of IM operations over the course of nineteen days, and included en route, arrival, and final approach phases of flight. This paper examines the spacing accuracy achieved during the flight test and the rate of speed commands provided to the flight crew. Many of the time-based IM operations met or exceeded the operational design goals set out in the standards for the maintain operations and for a subset of the achieve operations. The operations that did not meet the goals were affected by issues that are identified here and will be analyzed further.

  17. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  18. A comparison of the accuracy of patterns processed from an inlay casting wax, an auto-polymerized resin and a light-cured resin pattern material.

    PubMed

    Rajagopal, Praveen; Chitre, Vidya; Aras, Meena A

    2012-01-01

    Traditionally, inlay casting waxes have been used to fabricate patterns for castings. Newer resin pattern materials offer greater rigidity and strength, allowing easier laboratory and intraoral adjustment without the fear of pattern damage. They are also claimed to possess greater dimensional stability than inlay wax. This study attempted to determine and compare the marginal accuracy of patterns fabricated from an inlay casting wax, an autopolymerized pattern resin and a light-polymerized pattern resin on storage off the die for varying time intervals. Ten patterns each were fabricated from an inlay casting wax (GC Corp., Tokyo, Japan), an autopolymerized resin pattern material (Pattern resin, GC Corp, Tokyo, Japan) and a light-cured resin pattern material (Palavit GLC, Hereaus Kulzer GmbH, Germany). The completed patterns were stored off the die at room temperature. Marginal gaps were evaluated by reseating the patterns on their respective dies and observing them under a stereomicroscope at 1, 12, and 24 h intervals after pattern fabrication. The results revealed that the inlay wax showed a significantly greater marginal discrepancy at the 12 and 24 h intervals. The autopolymerized resin showed an initial (at 1 h) marginal discrepancy slightly greater than that of the inlay wax, but showed a significantly smaller marginal gap (compared to inlay wax) at the other two time intervals. The light-cured resin proved to be significantly more dimensionally stable, and showed minimal change during the storage period. The resin pattern materials studied undergo a significantly smaller dimensional change than the inlay waxes on prolonged storage. They would possibly be a better alternative to inlay wax in situations requiring high precision or when delayed investment (more than 1 h) of patterns is expected.

  19. Automated Semantic Indexing of Figure Captions to Improve Radiology Image Retrieval

    PubMed Central

    Kahn, Charles E.; Rubin, Daniel L.

    2009-01-01

    Objective We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. Design The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Measurements Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Results Estimated precision was 0.897 (95% confidence interval, 0.857–0.937). Estimated recall was 0.930 (95% confidence interval, 0.838–1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Conclusion Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval. PMID:19261938

  20. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems

    PubMed Central

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-01-01

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs. PMID:27999351

  1. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems.

    PubMed

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-12-18

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs.

  2. Intercepting beats in predesignated target zones.

    PubMed

    Craig, Cathy; Pepping, Gert-Jan; Grealy, Madeleine

    2005-09-01

    Moving to a rhythm necessitates precise timing between the movement of the chosen limb and the timing imposed by the beats. However, the temporal information specifying the moment when a beat will sound (the moment onto which one must synchronise one's movement) is not continuously provided by the acoustic array. Because of this informational void, the actors need some form of prospective information that will allow them to act sufficiently ahead of time in order to get their hand in the right place at the right time. In this acoustic interception study, where participants were asked to move between two targets in such a way that they arrived and stopped in the target zone at the same time as a beat sounded, we tested a model derived from tau-coupling theory (Lee DN (1998) Ecol Psychol 10:221-250). This model attempts to explain the form of a potential timing guide that specifies the duration of the inter-beat intervals and also describes how this informational guide can be used in the timing and guidance of movements. The results of our first experiment show that, for inter-beat intervals of less than 3 s, a large proportion of the movement (over 70%) can be explained by the proposed model. However, a second experiment, which augments the time between beats so that it surpasses 3 s, shows a marked decline in the percentage of information/movement coupling. A close analysis of the movement kinematics indicates a lack of control and anticipation in the participants' movements. The implications of these findings, in light of other research studies, are discussed.
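
    The tau-guide model referenced above admits a compact closed form. Under Lee's intrinsic guide tau_G(t) = (t^2 - T^2)/(2t) and the coupling tau_x = k*tau_G, the gap x(t) to the target follows the power law sketched below; the parameter values are arbitrary, and the experiments fit the model to recorded kinematics rather than simulating it.

    import numpy as np

    def tau_guided_gap(t, T, x0, k):
        """Gap to the target under tau-coupling onto the intrinsic tau
        guide tau_G(t) = (t**2 - T**2) / (2*t). Coupling tau_x = k*tau_G
        yields the closed-form gap below (derivation sketch, not the
        paper's fitting procedure). 0 < k < 1 gives arrival with zero
        velocity exactly when the guide (the beat) ends."""
        return x0 * (1.0 - (t / T) ** 2) ** (1.0 / k)

    T, x0, k = 1.5, 0.3, 0.7      # inter-beat interval (s), gap (m), gain
    t = np.linspace(0.0, T, 150)
    x = tau_guided_gap(t, T, x0, k)
    print(f"gap at t=T: {x[-1]:.4f} m")   # closes exactly on the beat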

  3. Testing Precision Screening for Breast Cancer

    Cancer.gov

    An NCI research article about individualized approaches that could help identify those at risk of breast cancer who need to be screened, and about testing screening intervals appropriate for each person's level of risk.

  4. DAM - detection and mapping

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Integrated set of manual procedures, computer programs, and graphic devices processes multispectral scanner data from orbiting Landsat into precisely registered and formatted maps of surface water and other resources at variety of scales, sheet formats, and tick intervals.

  5. Impulsive Effects on Quasi-Synchronization of Neural Networks With Parameter Mismatches and Time-Varying Delay.

    PubMed

    Tang, Ze; Park, Ju H; Feng, Jianwen

    2018-04-01

    This paper is concerned with the exponential synchronization of nonidentically coupled neural networks with time-varying delay. Due to the parameter mismatches that exist between the coupled neural networks, the problem of quasi-synchronization is discussed by applying impulsive control strategies. Based on the definition of the average impulsive interval and the extended comparison principle for impulsive systems, criteria for achieving quasi-synchronization of the neural networks are derived. More extensive ranges of impulsive effects are discussed, so that an impulse may play either a beneficial or an adverse role in the final network synchronization. In addition, according to the extended formula for the variation of parameters with time-varying delay, precise exponential convergence rates and quasi-synchronization errors are obtained for the different types of impulsive effects. Finally, numerical simulations with different types of impulsive effects are presented to illustrate the effectiveness of the theoretical analysis.

  6. Random cascade model in the limit of infinite integral scale as the exponential of a nonstationary 1/f noise: Application to volatility fluctuations in stock markets

    NASA Astrophysics Data System (ADS)

    Muzy, Jean-François; Baïle, Rachel; Bacry, Emmanuel

    2013-04-01

    In this paper we propose a new model for volatility fluctuations in financial time series. This model relies on a nonstationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to those of continuous cascade models. These latter models are indeed well known to reproduce faithfully the main stylized facts of financial time series. However, they involve a large-scale parameter (the so-called "integral scale" at which the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general deeply correlated with the overall length of the sample. This feature is precisely predicted by our model, which, as illustrated by various examples from daily stock index data, quantitatively reproduces the empirical observations.

  7. Unfolding large-scale online collaborative human dynamics

    PubMed Central

    Zha, Yilong; Zhou, Tao; Zhou, Changsong

    2016-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of regular patterns and mechanisms is very challenging and still rare. Self-organized online collaborative activities with a precise record of event timing provide an unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to previous actions with a power-law waiting time, and (iii) population growth due to the increasing number of interacting individuals. This unfolding allows us to obtain an analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interacting human activities. PMID:27911766
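
    A first step in reproducing this kind of analysis is simply histogramming inter-update intervals on logarithmic bins, where a double power law appears as two straight segments on a log-log plot. A minimal, generic sketch (not the authors' pipeline):

    import numpy as np

    def interval_distribution(update_times: np.ndarray, n_bins: int = 40):
        """Empirical density of times between consecutive updates of one
        article, on logarithmic bins (the natural scale for a double
        power law). Returns bin centres and densities."""
        dt = np.diff(np.sort(update_times))
        dt = dt[dt > 0]
        edges = np.logspace(np.log10(dt.min()), np.log10(dt.max()), n_bins)
        hist, edges = np.histogram(dt, bins=edges, density=True)
        centres = np.sqrt(edges[:-1] * edges[1:])  # geometric mid-points
        return centres, hist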

  8. A new model for the estimation of time of death from vitreous potassium levels corrected for age and temperature.

    PubMed

    Zilg, B; Bernard, S; Alkass, K; Berg, S; Druid, H

    2015-09-01

    Analysis of potassium concentration in the vitreous fluid of the eye is frequently used by forensic pathologists to estimate the postmortem interval (PMI), particularly when other methods commonly used in the early phase of an investigation can no longer be applied. The postmortem rise in vitreous potassium has been recognized for several decades and is readily explained by a diffusion of potassium from surrounding cells into the vitreous fluid. However, there is no consensus regarding the mathematical equation that best describes this increase. The existing models assume a linear increase, but different slopes and starting points have been proposed. In this study, vitreous potassium levels, and a number of factors that may influence these levels, were examined in 462 cases with known postmortem intervals that ranged from 2 h to 17 days. We found that the postmortem rise in potassium followed a non-linear curve and that decedent age and ambient temperature influenced the variability by 16% and 5%, respectively. A long duration of agony and a high alcohol level at the time of death contributed less than 1% variability, and evaluation of additional possible factors revealed no detectable impact on the rise of vitreous potassium. Two equations were subsequently generated, one that represents the best fit of the potassium concentrations alone, and a second that represents potassium concentrations with correction for decedent age and/or ambient temperature. The former was associated with narrow confidence intervals in the early postmortem phase, but the intervals gradually increased with longer PMIs. For the latter equation, the confidence intervals were reduced at all PMIs. Therefore, the model that best describes the observed postmortem rise in vitreous potassium levels includes potassium concentration, decedent age, and ambient temperature. Furthermore, the precision of these equations, particularly for long PMIs, is expected to gradually improve by adjusting the constants as more reference data are added over time. A web application that facilitates this calculation process and allows for such future modifications has been developed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
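
    The modelling step described above, fitting a non-linear rise and then inverting it to estimate the PMI, can be mimicked in a few lines. The square-root form and all data points below are illustrative assumptions; the paper fits its own curve and corrects it for decedent age and ambient temperature.

    import numpy as np
    from scipy.optimize import curve_fit

    def k_model(pmi_h, k0, a):
        """Assumed non-linear rise of vitreous potassium with PMI (hours).
        A square-root law is used purely for illustration."""
        return k0 + a * np.sqrt(pmi_h)

    # Hypothetical calibration data: (PMI in hours, potassium in mmol/L).
    pmi = np.array([2, 10, 24, 48, 96, 192, 300], dtype=float)
    k = np.array([5.8, 7.1, 8.4, 10.0, 12.3, 15.1, 17.4])
    (k0, a), _ = curve_fit(k_model, pmi, k, p0=(5.0, 0.5))

    # Inverting the fitted curve gives a PMI estimate from a measured level.
    k_obs = 11.0
    print(f"estimated PMI = {((k_obs - k0) / a) ** 2:.1f} h")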

  9. Marginal Structural Models for Case-Cohort Study Designs to Estimate the Association of Antiretroviral Therapy Initiation With Incident AIDS or Death

    PubMed Central

    Cole, Stephen R.; Hudgens, Michael G.; Tien, Phyllis C.; Anastos, Kathryn; Kingsley, Lawrence; Chmiel, Joan S.; Jacobson, Lisa P.

    2012-01-01

    To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding. PMID:22302074
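
    The analytic recipe, an inverse-probability-weighted Cox regression on a case-cohort sample, can be sketched with the lifelines library. The data frame and weights below are hypothetical, and the substantive work of estimating the stabilized weights from treatment, dropout, and sampling models is assumed to have been done upstream.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical analysis frame: one row per subject in the subcohort
    # (or case), with a precomputed stabilized inverse-probability weight
    # combining treatment, dropout, and case-cohort sampling probabilities.
    df = pd.DataFrame({
        "time":  [2.1, 3.4, 1.2, 4.8, 0.9, 5.0],
        "event": [1,   0,   1,   0,   1,   0],
        "haart": [1,   1,   0,   0,   1,   0],
        "ipw":   [1.3, 0.8, 1.1, 0.9, 1.6, 1.0],
    })

    # A weighted Cox fit approximates the marginal structural model;
    # robust standard errors are needed because weighting induces
    # within-subject correlation.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event",
            weights_col="ipw", robust=True)
    print(cph.summary[["coef", "exp(coef)"]])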

  10. Assessing the Impact of Different Measurement Time Intervals on Observed Long-Term Wind Speed Trends

    NASA Astrophysics Data System (ADS)

    Azorin-Molina, C.; Vicente-Serrano, S. M.; McVicar, T.; Jerez, S.; Revuelto, J.; López Moreno, J. I.

    2014-12-01

    During the last two decades climate studies have reported a tendency toward a decline in measured near-surface wind speed in some regions of Europe, North America, Asia and Australia. This weakening in observed wind speed has recently been termed "global stilling", showing a worldwide average trend of -0.140 m s⁻¹ dec⁻¹ during the last 50 years. The precise cause of the "global stilling" remains largely uncertain and has been hypothetically attributed to several factors, mainly related to: (i) increasing surface roughness (i.e. forest growth, land use changes, and urbanization); (ii) a slowdown in large-scale atmospheric circulation; (iii) instrumental drifts and technological improvements, maintenance, shifts in measurement sites, and calibration issues; (iv) sunlight dimming due to air pollution; and (v) astronomical changes. This study proposes a novel investigation aimed at analyzing how the different measurement time intervals used to calculate a wind speed series can affect the sign and magnitude of long-term wind speed trends. For instance, National Weather Services across the globe estimate daily average wind speed using different time intervals and formulae, which may affect the trend results. Firstly, we carried out a comprehensive review of wind studies reporting the sign and magnitude of wind speed trends and the sampling intervals used. Secondly, we analyzed near-surface wind speed trends recorded at 59 land-based stations across Spain, comparing monthly mean wind speed series obtained from: (a) daily mean wind speed data averaged from standard 10-min mean observations at 0000, 0700, 1300 and 1800 UTC; and (b) the average wind speed of 24 hourly measurements (i.e., wind run measurements) from 0000 to 2400 UTC. Thirdly and finally, we quantified the impact of anemometer drift (i.e. bearing malfunction) by presenting preliminary results (1 year of paired measurements) from a comparison of a new anemometer sensor against a malfunctioning anemometer sensor with old bearings.
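
    Comparison (b) vs. (a) above is easy to emulate. The sketch below builds daily means from the four fixed synoptic hours and from all 24 hourly values for a synthetic year of wind data (gamma-distributed speeds, an arbitrary choice) and reports the bias between the two definitions.

    import numpy as np

    def daily_means(hourly: np.ndarray):
        """Compare the two daily-mean definitions in the study: the
        average of four fixed synoptic observations vs. the average of
        24 hourly values. `hourly` has shape (n_days, 24), speeds in m/s."""
        four_obs = hourly[:, [0, 7, 13, 18]].mean(axis=1)  # 00,07,13,18 UTC
        all_hours = hourly.mean(axis=1)
        return four_obs, all_hours

    rng = np.random.default_rng(1)
    hourly = rng.gamma(shape=2.0, scale=1.5, size=(365, 24))
    a, b = daily_means(hourly)
    print(f"mean bias (4-obs minus 24-h) = {np.mean(a - b):+.3f} m/s")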

  11. Estimating daily fat yield from a single milking on test day for herds with a robotic milking system.

    PubMed

    Peeters, R; Galesloot, P J B

    2002-03-01

    The objective of this study was to estimate the daily fat yield and fat percentage from one sampled milking per cow per test day in an automatic milking system herd, when the milking times and milk yields of all individual milkings are recorded by the automatic milking system. Multiple regression models were used to estimate the 24-h fat percentage when only one milking is sampled for components and milk yields and milking times are known for all milkings in the 24-h period before the sampled milking. In total, 10,697 cow test day records, from 595 herd tests at 91 Dutch herds milked with an automatic milking system, were used. The best model to predict 24-h fat percentage included fat percentage, protein percentage, milk yield, and milking interval of the sampled milking; milk yield and milking interval of the preceding milking; and the interaction between milking interval and the ratio of fat to protein percentage of the sampled milking. This model gave a standard deviation of the prediction error (SE) for 24-h fat percentage of 0.321 and a correlation between the predicted and actual 24-h fat percentage of 0.910. For the 24-h fat yield, we found SE = 90 g and correlation = 0.967. This precision is slightly better than that of present a.m.-p.m. testing schemes. Extra attention must be paid to correctly matching the sample jars and the milkings. Furthermore, milkings with an interval of less than 4 h must be excluded from sampling, as well as milkings that are interrupted or that follow an interrupted milking. Under these restrictions (correct matching, interval of at least 4 h, and no interrupted milking), one sampled milking suffices to get a satisfactory estimate for the test-day fat yield.
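
    As an illustration of the type of prediction model described, here is a hedged sketch (statsmodels) that fits a regression with the predictors and interaction named above on synthetic data; the column names, simulated values, and resulting coefficients are illustrative only and are not those of the study.

```python
# Hedged sketch of the kind of multiple regression described above.
# Synthetic data; the fitted coefficients are NOT those of the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
d = pd.DataFrame({
    "fat_s": rng.normal(4.4, 0.6, n),      # fat % of the sampled milking
    "prot_s": rng.normal(3.5, 0.3, n),     # protein % of the sampled milking
    "yield_s": rng.normal(10.0, 2.5, n),   # kg milk, sampled milking
    "ivl_s": rng.uniform(4, 12, n),        # milking interval (h), sampled
    "yield_p": rng.normal(10.0, 2.5, n),   # kg milk, preceding milking
    "ivl_p": rng.uniform(4, 12, n),        # milking interval (h), preceding
})
# Toy "true" 24-h fat percentage, for illustration only
d["fat24"] = (0.8 * d["fat_s"] + 0.02 * d["ivl_s"] * d["fat_s"] / d["prot_s"]
              + rng.normal(0, 0.3, n))

# Interaction of interval with the fat/protein ratio, as in the abstract
model = smf.ols("fat24 ~ fat_s + prot_s + yield_s + ivl_s + yield_p + ivl_p"
                " + ivl_s:I(fat_s / prot_s)", data=d).fit()
resid_sd = np.std(d["fat24"] - model.predict(d), ddof=1)
print(model.params.round(3))
print(f"SE of prediction: {resid_sd:.3f}")   # cf. SE = 0.321 reported above
```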

  12. Precise Orbit Solution for Swarm Using Space-Borne GPS Data and Optimized Pseudo-Stochastic Pulses.

    PubMed

    Zhang, Bingbing; Wang, Zhengtao; Zhou, Lv; Feng, Jiandi; Qiu, Yaodong; Li, Fupeng

    2017-03-20

    Swarm is a European Space Agency (ESA) project that was launched on 22 November 2013, which consists of three Swarm satellites. Swarm precise orbits are essential to the success of the above project. This study investigates how well Swarm zero-differenced (ZD) reduced-dynamic orbit solutions can be determined using space-borne GPS data and optimized pseudo-stochastic pulses under high ionospheric activity. We choose Swarm space-borne GPS data from 1-25 October 2014, and Swarm reduced-dynamic orbits are obtained. Orbit quality is assessed by GPS phase observation residuals and compared with Precise Science Orbits (PSOs) released by ESA. Results show that pseudo-stochastic pulses with a time interval of 6 min and a priori standard deviation (STD) of 10⁻² mm/s in radial (R), along-track (T) and cross-track (N) directions are optimized to Swarm ZD reduced-dynamic precise orbit determination (POD). During high ionospheric activity, the mean Root Mean Square (RMS) of Swarm GPS phase residuals is at 9-11 mm, Swarm orbit solutions are also compared with Swarm PSOs released by ESA and the accuracy of Swarm orbits can reach 2-4 cm in R, T and N directions. Independent Satellite Laser Ranging (SLR) validation indicates that Swarm reduced-dynamic orbits have an accuracy of 2-4 cm. Swarm-B orbit quality is better than those of Swarm-A and Swarm-C. The Swarm orbits can be applied to the geomagnetic, geoelectric and gravity field recovery.

  13. The precise temporal calibration of dinosaur origins

    PubMed Central

    Marsicano, Claudia A.; Irmis, Randall B.; Mancuso, Adriana C.; Mundil, Roland; Chemale, Farid

    2016-01-01

    Dinosaurs have been major components of ecosystems for over 200 million years. Although different macroevolutionary scenarios exist to explain the Triassic origin and subsequent rise to dominance of dinosaurs and their closest relatives (dinosauromorphs), all lack critical support from a precise biostratigraphically independent temporal framework. The absence of robust geochronologic age control for comparing alternative scenarios makes it impossible to determine if observed faunal differences vary across time, space, or a combination of both. To better constrain the origin of dinosaurs, we produced radioisotopic ages for the Argentinian Chañares Formation, which preserves a quintessential assemblage of dinosaurian precursors (early dinosauromorphs) just before the first dinosaurs. Our new high-precision chemical abrasion thermal ionization mass spectrometry (CA-TIMS) U–Pb zircon ages reveal that the assemblage is early Carnian (early Late Triassic), 5 to 10 Ma younger than previously thought. Combined with other geochronologic data from the same basin, we constrain the rate of dinosaur origins, demonstrating their relatively rapid origin in an interval of less than 5 Ma, thus halving the temporal gap between assemblages containing only dinosaur precursors and those with early dinosaurs. After their origin, dinosaurs only gradually came to dominate mid- to high-latitude terrestrial ecosystems millions of years later, closer to the Triassic–Jurassic boundary. PMID:26644579

  14. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values show no significant correlation with the changes in PACS measurements according to tilt value (p > 0.05). However, significant correlations appear between the real values and the DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.

  15. Associations between motor timing, music practice, and intelligence studied in a large sample of twins.

    PubMed

    Ullén, Fredrik; Mosing, Miriam A; Madison, Guy

    2015-03-01

    Music performance depends critically on precise processing of time. A common model behavior in studies of motor timing is isochronous serial interval production (ISIP), that is, hand/finger movements with a regular beat. ISIP accuracy is related to both music practice and intelligence. Here we present a study of these associations in a large twin cohort, demonstrating that the effects of music practice and intelligence on motor timing are additive, with no significant multiplicative (interaction) effect. Furthermore, the association between music practice and motor timing was analyzed with the use of a co-twin control design using intrapair differences. These analyses revealed that the phenotypic association disappeared when all genetic and common environmental factors were controlled. This suggests that the observed association may not reflect a causal effect of music practice on ISIP performance but rather reflect common influences (e.g., genetic effects) on both outcomes. The relevance of these findings for models of practice and expert performance is discussed. © 2014 New York Academy of Sciences.

  16. POSTFUNDOPLICATION DYSPHAGIA CAUSES SIMILAR WATER INGESTION DYNAMICS AS ACHALASIA.

    PubMed

    Dantas, Roberto Oliveira; Santos, Carla Manfredi; Cassiani, Rachel Aguiar; Alves, Leda Maria Tavares; Nascimento, Weslania Viviane

    2016-01-01

    After surgical treatment of gastroesophageal reflux disease, dysphagia is a symptom in the majority of patients, with a decrease in intensity over time. However, some patients may have persistent dysphagia. The objective of this investigation was to evaluate the dynamics of water ingestion in patients with postfundoplication dysphagia compared with patients with dysphagia caused by achalasia, idiopathic or consequent to Chagas' disease, and controls. Thirty-three patients with postfundoplication dysphagia, assessed more than one year after surgery, together with 50 patients with Chagas' disease, 27 patients with idiopathic achalasia, and 88 controls were all evaluated by the water swallow test. They drank, in triplicate, 50 mL of water without breaks while being precisely timed and the number of swallows counted. Also measured were: (a) inter-swallow interval - the time to complete the task divided by the number of swallows during the task; (b) swallowing flow - volume drunk divided by the time taken; (c) volume of each swallow - volume drunk divided by the number of swallows. Patients with postfundoplication dysphagia, Chagas' disease and idiopathic achalasia took longer to ingest all the volume, had an increased number of swallows, an increase in the interval between swallows, a decrease in swallowing flow, and a decrease in the water volume of each swallow compared with the controls. There was no difference between the three groups of patients. There was no correlation between postfundoplication time and the results. It was concluded that patients with postfundoplication dysphagia have similar water ingestion dynamics as patients with achalasia.

  17. An improved chronology for the Lateglacial palaeoenvironmental record of Lake Haemelsee, Germany: challenges for independent site comparisons

    NASA Astrophysics Data System (ADS)

    Lane, Christine; Brauer, Achim; Bronk Ramsey, Christopher; Engels, Stefan; Haliuc, Aritina; Hoek, Wim; Hubay, Katalin; Jones, Gwydion; Sachse, Dirk; Staff, Richard; Turner, Falko; Wagner-Cremer, Frederike

    2016-04-01

    Exploring the temporal and spatial variability of environmental response to climatic changes requires the comparison of widespread palaeoenvironmental sequences on their own, independently derived age models. High-precision age models can be constructed using statistical methods to combine absolute and relative age estimates measured using a range of techniques. Such an approach may help to highlight otherwise unrecognised uncertainties where a single dating method has been applied in isolation. Radiocarbon dating, tephrochronology and varve counting have been combined within a Bayesian depositional model to build a chronology for a sediment sequence from Lake Haemelsee (Northern Germany) that continuously covers the entire Lateglacial and early Holocene. Each of the dating techniques used brought its own challenges. Radiocarbon dates provide the only absolute ages measured directly in the record; however, a low macrofossil content led to small sample sizes and a limited number of low-precision dates. A floating varved interval provided restricted but very precise relative dating for sediments covering the Allerød to Younger Dryas transition. Well-spaced visible and cryptotephra layers, including the widespread Laacher See, Vedde Ash, Askja-S and Saksunarvatn tephra layers, allow absolute ages for the tephra layers established in other locations to be imported into the Haemelsee sequence. These layers also provide multiple tie-lines that allow the Haemelsee sequence to be directly compared, at particular moments in time and within particular intervals, to other important Lateglacial archives. However, selecting the "best" published tephra ages to use in the Haemelsee age model is not simple and risks biasing comparison of the palaeoenvironmental record to fit one or another comparative archive. Here we investigate the use of multiple age models for the Haemelsee record, in order to retain an independent approach to investigating the environmental transitions of the Lateglacial to early Holocene.

  18. Power in Bayesian Mediation Analysis for Small Sample Research

    PubMed Central

    Miočević, Milica; MacKinnon, David P.; Levy, Roy

    2018-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296

  19. Power in Bayesian Mediation Analysis for Small Sample Research.

    PubMed

    Miočević, Milica; MacKinnon, David P; Levy, Roy

    2017-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results.
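
    The percentile bootstrap interval, one of the comparison methods in the two records above, is straightforward to sketch. The following Monte Carlo power estimate is a hedged toy version on synthetic data with illustrative effect sizes and replication counts; it does not reproduce the authors' Bayesian machinery.

```python
# Hedged sketch: empirical power of the percentile bootstrap CI for the
# mediated effect a*b in a single-mediator model. Effect sizes, N, and
# replication counts are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)

def power_percentile_boot(n=100, a=0.39, b=0.39, reps=200, boot=500):
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)       # mediator model
        y = b * m + rng.normal(size=n)       # outcome model
        ab = np.empty(boot)
        for k in range(boot):
            i = rng.integers(0, n, n)                 # resample cases
            a_hat = np.polyfit(x[i], m[i], 1)[0]      # slope of M on X
            X2 = np.column_stack([np.ones(n), m[i], x[i]])
            b_hat = np.linalg.lstsq(X2, y[i], rcond=None)[0][1]
            ab[k] = a_hat * b_hat
        lo, hi = np.percentile(ab, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)                  # CI excludes zero
    return hits / reps

print(power_percentile_boot())   # proportion of significant replications
```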

  20. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together constitute the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and of other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root mean square error and computational overhead than the existing methods. Computer simulations for this real-time control application indicate that a parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in the system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in the plant response.
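
    For context, the baseline that the closed-form method is benchmarked against can be sketched compactly. Below is a hedged implementation of the classical iterative Karnik-Mendel (KM) procedure on an illustrative Gaussian FOU; it is not the paper's closed-form formula, and the membership functions are assumptions chosen for the example.

```python
# Hedged sketch of the iterative Karnik-Mendel (KM) algorithm, the
# classical baseline referenced above. The centroid of an IT2 FS is the
# interval [cl, cr]; the span of uncertainty is cr - cl.
import numpy as np

def km_centroid(x, w_lo, w_hi, tol=1e-9, max_iter=100):
    """x: sorted domain points; w_lo/w_hi: lower/upper memberships."""
    def one_side(right):
        w = (w_lo + w_hi) / 2.0
        c = np.dot(x, w) / np.sum(w)
        for _ in range(max_iter):
            k = np.searchsorted(x, c) - 1            # switch point
            if right:   # maximize c: low weights left of switch, high right
                w = np.where(np.arange(len(x)) <= k, w_lo, w_hi)
            else:       # minimize c: high weights left, low weights right
                w = np.where(np.arange(len(x)) <= k, w_hi, w_lo)
            c_new = np.dot(x, w) / np.sum(w)
            if abs(c_new - c) < tol:
                return c_new
            c = c_new
        return c
    return one_side(right=False), one_side(right=True)

x = np.linspace(0.0, 10.0, 101)
w_hi = np.exp(-0.5 * ((x - 5) / 2.0) ** 2)           # upper MF (Gaussian)
w_lo = 0.6 * w_hi                                    # lower MF inside the FOU
cl, cr = km_centroid(x, w_lo, w_hi)
print(f"centroid interval: [{cl:.4f}, {cr:.4f}], span = {cr - cl:.4f}")
```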

  1. Fixed-interval performance and self-control in children.

    PubMed Central

    Darcheville, J C; Rivière, V; Wearden, J H

    1992-01-01

    Operant responses of 16 children (mean age 6 years and 1 month) were reinforced according to different fixed-interval schedules (with interreinforcer intervals of 20, 30, or 40 s) in which the reinforcers were either 20-s or 40-s presentations of a cartoon. In another procedure, they received training on a self-control paradigm in which both reinforcer delay (0.5 s or 40 s) and reinforcer duration (20 s or 40 s of cartoons) varied, and subjects were offered a choice between various combinations of delay and duration. Individual differences in behavior under the self-control procedure were precisely mirrored by individual differences under the fixed-interval schedule. Children who chose the smaller immediate reinforcer on the self-control procedure (impulsive) produced short postreinforcement pauses and high response rates in the fixed-interval conditions, and both measures changed little with changes in fixed-interval value. Conversely, children who chose the larger delayed reinforcer in the self-control condition (the self-controlled subjects) exhibited lower response rates and long postreinforcement pauses, which changed systematically with changes in the interval, in their fixed-interval performances. PMID:1573372

  2. High-precision U-Pb geochronology of the Jurassic Yanliao Biota from Jianchang (western Liaoning Province, China): Age constraints on the rise of feathered dinosaurs and eutherian mammals

    NASA Astrophysics Data System (ADS)

    Chu, Zhuyin; He, Huaiyu; Ramezani, Jahandar; Bowring, Samuel A.; Hu, Dongyu; Zhang, Lijun; Zheng, Shaolin; Wang, Xiaolin; Zhou, Zhonghe; Deng, Chenglong; Guo, Jinghui

    2016-10-01

    The Yanliao Biota of northeastern China comprises the oldest feathered dinosaurs and transitional pterosaurs, as well as the earliest eutherian mammals, multituberculate mammals, and new euharamiyidan species that are key elements of the Mesozoic biotic record. Recent discoveries of the Yanliao Biota in the Daxishan section near the town of Linglongta, Jianchang County, in western Liaoning Province have greatly enhanced our knowledge of the transition from dinosaurs to birds, from primitive to derived pterosaurs, and of the early evolution of mammals. Nevertheless, fundamental questions regarding the correlation of fossil-bearing strata, the rates of dinosaur and mammalian evolution, and their relationship to environmental change in deep time remain unresolved owing to the paucity of precise and accurate temporal constraints. These limitations underscore the importance of placing the rich fossil record of Jianchang within a high-resolution chronostratigraphic framework, which has thus far been hampered by the relatively low precision of in situ radioisotopic dating techniques. Here we present high-precision U-Pb zircon geochronology by chemical abrasion isotope dilution thermal ionization mass spectrometry (CA-ID-TIMS) from three interstratified ash beds previously dated by the secondary-ion mass spectrometry (SIMS) technique. The results constrain the key fossil horizons of the Daxishan section to an interval spanning 160.89 to 160.25 Ma, with 2σ analytical uncertainties ranging from ±46 to ±69 kyr. These data place the Yanliao Biota from Jianchang in the Oxfordian Stage of the Late Jurassic, and mark the Daxishan section as the site of Earth's oldest precisely dated feathered dinosaurs and eutherian mammals.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreras, Ignacio; Trujillo, Ignacio, E-mail: i.ferreras@ucl.ac.uk

    At the core of the standard cosmological model lies the assumption that the redshift of distant galaxies is independent of photon wavelength. This invariance of cosmological redshift with wavelength is routinely found in all galaxy spectra with a precision of Δz ∼ 10⁻⁴. The combined use of approximately half a million high-quality galaxy spectra from the Sloan Digital Sky Survey (SDSS) allows us to explore this invariance down to a nominal precision in redshift of 10⁻⁶ (statistical). Our analysis is performed over the redshift interval 0.02 < z < 0.25. We use the centroids of spectral lines over the 3700-6800 Å rest-frame optical window. We do not find any difference in redshift between the blue and red sides down to a precision of 10⁻⁶ at z ≲ 0.1 and 10⁻⁵ at 0.1 ≲ z ≲ 0.25 (i.e., at least an order of magnitude better than with single galaxy spectra). This is the first time the wavelength-independence of the (1 + z) redshift law has been confirmed over a wide spectral window at this precision level. This result holds independently of the stellar population of the galaxies and their kinematical properties. It is also robust against wavelength calibration issues. The limited spectral resolution (R ∼ 2000) of the SDSS data, combined with the asymmetric wavelength sampling of the spectral features in the observed rest frame due to the (1 + z) stretching of the lines, prevents our methodology from achieving a precision higher than 10⁻⁵ at z > 0.1. Future attempts to constrain this law will require high-quality galaxy spectra at higher resolution (R ≳ 10,000).

  4. Quantification of nimesulide in human plasma by high-performance liquid chromatography/tandem mass spectrometry. Application to bioequivalence studies.

    PubMed

    Barrientos-Astigarraga, R E; Vannuchi, Y B; Sucupira, M; Moreno, R A; Muscará, M N; De Nucci, G

    2001-12-01

    A method based on liquid chromatography with negative ion electrospray ionization and tandem mass spectrometry is described for the determination of nimesulide in human plasma. Liquid-liquid extraction using a mixture of diethyl ether and dichloromethane was employed and celecoxib was used as an internal standard. The chromatographic run time was 4.5 min and the weighted (1/x) calibration curve was linear in the range 10.0-2000 ng mL⁻¹. The limit of quantification was 10 ng mL⁻¹; the intra-batch precision was 6.3, 2.1 and 2.1% and the intra-batch accuracy was 3.2, 0.3 and 0.1% for 30, 300 and 1200 ng mL⁻¹, respectively. The inter-batch precision was 2.3, 2.8 and 2.7% and the accuracy was 3.3, 0.3 and 0.1% for 30, 300 and 1200 ng mL⁻¹, respectively. This method was employed in a bioequivalence study of one nimesulide drop formulation (nimesulide 50 mg mL⁻¹ drops, Medley S/A Indústria Farmacêutica, Brazil) against one standard nimesulide drop formulation (Nisulid, 50 mg mL⁻¹ drops, Astra Médica, Brazil). Twenty-four healthy volunteers (both sexes) took part in the study and received a single oral dose of nimesulide (100 mg, equivalent to 2 ml of either formulation) in an open, randomized, two-period crossover design, with a 2-week washout interval between periods. The 90% confidence intervals (CI) for the geometric mean ratios between nimesulide and Nisulid were 93.1-109.6% for Cmax, 87.7-99.8% for AUClast and 88.1-99.7% for AUC0-∞. Since the 90% CIs for the above-mentioned parameters fell within the 80-125% interval proposed by the US Food and Drug Administration, the two formulations were considered bioequivalent in terms of both rate and extent of absorption. Copyright 2001 John Wiley & Sons, Ltd.

  5. Linear and volumetric dimensional changes of injection-molded PMMA denture base resins.

    PubMed

    El Bahra, Shadi; Ludwig, Klaus; Samran, Abdulaziz; Freitag-Wolf, Sandra; Kern, Matthias

    2013-11-01

    The aim of this study was to evaluate the linear and volumetric dimensional changes of six denture base resins processed by their corresponding injection-molding systems at 3 time intervals of water storage. Two heat-curing (SR Ivocap Hi Impact and Lucitone 199) and four auto-curing (IvoBase Hybrid, IvoBase Hi Impact, PalaXpress, and Futura Gen) acrylic resins were used with their specific injection-molding technique to fabricate 6 specimens of each material. Linear and volumetric dimensional changes were determined by means of a digital caliper and an electronic hydrostatic balance, respectively, after water storage of 1, 30, or 90 days. Means and standard deviations of linear and volumetric dimensional changes were calculated as percentages (%). Statistical analysis was done using Student's and Welch's t tests with Bonferroni-Holm correction for multiple comparisons (α=0.05). Statistically significant differences in linear dimensional changes between resins were demonstrated at all three time intervals of water immersion (p≤0.05), with the exception of the following comparisons, which showed no significant difference: IvoBase Hi Impact/SR Ivocap Hi Impact and PalaXpress/Lucitone 199 after 1 day, Futura Gen/PalaXpress and PalaXpress/Lucitone 199 after 30 days, and IvoBase Hybrid/IvoBase Hi Impact after 90 days. Also, statistically significant differences in volumetric dimensional changes between resins were found at all three time intervals of water immersion (p≤0.05), with the exception of the comparison between PalaXpress and Futura Gen. Denture base resins (IvoBase Hybrid and IvoBase Hi Impact) processed by the new injection-molding system (IvoBase) revealed superior dimensional precision. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  6. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  7. A one-kilogram quartz resonator as a mass standard.

    PubMed

    Vig, John; Howe, David

    2013-02-01

    The SI unit of mass, the kilogram, is defined by a single artifact, the International Prototype Kilogram. This artifact, the primary mass standard, suffers from long-term instabilities that are neither well understood nor easily monitored. A secondary mass standard consisting of a 1-kg quartz resonator in ultrahigh vacuum is proposed. The frequency stability of such a resonator is likely to be far higher than the mass stability of the primary mass standard. Moreover, the resonator would provide a link to the SI time-interval unit. When compared with a laboratory-grade atomic frequency standard or GPS time, the frequency of the resonator could be monitored, on a continuous basis, with 10⁻¹⁵ precision in only a few days of averaging. It could also be coordinated, worldwide, with other resonator mass standards without the need to transport the standards.

  8. Thermalization of oscillator chains with onsite anharmonicity and comparison with kinetic theory

    DOE PAGES

    Mendl, Christian B.; Lu, Jianfeng; Lukkarinen, Jani

    2016-12-02

    We perform microscopic molecular dynamics simulations of particle chains with an onsite anharmonicity to study relaxation of spatially homogeneous states to equilibrium, and directly compare the simulations with the corresponding Boltzmann-Peierls kinetic theory. The Wigner function serves as a common interface between the microscopic and kinetic level. We demonstrate quantitative agreement after an initial transient time interval. In particular, besides energy conservation, we observe the additional quasiconservation of the phonon density, defined via an ensemble average of the related microscopic field variables and exactly conserved by the kinetic equations. On superkinetic time scales, density quasiconservation is lost while energy remains conserved, and we find evidence for eventual relaxation of the density to its canonical ensemble value; however, the precise mechanism remains unknown and is not captured by the Boltzmann-Peierls equations.

  9. Thermalization of oscillator chains with onsite anharmonicity and comparison with kinetic theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendl, Christian B.; Lu, Jianfeng; Lukkarinen, Jani

    We perform microscopic molecular dynamics simulations of particle chains with an onsite anharmonicity to study relaxation of spatially homogeneous states to equilibrium, and directly compare the simulations with the corresponding Boltzmann-Peierls kinetic theory. The Wigner function serves as a common interface between the microscopic and kinetic level. We demonstrate quantitative agreement after an initial transient time interval. In particular, besides energy conservation, we observe the additional quasiconservation of the phonon density, defined via an ensemble average of the related microscopic field variables and exactly conserved by the kinetic equations. On superkinetic time scales, density quasiconservation is lost while energy remains conserved, and we find evidence for eventual relaxation of the density to its canonical ensemble value; however, the precise mechanism remains unknown and is not captured by the Boltzmann-Peierls equations.

  10. Human coffee drinking: manipulation of concentration and caffeine dose.

    PubMed Central

    Griffiths, R R; Bigelow, G E; Liebson, I A; O'Keeffe, M; O'Leary, D; Russ, N

    1986-01-01

    In a residential research ward, coffee drinking was studied in 9 volunteer human subjects with histories of heavy coffee drinking. A series of five experiments was undertaken to characterize ad libitum coffee consumption and to investigate the effects of manipulating coffee concentration, caffeine dose per cup, and caffeine preloads prior to coffee drinking. Manipulations were double-blind and scheduled in randomized sequences across days. When cups of coffee were freely available, coffee drinking tended to be rather regularly spaced during the day, with intercup intervals becoming progressively longer throughout the day; experimental manipulations showed that this lengthening of intercup intervals was not due to accumulating caffeine levels. The number of cups of coffee consumed was an inverted U-shaped function of both coffee concentration and caffeine dose per cup; however, coffee-concentration and dose-per-cup manipulations did not produce similar effects on other measures of coffee drinking (intercup interval, time to drink a cup, within-day distribution of cups). Caffeine preload produced dose-related decreases in the number of cups consumed. As a whole, these experiments provide some limited evidence for both the suppressive and the reinforcing effects of caffeine on coffee consumption. Examination of total daily coffee and caffeine intake across experiments, however, provides no evidence for precise regulation (i.e., titration) of coffee or caffeine intake. PMID:3958660

  11. Autogenic geomorphic processes determine the resolution and fidelity of terrestrial paleoclimate records.

    PubMed

    Foreman, Brady Z; Straub, Kyle M

    2017-09-01

    Terrestrial paleoclimate records rely on proxies hosted in alluvial strata whose beds are deposited by unsteady and nonlinear geomorphic processes. It is broadly assumed that this renders the resultant time series of terrestrial paleoclimatic variability noisy and incomplete. We evaluate this assumption using a model of oscillating climate and the precise topographic evolution of an experimental alluvial system. We find that geomorphic stochasticity can create aliasing in the time series and spurious climate signals, but these issues are eliminated when the period of climate oscillation is longer than a key time scale of internal dynamics in the geomorphic system. This emergent autogenic geomorphic behavior imparts regularity to deposition and represents a natural discretization interval of the continuous climate signal. We propose that this time scale in nature could be in excess of 10⁴ years but would still allow assessments of the rates of climate change at resolutions finer than the existing age model techniques in isolation.

  12. Autogenic geomorphic processes determine the resolution and fidelity of terrestrial paleoclimate records

    PubMed Central

    Foreman, Brady Z.; Straub, Kyle M.

    2017-01-01

    Terrestrial paleoclimate records rely on proxies hosted in alluvial strata whose beds are deposited by unsteady and nonlinear geomorphic processes. It is broadly assumed that this renders the resultant time series of terrestrial paleoclimatic variability noisy and incomplete. We evaluate this assumption using a model of oscillating climate and the precise topographic evolution of an experimental alluvial system. We find that geomorphic stochasticity can create aliasing in the time series and spurious climate signals, but these issues are eliminated when the period of climate oscillation is longer than a key time scale of internal dynamics in the geomorphic system. This emergent autogenic geomorphic behavior imparts regularity to deposition and represents a natural discretization interval of the continuous climate signal. We propose that this time scale in nature could be in excess of 10⁴ years but would still allow assessments of the rates of climate change at resolutions finer than the existing age model techniques in isolation. PMID:28924607

  13. Temporal Delineation and Quantification of Short Term Clustered Mining Seismicity

    NASA Astrophysics Data System (ADS)

    Woodward, Kyle; Wesseloo, Johan; Potvin, Yves

    2017-07-01

    The assessment of the temporal characteristics of seismicity is fundamental to understanding and quantifying the seismic hazard associated with mining, the effectiveness of strategies and tactics used to manage seismic hazard, and the relationship between seismicity and changes to the mining environment. This article aims to improve the accuracy and precision with which the temporal dimension of seismic responses can be quantified and delineated. We present a review and discussion of the occurrence of time-dependent mining seismicity with a specific focus on temporal modelling and the modified Omori law (MOL). This forms the basis for the development of a simple weighted metric that allows for the consistent temporal delineation and quantification of a seismic response. The optimisation of this metric allows for the selection of the most appropriate modelling interval given the temporal attributes of time-dependent mining seismicity. We evaluate the performance of the weighted metric by modelling a synthetic seismic dataset. This assessment shows that seismic responses can be quantified and delineated by the MOL with reasonable accuracy and precision when the modelling is optimised by evaluating the weighted MLE metric. Furthermore, this assessment highlights that decreased performance of the weighted MLE metric can be expected if there is a lack of contrast between the temporal characteristics of events associated with different processes.
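
    Since the metric is built around fitting the MOL, a minimal maximum-likelihood fit of the MOL rate λ(t) = K/(c + t)^p can be sketched as follows. This is a hedged toy version: the synthetic event times, starting values, and parameters are illustrative, and the article's weighted-metric machinery is not reproduced.

```python
# Hedged sketch: maximum-likelihood fit of the modified Omori law (MOL),
# lambda(t) = K / (c + t)**p, to a synthetic aftershock-style sequence.
# Log-likelihood: sum(log lambda(t_i)) - integral of lambda over [0, T].
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def mol_integral(K, c, p, T):
    if abs(p - 1.0) < 1e-9:
        return K * (np.log(c + T) - np.log(c))
    return K * ((c + T) ** (1 - p) - c ** (1 - p)) / (1 - p)

def neg_loglik(theta, t, T):
    K, c, p = np.exp(theta)          # log-parameterization keeps K, c, p > 0
    return -(np.sum(np.log(K) - p * np.log(c + t)) - mol_integral(K, c, p, T))

# Synthetic event times via thinning (true K=50, c=0.5, p=1.1; days)
T, K0, c0, p0 = 30.0, 50.0, 0.5, 1.1
lam_max = K0 / c0 ** p0
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
keep = rng.uniform(size=cand.size) < (K0 / (c0 + cand) ** p0) / lam_max
t = cand[keep]

res = minimize(neg_loglik, x0=np.log([10.0, 1.0, 1.0]), args=(t, T),
               method="Nelder-Mead")
print("K, c, p =", np.exp(res.x).round(3), "n_events =", t.size)
```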

  14. Multidisciplinary design and analytic approaches to advance prospective research on the multilevel determinants of child health.

    PubMed

    Johnson, Sara B; Little, Todd D; Masyn, Katherine; Mehta, Paras D; Ghazarian, Sharon R

    2017-06-01

    Characterizing the determinants of child health and development over time, and identifying the mechanisms by which these determinants operate, is a research priority. The growth of precision medicine has increased awareness and refinement of conceptual frameworks, data management systems, and analytic methods for multilevel data. This article reviews key methodological challenges in cohort studies designed to investigate multilevel influences on child health and strategies to address them. We review and summarize methodological challenges that could undermine prospective studies of the multilevel determinants of child health and ways to address them, borrowing approaches from the social and behavioral sciences. Nested data, variation in intervals of data collection and assessment, missing data, construct measurement across development and reporters, and unobserved population heterogeneity pose challenges in prospective multilevel cohort studies with children. We discuss innovations in missing data, innovations in person-oriented analyses, and innovations in multilevel modeling to address these challenges. Study design and analytic approaches that facilitate the integration across multiple levels, and that account for changes in people and the multiple, dynamic, nested systems in which they participate over time, are crucial to fully realize the promise of precision medicine for children and adolescents. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Motor control by precisely timed spike patterns

    PubMed Central

    Srivastava, Kyle H.; Holmes, Caroline M.; Vellema, Michiel; Pack, Andrea R.; Elemans, Coen P. H.; Nemenman, Ilya; Sober, Samuel J.

    2017-01-01

    A fundamental problem in neuroscience is understanding how sequences of action potentials (“spikes”) encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time interval (spike rate), recent studies have shown that additional information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior—namely, respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision. PMID:28100491

  16. Interval Management: Development and Implementation of an Airborne Spacing Concept

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Penhallegon, William J.; Weitz, Lesley A.; Bone, Randall S.; Levitt, Ian; Flores Kriegsfeld, Julia A.; Arbuckle, Doug; Johnson, William C.

    2016-01-01

    Interval Management is a suite of ADS-B-enabled applications that allows the air traffic controller to instruct a flight crew to achieve and maintain a desired spacing relative to another aircraft. The flight crew, assisted by automation, manages the speed of their aircraft to deliver more precise inter-aircraft spacing than is otherwise possible, which increases traffic throughput at the same or higher levels of safety. Interval Management has evolved from a long history of research and is now seen as a core NextGen capability. With avionics standards recently published, completion of an Investment Analysis Readiness Decision by the FAA, and multiple flight tests planned, Interval Management will soon be part of everyday use in the National Airspace System. Second generation, Advanced Interval Management capabilities are being planned to provide a wider range of operations and improved performance and benefits. This paper briefly reviews the evolution of Interval Management and describes current development and deployment plans. It also reviews concepts under development as the next generation of applications.

  17. Measuring levee elevation heights in North Louisiana.

    DOT National Transportation Integrated Search

    2010-01-01

    The primary goals of this research are to measure the elevation and centerline coordinates of the top of federal and local levees and also to ensure that the resulting global positioning system (GPS) measurement data is within a precision interval of...

  18. Monitoring on Xi'an ground fissures deformation with TerraSAR-X data

    USGS Publications Warehouse

    Zhao, C.; Zhang, Q.; Zhu, W.; Lu, Z.

    2012-01-01

    Owing to the fine resolution of TerraSAR-X data available since 2007, this paper uses six TerraSAR-X scenes (strip mode) acquired between 3 December 2009 and 23 March 2010 to detect and monitor the active fissures over the Xi'an region. Three approaches were adopted for high-precision detection and monitoring of the Xi'an-Chang'an fissures: small baseline subset (SBAS) analysis to test the atmospheric effects of the differential interferogram pairs stepwise; two-pass differential interferometry with a very short perpendicular baseline to generate the whole deformation map over a 44-day interval; and, finally, the corner reflector (CR) technique to closely monitor the relative deformation time series between two CRs installed across two ground fissures. Results showed that TerraSAR-X data are a good choice for small-scale ground fissure detection and monitoring, although special consideration should be given to their strong temporal and baseline decorrelation. Secondly, ground fissures in Xi'an were mostly detected at the junction of stable and deforming regions. Lastly, CR-InSAR has the potential to monitor relative deformation across fissures with millimeter precision.

  19. Regular Patterns in Cerebellar Purkinje Cell Simple Spike Trains

    PubMed Central

    Shin, Soon-Lim; Hoebeek, Freek E.; Schonewille, Martijn; De Zeeuw, Chris I.; Aertsen, Ad; De Schutter, Erik

    2007-01-01

    Background: Cerebellar Purkinje cells (PC) in vivo are commonly reported to generate irregular spike trains, documented by high coefficients of variation of interspike-intervals (ISI). In strong contrast, they fire very regularly in the in vitro slice preparation. We studied the nature of this difference in firing properties by focusing on short-term variability and its dependence on behavioral state. Methodology/Principal Findings: Using an analysis based on CV2 values, we could isolate precise regular spiking patterns, lasting up to hundreds of milliseconds, in PC simple spike trains recorded in both anesthetized and awake rodents. Regular spike patterns, defined by low variability of successive ISIs, comprised over half of the spikes, showed a wide range of mean ISIs, and were affected by behavioral state and tactile stimulation. Interestingly, regular patterns often coincided in nearby Purkinje cells without precise synchronization of individual spikes. Regular patterns exclusively appeared during the up state of the PC membrane potential, while single ISIs occurred both during up and down states. Possible functional consequences of regular spike patterns were investigated by modeling the synaptic conductance in neurons of the deep cerebellar nuclei (DCN). Simulations showed that these regular patterns caused epochs of relatively constant synaptic conductance in DCN neurons. Conclusions/Significance: Our findings indicate that the apparent irregularity in cerebellar PC simple spike trains in vivo is most likely caused by mixing of different regular spike patterns, separated by single long intervals, over time. We propose that PCs may signal information, at least in part, in regular spike patterns to downstream DCN neurons. PMID:17534435

  20. Complement dependent cytotoxicity (CDC) activity of a humanized anti Lewis-Y antibody: FACS-based assay versus the 'classical' radioactive method -- qualification, comparison and application of the FACS-based approach.

    PubMed

    Nechansky, A; Szolar, O H J; Siegl, P; Zinoecker, I; Halanek, N; Wiederkum, S; Kircheis, R

    2009-05-01

    The fully humanized Lewis-Y carbohydrate-specific monoclonal antibody (mAb) IGN311 is currently being tested in a passive immunotherapy approach in a clinical phase I trial, and regulatory requirements therefore demand qualified assays for product analysis. To demonstrate the functionality of its Fc-region, the capacity of IGN311 to mediate complement dependent cytotoxicity (CDC) against human breast cancer cells was evaluated. The "classical" radioactive method using chromium-51 and a FACS-based assay were established and qualified according to ICH guidelines. Parameters evaluated were specificity, response function, bias, repeatability (intra-day precision), intermediate precision (operator-time different), and linearity (assay range). In the course of a fully nested design, a four-parameter logistic equation was identified as an appropriate calibration model for both methods. For the radioactive assay, the bias ranged from -6.1% to -3.6%. The intermediate precision for future means of duplicate measurements revealed values from 12.5% to 15.9%, and the total error (beta-expectation tolerance interval) of the method was found to be <40%. For the FACS-based assay, the bias ranged from -8.3% to 0.6%, and the intermediate precision for future means of duplicate measurements revealed values from 4.2% to 8.0%. The total error of the method was found to be <25%. The presented data demonstrate that the FACS-based CDC assay is more accurate than the radioactive assay. Also, the elimination of radioactivity and the 'real-time' counting of apoptotic cells further justify the implementation of this method, which was subsequently applied for testing the influence of storage at 4 degrees C and 25 degrees C ('stability testing') on the potency of the IGN311 drug product. The obtained results demonstrate that the qualified functional assay represents a stability-indicating test method.

  1. Integrative stratigraphy during extreme environmental changes and biotic recovery time: The Early Triassic in Indian Himalaya

    NASA Astrophysics Data System (ADS)

    Richoz, Sylvain; Krystyn, Leopold; Algeo, Thomas; Bhargava, Om

    2014-05-01

    The understanding of extreme environmental changes such as major extinction events, perturbations of global biogeochemical cycles, or rapid climate shifts rests on a precise timing of the different events. Especially in such rapidly changing environments, exact correlations are difficult to establish, which underlines the necessity of an integrated stratigraphy using all the tools at our disposal. A Lower Triassic section at Mud in the Spiti Valley (Western Himalaya, India) is a candidate section for the GSSP of the Induan-Olenekian Boundary (IOB). The succession was deposited in a deep-shelf setting on the southern margin of the Neotethys Ocean. The section contains abundant fossils allowing a very precise regional biostratigraphy and displays no signs of sedimentary breaks. Analysis of pelagic faunas proves a significant, two-step radiation phase in ammonoids and conodonts close to the Induan-Olenekian boundary. These diversifications are coupled with a short-lived positive δ13Ccarb excursion of global significance. The Spiti δ13Ccarb excursion displays, however, a different amplitude and biostratigraphic position than in other sections relevant for this time interval. In this study, we analyzed δ13Ccarb, δ13Corg, and δ15Norg as well as major, trace, and REE concentrations for a 16-m-thick interval spanning the mid-Griesbachian to early Spathian substages, to better constrain the chain of events. Prior to the first radiation step, a strong gradient between the δ13Ccarb values of tempestite beds containing shallow-water carbonate and of carbonate that originated in deeper water is interpreted as a sign of a stratified water column. This effect disappears with the onset of better oxygenated conditions at the time of the ammonoid-conodont radiation, which also corresponds to positive δ13Ccarb, δ13Corg and δ15Norg excursions. A decrease in Mo and U concentrations occurring at the same point suggests a shift toward locally less reducing conditions. The second step coincided with the change from terrigenous to almost pure carbonate sedimentation. On the one hand, this new set of data demonstrates the rapidity of the radiation of the pelagic fauna in the aftermath of the Permian-Triassic extinction as soon as environmental conditions became favourable again. On the other hand, it demonstrates that bathymetry, as well as other local factors, could have had a significant impact on the timing of these radiations and may hamper solid worldwide correlations.

  2. Nonparametric methods in actigraphy: An update

    PubMed Central

    Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.

    2014-01-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), used to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by varying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) across the time intervals. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, whereas the conventional IV60 variable did not detect it. Rhythmic synchronization of activity and rest was significantly higher in young subjects than in adults with Parkinson's disease when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization. PMID:26483921
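
    For reference, the two variables can be computed directly from a binned activity series. The sketch below uses the standard nonparametric definitions with the bin length as a parameter, echoing the article's idea of varying the analysis interval; the toy data and constants are illustrative, not the article's datasets.

```python
# Hedged sketch of the standard nonparametric actigraphy variables.
# IS compares the average 24-h profile to total variance; IV compares
# hour-to-hour (bin-to-bin) first differences to total variance.
import numpy as np

def is_iv(activity, samples_per_hour, bin_hours=1):
    """activity: 1-D array; bin_hours must divide 24."""
    bin_len = samples_per_hour * bin_hours
    x = activity[: len(activity) // bin_len * bin_len]
    x = x.reshape(-1, bin_len).mean(axis=1)          # binned series
    per_day = 24 // bin_hours
    days = len(x) // per_day
    x = x[: days * per_day]
    mean = x.mean()
    # Interdaily stability (IS)
    profile = x.reshape(days, per_day).mean(axis=0)  # average daily profile
    IS = (len(x) * np.sum((profile - mean) ** 2)) / (
        per_day * np.sum((x - mean) ** 2))
    # Intradaily variability (IV)
    IV = (len(x) * np.sum(np.diff(x) ** 2)) / (
        (len(x) - 1) * np.sum((x - mean) ** 2))
    return IS, IV

# Toy signal: noisy 24-h rhythm sampled each minute for 7 days
t = np.arange(7 * 24 * 60)
rng = np.random.default_rng(4)
act = 50 + 40 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 20, t.size)
print(is_iv(act, samples_per_hour=60, bin_hours=1))
```

    Averaging IS and IV over several bin lengths, in the spirit of ISm and IVm above, is then a matter of looping `bin_hours` over the divisors of 24.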

  3. Cross-Sectional HIV Incidence Surveillance: A Benchmarking of Approaches for Estimating the 'Mean Duration of Recent Infection'.

    PubMed

    Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex

    2017-03-01

    The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time spent in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure the accuracy and precision of over twenty approaches in thirty scenarios capturing various study designs, subject behaviors, and test dynamics that may be encountered in practice. The results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performs best - while 'random effects' describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in the scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times.
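
    The regression approach that performed best can be sketched simply: regress 'recent'/'non-recent' classifications on time since infection and integrate the fitted probability up to the recency cutoff T, since MDRI is that integral. The code below is a hedged toy version on synthetic data with exactly known infection times, which real studies do not have; it ignores the interval censoring and subject clustering discussed above.

```python
# Hedged sketch of regression-based MDRI estimation:
#   MDRI = integral over [0, T] of P(tests 'recent' | time t since infection).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
T = 2.0                                  # recency cutoff (years)
n = 2000
t_obs = rng.uniform(0, 4, n)             # time since infection at each visit
p_true = np.exp(-t_obs / 0.5)            # true P('recent' | t), toy model
recent = rng.binomial(1, p_true)         # binary classification per visit

# Regress the classifications on a polynomial in t (not raw biomarkers)
X = np.column_stack([t_obs, t_obs ** 2, t_obs ** 3])
clf = LogisticRegression(C=1e6).fit(X, recent)   # near-unpenalized fit

grid = np.linspace(0, T, 201)
Xg = np.column_stack([grid, grid ** 2, grid ** 3])
p_hat = clf.predict_proba(Xg)[:, 1]
mdri = np.sum((p_hat[1:] + p_hat[:-1]) / 2 * np.diff(grid))  # trapezoid rule
truth = 0.5 * (1 - np.exp(-T / 0.5))
print(f"estimated MDRI = {mdri:.3f} years (true ≈ {truth:.3f})")
```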

  4. Feasibility Criteria for Interval Management Operations as Part of Arrival Management Operations

    NASA Technical Reports Server (NTRS)

    Levitt, Ian M.; Weitz, Lesley A.; Barmore, Bryan E.; Castle, Michael W.

    2014-01-01

    Interval Management (IM) is a future airborne spacing concept that aims to provide more precise inter-aircraft spacing to yield throughput improvements and greater use of fuel efficient trajectories for arrival and approach operations. To participate in an IM operation, an aircraft must be equipped with avionics that provide speeds to achieve and maintain an assigned spacing interval relative to another aircraft. It is not expected that all aircraft will be equipped with the necessary avionics, but rather that IM fits into a larger arrival management concept developed to support the broader mixed-equipage environment. Arrival management concepts are comprised of three parts: a ground-based sequencing and scheduling function to develop an overall arrival strategy, ground-based tools to support the management of aircraft to that schedule, and the IM tools necessary for the IM operation (i.e., ground-based set-up, initiation, and monitoring, and the flight-deck tools to conduct the IM operation). The Federal Aviation Administration is deploying a near-term ground-automation system to support metering operations in the National Airspace System, which falls within the first two components of the arrival management concept. This paper develops a methodology for determining the required delivery precision at controlled meter points for aircraft that are being managed to a schedule and aircraft being managed to a relative spacing interval in order to achieve desired flow rates and adequate separation at the meter points.

  5. Precise Orbit Solution for Swarm Using Space-Borne GPS Data and Optimized Pseudo-Stochastic Pulses

    PubMed Central

    Zhang, Bingbing; Wang, Zhengtao; Zhou, Lv; Feng, Jiandi; Qiu, Yaodong; Li, Fupeng

    2017-01-01

    Swarm is a European Space Agency (ESA) project that was launched on 22 November 2013, which consists of three Swarm satellites. Swarm precise orbits are essential to the success of the above project. This study investigates how well Swarm zero-differenced (ZD) reduced-dynamic orbit solutions can be determined using space-borne GPS data and optimized pseudo-stochastic pulses under high ionospheric activity. We choose Swarm space-borne GPS data from 1–25 October 2014, and Swarm reduced-dynamic orbits are obtained. Orbit quality is assessed by GPS phase observation residuals and compared with Precise Science Orbits (PSOs) released by ESA. Results show that pseudo-stochastic pulses with a time interval of 6 min and a priori standard deviation (STD) of 10−2 mm/s in radial (R), along-track (T) and cross-track (N) directions are optimized to Swarm ZD reduced-dynamic precise orbit determination (POD). During high ionospheric activity, the mean Root Mean Square (RMS) of Swarm GPS phase residuals is at 9–11 mm, Swarm orbit solutions are also compared with Swarm PSOs released by ESA and the accuracy of Swarm orbits can reach 2–4 cm in R, T and N directions. Independent Satellite Laser Ranging (SLR) validation indicates that Swarm reduced-dynamic orbits have an accuracy of 2–4 cm. Swarm-B orbit quality is better than those of Swarm-A and Swarm-C. The Swarm orbits can be applied to the geomagnetic, geoelectric and gravity field recovery. PMID:28335538

  6. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for negative pressure levels of 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enable the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.
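
    A stand-alone SCG annotator of this kind rests on band-limited envelope extraction followed by constrained peak picking. The following hedged sketch shows that generic idea with SciPy on a toy signal; it is not the authors' validated algorithm, and the filter band, sampling rate, and thresholds are illustrative assumptions.

```python
# Hedged sketch: ECG-free beat detection from an SCG-like signal via a
# band-passed, smoothed envelope and refractory-constrained peak picking.
import numpy as np
from scipy import signal

fs = 500.0                                      # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
# Toy SCG: short 20-Hz bursts at a 1-Hz heart rate, plus noise
gate = (signal.square(2 * np.pi * 1.0 * t, duty=0.08) + 1) / 2
scg = np.sin(2 * np.pi * 20 * t) * gate
scg += np.random.default_rng(7).normal(0, 0.1, t.size)

# Band-pass around typical SCG energy, then smooth the magnitude envelope
sos = signal.butter(4, [5, 30], btype="bandpass", fs=fs, output="sos")
env = np.abs(signal.sosfiltfilt(sos, scg))
env = signal.savgol_filter(env, window_length=101, polyorder=3)

# One detection per cardiac cycle: enforce a 0.4-s refractory distance
peaks, _ = signal.find_peaks(env, distance=int(0.4 * fs), height=env.mean())
rr = np.diff(peaks) / fs                        # inter-beat intervals (s)
print(f"{peaks.size} beats, mean interval {rr.mean():.3f} s")
```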

  7. SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS

    NASA Technical Reports Server (NTRS)

    Brownlow, J. D.

    1994-01-01

    The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60-bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
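    SPA itself is FORTRAN IV on a CDC 6000, but the estimates it reports map directly onto a few standard library calls. A sketch of the core time-domain statistics, least-squares detrending, and a Hamming-windowed power spectrum (Python here for brevity; the file name, sampling interval, and segment length are hypothetical):

```python
import numpy as np
from scipy.signal import welch

x = np.loadtxt("series.txt")     # hypothetical input: one observation per line
dt = 0.01                        # hypothetical sampling interval (s)

# Time-domain estimates, as in SPA's time-domain analysis.
n = np.arange(x.size)
stats = {"mean": x.mean(), "variance": x.var(ddof=1), "std": x.std(ddof=1),
         "mean square": np.mean(x**2), "rms": np.sqrt(np.mean(x**2)),
         "min": x.min(), "max": x.max(), "n": x.size}

# Least-squares detrending removes components with periods longer than the record.
x = x - np.polyval(np.polyfit(n, x, 1), n)

# Frequency-domain estimate: power spectrum smoothed with a Hamming window.
f, pxx = welch(x, fs=1.0 / dt, window="hamming", nperseg=min(1024, x.size))
```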

  8. Dried blood spot testing for seven steroids using liquid chromatography-tandem mass spectrometry with reference interval determination in the Korean population.

    PubMed

    Kim, Borahm; Lee, Mi Na; Park, Hyung Doo; Kim, Jong Won; Chang, Yun Sil; Park, Won Soon; Lee, Soo Youn

    2015-11-01

    Conventional screening for congenital adrenal hyperplasia (CAH) using immunoassays generates a large number of false-positive results. A more specific liquid chromatography-tandem mass spectrometry (LC-MS/MS) method has been introduced to minimize unnecessary follow-ups. However, because of limited data on its use in the Korean population, LC-MS/MS has not yet been incorporated into newborn screening programs in this region. The present study aims to develop and validate an LC-MS/MS method for the simultaneous determination of seven steroids in dried blood spots (DBS) for CAH screening, and to define age-specific reference intervals in the Korean population. We developed and validated an LC-MS/MS method to determine the reference intervals of cortisol, 17-hydroxyprogesterone, 11-deoxycortisol, 21-deoxycortisol, androstenedione, corticosterone, and 11-deoxycorticosterone simultaneously in 453 DBS samples. The samples were from Korean subjects stratified by age group (78 full-term neonates, 76 premature neonates, 89 children, and 100 adults). The accuracy, precision, matrix effects, and extraction recovery were satisfactory for all the steroids at three concentrations; the intra- and inter-day precision coefficients of variation, bias, and recovery were 0.7–7.7%, −1.5 to 9.8%, and 49.3–97.5%, respectively. The linearity range was 1–100 ng/mL for cortisol and 0.5–50 ng/mL for the other steroids (R²>0.99). The reference intervals were in agreement with previous reports. This LC-MS/MS method and the reference intervals validated in the Korean population can be successfully applied to analyze seven steroids in DBS for the diagnosis of CAH.
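    The abstract does not spell out how the intervals were computed; the conventional nonparametric approach takes the central 95% of results from each reference group. A sketch under that assumption (the cohort values below are simulated placeholders, not study data):

```python
import numpy as np

def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: central `coverage` fraction of the cohort."""
    tail = (1.0 - coverage) / 2.0
    return np.quantile(values, [tail, 1.0 - tail])

# Placeholder 17-hydroxyprogesterone values for one age group (n = 78), ng/mL.
rng = np.random.default_rng(0)
full_term = rng.lognormal(mean=0.5, sigma=0.6, size=78)   # hypothetical, not study data
low, high = reference_interval(full_term)
print(f"reference interval: {low:.2f} to {high:.2f} ng/mL")
```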

  9. Clinical Evaluation of the BD FACSPresto™ Near-Patient CD4 Counter in Kenya

    PubMed Central

    Angira, Francis; Akoth, Benta; Omolo, Paul; Opollo, Valarie; Bornheimer, Scott; Judge, Kevin; Tilahun, Henok; Lu, Beverly; Omana-Zapata, Imelda; Zeh, Clement

    2016-01-01

    Background The BD FACSPresto™ Near-Patient CD4 Counter was developed to expand HIV/AIDS management in resource-limited settings. It measures absolute CD4 counts (AbsCD4), percent CD4 (%CD4), and hemoglobin (Hb) from a single drop of capillary or venous blood in approximately 23 minutes, with a throughput of 10 samples per hour. We assessed the performance of the BD FACSPresto system, evaluating accuracy, stability, linearity, precision, and reference intervals using capillary and venous blood at the KEMRI/CDC HIV-research laboratory, Kisumu, Kenya, and precision and linearity at BD Biosciences, California, USA. Methods For accuracy, venous samples were tested using the BD FACSCalibur™ instrument with BD Tritest™ CD3/CD4/CD45 reagent, BD Trucount™ tubes, and BD Multiset™ software for AbsCD4 and %CD4, and the Sysmex™ KX-21N for Hb. Stability studies evaluated the duration of staining (18–120-minute incubation) and the effects of venous blood storage <6–24 hours post-draw. A normal cohort was tested for reference intervals. Precision covered multiple days, operators, and instruments. Linearity required mixing two pools of samples to obtain evenly spaced concentrations for AbsCD4, total lymphocytes, and Hb. Results AbsCD4 and %CD4 venous/capillary (N = 189/N = 162) accuracy results gave Deming regression slopes within 0.97–1.03 and R² ≥0.96. For Hb, Deming regression results were R² ≥0.94 and slope ≥0.94 for both venous and capillary samples. Stability remained within 10% for 2 hours after staining and for venous blood stored less than 24 hours. Reference interval results showed that differences by gender, but not by age, were statistically significant (p<0.05). Precision results had <3.5% coefficient of variation for AbsCD4, %CD4, and Hb, except for low AbsCD4 samples (<6.8%). Linearity was 42–4,897 cells/μL for AbsCD4, 182–11,704 cells/μL for total lymphocytes, and 2–24 g/dL for Hb. Conclusions The BD FACSPresto system provides accurate, precise clinical results for capillary or venous blood samples and is suitable for near-patient CD4 testing. Trial Registration ClinicalTrials.gov NCT02396355 PMID:27483008
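    Deming regression, used above to compare the FACSPresto against the reference methods, fits a line while allowing measurement error in both variables, unlike ordinary least squares. A minimal sketch assuming equal error variances (lam = 1 is a common default, not a value stated in the paper):

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression; lam is the ratio of y-error variance to x-error variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)
    syy = np.sum((y - y.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))   # assumed nonzero
    slope = ((syy - lam * sxx) +
             np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()       # (slope, intercept)
```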

  10. Precise Temperature Measurement for Increasing the Survival of Newborn Babies in Incubator Environments

    PubMed Central

    Frischer, Robert; Penhaker, Marek; Krejcar, Ondrej; Kacerovsky, Marian; Selamat, Ali

    2014-01-01

    Precise temperature measurement is essential in a wide range of applications in the medical environment; however, for the problem of temperature measurement inside a simple incubator, neither a simple nor a low-cost solution has been proposed yet. Given that standard temperature sensors do not satisfy the necessary expectations, the problem is not measuring temperature, but rather achieving the desired sensitivity. In response, this paper introduces a novel hardware design, as well as its implementation, that increases measurement sensitivity in defined temperature intervals at low cost. PMID:25494352

  11. The Effect of Quantum-Mechanical Interference on Precise Measurements of the n = 2 Triplet P Fine Structure of Helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsman, A.; Horbatsch, M.; Hessels, E. A., E-mail: hessels@yorku.ca

    2015-09-15

    For many decades, improvements in both theory and experiment of the fine structure of the n = 2 triplet P levels of helium have allowed for an increasingly precise determination of the fine-structure constant. Recently, it has been observed that quantum-mechanical interference between neighboring resonances can cause significant shifts, even if such neighboring resonances are separated by thousands of natural widths. The shifts depend in detail on the experimental method used for the measurement, as well as the specific experimental parameters employed. Here, we review how these shifts apply for the most precise measurements of the helium 2³P fine-structure intervals.

  12. Testing the limits of Paleozoic chronostratigraphic correlation via high-resolution (δ13Ccarb) biochemostratigraphy across the Llandovery–Wenlock (Silurian) boundary: Is a unified Phanerozoic time scale achievable?

    USGS Publications Warehouse

    Cramer, Bradley D.; Loydell, David K.; Samtleben, Christian; Munnecke, Axel; Kaljo, Dimitri; Mannik, Peep; Martma, Tonu; Jeppsson, Lennart; Kleffner, Mark A.; Barrick, James E.; Johnson, Craig A.; Emsbo, Poul; Joachimski, Michael M.; Bickert, Torsten; Saltzman, Matthew R.

    2010-01-01

    The resolution and fidelity of global chronostratigraphic correlation are direct functions of the time period under consideration. By virtue of deep-ocean cores and astrochronology, the Cenozoic and Mesozoic time scales carry error bars of a few thousand years (k.y.) to a few hundred k.y. In contrast, most of the Paleozoic time scale carries error bars of plus or minus a few million years (m.y.), and chronostratigraphic control better than ±1 m.y. is considered "high resolution." The general lack of Paleozoic abyssal sediments and paucity of orbitally tuned Paleozoic data series combined with the relative incompleteness of the Paleozoic stratigraphic record have proven historically to be such an obstacle to intercontinental chronostratigraphic correlation that resolving the Paleozoic time scale to the level achieved during the Mesozoic and Cenozoic was viewed as impractical, impossible, or both. Here, we utilize integrated graptolite, conodont, and carbonate carbon isotope (δ13Ccarb) data from three paleocontinents (Baltica, Avalonia, and Laurentia) to demonstrate chronostratigraphic control for upper Llandovery through middle Wenlock (Telychian-Sheinwoodian, ~436-426 Ma) strata with a resolution of a few hundred k.y. The interval surrounding the base of the Wenlock Series can now be correlated globally with precision approaching 100 k.y., but some intervals (e.g., uppermost Telychian and upper Sheinwoodian) are either yet to be studied in sufficient detail or do not show sufficient biologic speciation and/or extinction or carbon isotopic features to delineate such small time slices. Although producing such resolution during the Paleozoic presents an array of challenges unique to the era, we have begun to demonstrate that erecting a Paleozoic time scale comparable to that of younger eras is achievable. © 2010 Geological Society of America.

  13. Direct high-precision U-Pb geochronology of the end-Cretaceous extinction and calibration of Paleocene astronomical timescales

    NASA Astrophysics Data System (ADS)

    Clyde, William C.; Ramezani, Jahandar; Johnson, Kirk R.; Bowring, Samuel A.; Jones, Matthew M.

    2016-10-01

    The Cretaceous-Paleogene (K-Pg) boundary is the best known and most widely recognized global time horizon in Earth history and coincides with one of the two largest known mass extinctions. We present a series of new high-precision uranium-lead (U-Pb) age determinations by the chemical abrasion isotope dilution thermal ionization mass spectrometry (CA-ID-TIMS) method from volcanic ash deposits within a tightly constrained magnetobiostratigraphic framework across the K-Pg boundary in the Denver Basin, Colorado, USA. This new timeline provides a precise interpolated absolute age for the K-Pg boundary of 66.021 ± 0.024 / 0.039 / 0.081 Ma, constrains the ages of magnetic polarity Chrons C28 to C30, and offers a direct and independent test of early Paleogene astronomical and 40Ar/39Ar-based timescales. Temporal calibration of paleontological and palynological data from the same deposits shows that the interval between the extinction of the dinosaurs and the appearance of the earliest Cenozoic mammals in the Denver Basin lasted ∼185 ky (and no more than 570 ky), and the 'fern spike' lasted ∼1 ky (and no more than 71 ky) after the K-Pg boundary layer was deposited, indicating rapid rates of biotic extinction and initial recovery in the Denver Basin during this event.

  14. Simple validated LC-MS/MS method for the determination of atropine and scopolamine in plasma for clinical and forensic toxicological purposes.

    PubMed

    Koželj, Gordana; Perharič, Lucija; Stanovnik, Lovro; Prosen, Helena

    2014-08-05

    A liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the determination of atropine and scopolamine in 100 μL human plasma was developed and validated. Sample pretreatment consisted of protein precipitation with acetonitrile followed by a concentration step. Analytes and levobupivacaine (internal standard) were separated on a Zorbax XDB-CN column (75 mm × 4.6 mm i.d., 3.5 μm) with gradient elution (purified water, acetonitrile, formic acid). The triple quadrupole MS was operated in ESI positive mode. The matrix effect was estimated for deproteinised plasma samples. Selected reaction monitoring (SRM) was used for quantification in the range of 0.10–50.00 ng/mL. Interday precision for both tropanes and intraday precision for atropine were <10%; intraday precision for scopolamine was <14%, and <18% at the lower limit of quantification (LLOQ). Mean interday and intraday accuracies were within ±7% for atropine and within ±11% for scopolamine. The method can be used for determination of therapeutic and toxic levels of both compounds and has been successfully applied to a study of the pharmacodynamic and pharmacokinetic properties of tropanes, in which plasma samples of volunteers were collected at fixed time intervals after ingestion of a buckwheat meal spiked with five low doses of tropanes.

  15. [The search for a precise method of measurement in psychical experiments].

    PubMed

    Borck, Cornelius

    2002-06-01

    In a series of three brief case studies, this paper reconstructs how cognition and psychic activity were explored as energetic and economic transformations in a variety of experimental settings. 1. In the 1870s, the German psychiatrist Emil Kraepelin started his search for an objective measurement of cognitive performance, in which he engaged over several decades. His investigations resulted in a graphic representation of cognitive efficiency, the "Arbeitscurve", delineating the number of additions per time interval in close resemblance to representations of machine efficiency. 2. At the turn of the century, the American nutrition scientist and agronomist Wilbur Olin Atwater convinced himself in a series of precision measurements that the human motor was so perfectly closed an input-output system that he rejected any mental surplus in the form of cognitive energy transformations as a contradiction of the principle of the conservation of energy. 3. At the beginning of the twentieth century, and on the basis of Atwater's results, the German psychiatrist Hans Berger stipulated a special form of psychic energy to mediate between the principle of the conservation of energy and mental causality. Berger attempted to quantify psychic energy as one factor of brain metabolism. In the three cases of precision investigations into psychic life presented here, the experimental space of psychophysiology turned mental activity into a form of machine-like behavior.

  16. Techniques and equipment required for precise stream gaging in tide-affected fresh-water reaches of the Sacramento River, California

    USGS Publications Warehouse

    Smith, Winchell

    1971-01-01

    Current-meter measurements of high accuracy will be required for calibration of an acoustic flow-metering system proposed for installation in the Sacramento River at Chipps Island in California. This report presents an analysis of the problem of making continuous accurate current-meter measurements in this channel where the flow regime is changing constantly in response to tidal action. Gaging-system requirements are delineated, and a brief description is given of the several applicable techniques that have been developed by others. None of these techniques provides the accuracies required for the flowmeter calibration. A new system is described--one which has been assembled and tested in prototype and which will provide the matrix of data needed for accurate continuous current-meter measurements. Analysis of a large quantity of data on the velocity distribution in the channel of the Sacramento River at Chipps Island shows that adequate definition of the velocity can be made during the dominant flow periods--that is, at times other than slack-water periods--by use of current meters suspended at elevations 0.2 and 0.8 of the depth below the water surface. However, additional velocity surveys will be necessary to determine whether or not small systematic corrections need be applied during periods of rapidly changing flow. In the proposed system all gaged parameters, including velocities, depths, position in the stream, and related times, are monitored continuously as a boat moves across the river on the selected cross section. Data are recorded photographically and transferred later onto punchcards for computer processing. Computer programs have been written to permit computation of instantaneous discharges at any selected time interval throughout the period of the current meter measurement program. It is anticipated that current-meter traverses will be made at intervals of about one-half hour over periods of several days. Capability of performance for protracted periods was, consequently, one of the important elements in system design. Analysis of error sources in the proposed system indicates that errors in individual computed discharges can be kept smaller than 1.5 percent if the expected precision in all measured parameters is maintained.

  17. Noninertial coordinate time: A new concept affecting time standards, time transfers, and clock synchronization

    NASA Technical Reports Server (NTRS)

    Deines, Steven D.

    1992-01-01

    Relativity compensations must be made in precise and accurate measurements whenever an observer is accelerated. Although many believe the Earth-centered frame is sufficiently inertial, accelerations of the Earth, as evidenced by the tides, prove that it is technically a noninertial system for even an Earth-based observer. Using the constant speed of light, a set of fixed remote clocks in an inertial frame can be synchronized to a fixed master clock transmitting its time in that frame. The time on the remote clock defines the coordinate time at that coordinate position. However, the synchronization procedure for an accelerated frame is affected, because the distance between the master and remote clocks is altered due to the acceleration of the remote clock toward or away from the master clock during the transmission interval. An exact metric that converts observations from noninertial frames to inertial frames was recently derived. Using this metric with other physical relationships, a new concept of noninertial coordinate time is defined. This noninertial coordinate time includes all relativity compensations. This new issue raises several timekeeping issues, such as proper time standards, time transfer process, and clock synchronization, all in a noninertial frame such as Earth.

  18. Saccadic eye movements do not disrupt the deployment of feature-based attention.

    PubMed

    Kalogeropoulou, Zampeta; Rolfs, Martin

    2017-07-01

    The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.

  19. Temporally selective attention modulates early perceptual processing: event-related potential evidence.

    PubMed

    Sanders, Lisa D; Astheimer, Lori B

    2008-05-01

    Some of the most important information we encounter changes so rapidly that our perceptual systems cannot process all of it in detail. Spatially selective attention is critical for perception when more information than can be processed in detail is presented simultaneously at distinct locations. When presented with complex, rapidly changing information, listeners may need to selectively attend to specific times rather than to locations. We present evidence that listeners can direct selective attention to time points that differ by as little as 500 msec, and that doing so improves target detection, affects baseline neural activity preceding stimulus presentation, and modulates auditory evoked potentials at a perceptually early stage. These data demonstrate that attentional modulation of early perceptual processing is temporally precise and that listeners can flexibly allocate temporally selective attention over short intervals, making it a viable mechanism for preferentially processing the most relevant segments in rapidly changing streams.

  20. GTARG - The TOPEX/Poseidon ground track maintenance maneuver targeting program

    NASA Technical Reports Server (NTRS)

    Shapiro, Bruce E.; Bhat, Ramachandra S.

    1993-01-01

    GTARG is a computer program used to design orbit maintenance maneuvers for the TOPEX/Poseidon satellite. These maneuvers ensure that the ground track is kept within ±1 km of a 9.9-day exact repeat pattern. Maneuver parameters are determined using either of two targeting strategies: longitude targeting, which maximizes the time between maneuvers, and time targeting, in which maneuvers are targeted to occur at specific intervals. The GTARG algorithm propagates nonsingular mean elements, taking into account anticipated error sigmas in orbit determination, delta-v execution, drag prediction, and delta-v quantization. A satellite-unique drag model is used which incorporates an approximate mean orbital Jacchia-Roberts atmosphere and a variable mean area model. Maneuver delta-v magnitudes are targeted to precisely maintain either the unbiased ground track itself, or a comfortable (3 sigma) error envelope about the unbiased ground track.

  1. TIME DEPENDENCE OF THE e⁻ FLUX MEASURED BY PAMELA DURING THE 2006 JULY–2009 DECEMBER SOLAR MINIMUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adriani, O.; Bongi, M.; Barbarino, G. C.

    2015-09-10

    Precision measurements of the electron component of cosmic radiation provide important information about the origin and propagation of cosmic rays in the Galaxy not accessible from the study of cosmic-ray nuclear components due to their differing diffusion and energy-loss processes. However, when measured near Earth, the effects of propagation and modulation of Galactic cosmic rays in the heliosphere, particularly significant for energies up to at least 30 GeV, must be properly taken into account. In this paper the electron (e⁻) spectra measured by the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics down to 70 MeV from 2006 July to 2009 December over six-month time intervals are presented. Fluxes are compared with a state-of-the-art three-dimensional model of solar modulation that reproduces the observations remarkably well.

  2. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
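    For the linear case discussed above, a sound and complete detector needs no sampling: the squared relative distance is quadratic in time, so its minimum over the lookahead window can be located exactly. A simplified 2-D horizontal-separation sketch of that baseline (units and thresholds are illustrative; this is not the paper's verified polynomial algorithm):

```python
import numpy as np

def horizontal_conflict(p_rel, v_rel, D=5.0, T=300.0):
    """True iff horizontal separation drops below D within lookahead [0, T]
    for straight-line trajectories; p_rel/v_rel are the relative position
    and velocity of one aircraft with respect to the other."""
    p_rel, v_rel = np.asarray(p_rel, float), np.asarray(v_rel, float)
    vv = v_rel @ v_rel
    # The squared distance |p + v t|^2 is quadratic in t: its minimum over
    # [0, T] is at the unconstrained minimizer clipped into the interval.
    t_star = 0.0 if vv == 0.0 else float(np.clip(-(p_rel @ v_rel) / vv, 0.0, T))
    return float(np.linalg.norm(p_rel + t_star * v_rel)) < D

print(horizontal_conflict([20.0, 0.0], [-0.1, 0.005]))   # converging pair -> True
```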

  3. Compositional Solution Space Quantification for Probabilistic Software Analysis

    NASA Technical Reports Server (NTRS)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
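    The role of interval constraint propagation can be shown on a toy constraint: boxes whose interval bounds prove them entirely inside or outside the solution set are counted or discarded outright, and Monte Carlo effort is spent only on the undecided boundary boxes. A sketch (the constraint x·y ≤ 0.5 on [0,1]², the grid size, and the sample counts are illustrative, not the paper's benchmarks):

```python
import numpy as np

def f_bounds(xlo, xhi, ylo, yhi):
    """Interval bounds of f(x, y) = x*y - 0.5 on a box with nonnegative corners."""
    return xlo * ylo - 0.5, xhi * yhi - 0.5

rng = np.random.default_rng(0)
n, frac, undecided = 8, 0.0, []
for i in range(n):
    for j in range(n):
        box = (i / n, (i + 1) / n, j / n, (j + 1) / n)
        lo, hi = f_bounds(*box)
        if hi <= 0:                 # box provably satisfies the constraint
            frac += 1.0 / n**2
        elif lo <= 0:               # undecided: keep for sampling
            undecided.append(box)

for xlo, xhi, ylo, yhi in undecided:   # sample only where it matters
    pts = rng.random((2000, 2)) * (1.0 / n) + (xlo, ylo)
    frac += np.mean(pts[:, 0] * pts[:, 1] <= 0.5) / n**2

print(f"estimated satisfying fraction: {frac:.4f}")   # exact value is ~0.8466
```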

  4. THERMOSTAT FOR LOWER TEMPERATURES

    PubMed Central

    Stier, T. J. B.; Crozier, W. J.

    1933-01-01

    Details are given concerning the construction and operation of relatively simple thermostats which permit maintaining precise temperatures down to 0°C. (with water), or temperatures above that of the ordinary room, and in which the temperature may be quickly altered at short intervals to new levels. PMID:19872736

  5. Measuring levee elevation heights in North Louisiana.

    DOT National Transportation Integrated Search

    2009-12-01

    The primary goals of this research are to measure the elevation and centerline coordinates of the top of federal and local levees and to ensure that the resulting GPS measurement data are within a precise interval of plus or minus 3/10ths of a foot vertically.

  6. High-precision measurements of wetland sediment elevation. II The rod surface elevation table

    USGS Publications Warehouse

    Cahoon, D.R.; Lynch, J.C.; Perez, B.C.; Segura, B.; Holland, R.D.; Stelly, C.; Stephenson, G.; Hensel, P.

    2002-01-01

    A new high-precision device for measuring sediment elevation in emergent and shallow water wetland systems is described. The rod surface-elevation table (RSET) is a balanced, lightweight mechanical leveling device that attaches to both shallow ( 1 m in order to be stable. The pipe is driven to refusal but typically to a depth shallower than the rod bench mark because of greater surface resistance of the pipe. Thus, the RSET makes it possible to partition change in sediment elevation over shallower (e.g., the root zone) and deeper depths of the sediment profile than is possible with the SET. The confidence intervals for the height of an individual pin measured by two different operators with the RSET under laboratory conditions were ± 1.0 and ± 1.5 mm. Under field conditions, confidence intervals for the measured height of an individual pin ranged from ± 1.3 mm in a mangrove forest up to ± 4.3 mm in a salt marsh.

  7. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence rates as the number of collocation points increases with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates as the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve a prescribed precision in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  8. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. The objective was to determine the effect of the VSPA interval on the mandated code 1 (<9 min) and code 2 (<13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency non-life-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals, dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals; compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
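    The compliance comparison above is a 2 × 2 test of proportions. A sketch that reconstructs approximate code 1 counts from the reported percentages (63 calls, 90.5% vs. 63.5% compliant); the counts are therefore reconstructions, not the study's raw data:

```python
import numpy as np
from scipy.stats import chi2_contingency

compliant = np.array([57, 40])            # call-to-scene vs. call-to-patient-access
total = np.array([63, 63])                # code 1 calls under each definition
table = np.vstack([compliant, total - compliant])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # significant drop in compliance
```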

  9. Astronomically calibrated 40Ar/39Ar age for the Toba supereruption and global synchronization of late Quaternary records

    PubMed Central

    Storey, Michael; Roberts, Richard G.; Saidin, Mokhtar

    2012-01-01

    The Toba supereruption in Sumatra, ∼74 thousand years (ka) ago, was the largest terrestrial volcanic event of the Quaternary. Ash and sulfate aerosols were deposited in both hemispheres, forming a time-marker horizon that can be used to synchronize late Quaternary records globally. A precise numerical age for this event has proved elusive, with dating uncertainties larger than the millennial-scale climate cycles that characterized this period. We report an astronomically calibrated 40Ar/39Ar age of 73.88 ± 0.32 ka (1σ, full external errors) for sanidine crystals extracted from Toba deposits in the Lenggong Valley, Malaysia, 350 km from the eruption source and 6 km from an archaeological site with stone artifacts buried by ash. If these artifacts were made by Homo sapiens, as has been suggested, then our age indicates that modern humans had reached Southeast Asia by ∼74 ka ago. Our 40Ar/39Ar age is an order-of-magnitude more precise than previous estimates, resolving the timing of the eruption to the middle of the cold interval between Dansgaard–Oeschger events 20 and 19, when a peak in sulfate concentration occurred as registered by Greenland ice cores. This peak is followed by a ∼10 °C drop in the Greenland surface temperature over ∼150 y, revealing the possible climatic impact of the eruption. Our 40Ar/39Ar age also provides a high-precision calibration point for other ice, marine, and terrestrial archives containing Toba sulfates and ash, facilitating their global synchronization at unprecedented resolution for a critical period in Earth and human history beyond the range of 14C dating. PMID:23112159

  12. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both GPS-only and combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for each 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs shorten the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
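    The bi-linear VTEC function is not written out in the abstract; a common choice is VTEC(φ, λ) = a₀ + a₁(φ − φ₀) + a₂(λ − λ₀), estimated by least squares within each 15-minute window and then interpolated to the 1° × 1° grid by Kriging. A sketch of the fitting step under that assumed form:

```python
import numpy as np

def fit_bilinear_vtec(lat, lon, vtec, lat0, lon0):
    """Least-squares fit of VTEC = a0 + a1*(lat - lat0) + a2*(lon - lon0).

    lat, lon, vtec: ionospheric pierce-point observations collected in one
    15-minute window; (lat0, lon0) is the expansion point. Returns (a0, a1, a2)."""
    lat, lon, vtec = (np.asarray(v, float) for v in (lat, lon, vtec))
    A = np.column_stack([np.ones_like(lat), lat - lat0, lon - lon0])
    coef, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return coef
```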

  13. High-precision numerical integration of equations in dynamics

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    An important requirement for the process of solving differential equations in Dynamics, such as the equations of motion of celestial bodies and, in particular, the motion of cosmic robotic systems, is high accuracy over large time intervals. One of the effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
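    For a polynomial right-hand side the Taylor coefficients follow from a closed recurrence, which is what makes the method cheap and allows a priori error control through the choice of order. A sketch for the scalar test equation x′ = x², whose coefficients obey (k + 1)·a_{k+1} = Σᵢ aᵢ·a_{k−i} by the Cauchy product (the equation, order, and step size are illustrative):

```python
import numpy as np

def taylor_step(x0, h, order=20):
    """One Taylor-series step for x' = x**2 starting from x(0) = x0."""
    a = np.zeros(order + 1)
    a[0] = x0
    for k in range(order):
        # Cauchy-product recurrence: (k+1) * a_{k+1} = sum_i a_i * a_{k-i}.
        a[k + 1] = np.dot(a[:k + 1], a[k::-1]) / (k + 1)
    return np.polyval(a[::-1], h)          # evaluate the series at t = h

x, t, h = 0.5, 0.0, 0.1
for _ in range(10):                        # integrate to t = 1
    x, t = taylor_step(x, h), t + h
print(x, 0.5 / (1.0 - 0.5 * t))            # numerical vs. exact solution
```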

  14. Report of the first Nimbus-7 SMMR Experiment Team Workshop

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Gloersen, P.

    1983-01-01

    Preliminary sea ice results and techniques for calculating sea ice concentration and multiyear fraction from the microwave radiances obtained by the Nimbus-7 SMMR were presented. From these results, it is evident that the groups involved used different and independent approaches in deriving sea ice emissivities and algorithms, which precluded precise comparison of their results. A common set of sea ice emissivities was defined for all groups to use for a subsequent, more careful comparison of the results from the various sea ice parameter algorithms. To this end, three different geographical areas in two different time intervals were defined as typifying SMMR beam-filling conditions for first-year sea ice, multiyear sea ice, and open water, to be used for determining the required microwave emissivities.

  15. Estimation of postmortem interval through albumin in CSF by simple dye binding method.

    PubMed

    Parmar, Ankita K; Menon, Shobhana K

    2015-12-01

    Estimation of the postmortem interval is a very important question in some medicolegal investigations, and precise estimation requires a method that can give accurate results. Bromocresol green (BCG) binding is a simple dye-binding method widely used in routine practice, and its application in forensic practice may bring revolutionary changes. In this study, cerebrospinal fluid was aspirated by cisternal puncture in 100 autopsies, and the concentration of albumin was studied as a function of postmortem interval. After death, albumin present in CSF undergoes changes: its concentration fell to 0.012 mM by 72 h after death, and the decrease was linear from 2 h to 72 h. An important relationship was found between albumin concentration and postmortem interval, with an error of ±1-4 h. The study concludes that CSF albumin can be a useful and significant parameter in estimation of the postmortem interval.

  16. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    PubMed

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for such a study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but the gain was smaller for the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs varied strongly with the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between the two dimensions will depend upon circumstances such as the target group and the budget available.
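    The participants-versus-measurements trade-off described above can be reproduced with a small simulation in which total uncertainty combines between-GP spread with within-GP sampling noise. A sketch with hypothetical variance components (the study's actual values are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_halfwidth(n_gps, n_meas, sd_between=10.0, sd_within=15.0, reps=2000):
    """Simulated 95% CI half-width (hours/week) for the estimated mean.

    sd_between: spread of true weekly hours across GPs (hypothetical);
    sd_within:  noise from time-sampling one GP's week (hypothetical)."""
    means = np.empty(reps)
    for r in range(reps):
        true_hours = 45.0 + sd_between * rng.standard_normal(n_gps)
        obs = true_hours[:, None] + sd_within * rng.standard_normal((n_gps, n_meas))
        means[r] = obs.mean()
    return 1.96 * means.std()

for n, m in [(300, 56), (100, 168)]:   # one SMS per 3-h slot vs. one per hour
    print(n, m, round(ci_halfwidth(n, m), 2))
```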

  17. The Logic of Summative Confidence

    ERIC Educational Resources Information Center

    Gugiu, P. Cristian

    2007-01-01

    The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…

  18. 40Ar/39Ar age constraints on Neogene sedimentary beds, Upper Ramparts, Half-way Pillar and Canyon Village sites, Porcupine River, east-central Alaska

    USGS Publications Warehouse

    Kunk, Michael J.; Rieck, H.; Fouch, T.D.; Carter, L.D.

    1994-01-01

    40Ar/39Ar ages of volcanic rocks are used to provide numerical constraints on the age of middle and upper Miocene sedimentary strata collected along the Porcupine River. Intercalated sedimentary rocks north of latitude 67°10′N in the Porcupine terrane of east-central Alaska contain a rich record of plant fossils. The fossils are valuable indicators of this interior region's paleoclimate during the time of their deposition. Integration of the 40Ar/39Ar results with paleomagnetic and sedimentological data allows for refinements in estimating the timing of deposition and duration of selected sedimentary intervals. 40Ar/39Ar plateau age spectra, from whole-rock basalt samples collected along the Upper Ramparts and near Half-way Pillar on the Porcupine River, range from 15.7 ± 0.1 Ma at site 90-6 to 14.4 ± 0.1 Ma at site 90-2. With the exception of the youngest basalt flow at site 90-2, all of the samples are of reversed magnetic polarity, and all 40Ar/39Ar age spectrum results are consistent with the deposition of the entire stratigraphic section during a single interval of reversed magnetic polarity. The youngest flow at site 90-2 was emplaced during an interval of normal polarity. With the age, paleomagnetic, and sedimentological data, the Middle Miocene sedimentary rocks between the basalt flows at sites 90-1 and 90-2 can be assigned, within the limits of analytical precision, to an interval of 15.2 ± 0.1 Ma; thus, the sediments were deposited during the peak of the Middle Miocene thermal maximum. Sediments in the upper parts of sites 90-1 and 90-2 were probably deposited during cooling from the Middle Miocene thermal maximum. 40Ar/39Ar results for plagioclase and biotite from a single tephra, collected at sites 90-7 and 90-8 along the Canyon Village section of the Porcupine River, indicate an age of 6.57 ± 0.02 Ma for its time of eruption and deposition. These results, together with sedimentological and paleomagnetic data, suggest that all of the Upper Miocene lacustrine sedimentary rocks at these sites were deposited during a single interval of reversed magnetic polarity and may represent a duration of only about 40,000 years. The age of this tephra corresponds with a late late Miocene warm climatic interval. The results from the Upper Ramparts and Half-way Pillar sites are used to estimate a minimum interval of continental flood basalt activity of 1.1-1.5 million years, and to set limits on the timing and duration of Tertiary extensional tectonic activity in the Porcupine terrane. Our data indicate that the oroclinal flexure at the eastern end of the Brooks Range, which formed before the deposition of the basalts, was created prior to 15.7 ± 0.1 Ma. © 1994.

  19. Investigation of modulation parameters in multiplexing gas chromatography.

    PubMed

    Trapp, Oliver

    2010-10-22

    Combination of information technology and separation sciences opens a new avenue to achieve high sample throughputs and therefore is of great interest to bypass bottlenecks in catalyst screening of parallelized reactors or using multitier well plates in reaction optimization. Multiplexing gas chromatography utilizes pseudo-random injection sequences derived from Hadamard matrices to perform rapid sample injections, which gives a convoluted chromatogram containing the information of a single sample or of several samples with similar analyte composition. The conventional chromatogram is obtained by application of the Hadamard transform using the known injection sequence; in the case of several samples, an averaged transformed chromatogram is obtained which can be used in a Gauss-Jordan deconvolution procedure to obtain the single chromatograms of the individual samples. The performance of such a system depends on the modulation precision and on the parameters, e.g. the sequence length and the modulation interval. Here we demonstrate the effects of the sequence length and modulation interval on the deconvoluted chromatogram, peak shapes, and peak integration for sequences between 9-bit (511 elements) and 13-bit (8191 elements) and modulation intervals Δt between 5 s and 500 ms, using a mixture of five components. It could be demonstrated that even for high-speed modulation at time intervals of 500 ms the chromatographic information is very well preserved and that the separation efficiency can be improved by very narrow sample injections. Furthermore, this study shows that the relative peak areas in multiplexed chromatograms do not deviate from conventionally recorded chromatograms.
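    The modulation and deconvolution steps can be illustrated with a toy simulation: a maximal-length pseudo-random injection sequence is circularly convolved with a single-injection chromatogram, and the conventional chromatogram is recovered by inverting the circulant system (equivalent to the Hadamard-transform recovery for these sequences). A sketch in which the 9-bit PRBS generator, Gaussian peak, and noise level are illustrative:

```python
import numpy as np

def prbs9():
    """Maximal-length 0/1 sequence of length 511 from x^9 + x^5 + 1 (PRBS9)."""
    state, out = [1] * 9, []
    for _ in range(511):
        fb = state[8] ^ state[4]        # feedback taps at bits 9 and 5
        out.append(state.pop())
        state.insert(0, fb)
    return np.array(out, float)

s = prbs9()                             # injection sequence (1 = inject)
t = np.arange(s.size)
single = np.exp(-0.5 * ((t - 60.0) / 4.0) ** 2)   # toy single-injection peak

# Multiplexed detector trace: circular convolution of sequence and response.
y = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(single)))
y += 0.01 * np.random.default_rng(0).standard_normal(y.size)

# Deconvolution: invert the circulant system in the Fourier domain.
recovered = np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(s)))
```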

  20. The Rotator Interval – A Link Between Anatomy and Ultrasound

    PubMed Central

    Tamborrini, Giorgio; Möller, Ingrid; Bong, David; Miguel, Maribel; Marx, Christian; Müller, Andreas Marc; Müller-Gerbl, Magdalena

    2017-01-01

    Shoulder pathologies of the rotator cuff of the shoulder are common in clinical practice. The focus of this pictorial essay is to discuss the anatomical details of the rotator interval of the shoulder, correlate the anatomy with normal ultrasound images and present selected pathologies. We focus on the imaging of the rotator interval that is actually the anterosuperior aspect of the glenohumeral joint capsule that is reinforced externally by the coracohumeral ligament, internally by the superior glenohumeral ligament and capsular fibers which blend together and insert medially and laterally to the bicipital groove. In this article we demonstrate the capability of high-resolution musculoskeletal ultrasound to visualize the detailed anatomy of the rotator interval. MSUS has a higher spatial resolution than other imaging techniques and the ability to examine these structures dynamically and to utilize the probe for precise anatomic localization of the patient’s pain by sono-palpation. PMID:28845477

  2. Pyroglutamate (5-oxoproline) measured with hydrophilic interaction chromatography (HILIC) tandem mass spectrometry in acutely ill patients.

    PubMed

    Pretorius, Carel J; Reade, Michael C; Warnholtz, Chris; McWhinney, Brett; Phua, Meng Mei; Lipman, Jeffrey; Ungerer, Jacobus P J

    2017-03-01

    Pyroglutamic acid (PGA) is challenging to quantify in plasma and is a rare cause of metabolic acidosis that is associated with inherited disorders or acquired after exposure to drugs. We developed a hydrophilic interaction liquid chromatography tandem mass spectrometry method with a short analysis time. We established a reference interval and then measured PGA in acutely ill patients to investigate associations with clinical, pharmaceutical and laboratory parameters. The assay limit of blank was 0.14 μmol/L, and the assay was linear to 5000 μmol/L with good precision. In-source formation of PGA from glutamate and glutamine was avoided by chromatographic separation. PGA in controls had a reference interval of 22.6 to 47.8 μmol/L. The median PGA concentration in acutely ill patients was similar (P=0.21), but 18 individuals were above the reference interval, with concentrations up to 250 μmol/L. We detected an association between PGA concentration and antibiotic and acetaminophen administration, as well as renal impairment and severity of illness. Elevations of PGA in this unselected cohort were small compared to those reported in patients with pyroglutamic acidosis. The method is suitable for routine clinical use. We confirmed several expected associations with PGA in an acutely ill population.

  3. Temporal patterns of inputs to cerebellum necessary and sufficient for trace eyelid conditioning.

    PubMed

    Kalmbach, Brian E; Ohyama, Tatsuya; Mauk, Michael D

    2010-08-01

    Trace eyelid conditioning is a form of associative learning that requires several forebrain structures and cerebellum. Previous work suggests that at least two conditioned stimulus (CS)-driven signals are available to the cerebellum via mossy fiber inputs during trace conditioning: one driven by and terminating with the tone and a second driven by medial prefrontal cortex (mPFC) that persists through the stimulus-free trace interval to overlap in time with the unconditioned stimulus (US). We used electric stimulation of mossy fibers to determine whether this pattern of dual inputs is necessary and sufficient for cerebellar learning to express normal trace eyelid responses. We find that presenting the cerebellum with one input that mimics persistent activity observed in mPFC and the lateral pontine nuclei during trace eyelid conditioning and another that mimics tone-elicited mossy fiber activity is sufficient to produce responses whose properties quantitatively match trace eyelid responses using a tone. Probe trials with each input delivered separately provide evidence that the cerebellum learns to respond to the mPFC-like input (that overlaps with the US) and learns to suppress responding to the tone-like input (that does not). This contributes to precisely timed responses and the well-documented influence of tone offset on the timing of trace responses. Computer simulations suggest that the underlying cerebellar mechanisms involve activation of different subsets of granule cells during the tone and during the stimulus-free trace interval. These results indicate that tone-driven and mPFC-like inputs are necessary and sufficient for the cerebellum to learn well-timed trace conditioned responses.

  4. New method of extracting information of arterial oxygen saturation based on ∑|Δ|

    NASA Astrophysics Data System (ADS)

    Dai, Wenting; Lin, Ling; Li, Gang

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a method of time-domain absolute difference summation (∑|Δ|) based on a dynamic spectrum. In this method, the ratio of the absolute differences between two differential sampling points at the same moment on the logarithmic photoplethysmography signals of red and infrared light is computed in turn, yielding a ratio sequence that is then screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and then compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 cases selected at random. In addition, the average variance of Q for the 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method makes the results more accurate. Theoretical and experimental analysis indicates that the application of the ∑|Δ| method could enhance the precision and reliability of oxygen saturation detection in real time.
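
    As a rough illustration of the ∑|Δ| computation described above, the following sketch (Python) forms the ratio sequence from the absolute first differences of the log-transformed red and infrared PPG signals, screens it, and sums it into Q. The screening rule (a median/MAD outlier cut) and the fixed-length analysis window are assumptions; the paper does not publish its code.

    import numpy as np

    def q_coefficient(red_ppg, ir_ppg, k=3.0):
        # Oxygen saturation coefficient Q via time-domain absolute
        # difference summation over one analysis window.
        d_red = np.abs(np.diff(np.log(red_ppg)))  # |delta| of log-PPG, red
        d_ir = np.abs(np.diff(np.log(ir_ppg)))    # |delta| of log-PPG, infrared
        valid = d_ir > 0                          # avoid division by zero
        ratios = d_red[valid] / d_ir[valid]       # ratio sequence
        # Statistical screening of the ratio sequence (assumed rule).
        med = np.median(ratios)
        mad = np.median(np.abs(ratios - med)) + 1e-12
        screened = ratios[np.abs(ratios - med) < k * mad]
        return screened.sum()                     # Q = sum of screened ratios

    SpO2 would then follow from an empirical calibration of Q against reference oximetry, such as the study's 120 reference samples.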

  5. Comparison of nerve trimming with the Er:YAG laser and steel knife

    NASA Astrophysics Data System (ADS)

    Josephson, G. D.; Bass, Lawrence S.; Kasabian, A. K.

    1995-05-01

    Best outcome in nerve repair requires precise alignment and minimization of scar at the repair interface. Surgeons attempt to create the sharpest possible cut surface at the nerve edge prior to approximation. Pulsed laser modalities are being investigated in several medical applications that require precise, atraumatic cutting. We compared nerve trimming with the Er:YAG laser (1375 J/cm2) to conventional steel knife trimming prior to neurorrhaphy. Sprague-Dawley rats were anesthetized with ketamine and xylazine. Under operating microscope magnification, the sciatic nerve was dissected and transected using one of the test techniques. In the laser group, the pulses were directed axially across the nerve using a stage which fixed the laser fiber/nerve distance and orientation. Specimens were sent for scanning electron microscopy (SEM) at time zero. Epineurial repairs were performed with 10-0 nylon simple interrupted sutures. At intervals up to 90 days, specimens were harvested and sectioned longitudinally and axially for histologic examination. Time-zero SEM revealed clean cuts in both groups, but individual axons were clearly visible in all laser specimens. Small pits were also visible on the cut surface of laser-treated nerves. No significant differences in nerve morphology were seen during healing. Further studies to quantify axon counts and functional outcome will be needed to assess this technique of nerve trimming. Delivery system improvements will also be required to make the technique clinically practical.

  6. Assessing the Effect of Early Visual Cortex Transcranial Magnetic Stimulation on Working Memory Consolidation.

    PubMed

    van Lamsweerde, Amanda E; Johnson, Jeffrey S

    2017-07-01

    Maintaining visual working memory (VWM) representations recruits a network of brain regions, including the frontal, posterior parietal, and occipital cortices; however, it is unclear to what extent the occipital cortex is engaged in VWM after sensory encoding is completed. Noninvasive brain stimulation data show that stimulation of this region can affect working memory (WM) during the early consolidation time period, but it remains unclear whether it does so by influencing the number of items that are stored or their precision. In this study, we investigated whether single-pulse transcranial magnetic stimulation (spTMS) to the occipital cortex during VWM consolidation affects the quantity or quality of VWM representations. In three experiments, we disrupted VWM consolidation with either a visual mask or spTMS to retinotopic early visual cortex. We found robust masking effects on the quantity of VWM representations up to 200 msec poststimulus offset and smaller, more variable effects on WM quality. Similarly, spTMS decreased the quantity of VWM representations, but only when it was applied immediately following stimulus offset. Like visual masks, spTMS also produced small and variable effects on WM precision. The disruptive effects of both masks and TMS were greatly reduced or entirely absent within 200 msec of stimulus offset. However, there was a reduction in swap rate across all time intervals, which may indicate a sustained role of the early visual cortex in maintaining spatial information.

  7. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5% of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
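
    To make the spectral-slope convention concrete, here is a minimal sketch (Python/SciPy) of the Lomb-Scargle route evaluated above: compute the periodogram of an irregularly sampled series and take β as minus the log-log slope of power against frequency. The frequency-grid heuristics are assumptions, and, per the results above, this plain estimator should be expected to underestimate β.

    import numpy as np
    from scipy.signal import lombscargle

    def spectral_slope_beta(t, y, n_freq=200):
        # Estimate beta in P(f) ~ f**-beta from irregular samples (t, y).
        y = y - y.mean()
        f_min = 1.0 / (t.max() - t.min())             # lowest resolvable frequency
        f_max = 0.5 / np.median(np.diff(np.sort(t)))  # pseudo-Nyquist (heuristic)
        freqs = np.logspace(np.log10(f_min), np.log10(f_max), n_freq)
        power = lombscargle(t, y, 2 * np.pi * freqs, normalize=True)
        # beta = minus the slope of log10(power) vs log10(frequency).
        slope, _ = np.polyfit(np.log10(freqs), np.log10(power + 1e-300), 1)
        return -slope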

  8. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    NASA Astrophysics Data System (ADS)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable for large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, and particularly difficult for the simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which overcome the reference station problem of the relative positioning approach, have been successfully developed and applied. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach that determines the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potentially uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach overcomes the convergence problem of PPP and also avoids the integration and de-trending steps of the variometric approach. TPP is demonstrated to deliver displacement accuracy at the few-centimeter level over intervals of up to twenty minutes using real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that coseismic displacement accuracy of a few centimeters is achievable.
    Keywords: high-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion
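
    A minimal sketch (Python) of the variometric bookkeeping described above: epoch-to-epoch position changes from time-differenced carrier phases are integrated once into displacement, and the drift from uncompensated errors is removed with a linear fit. The pre-event calibration window used for the de-trending is an assumption, not part of the original formulation.

    import numpy as np

    def variometric_displacement(delta_pos, dt=1.0, pre_event=60):
        # delta_pos: per-epoch position change (m) at 1/dt Hz (1 Hz GPS -> dt = 1 s)
        # pre_event: leading epochs assumed free of coseismic motion
        disp = np.cumsum(delta_pos)                            # single integration
        t = np.arange(disp.size) * dt
        coef = np.polyfit(t[:pre_event], disp[:pre_event], 1)  # drift model
        return disp - np.polyval(coef, t)                      # de-trended displacement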

  9. Criteria for optimizing food composition tables in relation to studies of habitual food intakes.

    PubMed

    Joyanes, María; Lema, Lourdes

    2006-01-01

    The purpose of this study is to increase the accuracy, reliability, and precision of food composition data and, in consequence, to better approximate nutrient intake estimations and recommendations. To do this it is necessary to specify and take into account the factors that play an important role in the variation of composition, in order to avoid excessively broad dispersions and irregularities in data distributions. This implies the presentation of representative and, as a consequence, extrapolable data, with nutritionally grounded confidence intervals. This study suggests a methodology that better approaches the accuracy, reliability, and precision of food composition data.

  10. The effects of a time-based intervention on experienced middle-aged rats

    PubMed Central

    Peterson, Jennifer R.; Kirkpatrick, Kimberly

    2016-01-01

    Impulsive behavior is a common symptom in Attention Deficit Hyperactivity Disorder, schizophrenia, drug abuse, smoking, obesity and compulsive gambling. Stable levels of impulsive choice have been found in humans and rats, and a recent study reported significant test-retest reliability of impulsive choice behavior after 1 and 5 months in rats. Time-based behavioral interventions have been successful in decreasing impulsive choices. These interventions led to improvements in the ability to time and to respond more appropriately to adventitious choices. The current study examined the use of a time-based intervention in experienced, middle-aged rats. This intervention utilized a variable interval schedule previously found to be successful in improving timing and decreasing impulsive choice. This study found that the intervention led to a decrease in impulsive choices, and there was a significant correlation between the improvement in self-control and post-intervention temporal precision in middle-aged rats. Although there was no overall group difference in bisection performance, individual differences were observed, suggesting an improvement in timing. This is an important contribution to the field because previous studies have utilized only young rats and because previous research indicates a decrease in general timing abilities with age. PMID:27826006

  11. Time reproduction in children with ADHD and their nonaffected siblings.

    PubMed

    Rommelse, Nanda N J; Oosterlaan, Jaap; Buitelaar, Jan; Faraone, Stephen V; Sergeant, Joseph A

    2007-05-01

    Time reproduction is deficient in children with attention-deficit/hyperactivity disorder (ADHD). Whether this deficit is familial and could therefore serve as a candidate endophenotype has not been previously investigated. It is unknown whether timing deficits are also measurable in adolescent children with ADHD and nonaffected siblings. These issues were investigated in 226 children with ADHD, 188 nonaffected siblings, and 162 normal controls ages 5 to 19. Children participated in a visual and auditory time reproduction task. They reproduced interval lengths of 4, 8, 12, 16, and 20 seconds. Children with ADHD and their nonaffected siblings were less precise than controls, particularly when task difficulty was systematically increased. Time reproduction skills were familial. Time reproduction deficits were more pronounced in younger children with ADHD than in older children. Children with ADHD could be clearly dissociated from control children until the age of 9. After this age, group differences were somewhat attenuated, but were still present. Differences between nonaffected siblings and controls were constant across the age range studied. Deficits were unaffected by modality. Time reproduction may serve as a candidate endophenotype for ADHD, predominantly in younger children with (a genetic risk for) ADHD.

  12. Chronology of magmatic and biological events during mass extinctions

    NASA Astrophysics Data System (ADS)

    Schaltegger, U.; Davies, J.; Baresel, B.; Bucher, H.

    2016-12-01

    For mass extinctions, high-precision geochronology is key to understanding: 1) the age and duration of mass extinction intervals, derived from palaeo-biodiversity or chemical proxies in marine sections, and 2) the age and duration of the magmatism responsible for injecting volatiles into the atmosphere. Using high-precision geochronology, here we investigate the sequence of events linked to the Triassic-Jurassic boundary (TJB) and the Permian-Triassic boundary (PTB) mass extinctions. At the TJB, the model of Guex et al. (2016) invokes degassing of early magmas produced by thermal erosion of cratonic lithosphere as a trigger of climate disturbance in the late Rhaetian. We provide geochronological evidence that such early intrusives from the CAMP (Central Atlantic Magmatic Province) predate the end-Triassic extinction event (Blackburn et al. 2013) by 100 kyr (Davies et al., subm.). We propose that these early intrusions and associated explosive volcanism (currently unidentified) initiated the extinction, followed by the younger basalt eruptions of the CAMP. We also provide accurate and precise calibration of the PTB in marine sections in S. China: the PTB and the extinction event coincide within 30 kyr in deep water settings; a hiatus followed by microbial limestone deposition in shallow water settings is of <100 kyr duration. The PTB extinction interval is preceded by up to 300 kyr by the onset of partly alkaline explosive, extrusive and intrusive magmatism, which is suggested as the trigger of the mass extinction, rather than the subsequent basalt flows of the Siberian Traps (Burgess and Bowring 2015). From these temporal constraints, the main inferences are: the duration of extinction events is on the order of tens of kyr, during the initial intrusive activity of a Large Igneous Province, and is postdated by the majority of basalt flows over several 100 kyr. For modeling climate change associated with mass extinctions, volatiles released from the basalt flows may thus not be relevant. Initial igneous activity must be explosive to produce sufficient volumes of volatiles over a sufficiently long time to generate climatic change. Baresel et al., submitted; Blackburn et al. 2013, Science; Burgess and Bowring 2015, Sci Advances; Davies et al., submitted; Guex et al., 2016, Sci. Rep.

  13. High-precision U-Pb zircon geochronological constraints on the End-Triassic Mass Extinction, the late Triassic Astronomical Time Scale and geochemical evolution of CAMP magmatism

    NASA Astrophysics Data System (ADS)

    Blackburn, T. J.; Olsen, P. E.; Bowring, S. A.; McLean, N. M.; Kent, D. V.; Puffer, J. H.; McHone, G.; Rasbury, T.

    2012-12-01

    Mass extinction events that punctuate Earth's history have had a large influence on the evolution, diversity and composition of our planet's biosphere. The approximate temporal coincidence between the five major extinction events of the last 542 million years and the eruption of Large Igneous Provinces (LIPs) has led to the speculation that climate and environmental perturbations generated by the emplacement of a large volume of magma in a short period of time triggered each global biologic crisis. Establishing a causal link between extinction and the onset and tempo of LIP eruption has proved difficult because of the geographic separation between LIP volcanic deposits and the stratigraphic sequences preserving evidence of the extinction. In most cases, the uncertainties on available radioisotopic dates used to correlate between geographically separated study areas exceed the duration of both the extinction interval and LIP volcanism by an order of magnitude. The "end-Triassic extinction" (ETE) is one of the "big five" and is characterized by the disappearance of several terrestrial and marine species and the dominance of dinosaurs for the next 134 million years. Speculation on the cause has centered on massive climate perturbations thought to accompany the eruption of flood basalts related to the Central Atlantic Magmatic Province (CAMP), the most areally extensive LIP on Earth and volumetrically one of the largest. Despite an approximate temporal coincidence between extinction and volcanism, evidence has been lacking that places the eruption of CAMP prior to or at the initiation of the extinction. Estimates of the timing and/or duration of CAMP volcanism provided by astrochronology and Ar-Ar geochronology differ by an order of magnitude, precluding high-precision tests of the relationship between LIP volcanism and the mass extinction, the causes of which are dependent upon the rate of magma eruption. Here we present high-precision zircon U-Pb ID-TIMS geochronologic data for eight CAMP flows and sills from the eastern U.S. and Morocco. These data are used first to independently test the astronomically calibrated time scale and sediment accumulation rates within the Triassic-Jurassic rift basins of eastern North America. The U-Pb, paleontological, magnetostratigraphic and astronomical data are combined to constrain the onset and duration of the CAMP and clarify the temporal relationship between the CAMP and the ETE. Together, the dataset allows more precise estimates of eruptive volume per unit time, a requirement for rigorous evaluation of climate-driven models for the extinction.

  14. Longitudinal DXA studies: minimum scanning interval for pediatric assessment of body fat

    USDA-ARS?s Scientific Manuscript database

    The increased prevalence of obesity in the United States has led to the increased use of Dual-energy X-ray absorptiometry (DXA) for assessment of body fat (TBF). The importance of early intervention has focused attention on pediatric populations. We used DXA precision analyses to determine suitable ...

  15. Fundamental limits of scintillation detector timing precision

    NASA Astrophysics Data System (ADS)

    Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.

    2014-07-01

    In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10 000 photoelectrons per ns of decay time. Since the timing precision R was found to depend on A^(-1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(-1/2). An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10 000 photoelectrons per ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10 000 photoelectrons per ns.
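
    As a worked instance of the tabulated scaling, the fragment below (Python) evaluates R = B·A^(-1/2). The value of B is a placeholder chosen so that A = 10 000 photoelectrons per ns reproduces the 8 ps figure quoted above; it is not a number from the paper's table.

    def timing_precision_ns(B_ns, A_pe_per_ns):
        # Empirical scaling from the abstract: R = B * A**-0.5, with A in
        # photoelectrons per ns of decay time and R in the units of B.
        return B_ns * A_pe_per_ns ** -0.5

    # Placeholder B = 0.8 ns gives R = 0.008 ns = 8 ps fwhm at A = 10 000.
    print(timing_precision_ns(0.8, 10_000))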

  16. Fundamental Limits of Scintillation Detector Timing Precision

    PubMed Central

    Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.

    2014-01-01

    In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10,000 photoelectrons per ns of decay time. Since the timing precision R was found to depend on A^(-1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(-1/2). An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10,000 photoelectrons/ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10,000 photoelectrons/ns. PMID:24874216

  17. An Exploratory Study of Runway Arrival Procedures: Time Based Arrival and Self-Spacing

    NASA Technical Reports Server (NTRS)

    Houston, Vincent E.; Barmore, Bryan

    2009-01-01

    The ability of a flight crew to deliver their aircraft to its arrival runway on time is important to the overall efficiency of the National Airspace System (NAS). Over the past several years, the NAS has been stressed almost to its limits, causing problems such as airport congestion, flight delays, and flight cancellations to reach levels never before seen in the NAS. This situation is predicted to worsen by the year 2025, due to an anticipated increase in air traffic operations to one-and-a-half to three times the current level. Improved arrival efficiency, in terms of both capacity and environmental impact, is an important part of improving NAS operations. One way to improve the arrival performance of an aircraft is to enable the flight crew to precisely deliver their aircraft to a specified point at either a specified time or a specified interval relative to another aircraft. This gives the flight crew more control to make the necessary adjustments to their aircraft's performance with less tactical control from the controller; it may also decrease the controller's workload. Two approaches to precise time navigation have been proposed: Time-Based Arrivals (e.g., required times of arrival) and Self-Spacing. Time-Based Arrivals make use of an aircraft's Flight Management System (FMS) to deliver the aircraft to the runway threshold at a given time. Self-Spacing enables the flight crew to achieve an ATC-assigned spacing goal at the runway threshold relative to another aircraft. The Joint Planning and Development Office (JPDO), a multi-agency initiative established to plan and coordinate the development of the Next Generation Air Transportation System (NextGen), has asked for data on both of these concepts to facilitate future research and development. This paper provides a first look at the delivery performance of these two concepts under various initial and environmental conditions in an air traffic simulation environment.

  18. Data logging of body temperatures provides precise information on phenology of reproductive events in a free-living arctic hibernator

    USGS Publications Warehouse

    Williams, C.T.; Sheriff, M.J.; Schmutz, J.A.; Kohl, F.; Toien, O.; Buck, C.L.; Barnes, B.M.

    2011-01-01

    Precise measures of phenology are critical to understanding how animals organize their annual cycles and how individuals and populations respond to climate-induced changes in physical and ecological stressors. We show that patterns of core body temperature (Tb) can be used to precisely determine the timing of key seasonal events including hibernation, mating and parturition, and immergence into and emergence from the hibernacula in free-living arctic ground squirrels (Urocitellus parryii). Using temperature loggers that recorded Tb every 20 min for up to 18 months, we monitored core Tb from three females that subsequently gave birth in captivity and from 66 female and 57 male ground squirrels free-living in the northern foothills of the Brooks Range, Alaska. In addition, dates of emergence from hibernation were visually confirmed for four free-living male squirrels. Average Tb in captive females decreased by 0.5–1.0°C during gestation and abruptly increased by 1–1.5°C on the day of parturition. In free-living females, similar shifts in Tb were observed in 78% (n = 9) of yearlings and 94% (n = 31) of adults; females without the shift are assumed not to have given birth. Three of four ground squirrels for which dates of emergence from hibernation were visually confirmed did not exhibit obvious diurnal rhythms in Tb until they first emerged onto the surface, when Tb patterns became diurnal. In free-living males undergoing reproductive maturation, this pre-emergence euthermic interval averaged 20.4 days (n = 56). Tb loggers represent a cost-effective and logistically feasible method to precisely investigate the phenology of reproduction and hibernation in ground squirrels.

  19. Experimental assessment of precision and accuracy of radiostereometric analysis for the determination of polyethylene wear in a total hip replacement model.

    PubMed

    Bragdon, Charles R; Malchau, Henrik; Yuan, Xunhua; Perinchief, Rebecca; Kärrholm, Johan; Börlin, Niclas; Estok, Daniel M; Harris, William H

    2002-07-01

    The purpose of this study was to develop and test a phantom model based on actual total hip replacement (THR) components to simulate the true penetration of the femoral head resulting from polyethylene wear. This model was used to study both the accuracy and the precision of radiostereometric analysis (RSA) in measuring wear. We also used this model to evaluate the optimum tantalum bead configuration for this particular cup design when used in a clinical setting. A physical model of a total hip replacement (a phantom) was constructed which could simulate progressive, three-dimensional (3-D) penetration of the femoral head into the polyethylene component of a THR. Using a coordinate measuring machine (CMM), the positioning of the femoral head in the phantom was measured to be accurate to within 7 µm. The accuracy and precision of an RSA analysis system were determined from five repeat examinations of the phantom using various experimental set-ups. The accuracy of the radiostereometric analysis in the optimal experimental set-up studied was 33 µm for the medial direction, 22 µm for the superior direction, 86 µm for the posterior direction, and 55 µm for the resultant 3-D vector length. The corresponding precision at the 95% confidence interval, from repositioning the phantom five times, was 8.4 µm for the medial direction, 5.5 µm for the superior direction, 16.0 µm for the posterior direction, and 13.5 µm for the resultant 3-D vector length. This in vitro model is proposed as a useful tool for developing a standard for the evaluation of radiostereometric and other radiographic methods used to measure in vivo wear.

  1. Precise GPS ephemerides from DMA and NGS tested by time transfer

    NASA Technical Reports Server (NTRS)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine

    1992-01-01

    It was shown that the use of the Defense Mapping Agency's (DMA) precise ephemerides brings a significant improvement to the accuracy of GPS time transfer. At present a new set of precise ephemerides produced by the National Geodetic Survey (NGS) has been made available to the timing community. This study demonstrates that both types of precise ephemerides improve long-distance GPS time transfer and remove the effects of Selective Availability (SA) degradation of broadcast ephemerides. The issue of overcoming SA is also discussed in terms of the routine availability of precise ephemerides.

  2. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) of the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensure that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because the transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates of landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters of the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying the method to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased (i.e., our confidence in the registration of images from different individuals increased) for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
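
    A compact sketch (Python) of the covariance step described above, under the paper's Gaussian approximation: given the residuals and the Jacobian of the intensity model at the optimum, the parameter covariance is σ²(JᵀJ)⁻¹ and the interval half-widths follow from its diagonal. The noise-variance estimator and the z = 1.96 cutoff are standard assumptions, not the authors' exact pipeline.

    import numpy as np

    def parameter_ci_half_widths(J, residuals, z=1.96):
        # J: (n_residuals, n_params) Jacobian at the least-squares optimum;
        # for a similarity transform n_params = 7 (3 translations,
        # 3 rotations, 1 global scale).
        n, p = J.shape
        sigma2 = residuals @ residuals / (n - p)  # noise variance estimate
        cov = sigma2 * np.linalg.inv(J.T @ J)     # parameter covariance
        return z * np.sqrt(np.diag(cov))          # 95% CI half-widths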

  3. Discrete restricted four-body problem: Existence of proof of equilibria and reproducibility of periodic orbits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minesaki, Yukitaka

    2015-01-01

    We propose the discrete-time restricted four-body problem (d-R4BP), which approximates the orbits of the restricted four-body problem (R4BP). The d-R4BP is given as a special case of the discrete-time chain regularization of the general N-body problem published by Minesaki. Moreover, we analytically prove that the d-R4BP yields the correct orbits corresponding to the elliptic relative equilibrium solutions of the R4BP when the three primaries form an equilateral triangle at any time. Such orbits include the orbit of a relative equilibrium solution already discovered by Baltagiannis and Papadakis. Until the proof in this work, there has been no discrete analog that preserves the orbits of elliptic relative equilibrium solutions in the R4BP. For a long time interval, the d-R4BP can precisely compute some stable periodic orbits in the Sun–Jupiter–Trojan asteroid–spacecraft system that cannot necessarily be reproduced by other generic integrators.

  4. speed-ne: Software to simulate and estimate genetic effective population size (Ne) from linkage disequilibrium observed in single samples.

    PubMed

    Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy

    2018-05-01

    The genetic effective population size, Ne, can be estimated from the average gametic disequilibrium (r̂²) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of MATLAB computer code functions to estimate N̂e from r̂² with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of N̂e estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on N̂e, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact N̂e. Estimators differed greatly in precision in the scenarios examined, and a widely employed N̂e estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided ~95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for estimation of N̂e from empirical genotype data and to conduct simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare N̂e estimators.
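
    For orientation, the snippet below (Python) shows the textbook linkage-disequilibrium estimator that tools of this kind build on: E[r²] ≈ 1/S + 1/(3Ne) for unlinked loci under random mating, hence Ne ≈ 1/(3(r̂² − 1/S)). speed-ne's actual estimators add sample-size bias corrections and the jackknife confidence intervals, which are omitted here.

    def ne_from_r2(mean_r2, sample_size):
        # mean_r2: average squared gametic disequilibrium across locus pairs
        # sample_size: S, the number of individuals sampled
        adjusted = mean_r2 - 1.0 / sample_size  # remove sampling component
        if adjusted <= 0:
            return float("inf")  # disequilibrium fully explained by sampling
        return 1.0 / (3.0 * adjusted)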

  5. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
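
    The dependence highlighted above can be made explicit with first-order error propagation for a ratio: for ϕ = ΔΔG‡/ΔΔG_eq, the variance of ϕ picks up a covariance cross-term that the common independence assumption drops, typically overstating precision. The sketch below (Python) is a generic delta-method calculation under linear-chevron assumptions, not the authors' exact derivation.

    import math

    def phi_confidence_interval(ddG_ts, ddG_eq, var_ts, var_eq, cov, z=1.96):
        # First-order variance of a ratio, including the covariance term.
        phi = ddG_ts / ddG_eq
        rel_var = (var_ts / ddG_ts**2 + var_eq / ddG_eq**2
                   - 2.0 * cov / (ddG_ts * ddG_eq))
        se = abs(phi) * math.sqrt(rel_var)
        return phi - z * se, phi + z * se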

  6. Quantifying human decomposition in an indoor setting and implications for postmortem interval estimation.

    PubMed

    Ceciliason, Ann-Sofie; Andersson, M Gunnar; Lindström, Anders; Sandler, Håkan

    2018-02-01

    This study's objective is to obtain accuracy and precision in estimating the postmortem interval (PMI) for decomposing human remains discovered in indoor settings. Data were collected prospectively from 140 forensic cases with a known date of death, scored according to the Total Body Score (TBS) scale at the post-mortem examination. In our model setting, it is estimated that, in cases with or without the presence of blowfly larvae, approximately 45% or 66%, respectively, of the variance in TBS can be derived from Accumulated Degree-Days (ADD). The precision in estimating ADD/PMI from TBS is, in our setting, moderate to low. However, dividing the cases into defined subgroups suggests the possibility of increasing the precision of the model. Our findings also suggest a significant seasonal difference with concomitant influence on TBS in the complete data set, possibly initiated by the presence of insect activity mainly during summer. PMI may be underestimated in cases with desiccation. Likewise, there is a need to evaluate the effect of insect activity, to avoid overestimating the PMI. Our data sample indicates that the scoring method might need to be slightly modified to better reflect indoor decomposition, especially in cases with insect infestations and/or extensive desiccation. When applying TBS in an indoor setting, the model requires distinct inclusion criteria and a defined population.
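
    To illustrate how a TBS-ADD calibration of this kind is applied, the sketch below (Python) fits a log-linear relation on cases with known death dates and inverts it for a new case. The log-linear form and the conversion PMI = ADD / average daily temperature are assumptions borrowed from the wider decomposition literature, not the study's published coefficients.

    import numpy as np

    def fit_add_model(tbs, known_add):
        # Calibrate log10(ADD) as a linear function of Total Body Score
        # using cases with a known date of death.
        return np.polyfit(tbs, np.log10(known_add), 1)

    def estimate_pmi_days(tbs_observed, coef, mean_daily_temp_c):
        # Invert the calibration: predicted ADD (degree-days) divided by
        # the average daily temperature (degrees/day) gives days.
        add = 10 ** np.polyval(coef, tbs_observed)
        return add / mean_daily_temp_c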

  7. Adaptation to short photoperiods augments circadian food anticipatory activity in Siberian hamsters.

    PubMed

    Bradley, Sean P; Prendergast, Brian J

    2014-06-01

    This article is part of a Special Issue "Energy Balance". Both the light-dark cycle and the timing of food intake can entrain circadian rhythms. Entrainment to food is mediated by a food-entrainable circadian oscillator (FEO) that is formally and mechanistically separable from the hypothalamic light-entrainable oscillator. This experiment examined whether seasonal changes in day length affect the function of the FEO in male Siberian hamsters (Phodopus sungorus). Hamsters housed in long (LD; 15 h light/day) or short (SD; 9 h light/day) photoperiods were subjected to a timed-feeding schedule for 10 days, during which food was available only during a 5 h interval of the light phase. Running wheel activity occurring within a 3 h window immediately prior to actual or anticipated food delivery was operationally defined as food anticipatory activity (FAA). After the timed-feeding interval, hamsters were fed ad libitum, and FAA was assessed 2 and 7 days later via probe trials of total food deprivation. During timed feeding, all hamsters exhibited increased FAA, but FAA emerged more rapidly in SD; in probe trials, FAA was greater in magnitude and persistence in SD. Gonadectomy in LD did not induce the SD-like FAA phenotype, indicating that withdrawal of gonadal hormones is not sufficient to mediate the effects of photoperiod on FAA. Entrainment of the circadian system to light markedly affects the functional output of the FEO via gonadal hormone-independent mechanisms. Rapid emergence and persistent expression of FAA in SD may reflect a seasonal adaptation that directs behavior toward sources of nutrition with high temporal precision at times of year when food is scarce.

  8. Physiological adaptations to low-volume, high-intensity interval training in health and disease.

    PubMed

    Gibala, Martin J; Little, Jonathan P; Macdonald, Maureen J; Hawley, John A

    2012-03-01

    Exercise training is a clinically proven, cost-effective, primary intervention that delays and in many cases prevents the health burdens associated with many chronic diseases. However, the precise type and dose of exercise needed to accrue health benefits is a contentious issue with no clear consensus recommendations for the prevention of inactivity-related disorders and chronic diseases. A growing body of evidence demonstrates that high-intensity interval training (HIT) can serve as an effective alternative to traditional endurance-based training, inducing similar or even superior physiological adaptations in healthy individuals and diseased populations, at least when compared on a matched-work basis. While less well studied, low-volume HIT can also stimulate physiological remodelling comparable to moderate-intensity continuous training despite a substantially lower time commitment and reduced total exercise volume. Such findings are important given that 'lack of time' remains the most commonly cited barrier to regular exercise participation. Here we review some of the mechanisms responsible for improved skeletal muscle metabolic control and changes in cardiovascular function in response to low-volume HIT. We also consider the limited evidence regarding the potential application of HIT to people with, or at risk for, cardiometabolic disorders including type 2 diabetes. Finally, we provide insight on the utility of low-volume HIT for improving performance in athletes and highlight suggestions for future research.

  9. Paleoclimate change in the Nakuru basin, Kenya, at 119 - 109 ka derived from δ18Odiatom and diatom assemblages and 40Ar/39Ar geochronology

    NASA Astrophysics Data System (ADS)

    Bergner, Andreas; Deino, Alan; Leng, Melanie; Gasse, Francoise

    2016-04-01

    A 4.5 m-thick diatomite bed deposited during the cold interval of the penultimate interglacial at ~119-109 ka documents a period in which a deep freshwater lake filled the Nakuru basin in the Central Kenya Rift (CKR), East Africa. Palaeohydrological conditions of the basin are reconstructed for the paleolake highstand using δ18Odiatom and characterization of diatom assemblages. The age of the diatomite deposit is established by precise 40Ar/39Ar dating of intercalated pumice tuffs. The paleolake experienced multiple hydrological fluctuations on sub-orbital (~1,500 to 2,000 years) time scales. The magnitude of the δ18Odiatom change (±3‰) and significant changes in the plankton-littoral ratio of the diatom assemblage (±25%) suggest that the paleolake record can be interpreted in the context of long-term climatic change in East Africa. Using 40Ar/39Ar age control and nominal diatomite-sedimentation rates, we establish a simplified age model of paleohydrological vs. climatic change, from which we conclude that more humid conditions prevailed in equatorial East Africa during the late Pleistocene over a relatively long time interval of several thousand years. At that time, extreme insolation at the eccentricity maximum and weakened zonal air-pressure gradients in the tropics favored intensified ITCZ-like convection over East Africa and deep freshwater lake conditions.

  10. Drift Mode Accelerometry for Spaceborne Gravity Measurements

    NASA Astrophysics Data System (ADS)

    Conklin, J. W.; Shelley, R.; Chilton, A.; Olatunde, T.; Ciani, G.; Mueller, G.

    2014-12-01

    A drift mode accelerometer is a precision instrument for spacecraft that overcomes much of the acceleration noise and readout dynamic range limitations of traditional electrostatic accelerometers. It has the potential of achieving acceleration noise performance similar to that of drag-free systems over a restricted frequency band without the need for external drag-free control or continuous spacecraft propulsion. Like traditional accelerometers, the drift mode accelerometer contains a high-density test mass surrounded by an electrode housing, which can control and sense all six degrees of freedom of the test mass. Unlike traditional accelerometers, the suspension system is operated with a low duty cycle so that the limiting suspension force noise only acts over brief, known time intervals, which can be accounted for in the data analysis. The readout is performed using a laser interferometer which is immune to the dynamic range limitations of even the best voltage references typically used to determine the inertial acceleration of electrostatic accelerometers. This presentation describes operation and performance modeling for such a device with respect to a low Earth orbiting satellite geodesy mission. Methods for testing the drift mode accelerometer with the University of Florida precision torsion pendulum will also be discussed.

  11. Pb-Pb dating of individual chondrules from the CBa chondrite Gujba: Assessment of the impact plume formation model

    PubMed Central

    Bollard, Jean; Connelly, James N.; Bizzarro, Martin

    2016-01-01

    The CB chondrites are metal-rich meteorites with characteristics that sharply distinguish them from other chondrite groups. Their unusual chemical and petrologic features and a young formation age of bulk chondrules dated from the CBa chondrite Gujba are interpreted to reflect a single-stage impact origin. Here, we report high-precision internal isochrons for four individual chondrules of the Gujba chondrite to probe the formation history of CB chondrites and evaluate the concordancy of relevant short-lived radionuclide chronometers. All four chondrules define a brief formation interval with a weighted mean age of 4562.49 ± 0.21 Myr, consistent with its origin from the vapor-melt impact plume generated by colliding planetesimals. Formation in a debris disk mostly devoid of nebular gas and dust sets an upper limit for the solar protoplanetary disk lifetime at 4.8 ± 0.3 Myr. Finally, given the well-behaved Pb-Pb systematics of all four chondrules, a precise formation age and the concordancy of the Mn-Cr, Hf-W, and I-Xe short-lived radionuclide relative chronometers, we propose that Gujba may serve as a suitable time anchor for these systems. PMID:27429545

  12. Air Traffic Management Technology Demonstration-1 Research and Procedural Testing of Routes

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Kibler, Jennifer L.; Hubbs, Clay E.; Smail, James W.

    2015-01-01

    NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The ATD-1 integrated system consists of the Traffic Management Advisor with Terminal Metering which generates precise time-based schedules to the runway and merge points; Controller Managed Spacing decision support tools which provide controllers with speed advisories and other information needed to meet the schedule; and Flight deck-based Interval Management avionics and procedures which allow flight crews to adjust their speed to achieve precise relative spacing. Initial studies identified air-ground challenges related to the integration of these three scheduling and spacing technologies, and NASA's airborne spacing algorithm was modified to address some of these challenges. The Research and Procedural Testing of Routes human-in-the-loop experiment was then conducted to assess the performance of the new spacing algorithm. The results of this experiment indicate that the algorithm performed as designed, and the pilot participants found the airborne spacing concept, air-ground procedures, and crew interface to be acceptable. However, the researchers concluded that the data revealed issues with the frequency of speed changes and speed reversals.

  13. Geodetic Evidence of Magma Beneath the Puna Geothermal Ventures Power Plant, Lower East Rift Zone, Kilauea Volcano, Hawaii.

    NASA Astrophysics Data System (ADS)

    Anderson, J. L.

    2008-12-01

    Precise level surveys of the Puna Geothermal Ventures power plant site have been conducted at 2- to 3-year intervals over the past 16 years, following an initial pre-production baseline survey in 1992. Pre-1992 USGS studies near the plant showed slow general subsidence, and this pattern has continued since then. The average rate of subsidence for the first 11 years of the present survey series was 0.71 cm per year (1992-2003). It was against this background of subsidence that small but significant upward movements were detected in 2005 in an area approximately 500 m wide directly under the power plant. This positive anomaly had an amplitude of only 0.5 cm but was clearly discernible because of the part-per-million resolution possible with traditional precise leveling. The 13-year (at that time) data set made it possible to interpret this event with confidence. The cause of the deformation was reported in 2005 to be shallow and localized in comparison to the factors contributing to the subsidence of the surrounding area. Subsequent drilling penetrated magma beneath the anomaly, providing strong physical evidence that fluid pressure was the probable cause of the anomaly.

  14. Short daily exposure to hand-arm vibrations in Swedish car mechanics.

    PubMed

    Barregård, Lars

    2003-01-01

    The aim of the study was to examine the daily exposure times to hand-arm vibrations in Swedish car mechanics, to test a method for estimating the exposure time without observing the workers for whole days, and to use the results for predicting the prevalence of vibration-induced white fingers (VWF) with the ISO 5349 model. Six garages were surveyed. In each garage, 5-10 car mechanics were observed in random order every 30 seconds throughout working days. The daily exposure time for each mechanic was estimated from the fraction of the observations in which the mechanic was exposed. A total of 51 mechanics were observed, most of them on two different working days, yielding estimates for 95 days. The median effective exposure time was 10 minutes per day (95% confidence interval 5-15 minutes, arithmetic mean 14 minutes, maximum 80 minutes), and most of the exposure time was attributable to fastening and loosening nuts. The within-worker and between-worker variability was high (total σ² = 0.99, geometric standard deviation 2.7). Using the observed exposure time and data on vibration levels of the main tools in Swedish car mechanics (average weighted acceleration level of 3.5 m/s²), the model in ISO standard 5349 would predict that only three percent of the car mechanics will suffer from VWF after 20 years of exposure. In contrast, a recent survey of VWF showed the prevalence to be 25 percent. The precision of the observation method was estimated and found to be good for the group daily mean. On the individual level the precision was acceptable only if the daily exposure time was ≥ 40 minutes. In conclusion, the daily exposure time was short and the vibration level was limited. Nevertheless, hand-arm vibrations cause VWF in a significant number of car mechanics. The method of observing workers intermittently seemed to work well.
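
    The work-sampling arithmetic behind these estimates is simple: the exposed fraction of the instantaneous observations multiplied by the length of the working day. The sketch below (Python) adds a normal-approximation binomial interval; successive 30-second observations are autocorrelated, so this interval is optimistic, and the 8-hour working day is an assumption.

    import math

    def exposure_minutes(n_exposed, n_obs, workday_min=480, z=1.96):
        # Point estimate and approximate 95% CI for daily exposure time
        # from intermittent (every 30 s) work-sampling observations.
        p = n_exposed / n_obs
        se = math.sqrt(p * (1 - p) / n_obs)
        low = max(0.0, p - z * se) * workday_min
        high = min(1.0, p + z * se) * workday_min
        return p * workday_min, (low, high)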

  15. Testing the molecular clock using mechanistic models of fossil preservation and molecular evolution

    PubMed Central

    2017-01-01

    Molecular sequence data provide information about relative times only, and fossil-based age constraints are the ultimate source of information about absolute times in molecular clock dating analyses. Thus, fossil calibrations are critical to molecular clock dating, but competing methods are difficult to evaluate empirically because the true evolutionary time scale is never known. Here, we combine mechanistic models of fossil preservation and sequence evolution in simulations to evaluate different approaches to constructing fossil calibrations and their impact on Bayesian molecular clock dating, and the relative impact of fossil versus molecular sampling. We show that divergence time estimation is impacted by the model of fossil preservation, sampling intensity and tree shape. The addition of sequence data may improve molecular clock estimates, but accuracy and precision are dominated by the quality of the fossil calibrations. Posterior means and medians are poor representatives of true divergence times; posterior intervals provide a much more accurate estimate of divergence times, though they may be wide and often do not have high coverage probability. Our results highlight the importance of increased fossil sampling and improved statistical approaches to generating calibrations, which should incorporate the non-uniform nature of ecological and temporal fossil species distributions. PMID:28637852

  16. [Effect of continuous renal replacement therapy on the plasma concentration of imipenem in severe infection patients with acute renal injury].

    PubMed

    Yu, Bin; Liu, Lixia; Xing, Dong; Zhao, Congcong; Hu, Zhenjie

    2015-05-01

    To investigate the extracorporeal clearance rate of imipenem in severe infection patients under continuous vena-venous hemofiltration (CVVH) during continuous renal replacement therapy (CRRT), to determine whether the plasma concentration of imipenem could reach effective anti-infective levels, and to explore the effect of time and of the anticoagulation measure on imipenem clearance during CRRT. A prospective observational study was conducted. All adult severe infection patients with acute kidney injury (AKI) in the Department of Critical Care Medicine of the Fourth Hospital of Hebei Medical University from March 2013 to September 2014 who were prescribed imipenem as part of their required medical care and CRRT for treatment of AKI were enrolled. Doses of 0.5 g imipenem were administered intravenously every 6 or 8 hours according to a random number table, each infused over 0.5 hour. Unfractionated heparin was used for anticoagulation in patients without contraindications, and no anticoagulation was used in patients at high risk of bleeding. At 24 hours after the first administration, postfilter venous blood and ultrafiltrate samples were collected at 0, 0.25, 0.5, 0.75, 1, 2, 5, 6, and 8 hours after imipenem administration. The concentration of imipenem in these samples was determined with liquid chromatography-tandem mass spectrometry (LC-MS/MS). A total of 25 patients were enrolled. Thirteen patients received imipenem intravenously every 6 hours, and 12 patients every 8 hours. Anticoagulation was conducted with heparin in 13 cases; 12 cases received no anticoagulation. The intra-day precision, inter-day precision, matrix effect, recovery rate at low, medium, and high concentrations in plasma and ultrafiltrate, and the stability of samples under different conditions were all good, with the accuracy error controlled within ±15%. With the Prismaflex blood filtration system and AN69-M100 filter under CVVH, the total clearance rate of imipenem was (8.874±2.828) L/h when the actual dose of replacement fluid was (31.63±1.48) mL×kg⁻¹×h⁻¹; the extracorporeal CRRT clearance rate of imipenem was (2.211±0.539) L/h, accounting for (30.1±15.7)% of the total drug clearance. In the 6-hour dosing regimen, the percentage of time above 4× the minimum inhibitory concentration (4×MIC) at levels of 2, 4, 6, and 8 μg/mL of imipenem exceeded 40% of the dosing interval. In the 8-hour regimen, however, at 4×MIC levels of 4 μg/mL and above, the maintained time dropped below 40% of the dosing interval, with significant differences compared with the 6-hour regimen [4×MIC = 2 μg/mL: (60.84±20.25)% vs. (94.01±12.46)%, t = 4.977, P = 0.001; 4×MIC = 4 μg/mL: (39.85±15.88)% vs. (68.74±9.57)%, t = 5.562, P = 0.000; 4×MIC = 6 μg/mL: (27.58±13.70)% vs. (53.97±8.36)%, t = 5.867, P = 0.000; 4×MIC = 8 μg/mL: (8.87±12.43)% vs. (43.48±7.83)%, t = 5.976, P = 0.000]. No significant change in the sieving coefficient of imipenem was found within a short time (6 hours), indicating that anticoagulation had no effect on clearance of imipenem by the AN69-M100 filter; repeated-measures analysis likewise found no statistical significance (F = 0.186, P > 0.05).
The extracorporeal clearance rate of imipenem increases significantly under CVVH with an actual replacement fluid dose of (31.63±1.48) mL×kg⁻¹×h⁻¹ in severe infection patients with severe sepsis complicated by AKI, affecting the plasma drug concentration, so the dosage regimen needs adjustment. Shortening the dosing interval significantly increases the plasma concentration of imipenem. Over a short period, the sieving coefficient of imipenem through the AN69 filter is not affected by the anticoagulation measure, and the clearance efficiency does not decline with time.
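    The pharmacodynamic index used in this study, the percentage of the dosing interval with plasma concentration above 4×MIC, is straightforward to compute from sampled concentration-time data. A minimal sketch follows; the sampling times mirror the abstract, but every concentration value is invented for illustration:

```python
import numpy as np

def fraction_above(times_h, conc, threshold, tau):
    """Fraction of the dosing interval tau (h) with concentration above
    a threshold, using dense linear interpolation between samples."""
    t = np.linspace(0.0, tau, 2001)
    c = np.interp(t, times_h, conc)
    return float(np.mean(c > threshold))

# Hypothetical concentration-time samples over a 6 h interval (ug/mL)
times = [0, 0.25, 0.5, 0.75, 1, 2, 5, 6]
conc  = [2.0, 18.0, 30.0, 26.0, 22.0, 14.0, 5.0, 3.5]
for mic4 in (2, 4, 6, 8):  # the 4xMIC targets from the abstract (ug/mL)
    pct = 100 * fraction_above(times, conc, mic4, tau=6.0)
    print(f"%T > {mic4} ug/mL over 6 h: {pct:.0f}%")
```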

  17. Recommended Changes to Interval Management to Achieve Operational Implementation

    NASA Technical Reports Server (NTRS)

    Baxley, Brian; Swieringa, Kurt; Roper, Roy; Hubbs, Clay; Goess, Paul; Shay, Richard

    2017-01-01

    A 19-day flight test of an Interval Management (IM) avionics prototype was conducted in Washington State using three aircraft to precisely achieve and maintain a spacing interval behind the preceding aircraft. NASA contracted with Boeing, Honeywell, and United Airlines to build this prototype, and then worked closely with them, the FAA, and other industry partners to test this prototype in flight. Four different IM operation types were investigated during this test in the en route, arrival, and final approach phases of flight. Many of the IM operations met or exceeded the design goals established prior to the test. However, there were issues discovered throughout the flight test, including the rate and magnitude of IM commanded speed changes and the difference between expected and actual aircraft deceleration rates.

  18. Balloon borne in-situ detection of OH in the stratosphere from 37 to 23 km

    NASA Technical Reports Server (NTRS)

    Stimpfle, R. M.; Lapson, L. B.; Wennberg, P. O.; Anderson, J. G.

    1989-01-01

    The OH number density in the stratosphere has been measured over the altitude interval of 37 to 23 km at midday via a balloon-borne gondola launched from Palestine, Texas on July 6, 1988. OH radicals are detected with a laser-induced fluorescence instrument employing a 17-kHz-repetition-rate copper vapor laser-pumped dye laser optically coupled to an enclosed flow, in-situ sampling chamber. OH abundances ranged from 88 ± 31 pptv in the 36 to 35 km interval to 0.9 ± 0.8 pptv in the 24 to 23 km interval. The stated uncertainty includes that from both measurement precision and accuracy. Simultaneous detection of ozone and water vapor densities was carried out with separate on-board instruments.

  19. Interval Timing Is Preserved Despite Circadian Desynchrony in Rats: Constant Light and Heavy Water Studies.

    PubMed

    Petersen, Christian C; Mistlberger, Ralph E

    2017-08-01

    The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D₂O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D₂O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.

  20. Ages, durations and behavioural implications of Middle Stone Age industries in southern Africa: advances in optical dating of individual grains of quartz

    NASA Astrophysics Data System (ADS)

    Jacobs, Z.

    2009-04-01

    Recent developments in OSL dating have focussed on the measurement of individual sand-sized grains of quartz. Single-grain dating allows the identification of contaminant grains in a sample and their exclusion before final age determination, and the ability to directly check the stratigraphic integrity of archaeological sequences and address concerns about post-deposition sediment mixing. These benefits result in single-grain OSL ages being both accurate and precise. Even greater precision can be attained by adopting a systematic approach to the collection and analysis of OSL data. This involves one operator using the same OSL stimulation and detection instrument, laboratory radiation sources, calibration standards, and analytical procedures for all samples. By holding these experimental parameters constant, sources of error common to all samples are removed, enabling far greater resolution of the true age structure. This approach was recently used to determine the timing and duration of two bursts of Middle Stone Age technological and behavioural innovation - the Still Bay (SB) and Howieson's Poort (HP) - in southern Africa. These distinctive artefacts are associated with the first evidence for symbols and personal ornaments, and may have been the catalyst for the expansion of Homo sapiens populations in Africa 80,000-60,000 years ago and for the subsequent migration of modern humans out of Africa. Testing such hypotheses, and the putative role of climate change, has been hampered by poor age constraints for the HP and SB industries. Previous attempts to resolve the start and end dates of these industries had been largely obscured by the chronological 'haze' arising from a variety of different materials being dated by different methods using different equipment, calibration standards, measurement procedures and techniques of data analysis. By clearing this haze and placing all ages on a common timescale, we were able to constrain the timing of the SB and HP, and the gap between them, to better than 3000 years at the 95% confidence interval. Both industries occur within the interval of population expansions in Africa inferred from genetic studies. A meta-analysis shows that our new ages are consistent with previous estimates but are more precise, revealing a lack of spatial patterning of the HP and SB across varied climatic and ecological zones. We find a temporal coincidence with major swings in climate, but not uniquely with these industries. Environmental factors may, therefore, have been responsible for episodic occupation of rock shelters, but were not the forcing mechanism behind the emergence of modern human behaviour.

  1. Sensitivity, specificity, and reproducibility of four measures of laboratory turnaround time.

    PubMed

    Valenstein, P N; Emancipator, K

    1989-04-01

    The authors studied the performance of four measures of laboratory turnaround time: the mean, median, 90th percentile, and proportion of tests reported within a predetermined cut-off interval (proportion of acceptable tests [PAT]). Measures were examined with the use of turnaround time data from 11,070 stat partial thromboplastin times, 16,761 urine cultures, and 28,055 stat electrolyte panels performed by a single laboratory. For laboratories with long turnaround times, the most important quality of a turnaround time measure is high reproducibility, so that improvement in reporting speed can be distinguished from random variation resulting from sampling. The mean was found to be the most reproducible of the four measures, followed by the median. The mean achieved acceptable precision with sample sizes of 100-500 tests. For laboratories with normally rapid turnaround times, the most important quality of a measure is high sensitivity and specificity for detecting whether turnaround time has dropped below standards. The PAT was found to be the best measure of turnaround time in this setting but required sample sizes of at least 500 tests to achieve acceptable accuracy. Laboratory turnaround time may be measured for different reasons. The method of measurement should be chosen with an eye toward its intended application.
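    All four summary measures compared in this study can be computed directly from raw turnaround times. A minimal sketch with simulated data; the 45-minute cutoff and the log-normal sample are assumptions for illustration only:

```python
import numpy as np

def tat_measures(tat_minutes, cutoff):
    """The four laboratory turnaround-time summaries discussed above."""
    x = np.asarray(tat_minutes, dtype=float)
    return {
        "mean": x.mean(),
        "median": np.median(x),
        "p90": np.percentile(x, 90),
        # PAT: proportion of tests reported within the cutoff interval
        "PAT": np.mean(x <= cutoff),
    }

# Hypothetical stat-electrolyte turnaround times (minutes), 45 min standard
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=3.4, sigma=0.4, size=500)
print(tat_measures(sample, cutoff=45))
```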

  2. Study on miss distance based on projectile shock wave sensor

    NASA Astrophysics Data System (ADS)

    Gu, Guohua; Cheng, Gang; Zhang, Chenjun; Zhou, Lei

    2017-05-01

    The paper establishes miss-distance models based on the physical characteristics of shock waves. Aerodynamic theory shows that the shock wave of a supersonic projectile is generated as the projectile compresses and expands the ambient atmosphere. The method derives miss distance from the arrival-time interval at the sensors that first catch the shock wave, addressing problems such as noise filtering against a severe background, dynamic processing of amplifier vibration signals, and electromagnetic compatibility, in order to improve the precision and reliability of capturing the N-wave signals. For the first time, the system can automatically identify projectile types and firing units, and measure miss distance and azimuth as projectiles are fired. Application shows that its tactical and technical specifications are internationally advanced.

  3. Millisecond resolution electron fluxes from the Cluster satellites: Calibrated EDI ambient electron data

    NASA Astrophysics Data System (ADS)

    Förster, Matthias; Rashev, Mikhail; Haaland, Stein

    2017-04-01

    The Electron Drift Instrument (EDI) onboard Cluster can measure 500 eV and 1 keV electron fluxes with high time resolution during passive operation phases in its Ambient Electron (AE) mode. Data from this mode have been available in the Cluster Science Archive since October 2004, with a cadence of 16 Hz in normal mode or 128 Hz for burst mode telemetry intervals. The fluxes are recorded at pitch angles of 0, 90, and 180 degrees. This paper describes the calibration and validation of these measurements. The high resolution AE data allow precise temporal and spatial diagnostics of magnetospheric boundaries and will be used for case studies and statistical studies of low energy electron fluxes in near-Earth space. We show examples of applications.

  4. Immunological aspects of nonimmediate reactions to beta-lactam antibiotics.

    PubMed

    Rodilla, Esther Morena; González, Ignacio Dávila; Yges, Elena Laffond; Bellido, Francisco Javier Múñoz; Bara, María Teresa Gracia; Toledano, Félix Lorente

    2010-09-01

    Beta-lactam antibiotics are the agents most frequently implicated in immune-mediated adverse drug reactions. These can be classified as immediate or nonimmediate according to the time interval between the last drug administration and their onset. The mechanisms of immediate, IgE-mediated reactions have been widely studied and are therefore better understood. Nonimmediate reactions include a broad range of clinical entities, from mild maculopapular exanthemas, the most common, to less frequent but more severe reactions such as Stevens-Johnson syndrome, toxic epidermal necrolysis, acute exanthematous pustulosis and cytopenias. These nonimmediate reactions are mainly mediated by T cells, but the precise underlying mechanisms are not well elucidated. This complicates the allergological evaluation of patients with this type of reaction, and available tests have demonstrated poor sensitivity and specificity.

  5. Irregular synchronous activity in stochastically-coupled networks of integrate-and-fire neurons.

    PubMed

    Lin, J K; Pawelzik, K; Ernst, U; Sejnowski, T J

    1998-08-01

    We investigate the spatial and temporal aspects of firing patterns in a network of integrate-and-fire neurons arranged in a one-dimensional ring topology. The coupling is stochastic and shaped like a Mexican hat with local excitation and lateral inhibition. With perfect precision in the couplings, the attractors of activity in the network occur at every position in the ring. Inhomogeneities in the coupling break the translational invariance of localized attractors and lead to synchronization within highly active as well as weakly active clusters. The interspike interval variability is high, consistent with recent observations of spike time distributions in visual cortex. The robustness of our results is demonstrated with more realistic simulations on a network of McGregor neurons which model conductance changes and after-hyperpolarization potassium currents.

  6. Detecting elevation changes over mountain glaciers in Tibet and the Himalayas by TOPEX/Poseidon and Jason-2 radar altimeters: comparison with ICESat results

    NASA Astrophysics Data System (ADS)

    Hwang, C.; Cheng, Y. S.

    2015-12-01

    In most cases, mountain glaciers are narrow and situated on steep slopes. A laser altimeter such as ICESat has a small illuminated footprint of about 70 m, allowing it to measure precise elevations over narrow mountain glaciers. However, unlike a typical radar altimeter mission, ICESat does not have repeat ground tracks (except in its early phase) to measure heights of a specific point at different times. Within a time span, usually a reference digital elevation model is used to compute height anomalies at ICESat's measurement sites over a designated area, which are then averaged to produce a representative height change (anomaly) for the area. In contrast, a radar altimeter such as TOPEX/Poseidon (TP; its follow-on missions are Jason-1 and -2) repeats its ground tracks at an even time interval (10 days for TP) but has a larger illuminated footprint than ICESat's (about 1 km or larger), making it difficult to measure precise elevations over narrow mountain glaciers. Here we demonstrate the potential of the TP and Jason-2 radar altimeters for detecting elevation changes over mountain glaciers that are sufficiently wide and smooth. We select several glacier-covered sites in Mt. Tanggula (Tibet) and the Himalayas to experiment with methods that can generate precise height measurements from the two altimeters. Over the same spot, ranging errors due to slope, volume scattering and radar penetration can be common between repeat cycles, and may be reduced by differencing successive heights. We retracked radar waveforms and classified the surfaces using SRTM-derived elevations. The effects of terrain and slope are reduced by fitting a surface to the height measurements from repeat cycles. We remove outlier heights and apply a smoothing filter to form final time series of glacier elevation change at the selected sites, which are compared with the results from ICESat (note the different mission times). Because TP and Jason-2 measure height changes every 10 days, clear annual and inter-annual oscillations of glacier heights are present in the resulting time series, in contrast to the unevenly sampled height changes from ICESat, which do not show such oscillations. The rates of glacier elevation change from TP and Jason-2 are mostly negative, but vary with location and height.

  7. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.

  8. Precise Time - Naval Oceanography Portal

    Science.gov Websites

    The U. S. Naval Observatory is charged with maintaining the

  9. A high-order time-accurate interrogation method for time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Lynch, Kyle; Scarano, Fulvio

    2013-03-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: first, by enlarging the temporal measurement interval, the relative error is reduced; second, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation), illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included in the analysis, exceeding the pair correlation by window deformation by one order of magnitude. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory.
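    The core idea, fitting a polynomial to an interrogation window's tracked positions over several frames and differentiating it analytically to obtain velocity, can be illustrated compactly. This is a sketch of the fitting principle only, not the authors' cross-correlation tracker; the frame rate, polynomial order, and motion law are invented:

```python
import numpy as np

def trajectory_velocity(t, x, order=2):
    """Fit the tracked window position x(t) with a least-squares polynomial
    and return the velocity at the central time as the analytic derivative.
    Fitting over many frames suppresses random position errors relative to
    a single image-pair displacement estimate."""
    coeffs = np.polyfit(t, x, deg=order)
    t_mid = 0.5 * (t[0] + t[-1])
    return np.polyval(np.polyder(coeffs), t_mid)

# Hypothetical: 7 frames at 1 kHz, window position in pixels, small noise
dt = 1e-3
t = np.arange(7) * dt
x_true = 10.0 + 4000.0 * t + 2.0e5 * t**2      # accelerating motion
x = x_true + np.random.default_rng(1).normal(0.0, 0.05, t.size)
# True velocity at t_mid = 3 ms is 4000 + 4e5 * 0.003 = 5200 px/s
print(f"u ~ {trajectory_velocity(t, x):.0f} px/s")
```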

  10. Verification of Abbott 25-OH-vitamin D assay on the architect system.

    PubMed

    Hutchinson, Katrina; Healy, Martin; Crowley, Vivion; Louw, Michael; Rochev, Yury

    2017-04-01

    Analytical and clinical verification of both old and new generations of the Abbott total 25-hydroxyvitamin D (25OHD) assays, and an examination of reference intervals. Determination of between-run precision, and Deming comparison between patient sample results for 25OHD on the Abbott Architect, DiaSorin Liaison and AB SCIEX API 4000 (LC-MS/MS). Establishment of uncertainty of measurement for the 25OHD Architect methods using old and new generations of the reagents, and estimation of the reference interval in a healthy Irish population. For between-run precision, the manufacturer claims coefficients of variation (CVs) of 2.8% and 4.6% for their high and low controls, respectively. Our instrument showed CVs between 4% and 6.2% for all levels of the controls on both generations of the Abbott reagents. The between-run uncertainties were 0.28 and 0.36, with expanded uncertainties of 0.87 and 0.98, for the old and the new generations of reagent, respectively. The difference between all methods used for patients' samples was within total allowable error, and the instruments produced clinically equivalent results. The results covered the medical decision points of 30, 40, 50 and 125 nmol/L. The reference interval for total 25OHD in our healthy Irish subjects (24-111 nmol/L) was lower than recommended levels. In a clinical laboratory, the Abbott 25OHD immunoassays are a useful, rapid and accurate method for measuring total 25OHD. The new generation of the assay was confirmed to be reliable, accurate, and a good indicator for 25OHD measurement. More study is needed to establish reference intervals that correctly represent the healthy population in Ireland.
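    The verification quantities above, between-run CV and expanded uncertainty, follow from standard formulas. A minimal sketch; the control data are simulated, and the k = 2 coverage factor is an assumption, since the abstract does not state how its uncertainties were derived:

```python
import numpy as np

def between_run_stats(runs):
    """Between-run CV and expanded uncertainty (k = 2, ~95% coverage)
    from repeated control measurements."""
    x = np.asarray(runs, dtype=float)
    sd = x.std(ddof=1)
    cv = 100 * sd / x.mean()
    u = sd / np.sqrt(len(x))      # standard uncertainty of the mean
    return cv, 2 * u              # expanded uncertainty U = k * u

# Hypothetical 25OHD control results over 20 runs (nmol/L)
runs = np.random.default_rng(5).normal(75, 3.5, size=20)
cv, U = between_run_stats(runs)
print(f"CV = {cv:.1f}%, U(k=2) = {U:.2f} nmol/L")
```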

  11. Accuracy and precision of as-received implant torque wrenches.

    PubMed

    Britton-Vidal, Eduardo; Baker, Philip; Mettenburg, Donald; Pannu, Darshanjit S; Looney, Stephen W; Londono, Jimmy; Rueggeberg, Frederick A

    2014-10-01

    Previous implant torque evaluations did not determine whether the target value fell within a confidence interval for the population mean of the test groups, preventing a determination of whether a specific type of wrench met a standardized goal value. The purpose of this study was to measure both the accuracy and precision of 2 different configurations (spring style and peak break) of as-received implant torque wrenches and compare the measured values to manufacturer-stated values. Ten wrenches from 4 manufacturers were evaluated, representing a variety of torque-limiting mechanisms and specificity of use (with either a specific brand or universally with any brand of implant product). Drivers were placed into the wrench, and tightening torque was applied to reach predetermined values using a NIST-calibrated digital torque wrench. Five replications of measurement were made for each wrench and averaged to provide a single value from that instrument. The target torque value for each wrench brand was compared to the 95% confidence interval for the true population mean of measured values to see if it fell within the measured range. Only 1 wrench brand (Nobel Biocare) demonstrated a target torque value falling within the 95% confidence interval for the true population mean. For the others, the targeted torque value fell above the 95% confidence interval (Straumann and Imtec) or below it (Salvin Torq). Neither the type of torque-limiting mechanism nor the designation of a wrench as a dedicated brand-only product or a universal product affected the ability of a wrench to deliver torque values for which the true population mean included the target torque level.
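    The acceptance criterion used here, whether the manufacturer's target falls inside the 95% confidence interval for the true population mean, is easy to reproduce. A sketch with invented torque readings (the units and target value are assumptions):

```python
import numpy as np
from scipy import stats

def ci_contains_target(measured, target, level=0.95):
    """t-based confidence interval for the true mean delivered torque;
    the wrench 'meets' its stated value if the target lies inside."""
    m = np.asarray(measured, dtype=float)
    mean, sem = m.mean(), stats.sem(m)
    lo, hi = stats.t.interval(level, df=len(m) - 1, loc=mean, scale=sem)
    return (lo, hi), lo <= target <= hi

# Hypothetical per-wrench means (N*cm) against a 35 N*cm target
measured = [34.2, 35.1, 34.8, 35.6, 34.9, 35.3, 34.5, 35.0, 34.7, 35.2]
(lo, hi), ok = ci_contains_target(measured, target=35.0)
print(f"95% CI: ({lo:.2f}, {hi:.2f}) N*cm; target inside: {ok}")
```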

  12. Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares

    NASA Astrophysics Data System (ADS)

    Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen

    2018-07-01

    The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product provided by the International GNSS Service (IGS), with a 15 min sampling interval and an update every 6 h, is only 3 ns, which cannot meet the needs of all real-time PPP applications. Real-time estimation of high-rate satellite clock offsets is an efficient method for improving the accuracy. In this paper, a sequential least squares method for estimating real-time satellite clock offsets at a high sample rate is proposed; it improves the computational speed by applying an optimized sparse matrix operation to compute the normal equation and by using special measures to take full advantage of modern computing power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation with a 1 s sample rate. The results show that the time taken to process a single epoch is about 0.12 s using 28 stations. The Standard Deviation (STD) and Root Mean Square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns, respectively, when compared to German Research Center for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated. The RMSs of the real-time BDS kinematic PPP in the east, north, and vertical components are 7.6 cm, 6.4 cm and 19.6 cm, respectively. The method is also applied to the Global Positioning System (GPS) with a 10 s sample rate, and the computational time for most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively. Accuracies of 5.6 cm, 2.6 cm and 7.9 cm in the east, north, and vertical components are achieved for the real-time GPS kinematic PPP.
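    The estimation principle, accumulating normal equations epoch by epoch and solving on demand, can be sketched generically. The dense update below ignores the sparse-matrix optimization the paper relies on, and all dimensions and data are invented:

```python
import numpy as np

class SequentialLS:
    """Minimal sequential least squares: accumulate the normal equations
    N = sum(A^T A), b = sum(A^T y) epoch by epoch and solve on demand.
    A production clock filter would exploit the sparsity of A; this dense
    update only illustrates the estimation principle."""

    def __init__(self, n_params):
        self.N = np.zeros((n_params, n_params))
        self.b = np.zeros(n_params)

    def update(self, A, y, w=1.0):
        self.N += w * A.T @ A
        self.b += w * A.T @ y

    def solve(self):
        return np.linalg.solve(self.N, self.b)

# Hypothetical: estimate 3 clock offsets from redundant epoch observations
rng = np.random.default_rng(2)
truth = np.array([1.2, -0.4, 0.7])      # ns
est = SequentialLS(3)
for _ in range(50):                      # 50 epochs of observations
    A = rng.normal(size=(5, 3))          # design matrix for this epoch
    y = A @ truth + rng.normal(0, 0.1, 5)
    est.update(A, y)
print(est.solve())                       # ~ truth
```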

  13. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields, but how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10². Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, using the standard central moving average detrending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that the proposed method makes it possible to estimate Shannon entropy precisely from limited records.
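    For orientation, a sketch of plain diffusion entropy analysis follows: the scaling exponent δ is the slope of the Shannon entropy of the diffusion pdf against ln(scale). This is the unbalanced baseline estimator, not the correlation-dependent balanced variant the paper proposes; the scales and bin count are arbitrary choices:

```python
import numpy as np

def diffusion_entropy(xi, scales, bins=40):
    """Plain diffusion entropy analysis: build the diffusion process from
    overlapping windows, estimate its pdf by histogram at each scale, and
    fit S(t) = A + delta * ln(t) to recover the scaling exponent delta."""
    xi = np.asarray(xi, dtype=float)
    S = []
    for t in scales:
        # displacement of the diffusion process over windows of length t
        x = np.array([xi[i:i + t].sum() for i in range(len(xi) - t + 1)])
        p, edges = np.histogram(x, bins=bins, density=True)
        w = np.diff(edges)
        nz = p > 0
        S.append(-np.sum(p[nz] * np.log(p[nz]) * w[nz]))
    delta, _ = np.polyfit(np.log(scales), S, 1)
    return delta

# Uncorrelated noise should give delta ~ 0.5
noise = np.random.default_rng(3).normal(size=5000)
print(diffusion_entropy(noise, scales=[4, 8, 16, 32, 64]))
```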

  14. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields, but how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10². Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, using the standard central moving average detrending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that the proposed method makes it possible to estimate Shannon entropy precisely from limited records. PMID:25549356

  15. Consequences of Secondary Calibrations on Divergence Time Estimates.

    PubMed

    Schenk, John J

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than with shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error: applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision, and the distribution of age estimates shifts away from what would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.

  16. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis

    PubMed Central

    Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and the underlying stimuli responsible for anomalous behavior. Single scale based traditional entropy measures have yielded contradictory results about the dynamics of real world time series data of healthy and pathological subjects. The multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals and has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. It may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scale factor τ, although it works well for long signals. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE based features lead to higher classification accuracies than MSE based features. PMID:29771977

  17. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis.

    PubMed

    Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and the underlying stimuli responsible for anomalous behavior. Single scale based traditional entropy measures have yielded contradictory results about the dynamics of real world time series data of healthy and pathological subjects. The multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals and has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. It may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scale factor τ, although it works well for long signals. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE based features lead to higher classification accuracies than MSE based features.
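    Both versions of this record describe the same pipeline: coarse-grain the series, then apply a symbolic entropy estimate at each scale. The sketch below illustrates that pipeline with a simple quartile-based symbolization as a stand-in; it is not the NCSE estimator defined in the paper:

```python
import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping averages of length tau, the first step of every
    multiscale entropy variant. The coarse-grained series shortens to
    len(x)//tau, which is why plain MSE degrades on short records."""
    n = len(x) // tau
    return np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)

def symbolic_shannon(x, edges):
    """Shannon entropy (bits) of the symbol distribution obtained by
    binning x at fixed edges; a stand-in for the paper's NCSE."""
    symbols = np.digitize(x, edges)
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

sig = np.random.default_rng(4).normal(size=3000)
edges = np.quantile(sig, [0.25, 0.5, 0.75])  # symbol boundaries at scale 1
for tau in (1, 2, 5, 10, 20):
    h = symbolic_shannon(coarse_grain(sig, tau), edges)
    print(f"scale {tau:2d}: {h:.3f} bits")   # falls with scale for white noise
```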

  18. Calibration of PS09, PS10, and PS11 trans-Alaska pipeline system strong-motion instruments, with acceleration, velocity, and displacement records of the Denali fault earthquake, 03 November 2002

    USGS Publications Warehouse

    Evans, John R.; Jensen, E. Gray; Sell, Russell; Stephens, Christopher D.; Nyman, Douglas J.; Hamilton, Robert C.; Hager, William C.

    2006-01-01

    In September, 2003, the Alyeska Pipeline Service Company (APSC) and the U.S. Geological Survey (USGS) embarked on a joint effort to extract, test, and calibrate the accelerometers, amplifiers, and bandpass filters from the earthquake monitoring systems (EMS) at Pump Stations 09, 10, and 11 of the Trans-Alaska Pipeline System (TAPS). These were the three closest strong-motion seismographs to the Denali fault when it ruptured in the Mw 7.9 earthquake of 03 November 2002 (22:12:41 UTC). The surface rupture is only 3.0 km from PS10 and 55.5 km from PS09, but PS11 is 124.2 km away from a small rupture splay and 126.9 km from the main trace. Here we briefly describe precision calibration results for all three instruments. Included with this report is a link to the seismograms reprocessed using these new calibrations: http://nsmp.wr.usgs.gov/data_sets/20021103_2212_taps.html Calibration information in this paper applies at the time of the Denali fault earthquake (03 November 2002), but not necessarily at other times, because equipment at these stations is changed by APSC personnel at irregular intervals. In particular, the equipment at PS09, PS10, and PS11 was changed by our joint crew in September, 2003, so that we could perform these calibrations. The equipment stayed the same from at least the time of the earthquake until that retrieval, and these calibrations apply for that interval.

  19. Paleoclimatological analysis of Late Eocene core, Manning Formation, Brazos County, Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yancey, T.; Elsik, W.

    1994-09-01

    A core of the basal part of the Manning Formation was drilled to provide a baseline for paleoclimate analysis of the expanded section of siliciclastic sediments of late Eocene age in the outcrop belt. The interdeltaic Jackson Stage deposits of this area include 20+ cyclic units containing both lignite and shallow marine sediments. Depositional environments can be determined with precision, and the repetitive nature of the cycles allows comparisons of the same environment throughout, effectively removing depositional environment as a variable in the interpretation of the climate signal. Underlying Yegua strata contain similar cycles, providing 35+ equivalent environmental transects within a 6 m.y. time interval of Jackson and Yegua section, when additional cores are taken. The core is from a cycle deposited during maximum flooding of the Jackson Stage, with deposits ranging from shoreface (carbonaceous) to midshelf, beyond the range of storm sand deposition. Sediments are leached of carbonate, but contain foram test linings, agglutinated forams, fish debris, and rich assemblages of terrestrial and marine palynomorphs. All samples examined contain marine dinoflagellates, which are most abundant in the transgressive and maximum flood zones, along with agglutinated forams and fish debris. This same interval contains two separate pulses of reworked palynomorphs. The transgressive interval contains Glaphyrocysta intricata, normally present in Yegua sediments. Pollen indicates fluctuating subtropical to tropical paleoclimates, with three short cycles of cooler temperatures indicated by abundance peaks of alder pollen (Alnus) in transgressive, maximum flood, and highstand deposits.

  20. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of an analysis of the characteristics of pedestrian walking, the output of a single-axis angular rate gyro is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and the sliding window Viterbi algorithm is then used to decode the gait. Walking data were collected from eight subjects walking along the same route at three different speeds, and the leave-one-subject-out cross validation method was used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity interval across different walking gaits. The localization experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% compared to the angular rate threshold method.
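    The decoding step of the gait model, the Viterbi algorithm over a left-right HMM with Gaussian emissions, can be sketched compactly. Baum-Welch training is omitted, and every parameter value below is illustrative, not taken from the paper:

```python
import numpy as np

def viterbi_gaussian(obs, log_pi, log_A, means, stds):
    """Viterbi decoding for an HMM with univariate Gaussian emissions.
    Returns the most likely state sequence for the observations."""
    T, S = len(obs), len(means)
    # log emission probabilities, shape (T, S)
    logB = (-0.5 * ((obs[:, None] - means) / stds) ** 2
            - np.log(stds * np.sqrt(2.0 * np.pi)))
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + logB[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (prev state, next state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Four gait states in a left-right topology (e.g. stance, push-off, swing,
# heel-strike); a small constant avoids log(0) on forbidden transitions.
A = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.0, 0.8, 0.2, 0.0],
              [0.0, 0.0, 0.8, 0.2],
              [0.2, 0.0, 0.0, 0.8]])
log_A = np.log(A + 1e-12)
log_pi = np.log(np.full(4, 0.25))
means = np.array([0.1, 2.0, 4.0, 1.0])   # gyro magnitude per state (rad/s)
stds = np.array([0.1, 0.5, 1.0, 0.5])
obs = np.array([0.1, 0.12, 0.09, 1.8, 2.2, 4.5, 3.8, 1.1, 0.9, 0.15])
print(viterbi_gaussian(obs, log_pi, log_A, means, stds))
# state 0 marks the zero-velocity (stance) samples
```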
