2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. The voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviation of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Jitter, shimmer, HNR, the standard deviation of F0, and the standard deviation of the F2 frequency also differed significantly between groups for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the F1 frequency. This study documented the alterations resulting from UVFP and explored parameters for which information on this pathology is limited. PMID:26557690
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation will be represented by σ and standard deviation by δ. In practice, when the Allan deviation of a...the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by...measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
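A minimal sketch of the kind of confidence-interval assignment the excerpt describes: once the equivalent degrees of freedom (edf) of an Allan deviation estimate are known, a chi-squared interval can be placed on the Allan variance. The edf and deviation values below are assumed placeholders, not values from the paper.

```python
# Sketch: chi-squared confidence interval for an Allan deviation estimate,
# assuming the equivalent degrees of freedom (edf) are already known
# (e.g., from noise-type tables as discussed above).
from scipy.stats import chi2

def allan_dev_confidence_interval(adev, edf, confidence=0.68):
    """Return (lower, upper) bounds on the Allan deviation.

    adev : estimated Allan deviation sigma_y(tau)
    edf  : equivalent degrees of freedom of the estimate (assumed known)
    """
    alpha = 1.0 - confidence
    avar = adev ** 2                       # work with the Allan variance
    lower = edf * avar / chi2.ppf(1.0 - alpha / 2.0, edf)
    upper = edf * avar / chi2.ppf(alpha / 2.0, edf)
    return lower ** 0.5, upper ** 0.5      # back to deviation

# Example with assumed values: sigma_y = 2e-13 at some tau, edf = 15.
print(allan_dev_confidence_interval(2e-13, 15))
```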
Association of auricular pressing and heart rate variability in pre-exam anxiety students
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-01-01
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety. PMID:25206734
The effects of auditory stimulation with music on heart rate variability in healthy women
Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos
2013-01-01
OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. RESULTS: The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660
Active laser ranging with frequency transfer using frequency comb
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongyuan; Wei, Haoyun; Yang, Honglei
2016-05-02
A comb-based active laser ranging scheme is proposed for enhanced distance resolution and a common time standard for the entire system. Three frequency combs with different repetition rates are used as light sources at the two ends where the distance is measured. Pulse positions are determined through asynchronous optical sampling and type II second harmonic generation. Results show that the system achieves a maximum residual of 379.6 nm and a standard deviation of 92.9 nm with 2000 averages over 23.6 m. Moreover, as for the frequency transfer, an atom clock and an adjustable signal generator, synchronized to the atom clock, are used as time standards for the two ends to appraise the frequency deviation introduced by the proposed system. The system achieves a residual fractional deviation of 1.3 × 10⁻¹⁶ for 1 s, allowing precise frequency transfer between the two clocks at the two ends.
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in grating's spatial frequency or envelope's standard deviation. We tested 21 different envelope's standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating's spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold decreases as the standard deviation of the modulated spectrum increases, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment, and the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system and a design guideline for better SBS suppression.
An estimator for the standard deviation of a natural frequency. I.
NASA Technical Reports Server (NTRS)
Schiff, A. J.; Bogdanoff, J. L.
1971-01-01
A brief review of mean-square approximate systems is given. The case in which the masses are deterministic is considered first in the derivation of an estimator for the upper bound of the standard deviation of a natural frequency. Two examples presented include a two-degree-of-freedom system and a case in which the disorder in the springs is perfectly correlated. For purposes of comparison, a Monte Carlo simulation was done on a digital computer.
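The Monte Carlo comparison mentioned above can be illustrated with a minimal sketch (not the authors' 1971 code): a two-degree-of-freedom spring-mass chain with random ("disordered") spring stiffnesses, where the sample standard deviation of the first natural frequency is estimated from repeated eigenvalue solutions. All parameter values are assumed for illustration.

```python
# Minimal Monte Carlo sketch (assumed parameters): standard deviation of the
# first natural frequency of a 2-DOF spring-mass chain with random springs
# and deterministic masses.
import numpy as np

rng = np.random.default_rng(0)
m1 = m2 = 1.0                        # deterministic masses (kg)
k_mean, k_cov = 1000.0, 0.05         # spring mean stiffness (N/m) and coefficient of variation
n_samples = 20000

freqs = np.empty(n_samples)
for i in range(n_samples):
    # independent disorder; for the perfectly correlated case, draw one factor for both springs
    k1, k2 = k_mean * (1.0 + k_cov * rng.standard_normal(2))
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    M = np.diag([m1, m2])
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))       # eigenvalues are omega^2
    freqs[i] = np.sqrt(w2.real.min()) / (2.0 * np.pi)   # first natural frequency (Hz)

print("mean f1 = %.3f Hz, std f1 = %.4f Hz" % (freqs.mean(), freqs.std(ddof=1)))
```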
Nilsonne, A; Sundberg, J; Ternström, S; Askenfelt, A
1988-02-01
A method of measuring the rate of change of fundamental frequency has been developed in an effort to find acoustic voice parameters that could be useful in psychiatric research. A minicomputer program was used to extract seven parameters from the fundamental frequency contour of tape-recorded speech samples: (1) the average rate of change of the fundamental frequency and (2) its standard deviation, (3) the absolute rate of fundamental frequency change, (4) the total reading time, (5) the percent pause time of the total reading time, (6) the mean, and (7) the standard deviation of the fundamental frequency distribution. The method is demonstrated on (a) a material consisting of synthetic speech and (b) voice recordings of depressed patients who were examined during depression and after improvement.
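A rough sketch of how the seven contour parameters could be computed from a sampled F0 track (frame-by-frame F0 in Hz, with pause/unvoiced frames coded as 0). The conventions are assumptions for illustration, not the original minicomputer program.

```python
# Sketch (assumed conventions, not the original program): seven descriptive
# parameters from a frame-by-frame F0 contour in Hz, pauses coded as 0.
import numpy as np

def f0_contour_parameters(f0, frame_dt):
    f0 = np.asarray(f0, dtype=float)
    voiced = f0 > 0
    total_time = len(f0) * frame_dt                      # (4) total reading time (s)
    pause_pct = 100.0 * np.mean(~voiced)                 # (5) percent pause time
    f0_mean = f0[voiced].mean()                          # (6) mean F0
    f0_sd = f0[voiced].std(ddof=1)                       # (7) SD of the F0 distribution
    rate = np.diff(f0) / frame_dt                        # frame-to-frame slope (Hz/s)
    rate = rate[voiced[:-1] & voiced[1:]]                # keep voiced-to-voiced frames only
    rate_mean = rate.mean()                              # (1) average rate of change
    rate_sd = rate.std(ddof=1)                           # (2) its standard deviation
    rate_abs = np.abs(rate).mean()                       # (3) absolute rate of change
    return rate_mean, rate_sd, rate_abs, total_time, pause_pct, f0_mean, f0_sd

# Example with a short synthetic contour sampled every 10 ms:
f0 = [120, 122, 125, 0, 0, 118, 116, 115, 0, 110]
print(f0_contour_parameters(f0, frame_dt=0.01))
```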
Acoustic analysis of speech variables during depression and after improvement.
Nilsonne, A
1987-09-01
Speech recordings were made of 16 depressed patients during depression and after clinical improvement. The recordings were analyzed using a computer program which extracts acoustic parameters from the fundamental frequency contour of the voice. The percent pause time, the standard deviation of the voice fundamental frequency distribution, the standard deviation of the rate of change of the voice fundamental frequency and the average speed of voice change were found to correlate to the clinical state of the patient. The mean fundamental frequency, the total reading time and the average rate of change of the voice fundamental frequency did not differ between the depressed and the improved group. The acoustic measures were more strongly correlated to the clinical state of the patient as measured by global depression scores than to single depressive symptoms such as retardation or agitation.
NASA Astrophysics Data System (ADS)
Popescu, Gheorghe
2001-06-01
An international frequency comparison was carried out at the Bundesamt für Eich- und Vermessungswesen (BEV), Vienna, within the framework of EUROMET Project #498 from August 29 to September 5, 1999. The frequency differences obtained when the RO.1 laser from the National Institute for Laser, Plasma and Radiation Physics (NILPRP), Romania, was compared with five lasers from Austria (BEV1), Czech Republic (PLD1), France (BIPM3), Poland (GUM1) and Hungary (OMH1) are reported. Frequency differences were computed using the matrix determinations for the group d, e, f, g. Considering the frequency differences measured for a group of three lasers compared to each other, we call the closing frequency the difference between the measured and expected frequency difference (the latter resulting from the previous two measurements). For the RO.1 laser, when the BIPM3 laser was the reference laser, the closing frequencies range from +8.1 kHz to -3.8 kHz. The relative Allan standard deviation was used to express the frequency stability; it was 3.8 parts in 10¹² for a 100 s sampling time and a 14 000 s measurement duration. The averaged offset frequency relative to the BIPM4 stationary laser was 5.6 kHz and the standard deviation was 9.9 kHz.
Brief Report: Cognitive Correlates of Enlarged Head Circumference in Children with Autism.
ERIC Educational Resources Information Center
Deutsch, Curtis K.; Joseph, Robert M.
2003-01-01
A study examined the frequency and cognitive correlates of enlarged head circumference in 63 children with autism (ages 4-14). Macrocephaly occurred at a significantly higher frequency. Children with discrepantly high nonverbal abilities had a mean standardized head circumference that was more than 1 standard deviation greater than the reference…
NASA Astrophysics Data System (ADS)
Aguayo-Rodríguez, Gustavo; Zaldívar-Huerta, Ignacio E.; Rodríguez-Asomoza, Jorge; García-Juárez, Alejandro; Alonso-Rubio, Paul
2010-01-01
The generation, distribution and processing of microwave signals in the optical domain is a topic of research due to many advantages such as low loss, light weight, broad bandwidth, and immunity to electromagnetic interference. In this sense, a novel all-optical microwave photonic filter scheme is proposed and experimentally demonstrated in the frequency range of 0.01-15.0 GHz. A microwave signal generated by optical mixing drives the microwave photonic filter. Basically, the photonic filter is composed of a multimode laser diode, an integrated Mach-Zehnder intensity modulator, and 28.3 km of standard single-mode fiber. The frequency response of the microwave photonic filter depends on the emission spectral characteristics of the multimode laser diode, the physical length of the single-mode fiber, and the chromatic dispersion factor associated with this type of fiber. The frequency response of the photonic filter consists of a low-pass band centered at zero frequency and several band-pass lobes located periodically across the microwave frequency range. Experimental results are compared with numerical simulations in Matlab, exhibiting a small deviation in the frequency range of 0.01-5.0 GHz. However, this deviation becomes more evident at higher frequencies. In this paper, we evaluate the causes of this deviation in the range of 5.0-15.0 GHz by analyzing the parameters involved in the frequency response. This analysis makes it possible to improve the performance of the photonic microwave filter at higher frequencies.
Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M
2009-01-01
A comparative study of acoustic correlates of emotional intonation was conducted on two types of speech material: meaningful speech utterances and short meaningless words. The corpus of speech signals of different emotional intonations (happy, angry, frightened, sad and neutral) was created using the actor's method of simulating emotions. Native Russian 20-70-year-old speakers (both professional actors and non-actors) participated in the study. In the corpus, the following characteristics were analyzed: mean values and standard deviations of the power, fundamental frequency, frequencies of the first and second formants, and utterance duration. Comparison of each emotional intonation with "neutral" utterances showed the greatest deviations in the fundamental frequency and in the frequency of the first formant. The direction of these deviations was independent of the semantic content and duration of the utterance and of the speaker's age, gender, and acting experience, though the personal features of the speakers affected the absolute values of these frequencies.
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Laser frequency stabilization using a commercial wavelength meter
NASA Astrophysics Data System (ADS)
Couturier, Luc; Nosske, Ingo; Hu, Fachao; Tan, Canzhu; Qiao, Chang; Jiang, Y. H.; Chen, Peng; Weidemüller, Matthias
2018-04-01
We present the characterization of a laser frequency stabilization scheme using a state-of-the-art wavelength meter based on solid Fizeau interferometers. For a frequency-doubled Ti-sapphire laser operated at 461 nm, an absolute Allan deviation below 10⁻⁹ with a standard deviation of 1 MHz over 10 h is achieved. Using this laser for cooling and trapping of strontium atoms, the wavemeter scheme provides excellent stability in single-channel operation. Multi-channel operation with a multimode fiber switch results in fluctuations of the atomic fluorescence correlated to residual frequency excursions of the laser. The wavemeter-based frequency stabilization scheme can be applied to a wide range of atoms and molecules for laser spectroscopy, cooling, and trapping.
47 CFR 95.637 - Modulation standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... frequency deviation of plus or minus 2.5 kHz, and the audio frequency response must not exceed 3.125 kHz..., must automatically prevent a greater than normal audio level from causing overmodulation. The transmitter also must include audio frequency low pass filtering, unless it complies with the applicable...
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and the Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressively related in terms of both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
NASA Technical Reports Server (NTRS)
Wu, Andy
1995-01-01
Allan deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than the actual measurement, it is still very time consuming to compute the Allan deviation for long sample times with the desired confidence level. Also, noise types such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency-domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed, and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency-domain techniques. For a linear timing system, the power spectral density at the system output can be computed from known system transfer functions and known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan variance at the system output. Sensitivities of the Allan variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-offs and troubleshooting.
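The frequency-domain step described above can be sketched as a numerical integration: given a one-sided power spectral density S_y(f) of fractional frequency at the system output, the Allan variance follows from the standard transfer-function relation. The PSD model and cutoff below are assumed placeholders, not the paper's values.

```python
# Sketch: Allan variance at averaging time tau from a one-sided PSD S_y(f),
# using sigma_y^2(tau) = integral_0^fh S_y(f) * 2*sin^4(pi f tau)/(pi f tau)^2 df.
# The PSD model and cutoff frequency are assumed placeholders.
import numpy as np

def allan_variance_from_psd(S_y, tau, f_high=100.0, n=200000):
    f = np.linspace(1e-6, f_high, n)            # avoid f = 0
    x = np.pi * f * tau
    kernel = 2.0 * np.sin(x) ** 4 / x ** 2
    df = f[1] - f[0]
    return np.sum(S_y(f) * kernel) * df         # simple rectangle-rule integration

# Example: white frequency noise S_y(f) = h0 gives sigma_y^2(tau) ~ h0 / (2*tau).
h0 = 1e-22
for tau in (1.0, 10.0, 100.0):
    approx = allan_variance_from_psd(lambda f: h0 * np.ones_like(f), tau)
    print(tau, approx, h0 / (2 * tau))          # numerical vs. analytic white-FM result
```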
Optical-frequency transfer over a single-span 1840 km fiber link.
Droste, S; Ozimek, F; Udem, Th; Predehl, K; Hänsch, T W; Schnatz, H; Grosche, G; Holzwarth, R
2013-09-13
To compare the increasing number of optical frequency standards, highly stable optical signals have to be transferred over continental distances. We demonstrate optical-frequency transfer over a 1840-km underground optical fiber link using a single-span stabilization. The low inherent noise introduced by the fiber allows us to reach short-term instabilities, expressed as the modified Allan deviation, of 2×10⁻¹⁵ for a gate time τ of 1 s, reaching 4×10⁻¹⁹ in just 100 s. We find no systematic offset between the sent and transferred frequencies within the statistical uncertainty of about 3×10⁻¹⁹. The spectral noise distribution of our fiber link at low Fourier frequencies leads to a τ⁻² slope in the modified Allan deviation, which is also derived theoretically.
Children's Use of the Prosodic Characteristics of Infant-Directed Speech.
ERIC Educational Resources Information Center
Weppelman, Tammy L.; Bostow, Angela; Schiffer, Ryan; Elbert-Perez, Evelyn; Newman, Rochelle S.
2003-01-01
Examined whether young children (4 years of age) show prosodic changes when speaking to infants. Measured children's word duration in infant-directed speech compared to adult-directed speech, examined amplitude variability, and examined both average fundamental frequency and fundamental frequency standard deviation. Results indicate that…
Zi, Fei; Wu, Xuejian; Zhong, Weicheng; Parker, Richard H; Yu, Chenghui; Budker, Simon; Lu, Xuanhui; Müller, Holger
2017-04-01
We present a hybrid laser frequency stabilization method combining modulation transfer spectroscopy (MTS) and frequency modulation spectroscopy (FMS) for the cesium D2 transition. In a typical pump-probe setup, the error signal is a combination of the DC-coupled MTS error signal and the AC-coupled FMS error signal. This combines the long-term stability of the former with the high signal-to-noise ratio of the latter. In addition, we enhance the long-term frequency stability with laser intensity stabilization. By measuring the frequency difference between two independent hybrid spectroscopies, we investigate the short- and long-term stability. We find a long-term stability of 7.8 kHz, characterized by the standard deviation of the beat-frequency drift over the course of 10 h, and a short-term stability of 1.9 kHz, characterized by the Allan deviation of the beat frequency at an integration time of 2 s.
Chapinal, N; de Passillé, A M; Rushen, J; Tucker, C B
2011-02-01
Restless behavior, as measured by the steps taken or weight shifting between legs, may be a useful tool to assess the comfort of dairy cattle. These behaviors increase when cows stand on uncomfortable surfaces or are lame. The objective of this study was to compare 2 measures of restless behavior, stepping behavior and changes in weight distribution, on 2 standing surfaces: concrete and rubber. Twelve cows stood on a weighing platform with 1 scale/hoof for 1h. The platform was covered with either concrete or rubber, presented in a crossover design. Restlessness, as measured by both the frequency of steps and weight shifting (measured as the standard deviation of weight applied over time to the legs), increased over 1h of forced standing on either concrete or rubber. A positive relationship was found between the frequency of steps and the standard deviation of weight over 1h for both treatments and pairs of legs (r ≥ 0.66). No differences existed in the standard deviation of weight applied to the front (27.6 ± 1.6 kg) or rear legs (33.5 ± 1.4 kg) or the frequency of steps (10.2 ± 1.6 and 20.8 ± 3.2 steps/10 min for the front and rear pair, respectively) between rubber and concrete. Measures of restlessness are promising tools for assessing specific types of discomfort, such as those associated with lameness, but additional tools are needed to assess comfort of non-concrete standing surfaces.
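A minimal sketch of the two restlessness measures compared in the study, computed from hypothetical per-leg load-cell recordings: the standard deviation of weight applied over time and its Pearson correlation with step counts. All data below are made up for illustration.

```python
# Sketch with made-up data: per-cow standard deviation of weight on a leg
# pair over a 1-h recording, and its Pearson correlation with step counts.
import numpy as np

rng = np.random.default_rng(1)
n_cows = 12
# hypothetical 1-h recordings at 1 Hz of weight (kg) on the rear leg pair
recordings = [60 + 10 * rng.uniform(0.5, 2.0) * rng.standard_normal(3600)
              for _ in range(n_cows)]

weight_sd = np.array([w.std(ddof=1) for w in recordings])   # per-cow SD of weight (kg)
steps = rng.poisson(3 * weight_sd)                          # hypothetical step counts, loosely tied to restlessness
r = np.corrcoef(weight_sd, steps)[0, 1]                     # Pearson correlation across cows

print("per-cow SD of weight (kg):", np.round(weight_sd, 1))
print("correlation with step count: %.2f" % r)
```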
Long-term comparisons between two-way satellite and geodetic time transfer systems.
Plumb, John F; Larson, Kristine M
2005-11-01
Global Positioning System (GPS) observations recorded in the United States and Europe were used to evaluate time transfer capabilities of GETT (geodetic time transfer). Timing estimates were compared with two-way satellite time and frequency transfer (TWSTFT) systems. A comparison of calibrated links at the U.S. Naval Observatory, Washington, D.C., and Colorado Springs, CO, yielded agreement of 2.17 ns over 6 months with a standard deviation of 0.73 ns. An uncalibrated link between the National Institute of Standards and Technology (NIST) and Physikalisch-Technische Bundesanstalt, Braunschweig, Germany, has a standard deviation of 0.79 ns over the same time period.
Windowed and Wavelet Analysis of Marine Stratocumulus Cloud Inhomogeneity
NASA Technical Reports Server (NTRS)
Gollmer, Steven M.; Harshvardhan; Cahalan, Robert F.; Snider, Jack B.
1995-01-01
To improve radiative transfer calculations for inhomogeneous clouds, a consistent means of modeling inhomogeneity is needed. One current method of modeling cloud inhomogeneity is through the use of fractal parameters. This method is based on the supposition that cloud inhomogeneity over a large range of scales is related. An analysis technique named wavelet analysis provides a means of studying the multiscale nature of cloud inhomogeneity. In this paper, the authors discuss the analysis and modeling of cloud inhomogeneity through the use of wavelet analysis. Wavelet analysis as well as other windowed analysis techniques are used to study liquid water path (LWP) measurements obtained during the marine stratocumulus phase of the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment. Statistics obtained using analysis windows, which are translated to span the LWP dataset, are used to study the local (small scale) properties of the cloud field as well as their time dependence. The LWP data are transformed onto an orthogonal wavelet basis that represents the data as a number of time series. Each of these time series lies within a frequency band and has a mean frequency that is half the frequency of the previous band. Wavelet analysis combined with translated analysis windows reveals that the local standard deviation of each frequency band is correlated with the local standard deviation of the other frequency bands. The ratio between the standard deviation of adjacent frequency bands is 0.9 and remains constant with respect to time. This ratio, defined as the variance coupling parameter, is applicable to all of the frequency bands studied and appears to be related to the slope of the data's power spectrum. Similar analyses are performed on two cloud inhomogeneity models, which use fractal-based concepts to introduce inhomogeneity into a uniform cloud field. The bounded cascade model does this by iteratively redistributing LWP at each scale using the value of the local mean. This model is reformulated into a wavelet multiresolution framework, thereby presenting a number of variants of the bounded cascade model. One variant introduced in this paper is the 'variance coupled model,' which redistributes LWP using the local standard deviation and the variance coupling parameter. While the bounded cascade model provides an elegant two-parameter model for generating cloud inhomogeneity, the multiresolution framework provides more flexibility at the expense of model complexity. Comparisons are made with the results from the LWP data analysis to demonstrate both the strengths and weaknesses of these models.
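The variance coupling parameter described above can be illustrated with a minimal Haar multiresolution sketch: decompose a series into octave frequency bands, take the standard deviation of each band's detail coefficients, and form ratios between adjacent bands. The synthetic series below is a stand-in, and this plain Haar pyramid is not the authors' analysis code.

```python
# Minimal sketch (synthetic data, plain Haar pyramid): band-by-band standard
# deviations and adjacent-band ratios, i.e. a "variance coupling parameter".
import numpy as np

def haar_detail_bands(x, levels):
    """Return detail-coefficient arrays for `levels` octave bands (finest first)."""
    x = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # detail (high-pass) coefficients
        x = (even + odd) / np.sqrt(2.0)               # approximation passed to the next level
    return details

rng = np.random.default_rng(0)
lwp = np.cumsum(rng.standard_normal(4096))            # synthetic stand-in for an LWP series
bands = haar_detail_bands(lwp, levels=8)
sds = np.array([d.std(ddof=1) for d in bands])
print("band SDs (finest to coarsest):", np.round(sds, 2))
print("adjacent-band SD ratios:", np.round(sds[:-1] / sds[1:], 2))
```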
NASA Astrophysics Data System (ADS)
Xiong, Bing; Wang, Zhen-Guo; Fan, Xiao-Qiang; Wang, Yi
2017-04-01
To study the characteristics of flow separation and self-excited oscillation of a shock train in a rectangular duct, a simple test case has been conducted and analyzed. The high-speed Schlieren technique and high-frequency pressure measurements were adopted to collect the data. The experimental results show that there are two separation modes in the duct under the M3 incoming flow condition. The separation mode switch strongly affects the flow features, such as the pressure distribution and the distribution of the pressure standard deviation. The separation mode switch can be judged from the history of the pressure standard deviation. Regarding the self-excited oscillation of the shock train, the frequency contents in the undisturbed region, the intermittent region, and the separation bubble were compared. It was found that the low-frequency disturbance induced by the upstream shock-foot motions can travel downstream and is amplified by the separation bubble. The oscillations of the small shock foot and of the large shock foot are associated with each other rather than occurring independently.
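As a rough illustration of judging a mode switch from the pressure standard-deviation history (synthetic trace, assumed window length and threshold, not the experiment's processing), a moving-window standard deviation can be computed as below.

```python
# Rough illustration (synthetic pressure trace, assumed window and threshold):
# a moving-window standard-deviation history of wall pressure, of the kind
# used above to judge the separation-mode switch.
import numpy as np

def rolling_std(p, window):
    p = np.asarray(p, dtype=float)
    out = np.full(p.size, np.nan)
    for i in range(window, p.size + 1):
        out[i - 1] = p[i - window:i].std(ddof=1)
    return out

rng = np.random.default_rng(2)
# synthetic trace: low-amplitude fluctuations, then a "mode switch" to larger ones
p = np.concatenate([1.0 + 0.02 * rng.standard_normal(5000),
                    1.2 + 0.10 * rng.standard_normal(5000)])
sigma = rolling_std(p, window=500)
switch_index = int(np.nanargmax(sigma > 0.05))   # first window exceeding the assumed threshold
print("approximate switch detected near sample", switch_index)
```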
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California's desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
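A simplified sketch of how LP3 quantiles follow from the three parameters, using the Pearson Type III frequency factor: Q_p = 10^(mean + K_p·SD) in log10 space. The regional skew (0.0) and standard deviation (0.91 log units) are the values quoted above; the log-space mean is an assumed placeholder, and the EMA fitting and multiple Grubbs-Beck censoring used in the report are not reproduced here.

```python
# Simplified sketch: LP3 peak-flow quantiles from a log10 mean, standard
# deviation, and skew, via Q_p = 10**(mu + K_p * sigma). The skew (0.0) and
# sigma (0.91) come from the abstract; mu is an assumed placeholder.
import numpy as np
from scipy.stats import norm, pearson3

mu_log = 2.0          # assumed log10 mean of annual peaks (placeholder)
sigma_log = 0.91      # regional standard deviation (log10 units)
skew = 0.0            # regional skew

aep = np.array([0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, 0.002])  # annual exceedance probabilities
if skew == 0.0:
    K = norm.ppf(1.0 - aep)              # zero skew: LP3 reduces to the lognormal case
else:
    K = pearson3.ppf(1.0 - aep, skew)    # standardized Pearson III deviate (frequency factor)
Q = 10.0 ** (mu_log + K * sigma_log)
for p, q in zip(aep, Q):
    print("%5.1f%% AEP: %12.1f (flow units)" % (100 * p, q))
```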
Murasawa, Kengo; Sato, Koki; Hidaka, Takehiko
2011-05-01
A new method for measuring optical-beat frequencies in the terahertz (THz) region using microwave higher harmonics is presented. A microwave signal was applied to the antenna gap of a photoconductive (PC) device emitting a continuous electromagnetic wave at about 1 THz by the photomixing technique. The microwave higher harmonics with THz frequencies are generated in the PC device owing to the nonlinearity of the biased photoconductance, which is briefly described in this article. Thirteen nearly periodic peaks in the photocurrent were observed when the microwave was swept from 16 to 20 GHz at a power of -48 dBm. The nearly periodic peaks are generated by the homodyne detection of the optical beat with the microwave higher harmonics when the frequency of the harmonics coincides with the optical-beat frequency. Each peak frequency and its peak width were determined by fitting a Gaussian function, and the order of microwave harmonics was determined using a coarse (i.e., lower resolution) measurement of the optical-beat frequency. By applying the Kalman algorithm to the peak frequencies of the higher harmonics and their standard deviations, the optical-beat frequency near 1 THz was estimated to be 1029.81 GHz with the standard deviation of 0.82 GHz. The proposed method is applicable to a conventional THz-wave generator with a photomixer.
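For a constant unknown beat frequency, the Kalman step mentioned above reduces to a recursive inverse-variance weighting of the harmonic-referenced estimates. A minimal scalar sketch with made-up peak frequencies and standard deviations follows.

```python
# Minimal scalar Kalman sketch (made-up measurement values): recursively
# combine several estimates of a constant optical-beat frequency, each with
# its own standard deviation.
import numpy as np

measurements = [(1029.2, 1.5), (1030.4, 1.0), (1029.9, 0.9), (1030.1, 1.2)]  # (GHz, GHz)

x, P = measurements[0][0], measurements[0][1] ** 2      # initial state and variance
for z, sd in measurements[1:]:
    R = sd ** 2
    K = P / (P + R)              # Kalman gain for a static state (no process noise)
    x = x + K * (z - x)          # update the estimate with the new measurement
    P = (1.0 - K) * P            # update the estimate variance
print("combined estimate: %.2f GHz, standard deviation: %.2f GHz" % (x, np.sqrt(P)))
```

With no process noise this recursion is equivalent to an inverse-variance weighted mean of all the peak frequencies.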
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-10-03
This report is a six-part statistical summary of surface weather observations for Torrejon AB, Madrid, Spain. It contains the following parts: (A) Weather Conditions; Atmospheric Phenomena; (B) Precipitation, Snowfall and Snow Depth (daily amounts and extreme values); (C) Surface Winds; (D) Ceiling Versus Visibility; Sky Cover; (E) Psychrometric Summaries (daily maximum and minimum temperatures, extreme maximum and minimum temperatures, psychrometric summary of wet-bulb temperature depression versus dry-bulb temperature, means and standard deviations of dry-bulb, wet-bulb and dew-point temperatures and relative humidity); and (F) Pressure Summary (means, standard deviations, and observation counts of station pressure and sea-level pressure). Data in this report are presented in tabular form, in most cases in percentage frequency of occurrence or cumulative percentage frequency of occurrence tables.
Zarbo, Richard J; Copeland, Jacqueline R; Varney, Ruan C
2017-10-01
To develop a business subsystem fulfilling the International Organization for Standardization (ISO) 15189 nonconformance management regulatory standard, facilitating employee engagement in problem identification and resolution to effect quality improvement and risk mitigation. From 2012 to 2016, the integrated laboratories of the Henry Ford Health System used a quality technical team to develop and improve a management subsystem designed to identify, track, trend, and summarize nonconformances based on frequency, risk, and root cause for elimination at the level of the work. Programmatic improvements and training resulted in markedly increased documentation, culminating in 71,641 deviations in 2016 classified by a taxonomy of 281 defect types into preanalytic (74.8%), analytic (23.6%), and postanalytic (1.6%) testing phases. The top 10 deviations accounted for 55,843 (78%) of the total. Deviation management is a key subsystem of managers' standard work whereby knowledge of nonconformities assists in directing corrective actions and continuous improvements that promote consistent execution and higher levels of performance.
Acoustic response variability in automotive vehicles
NASA Astrophysics Data System (ADS)
Hills, E.; Mace, B. R.; Ferguson, N. S.
2009-03-01
A statistical analysis of a series of measurements of the audio-frequency response of a large set of automotive vehicles is presented: a small hatchback model with both a three-door (411 vehicles) and five-door (403 vehicles) derivative and a mid-sized family five-door car (316 vehicles). The sets included vehicles of various specifications, engines, gearboxes, interior trim, wheels and tyres. The tests were performed in a hemianechoic chamber with the temperature and humidity recorded. Two tests were performed on each vehicle and the interior cabin noise measured. In the first, the excitation was acoustically induced by sets of external loudspeakers. In the second test, predominantly structure-borne noise was induced by running the vehicle at a steady speed on a rough roller. For both types of excitation, it is seen that the effects of temperature are small, indicating that manufacturing variability is larger than that due to temperature for the tests conducted. It is also observed that there are no significant outlying vehicles, i.e. there are at most only a few vehicles that consistently have the lowest or highest noise levels over the whole spectrum. For the acoustically excited tests, measured 1/3-octave noise reduction levels typically have a spread of 5 dB or so and the normalised standard deviation of the linear data is typically 0.1 or higher. Regarding the statistical distribution of the linear data, a lognormal distribution is a somewhat better fit than a Gaussian distribution for lower 1/3-octave bands, while the reverse is true at higher frequencies. For the distribution of the overall linear levels, a Gaussian distribution is generally the most representative. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the acoustically induced airborne cabin noise is best described by a Gaussian distribution with a normalised standard deviation between 0.09 and 0.145. There is generally considerable variability in the roller-induced noise, with individual 1/3-octave levels varying by typically 15 dB or so and with the normalised standard deviation being in the range 0.2-0.35 or more. These levels are strongly affected by wheel rim and tyre constructions. For vehicles with nominally identical wheel rims and tyres, the normalised standard deviation for 1/3-octave levels in the frequency range 40-600 Hz is 0.2 or so. The distribution of the linear roller-induced noise level in each 1/3-octave frequency band is well described by a lognormal distribution as is the overall level. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the roller-induced road noise is best described by a lognormal distribution with a normalised standard deviation of 0.2 or so, but that this can be significantly affected by the tyre and rim type, especially at lower frequencies.
Investigation of a L1-optimized choke ring ground plane for a low-cost GPS receiver-system
NASA Astrophysics Data System (ADS)
Zhang, Li; Schwieger, Volker
2018-01-01
Besides geodetic dual-frequency GNSS receiver-systems (receiver and antenna), there are also low-cost single-frequency GPS receiver-systems. The multipath effect is a limiting factor of accuracy for both geodetic dual-frequency and low-cost single-frequency GPS receivers, and for short baselines (typical for monitoring in engineering geodesy) it is the dominating error. The accuracy and reliability of GPS measurements for monitoring can therefore be improved by reducing the multipath signal. In this paper, a self-constructed L1-optimized choke ring ground plane (CR-GP) is applied to reduce the multipath signal. Its design is described and its performance is investigated. The results show that the introduced low-cost single-frequency GPS receiver-system, which consists of the Ublox LEA-6T single-frequency GPS receiver and a Trimble Bullet III antenna with the self-constructed L1-optimized CR-GP, can reach standard deviations of 3 mm in east, 5 mm in north and 9 mm in height in a test field with many reflectors. This accuracy is comparable with that of a geodetic dual-frequency GNSS receiver-system. The improvement in the standard deviation of the measurements using the CR-GP is about 50 % compared to the antenna without shielding and about 35 % compared to a flat ground plane.
Berenbrock, Charles
2003-01-01
Improved flood-frequency estimates for short-term (10 or fewer years of record) streamflow-gaging stations were needed to support instream flow studies by the U.S. Forest Service, which are focused on quantifying water rights necessary to maintain or restore productive fish habitat. Because peak-flow data for short-term gaging stations can be biased by having been collected during an unusually wet, dry, or otherwise unrepresentative period of record, the data may not represent the full range of potential floods at a site. To test whether peak-flow estimates for short-term gaging stations could be improved, the two-station comparison method was used to adjust the logarithmic mean and logarithmic standard deviation of peak flows for seven short-term gaging stations in the Salmon and Clearwater River Basins, central Idaho. Correlation coefficients determined from regression of peak flows for paired short-term and long-term (more than 10 years of record) gaging stations over a concurrent period of record indicated that the mean and standard deviation of peak flows for all short-term gaging stations would be improved. Flood-frequency estimates for seven short-term gaging stations were determined using the adjusted mean and standard deviation. The original (unadjusted) flood-frequency estimates for three of the seven short-term gaging stations differed from the adjusted estimates by less than 10 percent, probably because the data were collected during periods representing the full range of peak flows. Unadjusted flood-frequency estimates for four short-term gaging stations differed from the adjusted estimates by more than 10 percent; unadjusted estimates for Little Slate Creek and Salmon River near Obsidian differed from adjusted estimates by nearly 30 percent. These large differences probably are attributable to unrepresentative periods of peak-flow data collection.
NASA Astrophysics Data System (ADS)
Monroe, Roberta Lynn
The intrinsic fundamental frequency effect among vowels is a vocalic phenomenon of adult speech in which high vowels have higher fundamental frequencies in relation to low vowels. Acoustic investigations of children's speech have shown that variability of the speech signal decreases as children's ages increase. Fundamental frequency measures have been suggested as an indirect metric for the development of laryngeal stability and coordination. Studies of the intrinsic fundamental frequency effect have been conducted among 8- and 9-year-old children and in infants. The present study investigated this effect among 2- and 4-year-old children. Eight 2-year-old and eight 4-year-old children produced four vowels, /ae/, /i/, /u/, and /a/, in CVC syllables. Three measures of fundamental frequency were taken. These were mean fundamental frequency, the intra-utterance standard deviation of the fundamental frequency, and the extent to which the cycle-to-cycle pattern of the fundamental frequency was predicted by a linear trend. An analysis of variance was performed to compare the two age groups, the four vowels, and the earlier and later repetitions of the CVC syllables. A significant difference between the two age groups was detected using the intra-utterance standard deviation of the fundamental frequency. Mean fundamental frequencies and linear trend analysis showed that voicing of the preceding consonant determined the statistical significance of the age-group comparisons. Statistically significant differences among the fundamental frequencies of the four vowels were not detected for either age group.
File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering
2013-03-21
...consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then...the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change
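A small sketch of the rate-of-change metric described in the excerpt: absolute differences between consecutive byte values of a file fragment, summarized by their frequency distribution, mean, and standard deviation. The file name and fragment size are placeholders.

```python
# Sketch of the byte-value rate-of-change metric described above: absolute
# differences between consecutive bytes, with their frequency distribution,
# mean, and standard deviation. The file path is a placeholder.
import numpy as np

def rate_of_change_stats(fragment: bytes):
    b = np.frombuffer(fragment, dtype=np.uint8).astype(int)
    roc = np.abs(np.diff(b))                   # distance between consecutive byte values
    freq = np.bincount(roc, minlength=256)     # rate-of-change frequency distribution
    return freq, roc.mean(), roc.std(ddof=1)

with open("firmware.bin", "rb") as fh:         # placeholder file name
    freq, mean, sd = rate_of_change_stats(fh.read(4096))
print("mean rate of change: %.2f, standard deviation: %.2f" % (mean, sd))
```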
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
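Following the abstract's own description (overall frequency offset removed, series treated as wrapped/periodic), a rough illustration of the idea is sketched below. It is not the authors' exact TOTALVAR definition, just an Allan-type variance evaluated on a detrended, periodically extended phase series.

```python
# Rough illustration only (not the exact TOTALVAR definition): remove the
# overall frequency offset (linear trend) from phase data, treat the residual
# as periodic, and evaluate an Allan-type deviation on the wrapped series.
import numpy as np

def wrapped_allan_deviation(x, tau0, m):
    """x: phase residuals (s) after removing the linear trend;
    tau0: sample interval (s); m: averaging factor."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tau = m * tau0
    i = np.arange(n)
    # second differences with indices wrapped modulo n (periodic extension)
    d2 = x[(i + 2 * m) % n] - 2.0 * x[(i + m) % n] + x[i]
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * tau ** 2))

rng = np.random.default_rng(0)
x = np.cumsum(1e-11 * rng.standard_normal(1024))        # synthetic phase data (s)
t = np.arange(x.size)
x = x - np.polyval(np.polyfit(t, x, 1), t)              # remove the overall frequency offset
for m in (1, 4, 16, 64):
    print(m, wrapped_allan_deviation(x, tau0=1.0, m=m))
```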
NASA Astrophysics Data System (ADS)
Tran, Duong Duy
The statistics of broadband acoustic signal transmissions in a random continental shelf waveguide are characterized for the fully saturated regime. The probability distribution of broadband signal energies after saturated multi-path propagation is derived using coherence theory. The frequency components obtained from Fourier decomposition of a broadband signal are each assumed to be fully saturated, where the energy spectral density obeys the exponential distribution with 5.6 dB standard deviation and unity scintillation index. When the signal bandwidth and measurement time are respectively larger than the correlation bandwidth and correlation time of its energy spectral density components, the broadband signal energy obtained by integrating the energy spectral density across the signal bandwidth then follows the Gamma distribution with standard deviation smaller than 5.6 dB and scintillation index less than unity. The theory is verified with broadband transmissions in the Gulf of Maine shallow water waveguide in the 300-1200 Hz frequency range. The standard deviations of received broadband signal energies range from 2.7 to 4.6 dB for effective bandwidths up to 42 Hz, while the standard deviations of individual energy spectral density components are roughly 5.6 dB. The energy spectral density correlation bandwidths of the received broadband signals are found to be larger for signals with higher center frequency. Sperm whales in the New England continental shelf and slope were passively localized, in both range and bearing using a single low-frequency (< 2500 Hz), densely sampled, towed horizontal coherent hydrophone array system. Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements. The dive profile was estimated for a sperm whale in the shallow waters of the Gulf of Maine with 160 m water-column depth, located close to the array's near-field where depth estimation was feasible by employing time difference of arrival of the direct and multiply reflected click signals received on the array. The dependence of broadband energy on bandwidth and measurement time was verified employing recorded sperm whale clicks in the Gulf of Maine.
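A small numerical check of the statistical claim above, with assumed numbers of independent spectral components: each component energy is exponentially distributed (scintillation index 1, standard deviation near 5.6 dB), and integrating n independent components gives a Gamma-distributed broadband energy with scintillation index 1/n and a smaller dB spread.

```python
# Numerical check (assumed component counts): exponential spectral components
# (about 5.6 dB SD, SI = 1); summing n independent components gives a
# Gamma-distributed energy with reduced dB spread and SI = 1/n.
import numpy as np

rng = np.random.default_rng(0)
samples = 200000
for n in (1, 4, 16):                              # assumed numbers of independent components
    e = rng.exponential(1.0, size=(samples, n)).sum(axis=1)
    si = e.var() / e.mean() ** 2                  # scintillation index
    sd_db = (10 * np.log10(e)).std(ddof=1)        # spread of the energy level in dB
    print("n=%2d  SI=%.3f  SD=%.2f dB" % (n, si, sd_db))
```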
Zarkovic, Andrea; Mora, Justin; McKelvie, James; Gamble, Greg
2007-12-01
The aim of the study was to establish the correlation between visual field loss as shown by second-generation Frequency Doubling Technology (Humphrey Matrix) and Standard Automated Perimetry (Humphrey Field Analyser) in patients with glaucoma. Test duration and reliability were also compared. Forty right eyes from glaucoma patients from a private ophthalmology practice were included in this prospective study. All participants had tests within an 8-month period. Pattern deviation plots and mean deviation were compared to establish the correlation between the two perimetry tests. Overall correlation and correlation between hemifields, quadrants and individual test locations were assessed. Humphrey Field Analyser tests were slightly more reliable (37/40 vs. 34/40 for Matrix) but overall took longer. There was good correlation (0.69) between mean deviations. Superior hemifields and superonasal quadrants had the highest correlation (0.88 [95% CI 0.79, 0.94]). Correlation between individual points was independent of distance from the macula. Generally, the Matrix and Humphrey Field Analyser perimetry correlate well; however, each machine utilizes a different method of analysing data and thus the direct comparison should be made with caution.
Doozandeh, Azadeh; Irandoost, Farnoosh; Mirzajani, Ali; Yazdani, Shahin; Pakravan, Mohammad; Esfandiari, Hamed
2017-01-01
This study aimed to compare second-generation frequency-doubling technology (FDT) perimetry with standard automated perimetry (SAP) in mild glaucoma. Forty-seven eyes of 47 participants who had mild visual field defect by SAP were included in this study. All participants were examined using SITA 24-2 (SITA-SAP) and matrix 24-2 (Matrix-FDT). The correlations of global indices and the number of defects on pattern deviation (PD) plots were determined. Agreement between two sets regarding the stage of visual field damage was assessed. Pearson's correlation, intra-cluster comparison, paired t-test, and 95% limit of agreement were calculated. Although there was no significant difference between global indices, the agreement between the two devices regarding the global indices was weak (the limit of agreement for mean deviation was -6.08 to 6.08 and that for pattern standard deviation was -4.42 to 3.42). The agreement between SITA-SAP and Matrix-FDT regarding the Glaucoma Hemifield Test (GHT) and the number of defective points in each quadrant and staging of the visual field damage was also weak. Because the correlation between SITA-SAP and Matrix-FDT regarding global indices, GHT, number of defective points, and stage of the visual field damage in mild glaucoma is weak, Matrix-FDT cannot be used interchangeably with SITA-SAP in the early stages of glaucoma.
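The 95% limits of agreement quoted here are the standard Bland-Altman quantities (mean difference plus or minus 1.96 times the standard deviation of the paired differences). A minimal sketch of that calculation, with hypothetical paired mean-deviation values standing in for the SITA-SAP and Matrix-FDT measurements:

    import numpy as np

    def limits_of_agreement(a, b):
        """Bland-Altman 95% limits of agreement for paired measurements a and b."""
        d = np.asarray(a, float) - np.asarray(b, float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    # Hypothetical mean-deviation values (dB) for the same eyes on the two devices.
    sap_md    = [-1.2, -2.5, -0.8, -3.1, -1.9, -2.2, -0.5, -1.4]
    matrix_md = [-0.9, -3.8, -1.6, -2.0, -3.4, -1.1, -1.8, -0.2]

    bias, (low, high) = limits_of_agreement(sap_md, matrix_md)
    print("bias = %.2f dB, 95%% limits of agreement = (%.2f, %.2f) dB" % (bias, low, high))
    # A wide interval (as in the study, roughly -6 to +6 dB for MD) indicates the two
    # tests cannot be used interchangeably even when the mean difference is small.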
Frequency of Bolton tooth-size discrepancies among orthodontic patients.
Freeman, J E; Maskeroni, A J; Lorton, L
1996-07-01
The purpose of this study was to determine the percentage of orthodontic patients who present with an interarch tooth-size discrepancy likely to affect treatment planning or results. The Bolton tooth-size discrepancies of 157 patients accepted for treatment in an orthodontic residency program were evaluated for the frequency and the magnitude of deviation from Bolton's mean. Discrepancies outside of 2 SD were considered as potentially significant with regard to treatment planning and treatment results. Although the mean of the sample was nearly identical to that of Bolton's, the range and standard deviation varied considerably with a large percentage of the orthodontic patients having discrepancies outside of Bolton's 2 SD. With such a high frequency of significant discrepancies it would seem prudent to routinely perform a tooth-size analysis and incorporate the findings into orthodontic treatment planning.
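For readers unfamiliar with the analysis, the Bolton ratios are simple sums of mesiodistal tooth widths, and the 2 SD screening criterion used in this study is applied to those ratios. The sketch below uses the commonly cited Bolton means and standard deviations (overall ratio 91.3 ± 1.91%, anterior ratio 77.2 ± 1.65%); treat those constants, and the example tooth widths, as illustrative assumptions rather than values taken from this paper.

    def bolton_ratios(mandibular_widths_mm, maxillary_widths_mm):
        """Overall (12-tooth) and anterior (6-tooth) Bolton ratios, in percent.

        The first six entries of each list are the anterior (canine-to-canine) teeth.
        """
        overall = 100.0 * sum(mandibular_widths_mm) / sum(maxillary_widths_mm)
        anterior = 100.0 * sum(mandibular_widths_mm[:6]) / sum(maxillary_widths_mm[:6])
        return overall, anterior

    def flag_discrepancy(ratio, mean, sd, n_sd=2.0):
        """Flag ratios outside mean +/- n_sd * sd (the criterion used in the study)."""
        return abs(ratio - mean) > n_sd * sd

    # Hypothetical measurements (mm), anterior teeth first in each list.
    mandibular = [6.9, 5.9, 5.4, 5.4, 5.9, 6.9, 7.1, 7.2, 10.8, 10.9, 7.3, 7.2]
    maxillary  = [7.9, 6.6, 8.6, 8.6, 6.6, 7.9, 7.1, 7.0, 10.4, 10.5, 7.0, 7.1]

    overall, anterior = bolton_ratios(mandibular, maxillary)
    print("overall  %.1f%% (flagged: %s)" % (overall, flag_discrepancy(overall, 91.3, 1.91)))
    print("anterior %.1f%% (flagged: %s)" % (anterior, flag_discrepancy(anterior, 77.2, 1.65)))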
Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-07-01
To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitudes of visual field defects apparent in total and pattern deviation probability maps were compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.
An estimator for the standard deviation of a natural frequency. II.
NASA Technical Reports Server (NTRS)
Schiff, A. J.; Bogdanoff, J. L.
1971-01-01
A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.
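The paper's estimator propagates member (parameter) variability through the natural frequencies and mode shapes of the mean system; the Monte Carlo check it is validated against can be sketched generically. The example below estimates the standard deviation of the natural frequencies of a simple two-degree-of-freedom spring-mass chain whose stiffnesses have a prescribed coefficient of variation. It illustrates the idea only and is not the second-order estimator of the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def natural_frequencies(k1, k2, m1=1.0, m2=1.0):
        """Natural frequencies (rad/s) of a fixed-free 2-DOF spring-mass chain."""
        K = np.array([[k1 + k2, -k2],
                      [-k2,      k2]])
        M = np.diag([m1, m2])
        eigvals = np.linalg.eigvals(np.linalg.solve(M, K))   # eigenvalues are omega^2
        return np.sqrt(np.sort(eigvals.real))

    # Mean system and member variability (10% coefficient of variation on each stiffness).
    k_mean, cov = 1000.0, 0.10
    samples = np.array([
        natural_frequencies(rng.normal(k_mean, cov * k_mean),
                            rng.normal(k_mean, cov * k_mean))
        for _ in range(20_000)
    ])

    for i, (mu, sd) in enumerate(zip(samples.mean(axis=0), samples.std(axis=0)), start=1):
        print("mode %d: mean %.2f rad/s, standard deviation %.2f rad/s" % (i, mu, sd))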
NASA Technical Reports Server (NTRS)
Moore, H. J.; Wu, S. C.
1973-01-01
The effect of reading error on two hypothetical slope frequency distributions and on two slope frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
Quantifying the heterogeneity of the tectonic stress field using borehole data
Schoenball, Martin; Davatzes, Nicholas C.
2017-01-01
The heterogeneity of the tectonic stress field is a fundamental property which influences earthquake dynamics and subsurface engineering. Self-similar scaling of stress heterogeneities is frequently assumed to explain characteristics of earthquakes such as the magnitude-frequency relation. However, observational evidence for such scaling of the stress field heterogeneity is scarce. We analyze the local stress orientations using image logs of two closely spaced boreholes in the Coso Geothermal Field with sub-vertical and deviated trajectories, respectively, each spanning about 2 km in depth. Both the mean and the standard deviation of stress orientation indicators (borehole breakouts, drilling-induced fractures and petal-centerline fractures) determined from each borehole agree to the limit of the resolution of our method, although measurements at specific depths may not. We find that the standard deviation in these boreholes strongly depends on the interval length analyzed, generally increasing up to a wellbore log length of about 600 m and remaining constant for longer intervals. We find the same behavior in global data from the World Stress Map. This suggests that the standard deviation of stress indicators characterizes the heterogeneity of the tectonic stress field rather than the quality of the stress measurement. A large standard deviation of a stress measurement might be an expression of strong crustal heterogeneity rather than of an unreliable stress determination. Robust characterization of stress heterogeneity requires logs that sample stress indicators along a representative sample volume of at least 1 km.
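Stress orientation indicators are axial data (defined modulo 180 degrees), so the interval-length dependence described here is naturally computed with circular statistics on the doubled angles. The sketch below shows one plausible way to compute the standard deviation of breakout azimuths as a function of depth-interval length; the windowing scheme and synthetic data are assumptions for illustration, not the authors' exact procedure.

    import numpy as np

    rng = np.random.default_rng(2)

    def axial_circular_std_deg(azimuths_deg):
        """Circular standard deviation (degrees) for axial data with a 180-degree period."""
        theta = np.deg2rad(2.0 * np.asarray(azimuths_deg))       # double the angles
        R = np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))
        return np.rad2deg(np.sqrt(-2.0 * np.log(R))) / 2.0        # undo the doubling

    # Synthetic breakout azimuths every 10 m over 2 km: slow depth drift plus scatter.
    depth = np.arange(0.0, 2000.0, 10.0)
    azimuth = (30.0 + 10.0 * np.sin(2 * np.pi * depth / 1500.0)
               + rng.normal(0.0, 8.0, size=depth.size)) % 180.0

    for window_m in (100, 300, 600, 1200, 2000):
        n = int(window_m / 10)
        stds = [axial_circular_std_deg(azimuth[i:i + n])
                for i in range(0, depth.size - n + 1, n)]
        print("interval %5d m: mean standard deviation %.1f deg" % (window_m, np.mean(stds)))
    # Longer intervals sample more of the slow drift, so the standard deviation grows
    # with interval length before levelling off, as described for the Coso boreholes.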
The Rydberg constant and proton size from atomic hydrogen
NASA Astrophysics Data System (ADS)
Beyer, Axel; Maisenbacher, Lothar; Matveev, Arthur; Pohl, Randolf; Khabarova, Ksenia; Grinin, Alexey; Lamour, Tobias; Yost, Dylan C.; Hänsch, Theodor W.; Kolachevsky, Nikolai; Udem, Thomas
2017-10-01
At the core of the “proton radius puzzle” is a four-standard-deviation discrepancy between the proton root-mean-square charge radii (rp) determined from the regular hydrogen (H) and the muonic hydrogen (µp) atoms. Using a cryogenic beam of H atoms, we measured the 2S-4P transition frequency in H, yielding the values of the Rydberg constant R∞ = 10973731.568076(96) per meter and rp = 0.8335(95) femtometer. Our rp value is 3.3 combined standard deviations smaller than the previous H world data, but in good agreement with the µp value. We motivate an asymmetric fit function, which eliminates line shifts from quantum interference of neighboring atomic resonances.
Eigbefoh, J O; Okpere, E E; Ande, B; Asonye, C
2005-02-01
Vitamin A deficiency, subclinical or overt, is associated with adverse maternal, fetal and neonatal outcomes. This is also true for an excess of vitamin A. The challenge in pregnancy is to detect subclinical vitamin A deficiency in patients for whom supplements or dietary manipulation will be of benefit. This was a cross-sectional case-controlled study at the University of Benin Teaching Hospital to compare the Helen Keller Food Frequency Chart with biochemical methods in the determination of vitamin A status in pregnancy. Data were collected from 142 antenatal patients. Using serum biochemistry, three categories of patients were recognized: patients with normal vitamin A levels (N=100, women with blood vitamin A within two standard deviations of the mean); women with low vitamin A levels (N=24, patients with blood vitamin A levels more than two standard deviations below the mean); and patients with high vitamin A levels (N=18, patients with blood vitamin A levels more than two standard deviations above the mean). All recruited patients had a dietary assessment using the Helen Keller Food Frequency Chart. The Helen Keller Food Frequency Chart (HKFFC) was found to have a high degree of sensitivity (74.5%) and a high specificity (75%) in the detection of patients with vitamin A deficiency. The positive predictive value was 93.62%. The low negative predictive value of 37.5%, however, implies that a positive test is more important than a negative test. The HKFFC was unable to differentiate patients with normal or high vitamin A levels. Dietary assessment with the HKFFC is a cheap, effective method to detect subclinical vitamin A deficiency in pregnancy. It is an easy, cost-effective screening tool to select patients for whom dietary manipulation and/or vitamin A supplementation may be beneficial.
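The screening statistics quoted here follow from a standard 2x2 contingency table against the biochemical reference. A minimal sketch of the calculations, using hypothetical cell counts (the paper reports only the derived percentages, so the counts below are purely illustrative):

    def screening_stats(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 table (reference = biochemistry)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts: tp/fn are deficient patients with positive/negative chart results,
    # fp/tn are non-deficient patients with positive/negative chart results.
    tp, fp, fn, tn = 30, 5, 10, 15   # illustrative only, not the study's raw data

    sens, spec, ppv, npv = screening_stats(tp, fp, fn, tn)
    print("sensitivity %.1f%%, specificity %.1f%%, PPV %.1f%%, NPV %.1f%%"
          % (100 * sens, 100 * spec, 100 * ppv, 100 * npv))
    # A high PPV with a low NPV, as reported in the study, means a positive chart result
    # is informative while a negative result does not rule out deficiency.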
Yang, Chin-Lung; Zheng, Gou-Tsun
2015-11-20
This study proposes wireless low-power thermal sensors for basal-body-temperature detection using frequency-modulated telemetry devices. A long-term monitoring sensor requires low-power circuits, including a sampling circuit and an oscillator. Moreover, temperature-compensated technologies are necessary because the modulated frequency might have additional frequency deviations caused by the varying temperature. The temperature-compensated oscillator is composed of a ring oscillator and a controlled-steering current source with temperature compensation, so the output frequency of the oscillator does not drift with temperature variations. The chip is fabricated in a standard Taiwan Semiconductor Manufacturing Company (TSMC) 0.18-μm complementary metal oxide semiconductor (CMOS) process, and the chip area is 0.9 mm². The power consumption of the sampling amplifier is 128 µW. The power consumption of the voltage controlled oscillator (VCO) core is less than 40 µW, and the output is -3.04 dBm with a buffer stage. The output voltage of the bandgap reference circuit is 1 V. For temperature measurements, the maximum error is 0.18 °C with a standard deviation of ±0.061 °C, which is superior to the required specification of 0.1 °C.
Cunningham, Charles H; Dominguez Viqueira, William; Hurd, Ralph E; Chen, Albert P
2014-02-01
Blip-reversed echo-planar imaging (EPI) is investigated as a method for measuring and correcting the spatial shifts that occur due to bulk frequency offsets in (13)C metabolic imaging in vivo. By reversing the k-space trajectory for every other time point, the direction of the spatial shift for a given frequency is reversed. Here, mutual information is used to find the 'best' alignment between images and thereby measure the frequency offset. Time-resolved 3D images of pyruvate/lactate/urea were acquired with 5 s temporal resolution over a 1 min duration in rats (N = 6). For each rat, a second injection was performed with the demodulation frequency purposely mis-set by +35 Hz, to test the correction for erroneous shifts in the images. Overall, the shift induced by the 35 Hz frequency offset was 5.9 ± 0.6 mm (mean ± standard deviation). This agrees well with the expected 5.7 mm shift based on the 2.02 ms delay between k-space lines (giving 30.9 Hz per pixel). The 0.6 mm standard deviation in the correction corresponds to a frequency-detection accuracy of 4 Hz. A method was presented for ensuring the spatial registration between (13)C metabolic images and conventional anatomical images when long echo-planar readouts are used. The frequency correction method was shown to have an accuracy of 4 Hz. Summing the spatially corrected frames gave a signal-to-noise ratio (SNR) improvement factor of 2 or greater, compared with the highest single frame. Copyright © 2013 John Wiley & Sons, Ltd.
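The quoted figures are mutually consistent; making the relationship explicit (the phase-encode matrix size and pixel size below are inferred from the stated 30.9 Hz per pixel and 5.7 mm shift, so they are approximations, not values given by the authors):

    \Delta x_{\mathrm{pixels}} = \frac{\Delta f}{\mathrm{BW_{pixel}}}
      = \frac{35\ \mathrm{Hz}}{30.9\ \mathrm{Hz/pixel}} \approx 1.13\ \mathrm{pixels}, \qquad
    \mathrm{BW_{pixel}} = \frac{1}{N_{\mathrm{pe}} \times 2.02\ \mathrm{ms}} = 30.9\ \mathrm{Hz}
      \;\Rightarrow\; N_{\mathrm{pe}} \approx 16 .

With the expected 5.7 mm shift this implies a pixel size of roughly 5 mm, and the 0.6 mm standard deviation of the measured shift then corresponds to (0.6/5) x 30.9, or about 4 Hz, the quoted frequency-detection accuracy.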
Slant path L- and S-Band tree shadowing measurements
NASA Technical Reports Server (NTRS)
Vogel, Wolfhard J.; Torrence, Geoffrey W.
1994-01-01
This contribution presents selected results from simultaneous L- and S-Band slant-path fade measurements through a pecan, a cottonwood, and a pine tree employing a tower-mounted transmitter and dual-frequency receiver. A single, circularly-polarized antenna was used at each end of the link. The objective was to provide information for personal communications satellite design on the correlation of tree shadowing between frequencies near 1620 and 2500 MHz. Fades were measured along 10 m lateral distance with 5 cm spacing. Instantaneous fade differences between L- and S-Band exhibited normal distribution with means usually near 0 dB and standard deviations from 5.2 to 7.5 dB. The cottonwood tree was an exception, with 5.4 dB higher average fading at S- than at L-Band. The spatial autocorrelation reduced to near zero with lags of about 10 lambda. The fade slope in dB/MHz is normally distributed with zero mean and standard deviation increasing with fade level.
Sakai, Tsutomu; Matsushima, Masato; Shikishima, Keigo; Kitahara, Kenji
2007-05-01
To examine performance characteristics of frequency-doubling perimetry (FDP) in comparison with standard automated perimetry (SAP) in patients with resolved optic neuritis in a short-term follow-up study. Comparative consecutive case series. Twenty patients with resolved optic neuritis and 20 healthy volunteers participated in this study. The subjects were patients who recovered normal vision (1.0 or better) after optic neuritis. The Swedish interactive thresholding algorithm 30-2 program was used for SAP and a full-threshold 30-2 program was used for FDP. Using both forms of perimetry, the mean deviation (MD), pattern standard deviation (PSD), and the percentage of abnormal points significantly depressed <0.5% in the total deviation probability plot were compared. The visual fields were divided into 5 zones, and the mean sensitivity in each zone in affected eyes was compared with that in healthy eyes of the volunteers within 2 weeks of vision recovery and in follow-up after 2 weeks and 2 and 5 months. Standard automated perimetry and FDP showed general depression in the fovea and extrafoveal areas. Correlations between SAP and FDP were statistically significant for MD (Pearson r>0.75; P<0.001) and PSD (r>0.6; P<0.005). Defects detected with FDP were larger than with SAP in 14 eyes (70 %). In follow-up after 2 weeks and again after 2 and 5 months, FDP indicated slower improvement in visual field defects in the fovea and extrafoveal areas, whereas SAP indicated rapid improvement in these defects. Frequency-doubling perimetry is at least comparable with and potentially more sensitive than SAP in detecting visual field defects in resolved optic neuritis. This short-term follow-up study in patients with resolved optic neuritis suggests that FDP detects characteristics of slower recovery more effectively than SAP in the fovea and extrafoveal areas. These properties may allow more accurate detection of visual field defects and may prove advantageous for monitoring of patients with resolved optic neuritis.
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
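The quoted sensitivities follow from first-order error propagation on the usual AVM relation between line velocity, path length L, path angle θ and the reciprocal travel-time difference (the θ near 45 degrees used below is an assumption for illustration; the exact figure depends on the installation):

    V = \frac{L}{2\cos\theta}\left(\frac{1}{t_{1}} - \frac{1}{t_{2}}\right), \qquad
    \frac{\delta V}{V} = \frac{\delta L}{L} + \tan\theta\,\delta\theta .

For θ near 45 degrees, δθ = 1 degree = 0.0175 rad gives δV/V of roughly 1.7-2% (the "2%" figure above), and δL/L = 1 m / 100 m gives the 1% figure.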
Pardo, Deborah; Jenouvrier, Stéphanie; Weimerskirch, Henri; Barbraud, Christophe
2017-06-19
Climate changes include concurrent changes in environmental mean, variance and extremes, and it is challenging to understand their respective impact on wild populations, especially when contrasted age-dependent responses to climate occur. We assessed how changes in mean and standard deviation of sea surface temperature (SST), frequency and magnitude of warm SST extreme climatic events (ECE) influenced the stochastic population growth rate log(λs) and age structure of a black-browed albatross population. For changes in SST around historical levels observed since 1982, changes in standard deviation had a larger (threefold) and negative impact on log(λs) compared to changes in mean. By contrast, the mean had a positive impact on log(λs). The historical SST mean was lower than the optimal SST value for which log(λs) was maximized. Thus, a larger environmental mean increased the occurrence of SST close to this optimum that buffered the negative effect of ECE. This 'climate safety margin' (i.e. difference between optimal and historical climatic conditions) and the specific shape of the population growth rate response to climate for a species determine how ECE affect the population. For a wider range in SST, both the mean and standard deviation had negative impact on log(λs), with changes in the mean having a greater effect than the standard deviation. Furthermore, around SST historical levels increases in either mean or standard deviation of the SST distribution led to a younger population, with potentially important conservation implications for black-browed albatrosses. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
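A minimal sketch of the composite idea described here: within an estimation gap of length n, blend the TFN forecast (made forward from the start of the gap) with the ARIMA backcast (made backward from the end), weighting each more heavily near its own origin. The linear weights below are an assumption for illustration; the report may instead weight by forecast-error variance.

    import numpy as np

    def composite_estimate(tfn_forecast, arima_backcast):
        """Blend forecasts and backcasts across an estimation gap of length n.

        Both inputs are aligned in calendar order across the gap: tfn_forecast[i] and
        arima_backcast[i] are the two estimates for day i+1 of the gap.
        """
        f = np.asarray(tfn_forecast, float)
        b = np.asarray(arima_backcast, float)
        n = f.size
        w = (n - np.arange(n)) / (n + 1.0)    # forecast weight: high at the start, low at the end
        return w * f + (1.0 - w) * b

    # Hypothetical log-flow estimates across a 5-day gap.
    forecast = [2.10, 2.08, 2.05, 2.01, 1.98]
    backcast = [2.02, 2.04, 2.06, 2.07, 2.09]
    print(np.round(composite_estimate(forecast, backcast), 3))
    # The blend transitions smoothly from the forecast side to the backcast side, which is
    # what gives the gradual transition between estimated and measured flows.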
Allan deviation analysis of financial return series
NASA Astrophysics Data System (ADS)
Hernández-Pérez, R.
2012-05-01
We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been shown to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are opening price daily series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for absolute return series for short scales (first one or two decades) decrease following approximately a scaling relation up to a point that is different for almost every asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drive the results for large observation intervals.
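For readers outside time-and-frequency metrology, the overlapping Allan deviation used in such scaling analyses can be computed directly from the observation series. A minimal NumPy sketch, treating the daily returns as a "frequency" series sampled at one-day intervals (the random data below are placeholders):

    import numpy as np

    def overlapping_adev(y, m):
        """Overlapping Allan deviation of the series y at an averaging factor of m samples."""
        y = np.asarray(y, float)
        # Averages of y over windows of length m, at every starting index (overlapping).
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")
        d = ybar[m:] - ybar[:-m]              # first differences of the averages at lag m
        return np.sqrt(0.5 * np.mean(d ** 2))

    rng = np.random.default_rng(3)
    returns = rng.standard_normal(2500)       # placeholder for a daily return series

    for m in (1, 2, 4, 8, 16, 32, 64):
        print("tau = %3d days: ADEV = %.4f" % (m, overlapping_adev(returns, m)))
    # For an uncorrelated (white) series the ADEV falls off as tau**-0.5, the behaviour
    # the paper reports for raw returns at short scales.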
Hyperfine-resolved transition frequency list of fundamental vibration bands of H35Cl and H37Cl
NASA Astrophysics Data System (ADS)
Iwakuni, Kana; Sera, Hideyuki; Abe, Masashi; Sasada, Hiroyuki
2014-12-01
Sub-Doppler resolution spectroscopy of the fundamental vibration bands of H35Cl and H37Cl has been carried out from 87.1 to 89.9 THz. We have determined the absolute transition frequencies of the hyperfine-resolved R(0) to R(4) transitions with a typical uncertainty of 10 kHz. We have also determined six molecular constants for each isotopomer in the vibrational excited state, which reproduce the measured frequencies with a standard deviation of about 10 kHz.
A Computer Program for Preliminary Data Analysis
Dennis L. Schweitzer
1967-01-01
ABSTRACT. -- A computer program written in FORTRAN has been designed to summarize data. Class frequencies, means, and standard deviations are printed for as many as 100 independent variables. Cross-classifications of an observed dependent variable and of a dependent variable predicted by a multiple regression equation can also be generated.
The Effect of Lung Volume on Selected Phonatory and Articulatory Variables.
ERIC Educational Resources Information Center
Dromey, Christopher; Ramig, Lorraine Olson
1998-01-01
This study examined effects of manipulating lung volume on phonatory and articulatory kinematic behavior during sentence production in ten healthy adults. Significant differences at different lung volume levels were found for sound pressure level, fundamental frequency, semitone standard deviation, and upper and lower lip displacements and peak…
Preschool Children's Interaction with ICT at Home
ERIC Educational Resources Information Center
Konca, Ahmet Sami; Koksalan, Bahadir
2017-01-01
The purpose of this research is to determine preschool students' usage profile of information and communication technology (ICT). To investigate children's use of ICT, a questionnaire was completed by the parents of 703 children, age 4-6. Frequency, percentage, mean and standard deviation were used to describe the interaction. In addition,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, B.N.
1955-05-12
Charts of the geographical distribution of the annual and seasonal D-values and their standard deviations at altitudes of 4500, 6000, and 7000 feet over Eurasia are derived, which are used to estimate the frequency of baro system errors.
A Profile of Homeschooling in South Dakota
ERIC Educational Resources Information Center
Boschee, Bonni F.; Boschee, Floyd
2011-01-01
The authors conducted a statewide study to determine which factors influenced parents' decision making in electing to homeschool their children rather than send them to public school education in South Dakota. Analysis of data, using frequencies, percentages, means, and standard deviations revealed that the most prevalent reasons for homeschooling…
The Sound of Dominance: Vocal Precursors of Perceived Dominance during Interpersonal Influence.
ERIC Educational Resources Information Center
Tusing, Kyle James; Dillard, James Price
2000-01-01
Determines the effects of vocal cues on judgments of dominance in an interpersonal influence context. Indicates that mean amplitude and amplitude standard deviation were positively associated with dominance judgments, whereas speech rate was negatively associated with dominance judgments. Finds that mean fundamental frequency was positively…
1981-09-15
Standard deviation of the detrended phase component is calculated as a root-mean-square value in radians, as measured at the receiver output, and not corrected for… When the values in the next section were calculated, they were corrected for the finite receiver reference frequency of f = 402 MHz in the following manner. Assuming a… for quiet and disturbed times. The position of the geometrical enhancement for individual cases is between 60-61° rather than between 63-64° as…
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.; Greber, Isaac
1997-01-01
A series of experiments were performed to investigate the effects of Mach number variation on the characteristics of the unsteady shock wave/turbulent boundary layer interaction generated by a blunt fin. A single blunt-fin hemicylindrical leading-edge diameter was used in all of the experiments, which covered the Mach number range from 2.0 to 5.0. The measurements in this investigation included surface flow visualization and static and dynamic pressure measurements, both on and off the centerline of the blunt fin axis. Surface flow visualization and static pressure measurements showed that the spatial extent of the shock wave/turbulent boundary layer interaction increased with increasing Mach number. The maximum static pressure, normalized by the incoming static pressure, measured at the peak location in the separated flow region ahead of the blunt fin was found to increase with increasing Mach number. The mean and standard deviations of the fluctuating pressure signals from the dynamic pressure transducers were found to collapse to self-similar distributions as a function of the distance perpendicular to the separation line. The standard deviation of the pressure signals showed an initially peaked distribution, with the maximum standard deviation point corresponding to the location of the separation line at Mach numbers 3.0 to 5.0. At Mach 2.0 the maximum standard deviation point was found to occur significantly upstream of the separation line. The intermittency distributions of the separation shock wave motion were found to be self-similar profiles for all Mach numbers. The intermittent region length was found to increase with Mach number and decrease with interaction sweepback angle. For Mach numbers 3.0 to 5.0 the separation line was found to correspond to high intermittencies or equivalently to the downstream locus of the separation shock wave motion. The Mach 2.0 tests, however, showed that the intermittent region occurs significantly upstream of the separation line. Power spectral densities measured in the intermittent regions were found to have self-similar frequency distributions when compared as functions of a Strouhal number for all Mach numbers and interaction sweepback angles. The maximum zero-crossing frequencies were found to correspond with the peak frequencies in the power spectra measured in the intermittent region.
NASA Technical Reports Server (NTRS)
Usry, J. W.
1983-01-01
Wind shear statistics were calculated for a simulated set of wind profiles based on a proposed standard wind field data base. Wind shears were grouped in altitude bands of 100 ft between 100 and 1400 ft and in wind shear increments of 0.025 knot/ft. Frequency distributions, means, and standard deviations for each altitude band and for the total sample were derived for both sets. It was found that frequency distributions in each altitude band for the simulated data set were more dispersed below 800 ft and less dispersed above 900 ft than those for the measured data set. Total sample frequency of occurrence for the two data sets was about equal for wind shear values between ±0.075 knot/ft, but the simulated data set had significantly larger values for all wind shears outside these boundaries. It is shown that neither data set was normally distributed; similar results are observed from the cumulative frequency distributions.
Performance of the PARCS Testbed Cesium Fountain Frequency Standard
NASA Technical Reports Server (NTRS)
Enzer, Daphna G.; Klipstein, William M.
2004-01-01
A cesium fountain frequency standard has been developed as a ground testbed for the PARCS (Primary Atomic Reference Clock in Space) experiment, an experiment intended to fly on the International Space Station. We report on the performance of the fountain and describe some of the implementations motivated in large part by flight considerations, but of relevance for ground fountains. In particular, we report on a new technique for delivering cooling and trapping laser beams to the atom collection region, in which a given beam is recirculated three times effectively providing much more optical power than traditional configurations. Allan deviations down to 10
Capture of activation during ventricular arrhythmia using distributed stimulation.
Meunier, Jason M; Ramalingam, Sanjiv; Lin, Shien-Fong; Patwardhan, Abhijit R
2007-04-01
Results of previous studies suggest that pacing strength stimuli can capture activation during ventricular arrhythmia locally near pacing sites. The existence of spatio-temporal distribution of excitable gap during arrhythmia suggests that multiple and timed stimuli delivered over a region may permit capture over larger areas. Our objective in this study was to evaluate the efficacy of using spatially distributed pacing (DP) to capture activation during ventricular arrhythmia. Data were obtained from rabbit hearts which were placed against a lattice of parallel wires through which biphasic pacing stimuli were delivered. Electrical activity was recorded optically. Pacing stimuli were delivered in sequence through the parallel wires starting with the wire closest to the apex and ending with one closest to the base. Inter-stimulus delay was based on conduction velocity. Time-frequency analysis of optical signals was used to determine variability in activation. A decrease in standard deviation of dominant frequencies of activation from a grid of locations that spanned the captured area and a concurrence with paced frequency were used as an index of capture. Results from five animals showed that the average standard deviation decreased from 0.81 Hz during arrhythmia to 0.66 Hz during DP at pacing cycle length of 125 ms (p = 0.03) reflecting decreased spatio-temporal variability in activation during DP. Results of time-frequency analysis during these pacing trials showed agreement between activation and paced frequencies. These results show that spatially distributed and timed stimulation can be used to modify and capture activation during ventricular arrhythmia.
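The capture index described here (reduced spatial variability of dominant frequencies, converging on the paced frequency) can be sketched with a plain FFT per recording site. The synthetic optical signals and site count below are stand-ins for illustration, not the study's data or its exact time-frequency method.

    import numpy as np

    rng = np.random.default_rng(4)
    fs = 500.0                        # sampling rate, Hz
    t = np.arange(0, 4.0, 1.0 / fs)   # 4-second optical recordings

    def dominant_frequency(signal, fs, fmin=2.0, fmax=30.0):
        """Frequency (Hz) of the largest spectral peak within a physiological band."""
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        band = (freqs >= fmin) & (freqs <= fmax)
        return freqs[band][np.argmax(spectrum[band])]

    # Synthetic grid of 25 sites: arrhythmia (scattered frequencies) vs. distributed pacing
    # at a 125 ms cycle length (8 Hz), where most sites follow the paced frequency.
    arrhythmia = [np.sin(2 * np.pi * rng.uniform(7.0, 10.0) * t)
                  + 0.3 * rng.standard_normal(t.size) for _ in range(25)]
    paced = [np.sin(2 * np.pi * (8.0 + rng.normal(0, 0.15)) * t)
             + 0.3 * rng.standard_normal(t.size) for _ in range(25)]

    for label, sigs in (("arrhythmia", arrhythmia), ("distributed pacing", paced)):
        dfs = [dominant_frequency(s, fs) for s in sigs]
        print("%-18s mean DF = %.2f Hz, spatial SD = %.2f Hz"
              % (label, np.mean(dfs), np.std(dfs)))
    # A drop in the spatial SD together with a mean near the paced frequency is the
    # signature of capture used in the study.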
Neilson, Jennifer R.; Lamb, Berton Lee; Swann, Earlene M.; Ratz, Joan; Ponds, Phadrea D.; Liverca, Joyce
2005-01-01
The findings presented in this report represent the basic results derived from the attitude assessment survey conducted in the last quarter of 2004. The findings set forth in this report are the frequency distributions for each question in the survey instrument for all respondents. The only statistics provided are descriptive in character - namely, means and associated standard deviations.
Chien-Ching Ma; Ching-Yuan Chang
2013-07-01
Interferometry provides a high degree of accuracy in the measurement of sub-micrometer deformations; however, the noise associated with experimental measurement undermines the integrity of interference fringes. This study proposes the use of standard deviation in the temporal domain to improve the image quality of patterns obtained from temporal speckle pattern interferometry. The proposed method combines the advantages of both mean and subtractive methods to remove background noise and ambient disturbance simultaneously, resulting in high-resolution images of excellent quality. The out-of-plane vibration of a thin piezoelectric plate is the main focus of this study, providing information useful to the development of energy harvesters. First, ten resonant states were measured using the proposed method, and both mode shape and resonant frequency were investigated. We then rebuilt the phase distribution of the first resonant mode based on the clear interference patterns obtained using the proposed method. This revealed instantaneous deformations in the dynamic characteristics of the resonant state. The proposed method also provides a frequency-sweeping function, facilitating its practical application in the precise measurement of resonant frequency. In addition, the mode shapes and resonant frequencies obtained using the proposed method were recorded and compared with results obtained using the finite element method and laser Doppler vibrometry, which demonstrated close agreement.
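The core of the described method is a per-pixel standard deviation taken along the time axis of the interferogram sequence, which suppresses both the static background and frame-to-frame drift. A minimal sketch of that operation on a synthetic image stack (all dimensions, frequencies and amplitudes below are placeholders):

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic stack: n_frames speckle interferograms of a vibrating plate.
    n_frames, ny, nx = 200, 128, 128
    yy, xx = np.mgrid[0:ny, 0:nx]
    mode_shape = np.sin(np.pi * xx / nx) * np.sin(2 * np.pi * yy / ny)   # stand-in mode shape
    phases = 2 * np.pi * 12.0 * np.arange(n_frames) / 1000.0             # 12 Hz vibration, 1 kHz framing
    background = 100.0 + 20.0 * rng.standard_normal((ny, nx))            # static speckle background

    stack = (background
             + 10.0 * mode_shape[None, :, :] * np.cos(phases)[:, None, None]
             + 2.0 * rng.standard_normal((n_frames, ny, nx)))            # sensor noise

    # Temporal standard deviation: bright where the surface vibrates, dark where it does not.
    fringe_map = stack.std(axis=0)
    print("fringe contrast range: %.2f to %.2f" % (fringe_map.min(), fringe_map.max()))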
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... least squares regression β ratio of diameters meter per meter m/m 1 β atomic oxygen to carbon ratio mole... consumption gram per kilowatt hour g/(kW·hr) g·3.6−1·106·m−2·kg·s2 F F-test statistic f frequency hertz Hz s−1... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... least squares regression β ratio of diameters meter per meter m/m 1 β atomic oxygen to carbon ratio mole... consumption gram per kilowatt hour g/(kW·hr) g·3.6−1·106·m−2·kg·s2 F F-test statistic f frequency hertz Hz s−1... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
Role of a Standardized Prism Under Cover Test in the Assessment of Dissociated Vertical Deviation.
Klaehn, Lindsay D; Hatt, Sarah R; Leske, David A; Holmes, Jonathan M
2018-03-01
Dissociated vertical deviation (DVD) is commonly measured using a prism and alternate cover test (PACT), but some providers use a prism under cover test (PUCT). The aim of this study was to compare a standardized PUCT measurement with a PACT measurement, for assessing the magnitude of DVD. Thirty-six patients with a clinical diagnosis of DVD underwent measurement of the angle of deviation with the PACT, fixing with the habitually fixing eye, and with PUCT, fixing both right and left eyes. The PUCT was standardized, using a 10-second cover for each prism magnitude, until the deviation was neutralized. The magnitude of hyperdeviation by PACT and PUCT was compared for the non-fixing eye, using paired non-parametric tests. The frequency of discrepancies more than 4 prism diopters (PD) between PACT and PUCT was calculated. The magnitude of hyperdeviation was greater when measured with PUCT (range 8PD hypodeviation to 20PD hyperdeviation) vs. PACT (18PD hypodeviation to 25PD hyperdeviation) with a median difference of 4.5PD (range -5PD to 21PD); P < 0.0001. Eighteen (50%) of 36 measurements elicited >4PD more hyperdeviation (or >4PD less hypodeviation) by PUCT than by PACT. A standardized 10-second PUCT yields greater values than a prism and alternate cover test in the majority of patients with DVD, providing better quantification of the severity of DVD, which may be important for management decisions.
2014-03-27
Thesis front-matter fragment: list-of-figures entries ('Standard deviation vs. Ns', 'Bias…') and acronym definitions (MTM, multiple taper method; MUSIC, multiple signal classification; MVDR, minimum variance distortionless response; PSK, phase shift keying; QAM…).
Heart rate variability analysed by Poincaré plot in patients with metabolic syndrome.
Kubičková, Alena; Kozumplík, Jiří; Nováková, Zuzana; Plachý, Martin; Jurák, Pavel; Lipoldová, Jolana
2016-01-01
The SD1 and SD2 indexes (standard deviations in two orthogonal directions of the Poincaré plot) carry similar information to the spectral density power of the high and low frequency bands but have the advantage of easier calculation and lesser stationarity dependence. ECG signals from metabolic syndrome (MetS) and control group patients during tilt table test under controlled breathing (20 breaths/minute) were obtained. SD1, SD2, SDRR (standard deviation of RR intervals) and RMSSD (root mean square of successive differences of RR intervals) were evaluated for 31 control group and 33 MetS subjects. Statistically significant lower values were observed in MetS patients in supine position (SD1: p=0.03, SD2: p=0.002, SDRR: p=0.006, RMSSD: p=0.01) and during tilt (SD2: p=0.004, SDRR: p=0.007). SD1 and SD2 combining the advantages of time and frequency domain methods, distinguish successfully between MetS and control subjects. Copyright © 2016 Elsevier Inc. All rights reserved.
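For reference, SD1 and SD2 can be computed directly from the RR series without fitting the Poincaré ellipse graphically; the identities below are the standard ones relating them to SDRR and to the standard deviation of successive differences (SDSD). The RR values are placeholders.

    import numpy as np

    def poincare_sd1_sd2(rr_ms):
        """SD1/SD2 of the Poincare plot from an RR-interval series (ms)."""
        rr = np.asarray(rr_ms, float)
        diff = np.diff(rr)
        sdsd = diff.std(ddof=1)                 # SD of successive differences
        sdrr = rr.std(ddof=1)                   # overall SD of RR intervals
        sd1 = np.sqrt(0.5) * sdsd               # short-term (perpendicular) spread
        sd2 = np.sqrt(max(2.0 * sdrr**2 - 0.5 * sdsd**2, 0.0))   # long-term (line-of-identity) spread
        rmssd = np.sqrt(np.mean(diff**2))
        return sd1, sd2, sdrr, rmssd

    rr = [812, 845, 830, 795, 860, 875, 840, 820, 805, 850, 865, 830]   # placeholder RR intervals
    sd1, sd2, sdrr, rmssd = poincare_sd1_sd2(rr)
    print("SD1 %.1f ms, SD2 %.1f ms, SDRR %.1f ms, RMSSD %.1f ms" % (sd1, sd2, sdrr, rmssd))
    # SD1 tracks beat-to-beat (high-frequency, vagally mediated) variability while SD2
    # reflects the longer-term spread, which is why the pair carries information similar
    # to the HF and LF spectral bands.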
Shi, Shaobo; Liu, Tao; Wang, Dandan; Zhang, Yan; Liang, Jinjun; Yang, Bo; Hu, Dan
2017-07-01
The goal of this study was to assess the effects of N-methyl-d-aspartate (NMDA) receptor activation on heart rate variability (HRV) and susceptibility to atrial fibrillation (AF). Rats were randomized for treatment with saline, NMDA (agonist of NMDA receptors), or NMDA plus MK-801 (antagonist of NMDA receptors) for 2 weeks. Heart rate variability was evaluated by using implantable electrocardiogram telemeters. Atrial fibrillation susceptibility was assessed with programmed stimulation in isolated hearts. Compared with the controls, the NMDA-treated rats displayed a decrease in the standard deviation of normal RR intervals, the standard deviation of the average RR intervals, the mean of the 5-min standard deviations of RR intervals, the root mean square of successive differences, and high frequency (HF); and an increase in low frequency (LF) and LF/HF (all P < 0.01). Additionally, the NMDA-treated rats showed prolonged activation latency and reduced effective refractory period (all P < 0.01). Importantly, AF was induced in all NMDA-treated rats. In addition, atrial fibrosis developed, connexin40 was downregulated, and metalloproteinase 9 was upregulated in the NMDA-treated rats (all P < 0.01). Most of the above alterations were mitigated by co-administration of MK-801. These results indicate that NMDA receptor activation reduces HRV and enhances AF inducibility, with cardiac autonomic imbalance, atrial fibrosis, and degradation of gap junction protein identified as potential mechanistic contributors. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
Performance of Planar-Waveguide External Cavity Laser for Precision Measurements
NASA Technical Reports Server (NTRS)
Numata, Kenji; Camp, Jordan; Krainak, Michael A.; Stolpner, Lew
2010-01-01
A 1542-nm planar-waveguide external cavity laser (PW-ECL) is shown to have a sufficiently low level of frequency and intensity noise to be suitable for precision measurement applications. The frequency noise and intensity noise of the PW-ECL were comparable to or better than those of the nonplanar ring oscillator (NPRO) and fiber laser between 0.1 mHz and 100 kHz. Controllability of the PW-ECL was demonstrated by stabilizing its frequency to acetylene (13C2H2) at the 10^-13 level of Allan deviation. The PW-ECL also has the advantages of the compactness of a standard butterfly package, low cost, and a simple design consisting of a semiconductor gain medium coupled to a planar-waveguide Bragg reflector. These features would make the PW-ECL suitable for precision measurements, including compact optical frequency standards, space lidar, and space interferometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahadi, S., E-mail: su4idi@yahoo.com; Puspito, N. T.; Ibrahim, G.
Onset-time precursors of strong earthquakes (Mw > 5) at distances d < 500 km were determined using geomagnetic data from station KTB, Sumatra, and two reference stations, DAV (Philippines) and DAW (Australia). Separate techniques are required for this determination: unlike kinetic waves recorded on seismograms, which can be identified directly in the time domain, seismogenic electromagnetic emissions require analysis of the transformed signal in the frequency domain. Determination of the frequency spectrum identifies the frequencies of emissions from the earthquake source. The aim is to analyze the power amplitude of the ULF emissions in the horizontal component (H) and vertical component (Z). The polarization power ratio Z/H is used to identify earthquake precursors, controlled by the standard deviation, and the polarization-ratio pattern should differentiate seismogenic emissions from the effects of geomagnetic activity. In the resulting ULF emission patterns, the seismogenic effect has a duration > 5 days with the emission intensity dominated by the Z component, whereas geomagnetic activity is dominated by the H component. The results show that the onset time is identified when the polarization power ratio Z/H exceeds the standard-deviation limit (p ± 2 σ) for a duration of > 5 days.
Mehlstäubler, Tanja E; Grosche, Gesine; Lisdat, Christian; Schmidt, Piet O; Denker, Heiner
2018-06-01
We review experimental progress on optical atomic clocks and frequency transfer, and consider the prospects of using these technologies for geodetic measurements. Today, optical atomic frequency standards have reached relative frequency inaccuracies below 10^-17, opening new fields of fundamental and applied research. The dependence of atomic frequencies on the gravitational potential makes atomic clocks ideal candidates for the search for deviations in the predictions of Einstein's general relativity, tests of modern unifying theories and the development of new gravity field sensors. In this review, we introduce the concepts of optical atomic clocks and present the status of international clock development and comparison. Besides further improvement in stability and accuracy of today's best clocks, a large effort is put into increasing the reliability and technological readiness for applications outside of specialized laboratories with compact, portable devices. With relative frequency uncertainties of 10^-18, comparisons of optical frequency standards are foreseen to contribute together with satellite and terrestrial data to the precise determination of fundamental height reference systems in geodesy with a resolution at the cm-level. The long-term stability of atomic standards will deliver excellent long-term height references for geodetic measurements and for the modelling and understanding of our Earth.
Ultrasonic imaging system for in-process fabric defect detection
Sheen, Shuh-Haw; Chien, Hual-Te; Lawrence, William P.; Raptis, Apostolos C.
1997-01-01
An ultrasonic method and system are provided for monitoring a fabric to identify a defect. A plurality of ultrasonic transmitters generate ultrasonic waves relative to the fabric. An ultrasonic receiver means responsive to the generated ultrasonic waves from the transmitters receives ultrasonic waves coupled through the fabric and generates a signal. An integrated peak value of the generated signal is applied to a digital signal processor and is digitized. The digitized signal is processed to identify a defect in the fabric. The digitized signal processing includes a median value filtering step to filter out high frequency noise. Then a mean value and standard deviation of the median value filtered signal is calculated. The calculated mean value and standard deviation are compared with predetermined threshold values to identify a defect in the fabric.
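The signal-processing chain described in this patent abstract (median filtering of the integrated peak values, then a mean/standard-deviation threshold) can be sketched generically; the window length and the k-sigma threshold below are assumptions, since the abstract does not give the actual values.

    import numpy as np

    def detect_defects(peak_values, median_window=5, k_sigma=3.0):
        """Flag samples whose median-filtered amplitude departs from the signal statistics."""
        x = np.asarray(peak_values, float)

        # Median filter to suppress high-frequency noise (simple sliding-window version).
        pad = median_window // 2
        padded = np.pad(x, pad, mode="edge")
        filtered = np.array([np.median(padded[i:i + median_window]) for i in range(x.size)])

        # Compare against mean +/- k_sigma * standard deviation of the filtered signal.
        mean, std = filtered.mean(), filtered.std(ddof=1)
        return np.abs(filtered - mean) > k_sigma * std

    rng = np.random.default_rng(6)
    signal = 1.0 + 0.02 * rng.standard_normal(300)   # normal fabric: stable coupled energy
    signal[140:150] *= 0.6                           # simulated defect: drop in coupled energy
    flags = detect_defects(signal)
    print("flagged sample indices:", np.flatnonzero(flags))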
Image contrast enhancement based on a local standard deviation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Dah-Chung; Wu, Wen-Rong
1996-12-31
The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust high frequency components of an image. In the literature, the gain is usually inversely proportional to the local standard deviation (LSD) or is a constant. But these cause two problems in practical applications, i.e., noise overenhancement and ringing artifact. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent the two defects. The new gain is a nonlinear function of LSD and has the desired characteristic emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
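The ACE family of algorithms referred to here amplifies the deviation of each pixel from its local mean by a gain; the classical choices are a constant gain or one inversely proportional to the local standard deviation (LSD), and the paper's contribution is a nonlinear gain derived from a Gaussian image model. The sketch below implements only the classical inverse-LSD form with a gain cap, as a baseline showing where noise over-enhancement comes from; it is not the authors' new gain function, and the window size and constants are assumptions.

    import numpy as np

    def ace_inverse_lsd(image, window=7, target_sigma=30.0, gain_max=5.0):
        """Adaptive contrast enhancement with gain ~ target_sigma / LSD, capped at gain_max."""
        img = image.astype(float)
        pad = window // 2
        padded = np.pad(img, pad, mode="reflect")

        # Local mean and standard deviation via sliding windows (simple, unoptimized).
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                block = padded[i:i + window, j:j + window]
                local_mean = block.mean()
                lsd = block.std()
                gain = min(target_sigma / max(lsd, 1e-6), gain_max)  # cap avoids amplifying flat, noisy areas
                out[i, j] = local_mean + gain * (img[i, j] - local_mean)
        return np.clip(out, 0, 255)

    rng = np.random.default_rng(7)
    test = np.clip(rng.normal(120, 10, size=(64, 64)) + np.linspace(0, 40, 64)[None, :], 0, 255)
    enhanced = ace_inverse_lsd(test)
    print("input std %.1f -> enhanced std %.1f" % (test.std(), enhanced.std()))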
NASA Astrophysics Data System (ADS)
Wang, Zian; Li, Shiguang; Yu, Ting
2015-12-01
This paper proposes an online identification method for the regional frequency deviation coefficient, based on analysis of the AGC adjustment response mechanism of the interconnected grid and on the generators' real-time operating state obtained from PMU measurements. The optimization of the regional frequency deviation coefficient is analyzed for the actual operating state of the power system, enabling more accurate and efficient automatic generation control. The validity of the online identification method is verified by establishing a long-term frequency control simulation model of a two-area interconnected power system.
A New Approach to Extract Forest Water Use Efficiency from Eddy Covariance Data
NASA Astrophysics Data System (ADS)
Scanlon, T. M.; Sulman, B. N.
2016-12-01
Determination of forest water use efficiency (WUE) from eddy covariance data typically involves the following steps: (a) estimating gross primary productivity (GPP) from direct measurements of net ecosystem exchange (NEE) by extrapolating nighttime ecosystem respiration (ER) to daytime conditions, and (b) assuming direct evaporation (E) is minimal several days after rainfall, meaning that direct measurements of evapotranspiration (ET) are identical to transpiration (T). Both of these steps could lead to errors in the estimation of forest WUE. Here, we present a theoretical approach for estimating WUE through the analysis of standard eddy covariance data, which circumvents these steps. Only five statistics are needed from the high-frequency time series to extract WUE: CO2 flux, water vapor flux, standard deviation in CO2 concentration, standard deviation in water vapor concentration, and the correlation coefficient between CO2 and water vapor concentration for each half-hour period. The approach is based on the assumption that stomatal fluxes (i.e. photosynthesis and transpiration) lead to perfectly negative correlations and non-stomatal fluxes (i.e. ecosystem respiration and direct evaporation) lead to perfectly positive correlations within the CO2 and water vapor high frequency time series measured above forest canopies. A mathematical framework is presented, followed by a proof of concept using eddy covariance data and leaf-level measurements of WUE.
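The five half-hourly statistics named here are all standard outputs of eddy-covariance post-processing; a minimal sketch of extracting them from the high-frequency CO2 and water-vapor series is shown below. How they are then combined into stomatal and non-stomatal components follows the authors' partitioning theory, which is not reproduced here; the 10 Hz synthetic series is a placeholder, and the fluxes are plain covariances with no density or coordinate corrections.

    import numpy as np

    rng = np.random.default_rng(8)

    def flux_statistics(w, c, q):
        """Five per-period statistics used by the partitioning approach.

        w : vertical wind speed (m/s), c : CO2 concentration, q : water vapor concentration.
        """
        wp, cp, qp = w - w.mean(), c - c.mean(), q - q.mean()
        return {
            "co2_flux": np.mean(wp * cp),
            "h2o_flux": np.mean(wp * qp),
            "sigma_c": cp.std(),
            "sigma_q": qp.std(),
            "r_cq": np.corrcoef(cp, qp)[0, 1],
        }

    # Placeholder 30-minute record at 10 Hz.
    n = 30 * 60 * 10
    w = 0.3 * rng.standard_normal(n)
    c = 400.0 - 0.5 * w + 0.2 * rng.standard_normal(n)   # CO2 drawn down by updrafts (photosynthesis-like)
    q = 10.0 + 0.3 * w + 0.1 * rng.standard_normal(n)    # water vapor carried upward by updrafts

    stats = flux_statistics(w, c, q)
    print({k: round(float(v), 4) for k, v in stats.items()})
    # A strongly negative r_cq points to stomatal control (photosynthesis plus transpiration),
    # while a positive correlation points to respiration and direct evaporation.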
NASA Astrophysics Data System (ADS)
Griffin, James M.; Diaz, Fernanda; Geerling, Edgar; Clasing, Matias; Ponce, Vicente; Taylor, Chris; Turner, Sam; Michael, Ernest A.; Patricio Mena, F.; Bronfman, Leonardo
2017-02-01
By using acoustic emission (AE) it is possible to control deviations and surface quality during micro milling operations. The method of micro milling is used to manufacture a submillimetre waveguide, where micro machining is employed to achieve the required superior finish and geometrical tolerances. Submillimetre waveguide technology is used in deep space signal retrieval, where the highest detection efficiencies are needed and therefore every possible signal loss in the receiver has to be avoided and stringent tolerances achieved. With a sub-standard surface finish, the signals travelling along the waveguides dissipate faster than with perfect surfaces, as the residual roughness becomes comparable with the electromagnetic skin depth. Therefore, the higher the radio frequency, the more critical this becomes. The method of time-frequency analysis (STFT) is used to transform raw AE into more meaningful salient signal features (SF). This information was then correlated against the measured geometrical deviations and the onset of catastrophic tool wear. Such deviations can be offset from different AE signals (different deviations from subsequent tests) and fed back for a final spring cut, ensuring the geometrical accuracies are met. Geometrical differences can impact the required transfer of AE signals (changes in cut-off frequencies and diminished SNR at the interface), and therefore errors have to be minimised to within 1 μm. Rules based on both Classification and Regression Trees (CART) and Neural Networks (NN) were used to implement a simulation displaying how such a control regime could be used as a real-time controller, whether through corrective measures (via spring cuts) over several initial machining passes or through a micron cut introducing a level-plane measure to allow setup corrections (similar to a spirit level).
Research on frequency control strategy of interconnected region based on fuzzy PID
NASA Astrophysics Data System (ADS)
Zhang, Yan; Li, Chunlan
2018-05-01
In order to improve the frequency control performance of the interconnected power grid and to overcome the poor robustness and slow adjustment of traditional regulation, this paper puts forward a frequency control method based on fuzzy PID. The method takes the frequency deviation and tie-line power deviation of each area as the control objective, takes the regional frequency deviation and its rate of change as inputs, and uses fuzzy mathematics theory to adjust the PID control parameters online. A regional frequency control model of complementary hydro and thermal generation is established in MATLAB, the regional frequency control strategy is given, and three control modes (TBC-FTC, FTC-FTC, FFC-FTC) are simulated and analyzed. The simulation and experimental results show that this method has better control performance than traditional regional frequency regulation.
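A minimal sketch of the control idea, assuming triangular membership functions and a simple rule pattern; none of the membership ranges, rule weights or base gains below come from the paper, and the paper's controller acts on the area control error of each region, whereas here a single scalar deviation is used to keep the sketch short.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

    def fuzzy_pid_gains(e, de, kp0=0.5, ki0=0.05, kd0=0.1):
        """Adjust PID gains online from the frequency deviation e (Hz) and its
        rate of change de (Hz/s).  Ranges, rule weights and base gains are
        placeholders for illustration."""
        mag = max(abs(e), abs(de))
        mu = np.array([tri(mag, -0.1, 0.0, 0.1),    # "small" deviation
                       tri(mag,  0.0, 0.1, 0.2),    # "medium" deviation
                       tri(mag,  0.1, 0.2, 0.3)])   # "large" deviation
        mu = mu / (mu.sum() + 1e-12)                # normalise rule activations
        # One plausible rule pattern: larger deviations strengthen the proportional
        # action and weaken the derivative action (outside the range, base gains apply).
        dkp = mu @ np.array([0.0, 0.2, 0.4])
        dki = mu @ np.array([0.0, 0.02, 0.05])
        dkd = mu @ np.array([0.1, 0.05, 0.0])
        return kp0 + dkp, ki0 + dki, kd0 + dkd

    class FuzzyPID:
        """PID controller whose gains are rescheduled each step by the fuzzy rules."""
        def __init__(self, dt):
            self.dt, self.integral, self.prev_e = dt, 0.0, 0.0
        def step(self, e):
            de = (e - self.prev_e) / self.dt
            kp, ki, kd = fuzzy_pid_gains(e, de)
            self.integral += e * self.dt
            self.prev_e = e
            return kp * e + ki * self.integral + kd * de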
ERIC Educational Resources Information Center
Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D.
2017-01-01
Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…
Promoting Increased Pitch Variation in Oral Presentations with Transient Visual Feedback
ERIC Educational Resources Information Center
Hincks, Rebecca; Edlund, Jens
2009-01-01
This paper investigates learner response to a novel kind of intonation feedback generated from speech analysis. Instead of displays of pitch curves, our feedback is flashing lights that show how much pitch variation the speaker has produced. The variable used to generate the feedback is the standard deviation of fundamental frequency as measured…
Fiber optic reference frequency distribution to remote beam waveguide antennas
NASA Technical Reports Server (NTRS)
Calhoun, Malcolm; Kuhnle, Paul; Law, Julius
1995-01-01
In the NASA/JPL Deep Space Network (DSN), radio science experiments (probing outer planet atmospheres, rings, gravitational waves, etc.) and very long baseline interferometry (VLBI) require ultra-stable, low phase noise reference frequency signals at the user locations. Typical locations for radio science/VLBI exciters and down-converters are the cone areas of the 34 m high efficiency antennas or the 70 m antennas, located several hundred meters from the reference frequency standards. Over the past three years, fiber optic distribution links have replaced coaxial cable distribution for reference frequencies to these antenna sites. Optical fibers are the preferred medium for distribution because of their low attenuation, immunity to EMI/RFI, and temperature stability. A new network of Beam Waveguide (BWG) antennas presently under construction in the DSN requires hydrogen maser stability at distances of tens of kilometers from the frequency standards' central location. The topic of this paper is the design and implementation of an optical fiber distribution link which provides ultra-stable reference frequencies to users at a remote BWG antenna. The temperature profile from the earth's surface to a depth of six feet over a time period of six months was used to optimize the placement of the fiber optic cables. In-situ evaluation of the fiber optic link performance indicates Allan deviation on the order of parts in 10^-15 at 1000 and 10,000 seconds averaging time; thus, the link stability degradation due to environmental conditions still preserves hydrogen maser stability at the user locations. This paper reports on the implementation of optical fibers and electro-optic devices for distributing very stable, low phase noise reference signals to remote BWG antenna locations. Allan deviation and phase noise test results for a 16 km fiber optic distribution link are presented in the paper.
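For reference, the overlapping Allan deviation quoted in link evaluations of this kind can be computed from fractional frequency data with a short routine such as the sketch below. This is the standard estimator, not the authors' measurement code, and the sampling interval and averaging factors are placeholders.

    import numpy as np

    def overlapping_adev(y, tau0, m):
        """Overlapping Allan deviation of fractional frequency data y sampled at
        interval tau0 (s), evaluated at averaging time tau = m * tau0."""
        y = np.asarray(y, dtype=float)
        if len(y) < 2 * m:
            raise ValueError("not enough data for this averaging factor")
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # m-sample averages at every start
        d = ybar[m:] - ybar[:-m]                              # first differences at lag m
        return np.sqrt(0.5 * np.mean(d ** 2))                 # Allan deviation

    # Example: white frequency noise follows the expected tau^(-1/2) law
    rng = np.random.default_rng(0)
    y = 1e-13 * rng.standard_normal(50_000)
    for m in (1, 10, 100):
        print(m, overlapping_adev(y, tau0=1.0, m=m))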
47 CFR 74.535 - Emission and bandwidth.
Code of Federal Regulations, 2010 CFR
2010-10-01
... transmitter power (PMEAN) in accordance with the following schedule: (1) When using frequency modulation: (i... employed when digital modulation occupies 50 percent or more of the total peak frequency deviation of a... deviation produced by the digital modulation signal and the deviation produced by any frequency division...
Acoustic and Perceptual Analyses of Adductor Spasmodic Dysphonia in Mandarin-speaking Chinese.
Chen, Zhipeng; Li, Jingyuan; Ren, Qingyi; Ge, Pingjiang
2018-02-12
The objective of this case-control study was to examine the perceptual structure and acoustic characteristics of speech of patients with adductor spasmodic dysphonia (ADSD) in Mandarin. For the estimation of dysphonia level, perceptual and acoustic analyses were performed for patients with ADSD (N = 20) and a control group (N = 20), all Mandarin-Chinese speakers. For both subgroups, a sustained vowel and connected speech samples were obtained, and the differences in perceptual and acoustic parameters between the two subgroups were assessed and analyzed. On acoustic assessment, the percentage of phonatory breaks (PB) in connected reading and the percentages of aperiodic segments and frequency shifts (FS) in the vowel and reading were significantly worse in patients with ADSD than in controls, as were the mean harmonics-to-noise ratio and the fundamental frequency standard deviation of the vowel. On perceptual evaluation, the ratings of speech and vowel in patients with ADSD were significantly higher than in controls. The percentage of aberrant acoustic events (PB, frequency shifts, and aperiodic segments), the fundamental frequency standard deviation, and the mean harmonics-to-noise ratio were significantly correlated with the perceptual ratings in the vowel and reading productions. The perceptual and acoustic parameters of the sustained vowel and connected reading in patients with ADSD are worse than those in normal controls, and can validly and reliably estimate dysphonia in ADSD in Mandarin-speaking Chinese. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyewon; Hwang, Min; Muljadi, Eduard
In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
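One way to read the control law described above is sketched below: the power reference equals the MPPT command plus an auxiliary term driven by the frequency deviation, with a gain that grows with rotor speed and deviation magnitude and collapses near the minimum rotor speed. The gain-shaping function and all constants are illustrative assumptions, not the gains used in the paper.

    def power_reference(p_mppt, delta_f, omega_r,
                        omega_min=0.7, omega_max=1.25,
                        k_base=20.0, k_freq=0.5):
        """Power reference (per unit) for a DFIG combining the MPPT command with an
        auxiliary smoothing/support term driven by the frequency deviation delta_f (Hz).
        The gain increases with rotor speed and with |delta_f| and goes to zero as the
        rotor approaches its minimum speed, to avoid over-deceleration.  All constants
        are placeholders."""
        # Rotor-speed weighting: 0 at omega_min, 1 at omega_max (clamped).
        w_speed = min(max((omega_r - omega_min) / (omega_max - omega_min), 0.0), 1.0)
        # Frequency-deviation weighting: grows with the magnitude of the deviation.
        w_freq = 1.0 + k_freq * abs(delta_f)
        gain = k_base * w_speed * w_freq
        return p_mppt - gain * delta_f   # inject extra power when frequency drops (delta_f < 0)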
Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R
2017-11-17
This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual-manual task) and distracted (poststarting of visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from the time and frequency domain analyses all showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis on yaw rate in both low and high frequency bandwidths showed consistent results, that higher variation values were observed during distracted driving when compared to nondistracted driving. This study suggests that driver state detection needs to consider the behavior changes during the prestarting periods, instead of only focusing on periods with physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator of distraction detection than longitudinal controls. In addition, frequency domain analyses proved to be a more robust and consistent method in assessing driving performance compared to time domain analyses.
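A small sketch of the two kinds of metrics compared above, applied to a vehicle-control signal such as yaw rate: relative spectral power in a 0-0.5 Hz band (frequency domain) and the standard deviation over consecutive 10-s windows (time domain). The sampling rate, segment length and window settings are assumptions, not the study's processing parameters.

    import numpy as np
    from scipy.signal import welch

    def low_freq_relative_power(x, fs, f_lo=0.0, f_hi=0.5):
        """Relative spectral power of a vehicle-control signal (e.g. yaw rate)
        in the low-frequency band, as a fraction of total power."""
        x = np.asarray(x, dtype=float)
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
        band = (f >= f_lo) & (f <= f_hi)
        return pxx[band].sum() / pxx.sum()

    def windowed_std(x, fs, window_s=10.0):
        """Standard deviation of the signal over consecutive 10-s windows
        (the time-domain measure)."""
        x = np.asarray(x, dtype=float)
        n = int(window_s * fs)
        return x[: (len(x) // n) * n].reshape(-1, n).std(axis=1)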
Adaptive Gain-based Stable Power Smoothing of a DFIG
Muljadi, Eduard; Lee, Hyewon; Hwang, Min; ...
2017-11-01
In a power system that has a high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by the varying wind speed increases the maximum frequency deviation, which is an important metric to assess the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme of a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly for a power system with a high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing the stable operation of a DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. Here, the simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
Decomposing intraday dependence in currency markets: evidence from the AUD/USD spot market
NASA Astrophysics Data System (ADS)
Batten, Jonathan A.; Ellis, Craig A.; Hogan, Warren P.
2005-07-01
The local Hurst exponent, a measure employed to detect the presence of dependence in a time series, may also be used to investigate the source of intraday variation observed in the returns in foreign exchange markets. Given that changes in the local Hurst exponent may be due to either a time-varying range, or standard deviation, or both of these simultaneously, values for the range, standard deviation and local Hurst exponent are recorded and analyzed separately. To illustrate this approach, a high-frequency data set of the spot Australian dollar/US dollar provides evidence of the returns distribution across the 24-hour trading ‘day’, with time-varying dependence and volatility clearly aligning with the opening and closing of markets. This variation is attributed to the effects of liquidity and the price-discovery actions of dealers.
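A minimal sketch of a local Hurst exponent estimate via the classical rescaled-range (R/S) statistic, computed in sliding windows over a return series. Window sizes and step are illustrative, and the authors' exact estimator may differ.

    import numpy as np

    def rs_hurst(returns):
        """Hurst exponent of a return series via the rescaled-range statistic,
        regressing log(R/S) on log(window length)."""
        returns = np.asarray(returns, dtype=float)
        log_n, log_rs = [], []
        for n in (8, 16, 32, 64, 128):
            if n > len(returns):
                continue
            rs_vals = []
            for start in range(0, len(returns) - n + 1, n):
                chunk = returns[start:start + n]
                dev = np.cumsum(chunk - chunk.mean())     # cumulative deviations from the mean
                r = dev.max() - dev.min()                 # range
                s = chunk.std(ddof=1)                     # standard deviation
                if s > 0:
                    rs_vals.append(r / s)
            if rs_vals:
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(rs_vals)))
        slope, _ = np.polyfit(log_n, log_rs, 1)           # slope is the Hurst estimate
        return slope

    def local_hurst(returns, window=256, step=64):
        """Hurst exponent in sliding windows, giving a 'local' intraday estimate."""
        return [rs_hurst(returns[i:i + window])
                for i in range(0, len(returns) - window + 1, step)]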
NASA Astrophysics Data System (ADS)
Kolachevsky, N.; Beyer, A.; Maisenbacher, L.; Matveev, A.; Pohl, R.; Khabarova, K.; Grinin, A.; Lamour, T.; Yost, D. C.; Haensch, T. W.; Udem, Th.
2018-02-01
The core of the "proton radius puzzle" is the discrepancy of four standard deviations between the proton root mean square charge radii (rp) determined from regular hydrogen (H) and from the muonic hydrogen atom (μp). We have measured the 2S-4P transition frequency in H, utilizing a cryogenic beam of H, and directly demonstrate that quantum interference of neighboring atomic resonances can lead to line shifts much larger than the proton radius discrepancy. Using an asymmetric fit function we obtain rp = 0.8335(95) fm and the Rydberg constant R∞ = 10 973 731.568 076 (96) m^-1. The new value for rp is 3.3 combined standard deviations smaller than the latest CODATA value, but in good agreement with the value from μp.
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
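The simplest of these estimates follows from a lognormality assumption: if X has arithmetic mean m and standard deviation s, then Var[ln X] ≈ ln(1 + (s/m)^2). A one-line sketch under that assumption; the paper's confidence intervals and change-from-baseline estimators are not reproduced here.

    import math

    def sd_of_log(m, s):
        """Approximate standard deviation of ln(X) given the arithmetic mean m and
        standard deviation s of X, assuming X is (approximately) lognormal:
            Var[ln X] = ln(1 + (s/m)^2)."""
        return math.sqrt(math.log(1.0 + (s / m) ** 2))

    # Example: mean 10 and SD 4 on the original scale give SD(ln X) of about 0.385
    print(sd_of_log(10.0, 4.0))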
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
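A sketch of the stepwise region-growing idea described above, under simplifying assumptions (4-neighbour adjacency on the 0.25-deg grid and pooled-sample variance as the selection criterion); it is meant to illustrate the procedure, not to reproduce the operational table construction.

    import numpy as np

    def variable_average_region(samples, start, n_min):
        """Greedy construction of a variable-averaging region on a lat-lon grid.
        samples : dict mapping (i, j) grid cells to 1-D arrays of rain-free sigma-0
        start   : (i, j) cell where the region starts
        n_min   : minimum number of samples required
        Neighbouring cells are added one at a time, always choosing the candidate
        whose inclusion gives the smallest variance of the pooled data, until the
        sample requirement is met."""
        region = {start}
        pooled = list(samples.get(start, []))

        def neighbours(region):
            cand = set()
            for (i, j) in region:
                cand |= {(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)}
            return (cand - region) & samples.keys()

        while len(pooled) < n_min:
            best = None
            for cell in neighbours(region):
                trial = pooled + list(samples[cell])
                v = np.var(trial)
                if best is None or v < best[0]:
                    best = (v, cell, trial)
            if best is None:                  # no neighbours left to add
                break
            _, cell, pooled = best
            region.add(cell)
        return region, float(np.mean(pooled)), float(np.std(pooled))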
Stability characterization of two multi-channel GPS receivers for accurate frequency transfer.
NASA Astrophysics Data System (ADS)
Taris, F.; Uhrich, P.; Thomas, C.; Petit, G.; Jiang, Z.
In recent years, widespread use of the GPS common-view technique has led to major improvements, making it possible to compare remote clocks at their full level of performance. For integration times of 1 to 3 days, their frequency differences are consistently measured to about one part in 10^14. Recent developments in atomic frequency standards suggest, however, that this performance may no longer be sufficient. The caesium fountain LPTF FO1, built at the BNM-LPTF, Paris, France, shows a short-term white frequency noise characterized by an Allan deviation σy(τ = 1 s) = 5×10^-14 and a type B uncertainty of 2×10^-15. To compare the frequencies of such highly stable standards would call for GPS common-view results to be averaged over times far exceeding the intervals of their optimal performance. Previous studies have shown the potential of carrier-phase and code measurements from geodetic GPS receivers for clock frequency comparisons. The experiment related here is an attempt to see the stability limit that could be reached using this technique.
Local oscillator induced degradation of medium-term stability in passive atomic frequency standards
NASA Technical Reports Server (NTRS)
Dick, G. John; Prestage, John D.; Greenhall, Charles A.; Maleki, Lute
1990-01-01
As the performance of passive atomic frequency standards improves, a new limitation is encountered due to frequency fluctuations in an ancillary local oscillator (L.O.). The effect is due to time variation in the gain of the feedback which compensates L.O. frequency fluctuations. The high performance promised by new microwave and optical trapped ion standards may be severely compromised by this effect. Researchers present an analysis of this performance limitation for the case of sequentially interrogated standards. The time dependence of the sensitivity of the interrogation process to L.O. frequency fluctuations is evaluated for single-pulse and double-pulse Ramsey RF interrogation and for amplitude modulated pulses. The effect of these various time dependencies on performance of the standard is calculated for an L.O. with frequency fluctuations showing a typical 1/f spectral density. A limiting 1/√γ-dependent deviation of frequency fluctuations is calculated as a function of pulse lengths, dead time, and pulse overlap. Researchers also present conceptual and hardware-oriented solutions to this problem which achieve a much more nearly constant sensitivity to L.O. fluctuations. Solutions involve use of double-pulse interrogation; alternate interrogation of multiple traps so that the dead time of one trap can be covered by operation of the other; and the use of double-pulse interrogation for two traps, so that during the time of the RF pulses, the increasing sensitivity of one trap tends to compensate for the decreasing sensitivity of the other. A solution making use of amplitude-modulated pulses is also presented which shows nominally zero time variation.
A compensated multi-pole linear ion trap mercury frequency standard for ultra-stable timekeeping.
Burt, Eric A; Diener, William A; Tjoelker, Robert L
2008-12-01
The multi-pole linear ion trap frequency standard (LITS) being developed at the Jet Propulsion Laboratory (JPL) has demonstrated excellent short- and long-term stability. The technology has now demonstrated long-term field operation providing a new capability for timekeeping standards. Recently implemented enhancements have resulted in a record line Q of 5 × 10^12 for a room temperature microwave atomic transition and a short-term fractional frequency stability of 5 × 10^-14/τ^(1/2). A scheme for compensating the second order Doppler shift has led to a reduction of the combined sensitivity to the primary LITS systematic effects below 5 × 10^-17 fractional frequency. Initial comparisons to JPL's cesium fountain clock show a systematic floor of less than 2 × 10^-16. The compensated multi-pole LITS at JPL was operated continuously and unattended for a 9-mo period from October 2006 to July 2007. During that time it was used as the frequency reference for the JPL geodetic receiver known as JPLT, enabling comparisons to any clock used as a reference for an International GNSS Service (IGS) site. Comparisons with the laser-cooled primary frequency standards that reported to the Bureau International des Poids et Mesures (BIPM) over this period show a frequency deviation less than 2.7 × 10^-17/day. In the capacity of a stand-alone ultra-stable flywheel, such a standard could be invaluable for long-term timekeeping applications in metrology labs, while its methodology and robustness make it ideal for space applications as well.
Prevalence and clinical implications of improper filter settings in routine electrocardiography.
Kligfield, Paul; Okin, Peter M
2007-03-01
High- and low-filter bandwidth governs the fidelity of electrocardiographic waveforms, including the durations used in established criteria for infarction, the amplitudes used for the diagnosis of ventricular hypertrophy, and the accuracy of the magnitudes of ST-segment elevation and depression. Electrocardiographs allow users to reset high- and low-filter settings for special electrocardiographic applications, but these may be used inappropriately. To examine the prevalence of standard and nonstandard electrocardiographic filtering at 1 general medical community, 256 consecutive outpatient electrocardiograms (ECGs) submitted in advance of ambulatory or same-day admission surgery during a 3-week period were examined. ECGs were considered to meet standards for low-frequency cutoff when equal to 0.05 Hz and to meet standards for high-frequency cutoff when equal to 100 Hz, according to American Heart Association recommendations established in 1975. Only 25% of ECGs (65 of 256) conformed to recommended standards; 75% of ECGs (191 of 254) did not. The most prevalent deviation from standard was reduced high-frequency cutoff, which was present in 96% of tracings with nonstandard bandwidth (most commonly 40 Hz). Increased low-frequency cutoff was present in 62% of ECGs in which it was documented. In conclusion, improper electrocardiographic filtering, with potentially adverse clinical consequences, is highly prevalent at 1 large general medical community and is likely a generalized problem. This problem should be resolvable by targeted educational efforts to reinforce technical standards in electrocardiography.
Recent results of the pulsed optically pumped rubidium clock
NASA Astrophysics Data System (ADS)
Levi, F.; Micalizio, S.; Godone, A.; Calosso, C.; Bertacco, E.
2017-11-01
A laboratory prototype of a pulsed optically pumped (POP) clock based on a rubidium cell with buffer gas is described. This clock has shown very interesting physical and metrological features, such as negligible light-shift, strongly reduced cavity-pulling and very good frequency stability. In this regard, an Allan deviation of σy(τ) = 1.2 τ^(-1/2) for measurement times up to τ = 10^5 s has been measured. These results confirm the interesting perspectives of such a frequency standard and make it very attractive for several technological applications, such as radionavigation.
A computer aided treatment event recognition system in radiation therapy.
Xia, Junyi; Mart, Christopher; Bayouth, John
2014-01-01
The aim of this work was to develop an automated system to safeguard radiation therapy treatments by analyzing electronic treatment records and reporting treatment events. CATERS (Computer Aided Treatment Event Recognition System) was developed to detect treatment events by retrieving and analyzing electronic treatment records. CATERS is designed to make the treatment monitoring process more efficient by automating the search of the electronic record for possible deviations from the physician's intention, such as logical inconsistencies as well as aberrant treatment parameters (e.g., beam energy, dose, table position, prescription change, treatment overrides, etc.). Over a 5 month period (July 2012-November 2012), physicists were assisted by the CATERS software in conducting normal weekly chart checks, with the aims of (a) determining the relative frequency of particular events in the authors' clinic and (b) incorporating these checks into CATERS. During this study period, 491 patients were treated at the University of Iowa Hospitals and Clinics for a total of 7692 fractions. All treatment records from the 5 month analysis period were evaluated using all the checks incorporated into CATERS after the training period. In total, 553 events were detected as exceptions, although none of them had a significant dosimetric impact on patient treatments. These events included every known event type that was discovered during the trial period. A frequency analysis of the events showed that the top three types of detected events were couch position override (3.2%), extra cone beam imaging (1.85%), and significant couch position deviation (1.31%). A significant couch deviation was defined as a treatment in which the couch vertical position exceeded two times the standard deviation of all couch vertical positions, or the couch lateral/longitudinal position exceeded three times the standard deviation of all couch lateral and longitudinal positions. On average, the application takes about 1 s per patient when executed on either a desktop computer or a mobile device. CATERS offers an effective tool to detect and report treatment events. Automation and rapid processing enable daily interrogation of the electronic record, alerting the medical physicist to deviations potentially days prior to the weekly check. The output of CATERS could also be utilized as an important input to failure mode and effects analysis.
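Under one reading of the couch-deviation rule above (deviation of each fraction's couch coordinate from the patient's own mean, compared against 2 or 3 standard deviations), such a check could look like the sketch below; the threshold interpretation and array layout are assumptions, not the CATERS implementation.

    import numpy as np

    def flag_couch_deviations(vert, lat, lng, k_vert=2.0, k_latlng=3.0):
        """Flag treatment fractions whose couch positions deviate from the patient's
        own mean by more than k_vert standard deviations (vertical) or k_latlng
        standard deviations (lateral/longitudinal).  Inputs are per-fraction couch
        coordinates for one patient; returns a boolean mask of flagged fractions."""
        vert, lat, lng = (np.asarray(a, dtype=float) for a in (vert, lat, lng))
        dv = np.abs(vert - vert.mean()) > k_vert * vert.std(ddof=1)
        dl = np.abs(lat - lat.mean()) > k_latlng * lat.std(ddof=1)
        dg = np.abs(lng - lng.mean()) > k_latlng * lng.std(ddof=1)
        return dv | dl | dg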
Temperature effects on wavelength calibration of the optical spectrum analyzer
NASA Astrophysics Data System (ADS)
Mongkonsatit, Kittiphong; Ranusawud, Monludee; Srikham, Sitthichai; Bhatranand, Apichai; Jiraraksopakun, Yuttapong
2018-03-01
This paper presents an investigation of temperature effects on the wavelength calibration of an optical spectrum analyzer (OSA). The characteristics of wavelength dependence on temperature are described and demonstrated under the guidance of IEC 62129-1:2006, the international standard for the calibration of wavelength/optical frequency measurement instruments - Part 1: Optical spectrum analyzer. Three distributed-feedback lasers emitting light at wavelengths of 1310 nm, 1550 nm, and 1600 nm were used as light sources in this work. Each beam was split by a 1 x 2 fiber splitter, with one output connected to a standard wavelength meter and the other to the OSA under test. Two experimental setups were arranged to analyze the wavelength reading deviations between the standard wavelength meter and the OSA under different temperature and humidity conditions. The experimental results showed that, for wavelengths of 1550 nm and 1600 nm, the wavelength deviations were proportional to temperature, with minimum and maximum values of -0.015 and 0.030 nm, respectively, whereas the deviations at 1310 nm did not change much with temperature, remaining in the range of -0.003 nm to 0.010 nm. The measurement uncertainty was also evaluated according to IEC 62129-1:2006; its main contribution came from the wavelength deviation. The uncertainty of measurement in this study is 0.023 nm with a coverage factor of k = 2.
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of bit cycle to remove the influence of NH code. Secondly, the maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under the CN0s of 20~40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can remove the effect of BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
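A hedged sketch of the differential-coherent idea for bit synchronization: delaying by one full data bit cancels the repeating NH code and converts a residual frequency error into a fixed phase rotation, after which candidate bit edges are scored by how coherently the differential products sum. This is an illustrative reading of the approach, not the paper's exact maximum-likelihood detector, and the 20-ms bit length is an assumption.

    import numpy as np

    def bit_sync_differential(prompt, bit_len=20):
        """Estimate the data-bit edge from 1-ms complex prompt correlator outputs.
        prompt  : complex array of 1-ms coherent correlation values
        bit_len : samples per navigation data bit (20 for a 20-ms bit)
        The differential product with a one-bit delay removes the NH code (which
        repeats every bit) and turns a frequency error into a constant phase;
        candidate edges are scored by the coherence of per-bit sums."""
        d = prompt[bit_len:] * np.conj(prompt[:-bit_len])   # differential sequence
        scores = np.zeros(bit_len)
        for m in range(bit_len):                            # candidate bit-edge offsets
            usable = (len(d) - m) // bit_len
            blocks = d[m:m + usable * bit_len].reshape(usable, bit_len)
            scores[m] = np.abs(blocks.sum(axis=1)).sum()    # coherent sum per bit, then accumulate
        return int(np.argmax(scores)), scores               # best offset peaks at the true edge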
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
Test-retest reliability of 3D ultrasound measurements of the thoracic spine.
Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian
2012-05-01
To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receiver. The real angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard error of measurements were used for statistical analyses. The test-retest reliability in this study was measured within a 24-hour interval. Statistical parameters were used to judge reliability. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinical acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measuring of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
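For reference, the Bland-Altman 95% limits of agreement used above are the mean between-day difference (bias) plus or minus 1.96 times the standard deviation of the differences; a minimal sketch:

    import numpy as np

    def bland_altman_limits(day1, day2):
        """Bland-Altman bias and 95% limits of agreement for two repeated
        measurements (e.g. kyphosis angle on consecutive days)."""
        day1, day2 = np.asarray(day1, dtype=float), np.asarray(day2, dtype=float)
        diff = day1 - day2
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)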
Bogucki, Sz; Noszczyk-Nowak, A
2017-03-28
Heart rate variability is an established risk factor for mortality in both healthy dogs and animals with heart failure. The aim of this study was to compare short-term heart rate variability (ST-HRV) parameters from 60-min electrocardiograms in dogs with sick sinus syndrome (SSS, n=20) or chronic mitral valve disease (CMVD, n=20) and healthy controls (n=50), and to verify the clinical application of ST-HRV analysis. The study groups differed significantly in terms of both time- and frequency-domain ST-HRV parameters. In the case of dogs with SSS and healthy controls, particularly evident differences pertained to HRV parameters linked directly to the variability of R-R intervals. Lower values of the standard deviation of all R-R intervals (SDNN), the standard deviation of the averaged R-R intervals for all 5-min segments (SDANN), the mean of the standard deviations of all R-R intervals for all 5-min segments (SDNNI) and the percentage of differences between successive R-R intervals >50 ms (pNN50) corresponded to a decrease in parasympathetic regulation of heart rate in dogs with CMVD. These findings imply that ST-HRV may be useful for the identification of dogs with SSS and for detection of dysautonomia in animals with CMVD.
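The time-domain indices named above have standard definitions; a short sketch computing SDNN, RMSSD, pNN50, SDANN and the SDNN index from a list of N-N intervals, with the 5-min segmentation simplified via cumulative beat times:

    import numpy as np

    def time_domain_hrv(rr_ms, segment_s=300.0):
        """Standard time-domain HRV indices from normal-to-normal (N-N) intervals
        given in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        t = np.cumsum(rr) / 1000.0                  # beat times in seconds
        seg = (t // segment_s).astype(int)          # 5-min segment label per beat
        seg_means = [rr[seg == s].mean() for s in np.unique(seg)]
        seg_sds = [rr[seg == s].std(ddof=1) for s in np.unique(seg) if (seg == s).sum() > 1]
        return {
            "SDNN":  rr.std(ddof=1),                       # SD of all N-N intervals
            "RMSSD": np.sqrt(np.mean(diff ** 2)),          # RMS of successive differences
            "pNN50": 100.0 * np.mean(np.abs(diff) > 50),   # % of successive differences > 50 ms
            "SDANN": np.std(seg_means, ddof=1),            # SD of the 5-min segment means
            "SDNNI": float(np.mean(seg_sds)),              # mean of the 5-min segment SDs
        }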
An auxiliary frequency tracking system for general purpose lock-in amplifiers
NASA Astrophysics Data System (ADS)
Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu
2018-04-01
Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
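The relation such a tracking scheme relies on can be written down directly: a constant frequency deviation makes the measured lock-in phase drift linearly, so two phase readings a known time apart give the deviation. The sketch below shows only that arithmetic; the specific three parameters read from the LIA and the serial-port handling are not specified in the abstract and are not reproduced here.

    def frequency_deviation(theta1_deg, theta2_deg, dt_s):
        """Estimate the signal-reference frequency deviation (Hz) from two lock-in
        phase readings (degrees) taken dt_s seconds apart.  The phase difference is
        wrapped into (-180, 180], so deviations that rotate the phase by more than
        half a turn between readings would be ambiguous with only two samples."""
        dtheta = (theta2_deg - theta1_deg + 180.0) % 360.0 - 180.0
        return dtheta / (360.0 * dt_s)

    # Example: the phase advances by 3.6 degrees over 10 s, i.e. a 1 mHz deviation
    print(frequency_deviation(10.0, 13.6, 10.0))   # 0.001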
Kocabeyoglu, Sibel; Uzun, Salih; Mocan, Mehmet Cem; Bozkurt, Banu; Irkec, Murat; Orhan, Mehmet
2013-10-01
The aim of this study was to compare the visual field test results in healthy children obtained via the Humphrey Matrix 24-2 threshold program and standard automated perimetry (SAP) using the Swedish interactive threshold algorithm (SITA)-Standard 24-2 test. This prospective study included 55 healthy children without ocular or systemic disorders who underwent both SAP and frequency doubling technology (FDT) perimetry visual field testing. Visual field test reliability indices, test duration, and global indices (mean deviation [MD] and pattern standard deviation [PSD]) were compared between the 2 tests using the Wilcoxon signed-rank test and paired t-test. The performance of the Humphrey field analyzer (HFA) 24-2 SITA-Standard and frequency-doubling technology Matrix 24-2 tests between genders was compared with the Mann-Whitney U-test. Fifty-five healthy children with a mean age of 12.2 ± 1.9 years (range, 8-16 years) were included in this prospective study. The test durations of SAP and FDT were similar (5.2 ± 0.5 and 5.1 ± 0.2 min, respectively, P = 0.651). MD and PSD values obtained via FDT Matrix were significantly higher than those obtained via SAP (P < 0.001), and fixation losses and false negative errors were significantly lower with SAP (P < 0.05). A weak positive correlation between the two tests in terms of MD (r = 0.352, P = 0.008) and PSD (r = 0.329, P = 0.014) was observed. Children were able to complete both visual field test algorithms successfully within 6 min. However, SAP testing appears to be associated with less depression of the visual field indices of healthy children. FDT Matrix and SAP should not be used interchangeably in the follow-up of children.
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation, and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
Kanadani, Fabio N; Mello, Paulo AA; Dorairaj, Syril K; Kanadani, Tereza CM
2014-01-01
Introduction The gold standard in functional glaucoma evaluation is standard automated perimetry (SAP). However, SAP depends on the reliability of the patients’ responses and other external factors; therefore, other technologies have been developed for earlier detection of visual field changes in glaucoma patients. The frequency-doubling perimetry (FDT) is believed to detect glaucoma earlier than SAP. The multifocal visual evoked potential (mfVEP) is an objective test for functional evaluation. Objective To evaluate the sensitivity and specificity of FDT and mfVEP tests in normal, suspect, and glaucomatous eyes and compare the monocular and interocular mfVEP. Methods Ninety-five eyes from 95 individuals (23 controls, 33 glaucoma suspects, 39 glaucomatous) were enrolled. All participants underwent a full ophthalmic examination, followed by SAP, FDT, and mfVEP tests. Results The area under the curve for mean deviation and pattern standard deviation were 0.756 and 0.761, respectively, for FDT, 0.564 and 0.512 for signal and alpha for interocular mfVEP, and 0.568 and 0.538 for signal and alpha for monocular mfVEP. This difference between monocular and interocular mfVEP was not significant. Conclusion The FDT Matrix was superior to mfVEP in glaucoma detection. The difference between monocular and interocular mfVEP in the diagnosis of glaucoma was not significant. PMID:25075173
Dual-Phase Lock-In Amplifier Based on FPGA for Low-Frequencies Experiments
Macias-Bobadilla, Gonzalo; Rodríguez-Reséndiz, Juvenal; Mota-Valtierra, Georgina; Soto-Zarazúa, Genaro; Méndez-Loyola, Maurino; Garduño-Aparicio, Mariano
2016-01-01
Photothermal techniques allow the detection of material characteristics without invading the sample. Researchers have developed hardware for specific phase and amplitude detection (lock-in function) applications, eliminating unnecessary space and electronic functions, among other benefits. This work shows the development of a Digital Lock-In Amplifier based on a Field Programmable Gate Array (FPGA) for low-frequency applications. The system allows the appropriate frequency to be selected and generated depending on the kind of experiment or material studied. The results show good frequency stability, on the order of 1.0 × 10−9 Hz, together with linearity and repeatability comparable to the most common laboratory amplitude and phase-shift detection devices, with low error and standard deviation. PMID:26999138
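A minimal sketch of the dual-phase (X/Y) demodulation that such an instrument performs, written in software rather than on an FPGA: the input is mixed with in-phase and quadrature references, low-pass filtered, and converted to amplitude and phase. The filter order, cutoff and test-signal parameters are illustrative.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def dual_phase_lockin(signal, fs, f_ref, cutoff_hz=10.0):
        """Dual-phase lock-in demodulation: mix the input with in-phase and
        quadrature references at f_ref, low-pass filter both products, and recover
        amplitude R and phase theta (theta is the negative of the signal's phase
        shift relative to the reference)."""
        t = np.arange(len(signal)) / fs
        b, a = butter(2, cutoff_hz / (fs / 2))                    # 2nd-order low-pass
        x = filtfilt(b, a, signal * np.cos(2 * np.pi * f_ref * t))   # in-phase component
        y = filtfilt(b, a, signal * np.sin(2 * np.pi * f_ref * t))   # quadrature component
        return 2 * np.sqrt(x ** 2 + y ** 2), np.arctan2(y, x)

    # Example: a 137 Hz tone of amplitude 0.5 with a 30-degree phase shift plus noise
    fs, f0 = 10_000, 137.0
    t = np.arange(0, 5, 1 / fs)
    sig = 0.5 * np.cos(2 * np.pi * f0 * t + np.deg2rad(30)) + 0.05 * np.random.randn(len(t))
    r, theta = dual_phase_lockin(sig, fs, f0)
    print(r[len(r) // 2], np.rad2deg(theta[len(t) // 2]))   # about 0.5 and about -30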
Modulation linearization of a frequency-modulated voltage controlled oscillator, part 3
NASA Technical Reports Server (NTRS)
Honnell, M. A.
1975-01-01
An analysis is presented for the voltage versus frequency characteristics of a varactor modulated VHF voltage controlled oscillator in which the frequency deviation is linearized by using the nonlinear characteristics of a field effect transistor as a signal amplifier. The equations developed are used to calculate the oscillator output frequency in terms of pertinent circuit parameters. It is shown that the nonlinearity exponent of the FET has a pronounced influence on frequency deviation linearity, whereas the junction exponent of the varactor controls total frequency deviation for a given input signal. A design example for a 250 MHz frequency modulated oscillator is presented.
A method for predicting the noise levels of coannular jets with inverted velocity profiles
NASA Technical Reports Server (NTRS)
Russell, J. W.
1979-01-01
A coannular jet was equated with a single stream equivalent jet with the same mass flow, energy, and thrust. The acoustic characteristics of the coannular jet were then related to the acoustic characteristics of the single jet. Forward flight effects were included by incorporating a forward exponent, a Doppler amplification factor, and a Strouhal frequency shift. Model test data, including 48 static cases and 22 wind tunnel cases, were used to evaluate the prediction method. For the static cases and the low forward velocity wind tunnel cases, the spectral mean square pressure correlation coefficients were generally greater than 90 percent, and the spectral sound pressure level standard deviations were generally less than 3 decibels. The correlation coefficient and the standard deviation were not affected by changes in equivalent jet velocity. Limitations of the prediction method are also presented.
Frequency Spectrum Neutrality Tests: One for All and All for One
Achaz, Guillaume
2009-01-01
Neutrality tests based on the frequency spectrum (e.g., Tajima's D or Fu and Li's F) are commonly used by population geneticists as routine tests to assess the goodness-of-fit of the standard neutral model on their data sets. Here, I show that these neutrality tests are specific instances of a general model that encompasses them all. I illustrate how this general framework can be taken advantage of to devise new more powerful tests that better detect deviations from the standard model. Finally, I exemplify the usefulness of the framework on SNP data by showing how it supports the selection hypothesis in the lactase human gene by overcoming the ascertainment bias. The framework presented here paves the way for constructing novel tests optimized for specific violations of the standard model that ultimately will help to unravel scenarios of evolution. PMID:19546320
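As a concrete instance of the class of tests discussed above, Tajima's D can be computed from the sample size, the number of segregating sites and the average pairwise diversity with the standard constants; a short sketch (the paper's generalized framework is not reproduced here):

    import numpy as np

    def tajimas_d(n, S, pi):
        """Tajima's D from sample size n, number of segregating sites S and
        average pairwise diversity pi, using the standard constants a1..e2."""
        if S == 0:
            return 0.0
        i = np.arange(1, n)
        a1 = np.sum(1.0 / i)
        a2 = np.sum(1.0 / i ** 2)
        b1 = (n + 1) / (3.0 * (n - 1))
        b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
        c1 = b1 - 1.0 / a1
        c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
        e1 = c1 / a1
        e2 = c2 / (a1 ** 2 + a2)
        return float((pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1)))

    # Example call with illustrative values of n, S and pi
    print(tajimas_d(n=20, S=15, pi=4.2))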
A phase and frequency alignment protocol for 1H MRSI data of the prostate.
Wright, Alan J; Buydens, Lutgarde M C; Heerschap, Arend
2012-05-01
(1)H MRSI of the prostate reveals relative metabolite levels that vary according to the presence or absence of tumour, providing a sensitive method for the identification of patients with cancer. Current interpretations of prostate data rely on quantification algorithms that fit model metabolite resonances to individual voxel spectra and calculate relative levels of metabolites, such as choline, creatine, citrate and polyamines. Statistical pattern recognition techniques can potentially improve the detection of prostate cancer, but these analyses are hampered by artefacts and sources of noise in the data, such as variations in phase and frequency of resonances. Phase and frequency variations may arise as a result of spatial field gradients or local physiological conditions affecting the frequency of resonances, in particular those of citrate. Thus, there are unique challenges in developing a peak alignment algorithm for these data. We have developed a frequency and phase correction algorithm for automatic alignment of the resonances in prostate MRSI spectra. We demonstrate, with a simulated dataset, that alignment can be achieved to a phase standard deviation of 0.095 rad and a frequency standard deviation of 0.68 Hz for the citrate resonances. Three parameters were used to assess the improvement in peak alignment in the MRSI data of five patients: the percentage of variance in all MRSI spectra explained by their first principal component; the signal-to-noise ratio of a spectrum formed by taking the median value of the entire set at each spectral point; and the mean cross-correlation between all pairs of spectra. These parameters showed a greater similarity between spectra in all five datasets and the simulated data, demonstrating improved alignment for phase and frequency in these spectra. This peak alignment program is expected to improve pattern recognition significantly, enabling accurate detection and localisation of prostate cancer with MRSI. Copyright © 2011 John Wiley & Sons, Ltd.
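A heavily simplified sketch of aligning one spectrum to a reference by an integer frequency shift (chosen by cross-correlating the magnitude spectra) followed by a zero-order phase correction; the paper's alignment algorithm for prostate MRSI is more elaborate, and the shift range here is an assumption.

    import numpy as np

    def align_spectrum(spec, reference, max_shift=20):
        """Align a complex spectrum to a reference: pick the integer frequency
        shift that maximises the magnitude cross-correlation, then remove the
        bulk (zero-order) phase difference relative to the reference."""
        best_shift, best_corr = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            corr = np.dot(np.abs(np.roll(spec, s)), np.abs(reference))
            if corr > best_corr:
                best_corr, best_shift = corr, s
        shifted = np.roll(spec, best_shift)
        phase = np.angle(np.vdot(reference, shifted))   # bulk phase vs reference
        return shifted * np.exp(-1j * phase), best_shift, phase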
Perception of force and stiffness in the presence of low-frequency haptic noise
Gurari, Netta; Okamura, Allison M.; Kuchenbecker, Katherine J.
2017-01-01
Objective This work lays the foundation for future research on quantitative modeling of human stiffness perception. Our goal was to develop a method by which a human’s ability to perceive suprathreshold haptic force stimuli and haptic stiffness stimuli can be affected by adding haptic noise. Methods Five human participants performed a same-different task with a one-degree-of-freedom force-feedback device. Participants used the right index finger to actively interact with variations of force (∼5 and ∼8 N) and stiffness (∼290 N/m) stimuli that included one of four scaled amounts of haptically rendered noise (None, Low, Medium, High). The haptic noise was zero-mean Gaussian white noise that was low-pass filtered with a 2 Hz cut-off frequency; the resulting low-frequency signal was added to the force rendered while the participant interacted with the force and stiffness stimuli. Results We found that the precision with which participants could identify the magnitude of both the force and stiffness stimuli was affected by the magnitude of the low-frequency haptically rendered noise added to the haptic stimulus, as well as the magnitude of the haptic stimulus itself. The Weber fraction strongly correlated with the standard deviation of the low-frequency haptic noise with a Pearson product-moment correlation coefficient of ρ > 0.83. The mean standard deviation of the low-frequency haptic noise in the haptic stimuli ranged from 0.184 N to 1.111 N across the four haptically rendered noise levels, and the corresponding mean Weber fractions spanned between 0.042 and 0.101. Conclusions The human ability to perceive both suprathreshold haptic force and stiffness stimuli degrades in the presence of added low-frequency haptic noise. Future work can use the reported methods to investigate how force perception and stiffness perception may relate, with possible applications in haptic watermarking and in the assessment of the functionality of peripheral pathways in individuals with haptic impairments. PMID:28575068
NASA Astrophysics Data System (ADS)
Zhu, Yu; Liu, Zhigang; Deng, Wen; Deng, Zhongwen
2018-05-01
Frequency-scanning interferometry (FSI) using an external cavity diode laser (ECDL) is essential for many applications of the absolute distance measurement. However, owing to the hysteresis and creep of the piezoelectric actuator inherent in the ECDL, the optical frequency scanning exhibits a nonlinearity that seriously affects the phase extraction accuracy of the interference signal and results in the reduction of the measurement accuracy. To suppress the optical frequency nonlinearity, a harmonic frequency synthesis method for shaping the desired input signal instead of the original triangular wave is presented. The effectiveness of the presented shaping method is demonstrated through the comparison of the experimental results. Compared with an incremental Renishaw interferometer, the standard deviation of the displacement measurement of the FSI system is less than 2.4 μm when driven by the shaped signal.
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...
Comparison of spectral estimators for characterizing fractionated atrial electrograms
2013-01-01
Background Complex fractionated atrial electrograms (CFAE) acquired during atrial fibrillation (AF) are commonly assessed using the discrete Fourier transform (DFT), but this can lead to inaccuracy. In this study, spectral estimators derived by averaging the autocorrelation function at lags were compared to the DFT. Method Bipolar CFAE of at least 16 s duration were obtained from pulmonary vein ostia and left atrial free wall sites (9 paroxysmal and 10 persistent AF patients). Power spectra were computed using the DFT and three other methods: 1. a novel spectral estimator based on signal averaging (NSE), 2. the NSE with harmonic removal (NSH), and 3. the autocorrelation function average at lags (AFA). Three spectral parameters were calculated: 1. the largest fundamental spectral peak, known as the dominant frequency (DF), 2. the DF amplitude (DA), and 3. the mean spectral profile (MP), which quantifies noise floor level. For each spectral estimator and parameter, the significance of the difference between paroxysmal and persistent AF was determined. Results For all estimators, mean DA and mean DF values were higher in persistent AF, while the mean MP value was higher in paroxysmal AF. The differences in means between paroxysmals and persistents were highly significant for 3/3 NSE and NSH measurements and for 2/3 DFT and AFA measurements (p<0.001). For all estimators, the standard deviation in DA and MP values were higher in persistent AF, while the standard deviation in DF value was higher in paroxysmal AF. Differences in standard deviations between paroxysmals and persistents were highly significant in 2/3 NSE and NSH measurements, in 1/3 AFA measurements, and in 0/3 DFT measurements. Conclusions Measurements made from all four spectral estimators were in agreement as to whether the means and standard deviations in three spectral parameters were greater in CFAEs acquired from paroxysmal or in persistent AF patients. Since the measurements were consistent, use of two or more of these estimators for power spectral analysis can be assistive to evaluate CFAE more objectively and accurately, which may lead to improved clinical outcome. Since the most significant differences overall were achieved using the NSE and NSH estimators, parameters measured from their spectra will likely be the most useful for detecting and discerning electrophysiologic differences in the AF substrate based upon frequency analysis of CFAE. PMID:23855345
Development of a frequency regulation duty-cycle for standardized energy storage performance testing
Rosewater, David; Ferreira, Summer
2016-05-25
The US DOE Protocol for uniformly measuring and expressing the performance of energy storage systems, first developed in 2012 through inclusive working group activities, provides standardized methodologies for evaluating an energy storage system's ability to supply specific services to electrical grids. This article elaborates on the data and decisions behind the duty-cycle used for frequency regulation in this protocol. Analysis of a year of publicly available frequency regulation control signal data from a utility was considered in developing the representative signal for this use case. Moreover, this analysis showed that signal standard deviation can be used as a metric for aggressiveness or rigor. From these data, we select representative 2 h long signals that exhibit nearly all of the dynamics of actual usage under two distinct regimens, one for average use and the other for highly aggressive use. Our results were combined into a 24-h duty-cycle comprised of average and aggressive segments. The benefits and drawbacks of the selected duty-cycle are discussed along with its potential implications for the energy storage industry.
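A hedged sketch of the windowing idea described above: the per-window standard deviation of a (here synthetic) regulation signal is used as the aggressiveness metric, and 2 h windows near the median and the 95th percentile of that metric stand in for the "average" and "aggressive" segments. The AR(1) signal, the 4 s sample period and the percentile choices are illustrative assumptions, not the protocol's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 4                                    # seconds between samples
n = 7 * 24 * 3600 // dt                   # one synthetic week of regulation signal
sig = np.empty(n)
sig[0] = 0.0
for i in range(1, n):                     # AR(1) stand-in for a real AGC signal
    sig[i] = 0.999 * sig[i - 1] + 0.02 * rng.standard_normal()
sig = np.clip(sig, -1, 1)

window = 2 * 3600 // dt                   # samples per 2 h window
segments = sig[: (n // window) * window].reshape(-1, window)
sigma = segments.std(axis=1)              # per-window std = "aggressiveness"

avg_seg = segments[np.argmin(np.abs(sigma - np.median(sigma)))]          # typical use
agg_seg = segments[np.argmin(np.abs(sigma - np.percentile(sigma, 95)))]  # heavy use
duty_cycle = np.concatenate([avg_seg, agg_seg])   # building block of a 24 h cycle
print(round(avg_seg.std(), 3), round(agg_seg.std(), 3))
```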
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
CODIFICATION, General Numbering, § 21.14 Deviations from standard organization of the Code of Federal Regulations: (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
Effects of blur and repeated testing on sensitivity estimates with frequency doubling perimetry.
Artes, Paul H; Nicolela, Marcelo T; McCormick, Terry A; LeBlanc, Raymond P; Chauhan, Balwantray C
2003-02-01
To investigate the effect of blur and repeated testing on sensitivity with frequency doubling technology (FDT) perimetry. One eye of 12 patients with glaucoma (mean deviation [MD] mean, -2.5 dB, range +0.5 to -4.3 dB) and 11 normal control subjects underwent six consecutive tests with the FDT N30 threshold program in each of two sessions. In session 1, blur was induced by trial lenses (-6.00, -3.00, 0.00, +3.00, and +6.00 D, in random order). In session 2, only the effects of repeated testing were evaluated. The MD and pattern standard deviation (PSD) indices were evaluated as functions of blur and of test order. By correcting the data of session 1 for the reduction of sensitivity with repeated testing (session 2), the effect of blur on FDT sensitivities was established, and its clinical consequences evaluated on total- and pattern-deviation probability maps. FDT sensitivities decreased with blur (by <0.5 dB/D) and with repeated testing (by approximately 2 dB between the first and sixth tests). Blur and repeated testing independently led to larger numbers of locations with significant total and pattern deviation. Sensitivity reductions were similar in normal control subjects and patients with glaucoma, at central and peripheral test locations and at locations with high and low sensitivities. However, patients with glaucoma showed larger deterioration in the total-deviation-probability maps. To optimize the performance of the device, refractive errors should be corrected and immediate retesting avoided. Further research is needed to establish the cause of sensitivity loss with repeated FDT testing.
Optimizing a remote sensing instrument to measure atmospheric surface pressure
NASA Technical Reports Server (NTRS)
Peckham, G. E.; Gatley, C.; Flower, D. A.
1983-01-01
Atmospheric surface pressure can be remotely sensed from a satellite by an active instrument which measures return echoes from the ocean at frequencies near the 60 GHz oxygen absorption band. The instrument is optimized by selecting its frequencies of operation, transmitter powers, and antenna size through a new procedure based on numerical simulation which maximizes the retrieval accuracy. The predicted standard deviation error in the retrieved surface pressure is 1 mb. In addition, the measurements can be used to retrieve water vapor, cloud liquid water, and sea state, which is related to wind speed.
Larson, Nicole; MacLehose, Rich; Fulkerson, Jayne A; Berge, Jerica M; Story, Mary; Neumark-Sztainer, Dianne
2013-12-01
Research has shown that adolescents who frequently share evening meals with their families experience more positive health outcomes, including diets of higher nutritional quality. However, little is known about families eating together at breakfast. This study examined sociodemographic differences in family meal frequencies in a population-based adolescent sample. In addition, this study examined associations of family breakfast meal frequency with dietary quality and weight status. Cross-sectional data from EAT 2010 (Eating and Activity in Teens) included anthropometric assessments and classroom-administered surveys completed in 2009-2010. Participants included 2,793 middle and high school students (53.2% girls, mean age=14.4 years) from Minneapolis/St Paul, MN, public schools. Usual dietary intake was self-reported on a food frequency questionnaire. Height and weight were measured. Regression models adjusted for sociodemographic characteristics, family dinner frequency, family functioning, and family cohesion were used to examine associations of family breakfast frequency with dietary quality and weight status. On average, adolescents reported having family breakfast meals 1.5 times (standard deviation=2.1) and family dinner meals 4.1 times (standard deviation=2.6) in the past week. There were racial/ethnic differences in family breakfast frequency, with the highest frequencies reported by adolescents of black, Hispanic, Native American, and mixed race/ethnicity. Family breakfast frequency was also positively associated with male sex, younger age, and living in a two-parent household. Family breakfast frequency was associated with several markers of better diet quality (such as higher intake of fruit, whole grains, and fiber) and lower risk for overweight/obesity. For example, adolescents who reported seven family breakfasts in the past week consumed an average of 0.37 additional daily fruit servings compared with adolescents who never had a family breakfast meal. Results suggest that eating breakfast together as a family can have benefits for adolescents' dietary intake and weight status. Copyright © 2013 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Gehrke, Sergio Alexandre; da Silva, Ulisses Tavares; Del Fabbro, Massimo
2015-12-01
The purpose of this study was to assess implant stability in relation to implant design (conical vs. semiconical and wide-pitch vs narrow-pitch) using resonance frequency analysis. Twenty patients with bilateral edentulous maxillary premolar region were selected. In one hemiarch, conical implants with wide pitch (group 1) were installed; in the other hemiarch, semiconical implants with narrow pitch were installed (group 2). The implant allocation was randomized. The implant stability quotient (ISQ) was measured by resonance frequency analysis immediately following implant placement to assess primary stability (time 1) and at 90 days after placement (time 2). In group 1, the mean and standard deviation ISQ for time 1 was 65.8 ± 6.22 (95% confidence interval [CI], 55 to 80), and for time 2, it was 68.0 ± 5.52 (95% CI, 57 to 77). In group 2, the mean and standard deviation ISQ was 63.6 ± 5.95 (95% CI, 52 to 78) for time 1 and 67.0 ± 5.71 (95% CI, 58 to 78) for time 2. The statistical analysis demonstrated significant difference in the ISQ values between groups at time 1 (P = .007) and no statistical difference at time 2 (P = .54). The greater primary stability of conical implants with wide pitch compared with semiconical implants with narrow pitch might suggest a preference for the former in case of the adoption of immediate or early loading protocols.
The Deep Space Network stability analyzer
NASA Technical Reports Server (NTRS)
Breidenthal, Julian C.; Greenhall, Charles A.; Hamell, Robert L.; Kuhnle, Paul F.
1995-01-01
A stability analyzer for testing NASA Deep Space Network installations during flight radio science experiments is described. The stability analyzer provides realtime measurements of signal properties of general experimental interest: power, phase, and amplitude spectra; Allan deviation; and time series of amplitude, phase shift, and differential phase shift. Input ports are provided for up to four 100 MHz frequency standards and eight baseband analog (greater than 100 kHz bandwidth) signals. Test results indicate the following upper bounds to noise floors when operating on 100 MHz signals: -145 dBc/Hz for phase noise spectrum further than 200 Hz from carrier, 2.5 × 10⁻¹⁵ (τ = 1 second) and 1.5 × 10⁻¹⁷ (τ = 1000 seconds) for Allan deviation, and 1 × 10⁻⁴ degrees for 1-second averages of phase deviation. Four copies of the stability analyzer have been produced, plus one transportable unit for use at non-NASA observatories.
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (P4.0, cc 1-4): standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0, cc 5-8): standard deviation, in seconds... SIGMAC: the standard deviation of the time from departure clearance to start of roll. SIGMAR: the standard deviation of the arrival runway...
NASA Astrophysics Data System (ADS)
Park, Sahnggi; Kim, Kap-Joong; Kim, Duk-Jun; Kim, Gyungock
2009-02-01
Third-order ring resonators are designed and their resonance frequency deviations are analyzed experimentally by processing them with E-beam lithography and ICP etching in a CMOS nano-fabrication laboratory. We developed a reliable method to identify and experimentally reduce the degree of deviation of each ring resonance frequency before completion of the fabrication process. The identified deviations can be minimized by the approach presented in this paper. It is expected that this method will provide a significant clue for making high-order multi-channel ring resonators.
NASA Astrophysics Data System (ADS)
Gienko, Elena; Kanushin, Vadim; Tolstikov, Alexander; Karpik, Alexander; Kosarev, Nikolay; Ganagina, Irina
2016-04-01
In 2015, research supported by Russian Science Foundation grant No. 14-27-00068 experimentally confirmed the possibility of measuring the gravity potential difference from the relativistic frequency shift of the mobile hydrogen standard CH1-1006 (relative frequency instability of the order of 10⁻¹⁴). The hydrogen frequency standard CH1-1006 was calibrated against the secondary standard WET 1-19 (SNIIM, Novosibirsk, Russia) and transported to the experiment site (a distance of 550 km, Russian Federation, Republic of Altai), where it was moved between the measured points over a distance of 35 km with a height difference of 850 meters. To synchronize the spatially separated standard CH1-1006 and the secondary standard WET 1-19, the "Common View" method was applied, based on processing the results of pseudorange and phase GNSS measurements at the clock placement sites. The change in the frequency of the standard CH1-1006, measured against the secondary standard WET 1-19 and associated with its movement between the points and the corresponding change of gravitational potential, was equal to 7.98 × 10⁻¹⁴. The root-mean-square two-sample frequency deviation of the standard over the time interval of the experiment was estimated at 7.27 × 10⁻¹⁵. To check the frequency-based determination of the gravity potential difference between the points, high-precision gravimetric measurements with an error of 6 µGal and GNSS measurements for coordinate determination in ITRF2008 with an accuracy of 2-5 cm were made. The difference between the frequency-based determination of the gravity potential difference and the control data from the GNSS and gravimetric measurements was estimated at 16% of the total value, which corresponds to the error of the frequency measurement in the experiment. The possibility of using a single movable frequency standard to determine the gravity potential difference at separated points using the "Common View" method, without optical links between base and mobile frequency standards, was demonstrated. Future improvement of the frequency standards and of the measurement technique developed in the course of our experiments will allow developing one of the most promising areas of relativistic geodesy: autonomous measurement of heights in a common world system, which is currently a vitally important problem of geodesy. The practical results obtained offer further opportunities for more accurate planning of experimental research and for the creation of a global relativistic geoid for the formation of a unified global system of heights.
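For orientation, a back-of-the-envelope check of the expected relativistic frequency shift for the quoted 850 m height difference, assuming the shift is approximately g·Δh/c² (the experiment, of course, measures the full gravity potential difference, not simply g·Δh):

```python
# Order-of-magnitude check of the gravitational frequency shift quoted above.
g = 9.81           # m/s^2, local gravity (assumed)
dh = 850.0         # m, height difference between the two points
c = 299_792_458.0  # m/s

frac_shift = g * dh / c**2
print(f"expected fractional frequency shift ~ {frac_shift:.2e}")   # ~9.3e-14
```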
Ozone trends and their relationship to characteristic weather patterns.
Austin, Elena; Zanobetti, Antonella; Coull, Brent; Schwartz, Joel; Gold, Diane R; Koutrakis, Petros
2015-01-01
Local trends in ozone concentration may differ by meteorological conditions. Furthermore, the trends occurring at the extremes of the ozone distribution are often not reported even though these may be very different from the trend observed at the mean or median and they may be more relevant to health outcomes. The objectives were to classify days of observation over a 16-year period into broad categories that capture salient daily local weather characteristics, to determine the rate of change in mean and median O3 concentrations within these different categories to assess how concentration trends are impacted by daily weather, and to further examine whether trends vary for observations in the extremes of the O3 distribution. We used k-means clustering to categorize days of observation based on the maximum daily temperature, standard deviation of daily temperature, mean daily ground level wind speed, mean daily water vapor pressure and mean daily sea-level barometric pressure. The five cluster solution was determined to be the appropriate one based on cluster diagnostics and cluster interpretability. Trends in cluster frequency and pollution trends within clusters were modeled using Poisson regression with penalized splines as well as quantile regression. There were five characteristic groupings identified. The frequency of days with large standard deviations in hourly temperature decreased over the observation period, whereas the frequency of warmer days with smaller deviations in temperature increased. O3 trends were significantly different within the different weather groupings. Furthermore, the rate of O3 change for the 95th percentile and 5th percentile was significantly different from the rate of change of the median for several of the weather categories. We found that O3 trends vary between different characteristic local weather patterns. O3 trends were significantly different between the different weather groupings, suggesting an important interaction between changes in prevailing weather conditions and O3 concentration.
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
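A minimal numeric illustration of the paper's visual idea: each deviation from the mean is pictured as a square, the variance is the area of the average square, and the standard deviation is its side length (the population, divide-by-n form is assumed here).

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
deviations = data - data.mean()
squares = deviations ** 2          # each deviation drawn as a square with this area

variance = squares.mean()          # area of the "average square" (population form)
std_dev = np.sqrt(variance)        # side length of that average square
print(variance, std_dev)           # 4.0  2.0
```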
Bekiroglu, Somer; Myrberg, Olle; Ostman, Kristina; Ek, Marianne; Arvidsson, Torbjörn; Rundlöf, Torgny; Hakkarainen, Birgit
2008-08-05
A 1H-nuclear magnetic resonance (NMR) spectroscopy method for quantitative determination of benzethonium chloride (BTC) as a constituent of grapefruit seed extract was developed. The method was validated, assessing its specificity, linearity, range, and precision, as well as accuracy, limit of quantification, and robustness. The method includes quantification using an internal reference standard, 1,3,5-trimethoxybenzene, and is regarded as simple, rapid, and easy to implement. A commercial grapefruit seed extract was studied and the experiments were performed on spectrometers operating at two different fields, 300 and 600 MHz proton frequencies, the former with a broad band (BB) probe and the latter equipped with both a BB probe and a CryoProbe. The average concentration for the product sample was 78.0, 77.8, and 78.4 mg/ml using the 300 MHz BB probe, the 600 MHz BB probe, and the CryoProbe, respectively. The standard deviation and relative standard deviation (R.S.D., in parentheses) for the average concentrations were 0.2 (0.3%), 0.3 (0.4%), and 0.3 mg/ml (0.4%), respectively.
Hendel, Michael D; Bryan, Jason A; Barsoum, Wael K; Rodriguez, Eric J; Brems, John J; Evans, Peter J; Iannotti, Joseph P
2012-12-05
Glenoid component malposition for anatomic shoulder replacement may result in complications. The purpose of this study was to define the efficacy of a new surgical method to place the glenoid component. Thirty-one patients were randomized for glenoid component placement with use of either novel three-dimensional computed tomographic scan planning software combined with patient-specific instrumentation (the glenoid positioning system group), or conventional computed tomographic scan, preoperative planning, and surgical technique, utilizing instruments provided by the implant manufacturer (the standard surgical group). The desired position of the component was determined preoperatively. Postoperatively, a computed tomographic scan was used to define and compare the actual implant location with the preoperative plan. In the standard surgical group, the average preoperative glenoid retroversion was -11.3° (range, -39° to 17°). In the glenoid positioning system group, the average glenoid retroversion was -14.8° (range, -27° to 7°). When the standard surgical group was compared with the glenoid positioning system group, patient-specific instrumentation technology significantly decreased (p < 0.05) the average deviation of implant position for inclination and medial-lateral offset. Overall, the average deviation in version was 6.9° in the standard surgical group and 4.3° in the glenoid positioning system group. The average deviation in inclination was 11.6° in the standard surgical group and 2.9° in the glenoid positioning system group. The greatest benefit of patient-specific instrumentation was observed in patients with retroversion in excess of 16°; the average deviation was 10° in the standard surgical group and 1.2° in the glenoid positioning system group (p < 0.001). Preoperative planning and patient-specific instrumentation use resulted in a significant improvement in the selection and use of the optimal type of implant and a significant reduction in the frequency of malpositioned glenoid implants. Novel three-dimensional preoperative planning, coupled with patient and implant-specific instrumentation, allows the surgeon to better define the preoperative pathology, select the optimal implant design and location, and then accurately execute the plan at the time of surgery.
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
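A hedged sketch of pooling relative standard deviations from replicate pairs: the pooling formula below (degree-of-freedom weighted root mean square of per-pair RSDs) is one common choice and may differ in detail from the report's procedure, and the replicate values are hypothetical.

```python
import numpy as np

# Hypothetical field-replicate pairs (concentrations in micrograms per liter).
replicates = np.array([
    [0.012, 0.014],
    [0.105, 0.093],
    [0.98,  1.05],
    [5.6,   5.4],
])

means = replicates.mean(axis=1)
stds = replicates.std(axis=1, ddof=1)          # per-pair standard deviation
rsds = stds / means                            # per-pair relative standard deviation

dof = replicates.shape[1] - 1                  # 1 degree of freedom per pair
pooled_rsd = np.sqrt(np.sum(dof * rsds**2) / np.sum(np.full(len(rsds), dof)))
print(f"pooled RSD = {100 * pooled_rsd:.1f} %")
```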
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
to evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. a quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants, as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify the practice, simulation with immediate feedback devices was used. there were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 (standard deviation 2.39). With a 95% confidence level, the mean scores in the pre-test were 6.4 (standard deviation 1.61), and 9.3 in the post-test (standard deviation 0.82, p <0.001); in practice, 9.1 (standard deviation 0.95) with performance equivalent to basic cardiopulmonary resuscitation, according to the feedback device; 43.7 (standard deviation 26.86) mean duration of the compression cycle by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions of 48.1 millimeter (standard deviation 10.49); volume of ventilation 742.7 (standard deviation 301.12); flow fraction percentage of 40.3 (standard deviation 10.03). the online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.
Calibration of laser vibrometers at frequencies up to 100 kHz and higher
NASA Astrophysics Data System (ADS)
Silva Pineda, Guillermo; von Martens, Hans-Jürgen; Rojas, Sergio; Ruiz, Arturo; Muñiz, Lorenzo
2008-06-01
Manufacturers and users of laser vibrometers exploit the wide frequency and intensity ranges of laser techniques, ranging over many decades (e.g., from 0.1 Hz to 100 MHz). Traceability to primary measurement standards is demanded over the specified measurement ranges of any measurement instrumentation. As the primary documentary standard ISO 16063-11 for the calibration of vibration transducers is restricted to 10 kHz, a new international standard for the calibration of laser vibrometers, ISO 16063-41, is under development. The current stage of the 2nd Committee Draft (CD) of the ISO standard specifies calibration methods for frequencies from 0.4 Hz to 50 kHz, which does not meet the demand for providing traceability at higher frequencies. New investigations will be presented which demonstrate the applicability of the laser interferometer methods specified in ISO 16063-11 and in the 2nd CD also at higher frequencies of 100 kHz and beyond. The three standard methods were simultaneously used for vibration displacement and acceleration measurements up to 100 kHz, and a fourth high-accuracy method has been developed and used. Their results in displacement and acceleration measurements deviated by less than 1% from each other at vibration displacement amplitudes in the order of 100 nm. In addition to the three interferometer methods specified in ISO 16063-11 and 16063-15, and in the 2nd Committee Draft of 16063-41 as well, measurement results will be presented. Examples of laser vibrometer calibrations will be demonstrated. Further investigations are aimed
Mooij, Anne H; Frauscher, Birgit; Amiri, Mina; Otte, Willem M; Gotman, Jean
2016-12-01
To assess whether there is a difference in the background activity in the ripple band (80-200 Hz) between epileptic and non-epileptic channels, and to assess whether this difference is sufficient for their reliable separation. We calculated the mean and standard deviation of wavelet entropy in 303 non-epileptic and 334 epileptic channels from 50 patients with intracerebral depth electrodes and used these measures as predictors in a multivariable logistic regression model. We assessed sensitivity, positive predictive value (PPV) and negative predictive value (NPV) based on a probability threshold corresponding to 90% specificity. The probability of a channel being epileptic increased with higher mean (p=0.004) and particularly with higher standard deviation (p<0.0001). The performance of the model was, however, not sufficient for fully classifying the channels. With a threshold corresponding to 90% specificity, sensitivity was 37%, PPV was 80%, and NPV was 56%. A channel with a high standard deviation of entropy is likely to be epileptic; with a threshold corresponding to 90% specificity our model can reliably select a subset of epileptic channels. Most studies have concentrated on brief ripple events. We showed that background activity in the ripple band also has some ability to discriminate epileptic channels. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
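A hedged sketch of the modelling step described above, using scikit-learn and synthetic per-channel features (mean and standard deviation of wavelet entropy); the feature distributions and the way the 90%-specificity threshold is chosen are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-channel features: mean and SD of wavelet entropy in 80-200 Hz.
X_non = np.column_stack([rng.normal(0.60, 0.05, 303), rng.normal(0.08, 0.02, 303)])
X_epi = np.column_stack([rng.normal(0.63, 0.05, 334), rng.normal(0.11, 0.03, 334)])
X = np.vstack([X_non, X_epi])
y = np.r_[np.zeros(303), np.ones(334)]         # 1 = epileptic channel

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# Probability threshold giving ~90% specificity on the non-epileptic channels.
threshold = np.percentile(p[y == 0], 90)
pred = p >= threshold
sensitivity = pred[y == 1].mean()
ppv = y[pred].mean()
print(f"threshold={threshold:.2f}  sensitivity={sensitivity:.2f}  PPV={ppv:.2f}")
```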
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.
2016-01-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478
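A hedged sketch of baseline removal by interpolating through per-beat isoelectric points: np.interp below is plain linear interpolation between non-uniform points, a simplified stand-in for the authors' segmented piecewise-linear scheme, and the drift and point positions are synthetic.

```python
import numpy as np

def remove_baseline(t, ecg, t_iso, v_iso):
    """Subtract a piecewise-linear baseline drawn through isoelectric points.

    t, ecg : sample times (s) and ECG samples (mV)
    t_iso  : times of the detected isoelectric points (non-uniformly spaced)
    v_iso  : ECG amplitude at those points
    """
    baseline = np.interp(t, t_iso, v_iso)   # plain linear interpolation
    return ecg - baseline

# Example: 0.1 Hz sinusoidal drift with roughly three baseline points per second.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)
ecg = drift + 0.05 * np.random.randn(t.size)      # drift + noise stand-in for ECG
t_iso = np.arange(0, 10, 0.33)
v_iso = np.interp(t_iso, t, drift)
clean = remove_baseline(t, ecg, t_iso, v_iso)
print(f"RMS drift before: {drift.std():.3f} mV"
      f"   residual after removal: {(clean - (ecg - drift)).std():.3f} mV")
```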
Wang, Yuanguo; Zheng, Chichao; Peng, Hu; Chen, Qiang
2018-06-12
The beamforming performance has a large impact on image quality in ultrasound imaging. Previously, several adaptive weighting factors, including the coherence factor (CF) and the generalized coherence factor (GCF), have been proposed to improve image resolution and contrast. In this paper, we propose a new adaptive weighting factor for ultrasound imaging, called the signal mean-to-standard-deviation factor (SMSF). SMSF is defined as the mean-to-standard-deviation ratio of the aperture data and is used to weight the output of the delay-and-sum (DAS) beamformer before image formation. Moreover, we develop a robust SMSF (RSMSF) by extending the SMSF to the spatial frequency domain using an altered spectrum of the aperture data. In addition, a square neighborhood average is applied to the RSMSF to offer a more smoothed square neighborhood RSMSF (SN-RSMSF) value. We compared our methods with DAS, CF, and GCF using simulated and experimental synthetic aperture data sets. The quantitative results show that SMSF results in an 82% lower full width at half-maximum (FWHM) but a 12% lower contrast ratio (CR) compared with CF. Moreover, the SN-RSMSF leads to 15% and 10% improvement, on average, in FWHM and CR compared with GCF while maintaining the speckle quality. This demonstrates that the proposed methods can effectively improve the image resolution and contrast. Copyright © 2018 Elsevier B.V. All rights reserved.
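A hedged sketch of SMSF weighting as described above: the per-sample |mean|/standard-deviation of the delayed aperture data multiplies the DAS output. Normalisation details, the robust spatial-frequency extension (RSMSF) and the neighbourhood averaging (SN-RSMSF) are omitted, and the data are synthetic.

```python
import numpy as np

def smsf_weighted_das(aperture_data, eps=1e-12):
    """Delay-and-sum output weighted by a signal mean-to-standard-deviation factor.

    aperture_data : (n_elements, n_samples) array of already delayed channel data.
    """
    das = aperture_data.mean(axis=0)              # conventional (scaled) DAS output
    mu = aperture_data.mean(axis=0)
    sigma = aperture_data.std(axis=0) + eps       # eps avoids division by zero
    smsf = np.abs(mu) / sigma                     # adaptive weighting factor
    return das * smsf

# Toy example: coherent 5 MHz tone plus incoherent noise across 64 elements.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5e6 * np.arange(256) / 40e6)   # 40 MHz sampling
channels = signal + 0.5 * rng.standard_normal((64, 256))
print(smsf_weighted_das(channels).shape)                    # (256,)
```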
NASA Technical Reports Server (NTRS)
Hamell, Robert L.; Kuhnle, Paul F.; Sydnor, Richard L.
1992-01-01
Measuring the performance of ultra stable frequency standards such as the Superconducting Cavity Maser Oscillator (SCMO) necessitates improvement of some test instrumentation. The frequency stability test equipment used at JPL includes a 1 Hz Offset Generator to generate a beat frequency between a pair of 100 MHz signals that are being compared. The noise floor of the measurement system using the current Offset Generator is adequate to characterize the stability of hydrogen masers, but it is not adequate for the SCMO. A new Offset Generator with improved stability was designed and tested at JPL. With this Offset Generator and a new Zero Crossing Detector, recently developed at JPL, the measurement noise floor was reduced by a factor of 5.5 at a tau of 1 second, 3.0 at 1000 seconds, and 9.4 at 10,000 seconds, compared against the previous design. In addition to the new circuit designs of the Offset Generator and Zero Crossing Detector, tighter control of the measurement equipment environment was required to achieve this improvement. The design of this new Offset Generator is described, along with details of the environment control methods used.
allantools: Allan deviation calculation
NASA Astrophysics Data System (ADS)
Wallin, Anders E. E.; Price, Danny C.; Carson, Cantwell G.; Meynadier, Frédéric
2018-04-01
allantools calculates the Allan deviation and related time & frequency statistics. The library is written in Python and has a GPL v3+ license. It takes as input evenly spaced observations of either fractional frequency or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
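A minimal usage sketch assuming the allantools package described above is installed; the function name and arguments follow its documented interface, but check the version you use. Synthetic white frequency noise is generated with NumPy rather than the package's own noise generators.

```python
import numpy as np
import allantools

# Synthetic fractional-frequency data: white frequency noise, 1 sample per second.
rng = np.random.default_rng(0)
y = 1e-12 * rng.standard_normal(100_000)

# Overlapping Allan deviation at octave-spaced averaging times (tau in seconds).
taus, adev, adev_err, n = allantools.oadev(y, rate=1.0, data_type="freq", taus="octave")
for t, a in zip(taus, adev):
    print(f"tau = {t:8.0f} s   ADEV = {a:.2e}")
# For white FM noise the ADEV should fall off roughly as tau**(-1/2).
```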
Alcohol consumption for simulated driving performance: A systematic review.
Rezaee-Zavareh, Mohammad Saeid; Salamati, Payman; Ramezani-Binabaj, Mahdi; Saeidnejad, Mina; Rousta, Mansoureh; Shokraneh, Farhad; Rahimi-Movaghar, Vafa
2017-06-01
Alcohol consumption can lead to risky driving and increase the frequency of traffic accidents, injuries and mortalities. The main purpose of our study was to compare, in a systematic review, simulated driving performance between two groups of drivers, one that consumed alcohol and one that did not. In this systematic review, electronic resources and databases including Medline via Ovid SP, EMBASE via Ovid SP, PsycINFO via Ovid SP, PubMed, Scopus, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost were comprehensively and systematically searched. Randomized controlled clinical trials that compared simulated driving performance between two groups of drivers, one that consumed alcohol and one that did not, were included. Lane position standard deviation (LPSD), mean of lane position deviation (MLPD), speed, mean of speed deviation (MSD), standard deviation of speed deviation (SDSD), number of accidents (NA) and line crossing (LC) were considered as the main outcome parameters. After title and abstract screening, the articles were enrolled for data extraction and evaluated for risk of bias. Thirteen papers were included in our qualitative synthesis. All included papers were classified as having a high risk of bias. Alcohol consumption mostly deteriorated the following performance outcomes, in descending order: SDSD, LPSD, speed, MLPD, LC and NA. Our systematic review was limited by considerable heterogeneity. Alcohol consumption may decrease simulated driving performance in people who consumed alcohol compared with those who did not, via changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended. Copyright © 2017. Production and hosting by Elsevier B.V.
Wesselink, Christiaan; Jansonius, Nomdo M
2017-09-01
To determine the usefulness of frequency doubling perimetry (FDT) for progression detection in glaucoma, compared to standard automated perimetry (SAP). Data were used from 150 eyes of 150 glaucoma patients from the Groningen Longitudinal Glaucoma Study. After baseline, SAP was performed approximately yearly; FDT every other year. First and last visit had to contain both tests. Using linear regression, progression velocities were calculated for SAP (Humphrey Field Analyzer) mean deviation (MD) and FDT MD and the number of test locations with a total deviation probability below p < 0.01 (TD). Progression velocity tertiles were determined and eyes were classified as slowly, intermediately, or fast progressing for both techniques. Comparisons between SAP and FDT classifications were made using a Mantel Haenszel chi-square test. Longitudinal signal-to-noise ratios (LSNRs) were calculated, per patient and per technique, defined as progression velocity divided by the standard deviation of the residuals. Mean (SD) follow-up was 6.4 (1.7) years; median (interquartile range [IQR]) baseline SAP MD -6.6 (-14.2 to -3.6) dB. On average 8.2 and 4.5 tests were performed for SAP and FDT, respectively. Median (IQR) MD slope was -0.16 (-0.46 to +0.02) dB/year for SAP and -0.05 (-0.39 to +0.17) dB/year for FDT. Mantel Haenszel chi-squares of SAP MD vs FDT MD and TD were 12.5 (p < 0.001) and 15.8 (p < 0.001), respectively. LSNRs for SAP MD (median -0.17 yr⁻¹) were better than those for FDT MD (-0.04 yr⁻¹; p = 0.010). FDT may be a useful technique for monitoring glaucoma progression in patients who cannot perform SAP reliably. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
Autonomic cardiovascular control recovery in quadriplegics after handcycle training.
Abreu, Elizângela Márcia de Carvalho; Alves, Rani de Souza; Borges, Ana Carolina Lacerda; Lima, Fernanda Pupio Silva; Júnior, Alderico Rodrigues de Paula; Lima, Mário Oliveira
2016-07-01
The aim of this study was to investigate the cardiovascular autonomic acute response, during recovery after handcycle training, in quadriplegics with spinal cord injury (SCI). [Subjects and Methods] Seven quadriplegics (SCIG, level C6-C7, male, age 28.00 ± 6.97 years) and eight healthy subjects (CG, male, age 25.00 ± 7.38 years) were studied. Their heart rate variability (HRV) was assessed before and after one handcycle training session. [Results] After the training, the SCIG showed significantly reduced: intervals between R waves of the electrocardiogram (RR), standard deviation of the NN intervals (SDNN), square root of the mean squared differences of successive NN intervals (rMSSD), low frequency power (LF), high frequency power (HF), and Poincaré plot indices (standard deviation of short-term HRV, SD1, and standard deviation of long-term HRV, SD2). The SDNN, LF, and SD2 remained decreased during the recovery time. The CG showed significantly reduced: RR, rMSSD, number of pairs of adjacent NN intervals differing by more than 50 ms (pNN50), LF, HF, SD1, and sample entropy (SampEn). Among these parameters, only RR remained decreased during recovery time. Comparisons of the means of the HRV parameters between the CG and SCIG showed that the SCIG had significantly lower pNN50, LF, HF, and SampEn before training, while immediately after training, the SCIG had significantly lower SDNN, LF, HF, and SD2. The rMSSD30s of the SCIG was significantly reduced in the 180- and 330-second windows, and in the 300-second window in the CG. [Conclusion] There was a reduction of sympathetic and parasympathetic activity in the recovery period after the training in both groups; however, the CG showed a higher HRV. The parasympathetic activity also gradually increased after training, and in the SCIG, this activity remained reduced even at three minutes after the end of training, which suggests a deficiency in parasympathetic reactivation in quadriplegics after SCI.
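For reference, a short sketch of the standard time-domain HRV indices mentioned above (SDNN, rMSSD, pNN50 and the Poincaré SD1/SD2), computed from a synthetic RR series; the frequency-domain indices (LF, HF) require a spectrum of the resampled RR series and are omitted.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Common time-domain HRV indices from a series of RR intervals (ms)."""
    rr = np.asarray(rr_ms, float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                          # SDNN
    rmssd = np.sqrt(np.mean(diff ** 2))            # rMSSD
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50)     # pNN50 (%)
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)      # Poincare short-term SD
    sd2 = np.sqrt(max(2 * sdnn**2 - sd1**2, 0.0))  # Poincare long-term SD
    return dict(SDNN=sdnn, rMSSD=rmssd, pNN50=pnn50, SD1=sd1, SD2=sd2)

# Example on a synthetic resting rhythm around 800 ms (75 bpm).
rng = np.random.default_rng(2)
rr = 800 + np.cumsum(rng.normal(0, 5, 300)) * 0.1 + rng.normal(0, 30, 300)
print({k: round(v, 1) for k, v in hrv_time_domain(rr).items()})
```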
Gallstones in children with sickle cell disease followed up at a Brazilian hematology center.
Gumiero, Ana Paula dos Santos; Bellomo-Brandão, Maria Angela; Costa-Pinto, Elizete Aparecida Lomazi da
2008-01-01
Sickle cell disease causes chronic and recurrent hemolysis which is a recognized risk factor for cholelithiasis. This complication occurs in 50% of adults with sickle cell disease. Surgery is the consensual therapy for symptomatic patients, but the surgical approach is still controversial in asymptomatic individuals. To determine the frequency and to describe and discuss the outcome of children with sickle cell disease complicated with gallstones followed up at a tertiary pediatric hematology center. In a retrospective and descriptive study, 225 charts were reviewed and data regarding patient outcome were recorded. The prevalence of cholelithiasis was 45% and half the patients were asymptomatic. The mean age at the time of diagnosis of cholelithiasis and surgical treatment was 12.5 years (standard deviation = 5) and 14 years (standard deviation = 5.4), respectively. The prevalence of cholelithiasis was higher in patients with SS homozygous and Sb heterozygous thalassemia when compared to patients with sickle cell disease. In 50% of symptomatic patients, recurrent abdominal pain was the single or predominant symptom. Thirty-nine of 44 patients submitted to surgery reported symptom relief after the surgical procedure. Asymptomatic individuals who did not undergo surgical treatment were followed up for 7 years (standard deviation = 4.8), and none of them presented complications related to cholelithiasis during this period. The frequency of cholelithiasis in the study population was 45%. One-third of the patients were diagnosed before 10 years of age. Patients with the SS homozygous or Sb heterozygous phenotype were at a higher risk for the development of cholelithiasis than patients with sickle cell disease. About 50% of patients with gallstones were asymptomatic; most of them did not undergo surgery and did not present complications during a 7-year follow-up period. Cholecystectomy must be considered in symptomatic patients. In asymptomatic patients, conservative management seems to be the better choice.
Mustafa, Gulgun; Kursat, Fidanci Muzaffer; Ahmet, Tas; Alparslan, Genc Fatih; Omer, Gunes; Sertoglu, Erdem; Erkan, Sarı; Ediz, Yesilkaya; Turker, Turker; Ayhan, Kılıc
Childhood obesity is a worldwide health concern. Studies have shown autonomic dysfunction in obese children. The exact mechanism of this dysfunction is still unknown. The aim of this study was to assess the relationship between erythrocyte membrane fatty acid (EMFA) levels and cardiac autonomic function in obese children using heart rate variability (HRV). A total of 48 obese and 32 healthy children were included in this case-control study. Anthropometric and biochemical data, HRV indices, and EMFA levels in both groups were compared statistically. HRV parameters including standard deviation of normal-to-normal R-R intervals (NN), root mean square of successive differences, the number of pairs of successive NNs that differ by >50 ms (NN50), the proportion of NN50 divided by the total number of NNs, high-frequency power, and low-frequency power were lower in obese children compared to controls, implying parasympathetic impairment. Eicosapentaenoic acid and docosahexaenoic acid levels were lower in the obese group (p<0.001 and p=0.012, respectively). In correlation analysis, in the obese group, body mass index standard deviation and linoleic acid, arachidonic acid, triglycerides, and high-density lipoprotein levels showed a linear correlation with one or more HRV parameter, and age, eicosapentaenoic acid, and systolic and diastolic blood pressure correlated with mean heart rate. In linear regression analysis, age, dihomo-gamma-linolenic acid, linoleic acid, arachidonic acid, body mass index standard deviation, systolic blood pressure, triglycerides, low-density lipoprotein and high-density lipoprotein were related to HRV parameters, implying an effect on cardiac autonomic function. There is impairment of cardiac autonomic function in obese children. It appears that levels of EMFAs such as linoleic acid, arachidonic acid and dihomo-gamma-linolenic acid play a role in the regulation of cardiac autonomic function in obese children. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.
Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G
1995-01-01
A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue-signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with inadequately low pulse repetition frequency, or inadequately high reject threshold. An overestimation occurred with inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density, and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density, and 20% for the mean color value. In conclusion, computer assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions as well as physical and instrumental factors may considerably influence the results.
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2018-06-01
The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
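A hedged sketch of the standardized-return check described above, on synthetic diffusion-like returns with no microstructure noise (so no bias correction is needed); realized variance is taken as the plain sum of squared intraday returns.

```python
import numpy as np

rng = np.random.default_rng(3)

def realized_variance(intraday_returns):
    """Realized variance = sum of squared intraday returns (no bias correction)."""
    return np.sum(intraday_returns ** 2)

# Simulate 250 "days" of returns with daily volatility 1%, sampled every minute.
n_days, n_intra = 250, 390
sigma_day = 0.01
intraday = rng.normal(0.0, sigma_day / np.sqrt(n_intra), size=(n_days, n_intra))

rv = np.array([realized_variance(day) for day in intraday])   # per-day RV
daily_ret = intraday.sum(axis=1)
z = daily_ret / np.sqrt(rv)                                    # standardized returns

# Under the stochastic-diffusion assumption z should be close to standard normal.
print(f"mean={z.mean():+.3f}  var={z.var():.3f}  kurtosis={np.mean(z**4)/z.var()**2:.2f}")
```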
Calibration of semi-stochastic procedure for simulating high-frequency ground motions
Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert
2013-01-01
Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw < 7 to zero for Mw 8. Ground motions simulated with the updated parameterization exhibit significantly reduced distance attenuation bias and revised dispersion terms are more compatible with those from empirical models but remain lower at large distances (e.g., > 100 km).
Low-dose CT for quantitative analysis in acute respiratory distress syndrome
2013-08-31
noise of scans performed at 140, 60, 15 and 7.5 mAs corresponded to 10, 16, 38 and 74 Hounsfield Units, respectively. Conclusions: A reduction of... slice of a series, total lung volume, total lung tissue mass and frequency distribution of lung CT numbers expressed in Hounsfield Units (HU) were... tomography; HU: Hounsfield units; CTDIvol: volumetric computed tomography dose index; DLP: dose length product; E: effective dose; SD: standard deviation
1977-01-01
balanced at the mean, with the central part steeper (platykurtic: broad mode or truncated tails) or flatter (leptokurtic: peaked mode or extended... and NUPUR, have negative kurtosis (they are platykurtic, with truncated tails and/or broad modes relative to their standard deviations). FERRO, on the... the other areas, and its gradients are platykurtic but almost unskewed. Hence the square root of sine transformation (Fig. 15) and the log tangent
Qubit dephasing due to low-frequency noise.
NASA Astrophysics Data System (ADS)
Sverdlov, Victor; Rabenstein, Kristian; Averin, Dmitri
2004-03-01
We have numerically investigated the effects of classical low-frequency noise on the qubit dynamics beyond the standard lowest-order perturbation theory in the coupling. Noise is generated as a random process with a correlation function characterized by two parameters, the amplitude v_0 and the cut-off frequency 2π/τ. Time evolution of the density matrix was averaged over up to 10^7 noise realizations. Contrary to the relaxation time T_1, which for v_0 < ω, where ω is the qubit oscillation frequency, is always given correctly by the "golden-rule" expression, the dephasing time deviates from the perturbation-theory result when (v_0/ω)^2(ωτ) ≥ 1. In this regime, even for an unbiased qubit, for which the pure dephasing vanishes in perturbation theory, the dephasing is much larger than its perturbation-theory value 1/(2 T_1).
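A simplified numerical sketch of averaging over noise realizations: pure dephasing of a qubit coupled longitudinally to Ornstein-Uhlenbeck noise of rms amplitude v_0 and correlation time tau_c, compared with the motional-narrowing (perturbative) rate v_0²·tau_c. The parameters are illustrative, and the transverse-coupling case studied in the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ornstein-Uhlenbeck frequency noise with rms amplitude v0 and correlation time tau_c.
# The coherence is the noise average of exp(i * accumulated phase).
v0, tau_c, dt, n_steps, n_real = 0.05, 2.0, 0.1, 4000, 10_000

decay = np.exp(-dt / tau_c)
kick = v0 * np.sqrt(1.0 - decay**2)

delta = v0 * rng.standard_normal(n_real)      # stationary initial noise values
phase = np.zeros(n_real)
coherence = np.empty(n_steps)
for k in range(n_steps):
    coherence[k] = np.abs(np.mean(np.exp(1j * phase)))
    phase += delta * dt                       # accumulate the random phase
    delta = decay * delta + kick * rng.standard_normal(n_real)

t = np.arange(n_steps) * dt
t_1e = t[np.argmax(coherence < np.exp(-1))]   # first time coherence drops below 1/e
print(f"simulated 1/e dephasing time ~ {t_1e:.0f}")
print(f"motional-narrowing estimate 1/(v0^2 * tau_c) = {1.0 / (v0**2 * tau_c):.0f}")
```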
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
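A hedged sketch of the circular statistics used above, treating segment strikes as axial (180°-periodic) data by doubling the angles; the paper may additionally weight segments by length or use slightly different definitions, and the strike values below are hypothetical.

```python
import numpy as np

def circular_stats(strike_deg):
    """Mean direction and circular standard deviation for axial (0-180 deg) data.

    Segment orientations are undirected, so angles are doubled before averaging
    (a standard trick for axial data) and the results are halved afterwards.
    """
    theta = np.deg2rad(np.asarray(strike_deg, float)) * 2.0
    C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    R = np.hypot(C, S)                               # mean resultant length
    mean_dir = np.rad2deg(np.arctan2(S, C)) / 2.0 % 180.0
    circ_std = np.rad2deg(np.sqrt(-2.0 * np.log(R))) / 2.0
    return mean_dir, circ_std

# Hypothetical segment strikes (degrees) for a smoother and a more irregular trace.
mature = [318, 320, 321, 319, 322, 317, 320, 323]
young = [300, 325, 340, 310, 355, 290, 330, 345]
print(circular_stats(mature))   # small circular standard deviation
print(circular_stats(young))    # larger dispersion, i.e. a less mature trace
```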
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
NASA Technical Reports Server (NTRS)
Costain, C.; Boulanger, J. S.; Daams, H.; Hanson, D. W.; Beehler, R. E.; Clements, A. J.; Davis, D. D.; Klepczynski, W. J.; Veenstra, L. B.; Kaiser, J.
1979-01-01
In most of the experiments, 1 pps pulses of the station atomic clocks were exchanged between the partners, and a cubic equation was fitted to the 1000 to 2000 second measurements. The equations were exchanged and subtracted to obtain the time difference of the stations. The standard deviation in the fit of the equations varied, depending on conditions, from 1.5 ns to 16 ns. For the last month of the Hermes experiment, a 1 MHz signal was used, giving a standard deviation of 0.18 ns. The comparison of the time scales via satellite and via Loran-C (BIH Circular D) shows clearly that some Loran-C links are very good, but that the NBS link varies by 1 μs. Via the satellite, the frequencies of the time scales can be compared with an accuracy of 2 × 10⁻¹⁴.
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
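A short example of the descriptive statistics and frequency distribution discussed above for a continuous variable, using hypothetical length-of-stay data (sample, n-1 forms assumed).

```python
import numpy as np

# Continuous quantitative variable: lengths of stay in days (hypothetical).
los = np.array([2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5, 6.1, 3.3, 4.8])

mean = los.mean()
median = np.median(los)
variance = los.var(ddof=1)        # sample variance (n - 1 in the denominator)
std_dev = los.std(ddof=1)         # sample standard deviation

counts, bin_edges = np.histogram(los, bins=4)   # frequency distribution (histogram)
print(mean, median, round(variance, 2), round(std_dev, 2))
print(counts, bin_edges.round(1))
```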
NASA Astrophysics Data System (ADS)
Kürbis, K.; Mudelsee, M.; Tetzlaff, G.; Brázdil, R.
2009-09-01
For the analysis of trends in weather extremes, we introduce a diagnostic index variable, the exceedance product, which combines intensity and frequency of extremes. We separate trends in higher moments from trends in mean or standard deviation and use bootstrap resampling to evaluate statistical significances. The application of the concept of the exceedance product to daily meteorological time series from Potsdam (1893 to 2005) and Prague-Klementinum (1775 to 2004) reveals that extremely cold winters occurred only until the mid-20th century, whereas warm winters show upward trends. These changes were significant in higher moments of the temperature distribution. In contrast, trends in summer temperature extremes (e.g., the 2003 European heatwave) can be explained by linear changes in mean or standard deviation. While precipitation at Potsdam does not show pronounced trends, dew point does exhibit a change from maximum extremes during the 1960s to minimum extremes during the 1970s.
Real-time combustion control and diagnostics sensor-pressure oscillation monitor
Chorpening, Benjamin T [Morgantown, WV; Thornton, Jimmy [Morgantown, WV; Huckaby, E David [Morgantown, WV; Richards, George A [Morgantown, WV
2009-07-14
An apparatus and method for monitoring and controlling the combustion process in a combustion system to determine the amplitude and/or frequencies of dynamic pressure oscillations during combustion. An electrode in communication with the combustion system senses hydrocarbon ions and/or electrons produced by the combustion process and calibration apparatus calibrates the relationship between the standard deviation of the current in the electrode and the amplitudes of the dynamic pressure oscillations by applying a substantially constant voltage between the electrode and ground resulting in a current in the electrode and by varying one or more of (1) the flow rate of the fuel, (2) the flow rate of the oxidant, (3) the equivalence ratio, (4) the acoustic tuning of the combustion system, and (5) the fuel distribution in the combustion chamber such that the amplitudes of the dynamic pressure oscillations in the combustion chamber are calculated as a function of the standard deviation of the electrode current. Thereafter, the supply of fuel and/or oxidant is varied to modify the dynamic pressure oscillations.
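A hedged sketch of the calibration idea: the standard deviation of the electrode current is mapped to oscillation amplitude. The linear relationship and the calibration values below are assumptions for illustration; the text above only states that amplitude is calculated as a function of the current's standard deviation.

```python
import numpy as np

# Hypothetical calibration data: for several known operating points, the RMS
# amplitude of the dynamic pressure oscillation (from a reference transducer)
# and the standard deviation of the flame-ionization electrode current.
current_std = np.array([0.8, 1.6, 2.3, 3.1, 4.0])        # microamps
pressure_amp = np.array([0.12, 0.25, 0.37, 0.49, 0.64])  # psi

# Assume (for this sketch) a linear relationship; fit it once during calibration.
slope, intercept = np.polyfit(current_std, pressure_amp, 1)

def estimate_pressure_amplitude(current_microamps):
    """Infer oscillation amplitude from a window of electrode current samples."""
    return slope * np.std(current_microamps) + intercept

window = 2.3 + 0.5 * np.random.randn(2048)   # toy current window, microamps
print(estimate_pressure_amplitude(window))
```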
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb. 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, 57, pp. 131-139, April 1993.
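For readers who want to reproduce the basic calculation, here is a generic sketch of a non-overlapping Allan deviation for an evenly sampled time series (sample spacing tau0 seconds); it is illustrative only and not the analysis code behind the presentation.

import numpy as np

def allan_deviation(y, tau0, m_values):
    # Non-overlapping Allan deviation: sigma_y(tau) = sqrt(0.5 * mean(diff(block means)**2)).
    y = np.asarray(y, dtype=float)
    taus, adevs = [], []
    for m in m_values:
        n_blocks = y.size // m
        if n_blocks < 2:
            break
        block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        adevs.append(np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2)))
        taus.append(m * tau0)
    return np.array(taus), np.array(adevs)

# For pure white noise the Allan deviation falls off as tau**-0.5 on a log-log plot.
rng = np.random.default_rng(1)
tau, adev = allan_deviation(rng.normal(400.0, 0.1, 20000), tau0=1.0,
                            m_values=[1, 2, 4, 8, 16, 32, 64, 128])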
Flethøj, Mette; Kanters, Jørgen K; Pedersen, Philip J; Haugaard, Maria M; Carstensen, Helena; Olsen, Lisbeth H; Buhl, Rikke
2016-11-28
Although premature beats are a matter of concern in horses, the interpretation of equine ECG recordings is complicated by a lack of standardized analysis criteria and a limited knowledge of the normal beat-to-beat variation of equine cardiac rhythm. The purpose of this study was to determine the appropriate threshold levels of maximum acceptable deviation of RR intervals in equine ECG analysis, and to evaluate a novel two-step timing algorithm by quantifying the frequency of arrhythmias in a cohort of healthy adult endurance horses. Beat-to-beat variation differed considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs <60 beats/min (bpm) the threshold level of RR deviation was set at 20%, for HRs in the intermediate range between 60 and 100 bpm the threshold was 10%, and for exercising HRs >100 bpm, the threshold level was 4%. Supraventricular premature beats represented the most prevalent arrhythmia category with varying frequencies in seven horses at rest (median 7, range 2-86) and six horses during exercise (median 2, range 1-24). Beat-to-beat variation of equine cardiac rhythm varies according to HR, and threshold levels in equine ECG analysis should be adjusted accordingly. Standardization of the analysis criteria will enable comparisons of studies and follow-up examinations of patients. A small number of supraventricular premature beats appears to be a normal finding in endurance horses. Further studies are required to validate the findings and determine the clinical significance of premature beats in horses.
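A short sketch of the HR-adaptive threshold rule reported above (20% below 60 bpm, 10% from 60 to 100 bpm, 4% above 100 bpm); referencing the deviation to the preceding RR interval is an assumption made here for illustration.

def max_rr_deviation(heart_rate_bpm):
    # HR-dependent maximum acceptable RR deviation from the study's three ranges.
    if heart_rate_bpm < 60:
        return 0.20
    if heart_rate_bpm <= 100:
        return 0.10
    return 0.04

def exceeds_threshold(rr_ms, previous_rr_ms, heart_rate_bpm):
    # Flag an RR interval whose relative deviation from the previous interval
    # exceeds the HR-dependent threshold (illustrative criterion only).
    deviation = abs(rr_ms - previous_rr_ms) / previous_rr_ms
    return deviation > max_rr_deviation(heart_rate_bpm)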
Cardiovascular Autonomic Dysfunction in Patients with Morbid Obesity
de Sant Anna Junior, Maurício; Carneiro, João Regis Ivar; Carvalhal, Renata Ferreira; Torres, Diego de Faria Magalhães; da Cruz, Gustavo Gavina; Quaresma, José Carlos do Vale; Lugon, Jocemir Ronaldo; Guimarães, Fernando Silva
2015-01-01
Background Morbid obesity is directly related to deterioration in cardiorespiratory capacity, including changes in cardiovascular autonomic modulation. Objective This study aimed to assess the cardiovascular autonomic function in morbidly obese individuals. Methods Cross-sectional study, including two groups of participants: Group I, composed of 50 morbidly obese subjects, and Group II, composed of 30 nonobese subjects. The autonomic function was assessed by heart rate variability in the time domain (standard deviation of all normal R-R intervals [SDNN]; square root of the mean squared differences of successive R-R intervals [RMSSD]; and percentage of successive R-R interval differences greater than 50 milliseconds [pNN50]), and in the frequency domain (high frequency [HF] and low frequency [LF]: integration of the power spectral density function in the high-frequency and low-frequency ranges, respectively). Between-group comparisons were performed by Student’s t-test, with a level of significance of 5%. Results Obese subjects had lower values of SDNN (40.0 ± 18.0 ms vs. 70.0 ± 27.8 ms; p = 0.0004), RMSSD (23.7 ± 13.0 ms vs. 40.3 ± 22.4 ms; p = 0.0030), pNN50 (14.8 ± 10.4% vs. 25.9 ± 7.2%; p = 0.0061) and HF (30.0 ± 17.5 Hz vs. 51.7 ± 25.5 Hz; p = 0.0023) than controls. The mean LF/HF ratio was higher in Group I (5.0 ± 2.8 vs. 1.0 ± 0.9; p = 0.0189), indicating changes in the sympathovagal balance. No statistical difference in LF was observed between Group I and Group II (50.1 ± 30.2 Hz vs. 40.9 ± 23.9 Hz; p = 0.9013). Conclusion Morbidly obese individuals have increased sympathetic activity and reduced parasympathetic activity, featuring cardiovascular autonomic dysfunction. PMID:26536979
Braun, T; Dochtermann, S; Krause, E; Schmidt, M; Schorn, K; Hempel, J M
2011-09-01
The present study analyzes the best combination of frequencies for the calculation of mean hearing loss in pure tone threshold audiometry for correlation with hearing loss for numbers in speech audiometry, since the literature describes different calculation variations for plausibility checking in expertise. Three calculation variations, A (250, 500 and 1000 Hz), B (500 and 1000 Hz) and C (500, 1000 and 2000 Hz), were compared. Audiograms in 80 patients with normal hearing, 106 patients with hearing loss and 135 expertise patients were analyzed in a retrospective manner. Differences between mean pure tone audiometry thresholds and hearing loss for numbers were calculated and statistically compared separately for the right and the left ear in the three patient collectives. We found the calculation variation A to be the best combination of frequencies, since it yielded the smallest standard deviations while being statistically different to calculation variations B and C. The 1- and 2.58-fold standard deviation (representing 68.3% and 99.0% of all values) was ±4.6 and ±11.8 dB for calculation variation A in patients with hearing loss, respectively. For plausibility checking in expertise, the mean threshold from the frequencies 250, 500 and 1000 Hz should be compared to the hearing loss for numbers. The common recommendation reported by the literature to doubt plausibility when the difference of these values exceeds ±5 dB is too strict as shown by this study.
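The plausibility check discussed above reduces to comparing a mean pure-tone threshold with the hearing loss for numbers; the sketch below uses calculation variation A (250, 500 and 1000 Hz) and leaves the tolerance band as an input, since the study argues the common ±5 dB rule is too strict.

def mean_threshold_variation_a(thresholds_db_hl):
    # thresholds_db_hl: dict of pure-tone thresholds (dB HL) keyed by frequency in Hz.
    frequencies = (250, 500, 1000)
    return sum(thresholds_db_hl[f] for f in frequencies) / len(frequencies)

def plausible(thresholds_db_hl, hearing_loss_for_numbers_db, tolerance_db):
    # True if the pure-tone mean and the hearing loss for numbers agree within the tolerance.
    difference = mean_threshold_variation_a(thresholds_db_hl) - hearing_loss_for_numbers_db
    return abs(difference) <= tolerance_db

# Example with a wider band than +/-5 dB, e.g. the 2.58-sigma value of about 11.8 dB.
print(plausible({250: 30, 500: 35, 1000: 40}, hearing_loss_for_numbers_db=45, tolerance_db=11.8))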
Robust Identification of Local Adaptation from Allele Frequencies
Günther, Torsten; Coop, Graham
2013-01-01
Comparing allele frequencies among populations that differ in environment has long been a tool for detecting loci involved in local adaptation. However, such analyses are complicated by an imperfect knowledge of population allele frequencies and neutral correlations of allele frequencies among populations due to shared population history and gene flow. Here we develop a set of methods to robustly test for unusual allele frequency patterns and correlations between environmental variables and allele frequencies while accounting for these complications based on a Bayesian model previously implemented in the software Bayenv. Using this model, we calculate a set of “standardized allele frequencies” that allows investigators to apply tests of their choice to multiple populations while accounting for sampling and covariance due to population history. We illustrate this first by showing that these standardized frequencies can be used to detect nonparametric correlations with environmental variables; these correlations are also less prone to spurious results due to outlier populations. We then demonstrate how these standardized allele frequencies can be used to construct a test to detect SNPs that deviate strongly from neutral population structure. This test is conceptually related to FST and is shown to be more powerful, as we account for population history. We also extend the model to next-generation sequencing of population pools—a cost-efficient way to estimate population allele frequencies, but one that introduces an additional level of sampling noise. The utility of these methods is demonstrated in simulations and by reanalyzing human SNP data from the Human Genome Diversity Panel populations and pooled next-generation sequencing data from Atlantic herring. An implementation of our method is available from http://gcbias.org. PMID:23821598
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
The use of heart rate variability in assessing precompetitive stress in high-standard judo athletes.
Morales, J; Garcia, V; García-Massó, X; Salvá, P; Escobar, R; Buscà, B
2013-02-01
The objective of this study is to examine the sensitivity to and changes in heart rate variability (HRV) in stressful situations before judo competitions and to observe the differences among judo athletes according to their competitive standards in both official and unofficial competitions. 24 (10 male and 14 female) national- and international-standard athletes were evaluated. Each participant answered the Revised Competitive State Anxiety Inventory (CSAI-2R) and their HRV was recorded both during an official and unofficial competition. The MANOVA showed significant main effects of the athlete's standard and the type of competition in CSAI-2R, in HRV time domain, in HRV frequency domain and in HRV nonlinear analysis (p<0.05). International-standard judo athletes have lower somatic anxiety, cognitive anxiety, heart rate and low-high frequency ratio than national-standard athletes (p<0.05). International-standard athletes have a higher confidence, mean RR interval, standard deviation of RR, square root of the mean squared difference of successive RR intervals, number of consecutive RR that differ by more than 5 ms, short-term variability, long-term variability, long-range scaling exponents and short-range scaling exponent than national-standard judo athletes. In conclusion, international-standard athletes show less pre-competitive anxiety than the national-standard athletes and HRV analysis is sensitive to changes in pre-competitive anxiety. © Georg Thieme Verlag KG Stuttgart · New York.
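A generic sketch of the common time-domain HRV indices mentioned in this and the surrounding abstracts (mean RR, SDNN, RMSSD, and a count of successive differences above a threshold, here 5 ms as in this study); it is not the authors' analysis pipeline.

import numpy as np

def hrv_time_domain(rr_ms, diff_threshold_ms=5.0):
    # rr_ms: normal-to-normal (RR) intervals in milliseconds.
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr_ms": float(rr.mean()),
        "sdnn_ms": float(rr.std(ddof=1)),                 # standard deviation of RR intervals
        "rmssd_ms": float(np.sqrt(np.mean(diffs ** 2))),  # root mean square of successive differences
        "count_over_threshold": int(np.sum(np.abs(diffs) > diff_threshold_ms)),
    }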
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
Picosecond-precision multichannel autonomous time and frequency counter
NASA Astrophysics Data System (ADS)
Szplet, R.; Kwiatkowski, P.; RóŻyc, K.; Jachna, Z.; Sondej, T.
2017-12-01
This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supplying, and (4) supplying a stable reference clock (optional rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows us to achieve a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s). The frequency is measured up to 3.0 GHz with the use of the reciprocal method. The wide functionality of the counter also includes the evaluation of frequency stability of clocks and oscillators (Allan deviation) and phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high-performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad and/or color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.
Picosecond-precision multichannel autonomous time and frequency counter.
Szplet, R; Kwiatkowski, P; Różyc, K; Jachna, Z; Sondej, T
2017-12-01
This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supplying, and (4) supplying a stable reference clock (optional rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows us to achieve a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s). The frequency is measured up to 3.0 GHz with the use of the reciprocal method. The wide functionality of the counter also includes the evaluation of frequency stability of clocks and oscillators (Allan deviation) and phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high-performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad and/or color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.
CEO stabilized frequency comb from a 1-μm Kerr-lens mode-locked bulk Yb:CYA laser.
Yu, Zijiao; Han, Hainian; Xie, Yang; Peng, Yingnan; Xu, Xiaodong; Wei, Zhiyi
2016-02-08
We report the first Kerr-lens mode-locked (KLM) bulk frequency comb in the 1-μm spectral regime. The fundamental KLM Yb:CYA laser is pumped by a low-noise, high-brightness 976-nm fiber laser and typically provides 250-mW output power and 57-fs pulse duration. Only 58-mW output pulses were launched into a 1.3-m photonic crystal fiber (PCF) for one-octave-spanning supercontinuum generation. Using a simplified collinear f-2f interferometer, the free-running carrier-envelope offset (CEO) frequency was measured with a 42-dB signal-to-noise ratio (SNR) at 100-kHz resolution bandwidth and a 9.6-kHz full width at half maximum (FWHM) at 100-Hz resolution. Long-term CEO control at 23 MHz was ultimately realized by feeding the phase error signal to the pump power of the oscillator. The integrated phase noise (IPN) of the locked CEO was measured to be 316 mrad over an integration range from 1 Hz to 10 MHz. The standard deviation and Allan deviation for more than 4 hours of recording are 1.6 mHz and 5.6 × 10⁻¹⁸ (for 1-s gate time), respectively. This is, to the best of our knowledge, the best stability achieved among 1-μm solid-state frequency combs.
Verster, Joris C; Roth, Thomas
2014-07-01
The traditional outcome measure of the Dutch on-the-road driving test is the standard deviation of lateral position (SDLP), the weaving of the car. This paper explores whether excursions out-of-lane are a suitable additional outcome measure to index driving impairment. A literature search was conducted for driving tests that used both SDLP and excursions out-of-lane as outcome measures. The analyses were limited to studies examining hypnotic drugs because several of these drugs have been shown to produce next-morning sedation. Standard deviation of lateral position was more sensitive in demonstrating driving impairment. In fact, relying solely on excursions out-of-lane as the outcome measure incorrectly classifies approximately half of impaired drives as unimpaired. The frequency of excursions out-of-lane is determined by the mean lateral position within the right traffic lane. Defining driving impairment as having a ΔSDLP > 2.4 cm, half of the impaired driving tests (51.2%, 43/84) failed to produce excursions out-of-lane. Conversely, 20.9% of driving tests with ΔSDLP < 2.4 cm (27/129) had at least one excursion out-of-lane. Excursions out-of-lane are thus not a suitable measure for demonstrating driving impairment, nor are they sufficiently sensitive to differentiate adequately between magnitudes of driving impairment. Copyright © 2014 John Wiley & Sons, Ltd.
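As a small illustration of the two criteria compared in this paper, the sketch below classifies a test drive by the ΔSDLP > 2.4 cm rule and by the presence of at least one excursion out-of-lane, and counts the drives that the excursion criterion would miss; the sample data are hypothetical.

def classify_drive(delta_sdlp_cm, n_excursions, sdlp_threshold_cm=2.4):
    # Returns (impaired by the SDLP criterion, impaired by the excursion criterion).
    return delta_sdlp_cm > sdlp_threshold_cm, n_excursions >= 1

drives = [(3.1, 0), (2.9, 2), (1.0, 1), (0.5, 0)]   # hypothetical (delta SDLP in cm, excursions)
false_negatives = sum(1 for d, n in drives if classify_drive(d, n) == (True, False))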
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α₀ f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α₀ and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α₀ and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α₀ and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
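As a rough sketch of two of the spectral estimators compared above, the code below computes Welch's periodogram with scipy and a simplified Thomson multitaper estimate built from DPSS tapers (equal-weight averaging of eigenspectra rather than the full adaptive weighting); the reference-phantom attenuation fit itself is not reproduced here.

import numpy as np
from scipy.signal import welch
from scipy.signal.windows import dpss

def welch_psd(rf_segment, fs, nperseg=256):
    # Welch's averaged periodogram of one gated RF echo segment.
    return welch(rf_segment, fs=fs, nperseg=nperseg)

def multitaper_psd(rf_segment, fs, nw=4.0, k=7):
    # Simplified Thomson multitaper estimate: equal-weight average of K DPSS eigenspectra.
    rf = np.asarray(rf_segment, dtype=float)
    tapers = dpss(rf.size, NW=nw, Kmax=k)                # shape (k, N)
    spectra = np.abs(np.fft.rfft(tapers * rf, axis=1)) ** 2
    freqs = np.fft.rfftfreq(rf.size, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs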
Wang, H; Misztal, I; Legarra, A
2014-12-01
This work studied differences between expected (calculated from pedigree) and realized (genomic, from markers) relationships in a real population, the influence of quality control on these differences, and their fit to current theory. Data included 4940 pure line chickens across five generations genotyped for 57,636 SNP. Pedigrees (5762 animals) were available for the five generations, with the pedigree starting in the first generation. Three levels of quality control were used. With no quality control, the mean difference between realized and expected relationships for different types of relationships was ≤ 0.04 with standard deviation ≤ 0.10. With strong quality control (call rate ≥ 0.9, parent-progeny conflicts, minor allele frequency and use of only autosomal chromosomes), these numbers reduced to ≤ 0.02 and ≤ 0.04, respectively. While the maximum difference was 1.02 with the complete data, it was only 0.18 with the latest three generations of genotypes (but including all pedigrees). Variation of expected minus realized relationships agreed with theoretical developments and suggests an effective number of loci of 70 for this population. When the pedigree is complete and as deep as the genotypes, the standard deviation of the difference between expected and realized relationships is around 0.04 across all relationship categories. Standard deviations of differences larger than 0.10 suggest poor quality control, mistakes in pedigree recording or genotype labelling, or insufficient depth of pedigree. © 2014 Blackwell Verlag GmbH.
Ultrastable laser array at 633 nm for real-time dimensional metrology
NASA Astrophysics Data System (ADS)
Lawall, John; Pedulla, J. Marc; Le Coq, Yann
2001-07-01
We describe a laser system for very-high-accuracy dimensional metrology. A sealed-cavity helium-neon laser is offset locked to an iodine-stabilized laser in order to realize a secondary standard with higher power and less phase noise. Synchronous averaging is employed to remove the effect of the frequency modulation present on the iodine-stabilized laser. Additional lasers are offset locked to the secondary standard for use in interferometry. All servo loops are implemented digitally. The offset-locked lasers have intrinsic linewidths of the order of 2.5 kHz and exhibit a rms deviation from the iodine-stabilized laser below 18 kHz. The amplitude noise is at the shot-noise limit for frequencies above 700 kHz. We describe and evaluate the system in detail, and include a discussion of the noise associated with various types of power supplies.
Kery, M.; Matthies, D.; Schmid, B.
2003-01-01
We studied ecological consequences of distyly for the declining perennial plant Primula veris in the Swiss Jura. Distyly favours cross-fertilization and avoids inbreeding, but may lead to pollen limitation and reduced reproduction if morph frequencies deviate from 50 %. Disassortative mating is promoted by the reciprocal position of stigmas and anthers in the two morphs (pin and thrum) and by intramorph incompatibility and should result in equal frequencies of morphs at equilibrium. However, deviations could arise because of demographic stochasticity, the lower intra-morph incompatibility of the pin morph, and niche differentiation between morphs. Demographic stochasticity should result in symmetric deviations from an even morph frequency among populations and in increased deviations with decreasing population size. If crosses between pins occurred, these would only generate pins, and this could result in a pin-bias of morph frequencies in general and in small populations in particular. If the morphs have different niches, morph frequencies should be related to environmental factors, morphs might be spatially segregated, and morphological differences between morphs would be expected. We tested these hypotheses in the declining distylous P. veris. We studied morph frequencies in relation to environmental conditions and population size, spatial segregation in field populations, morphological differences between morphs, and growth responses to nutrient addition. Morph frequencies in 76 populations with 1 - 80000 flowering plants fluctuated symmetrically about 50 %. Deviations from 50 % were much larger in small populations, and six of the smallest populations had lost one morph altogether. In contrast, morph frequencies were neither related to population size nor to 17 measures of environmental conditions. We found no spatial segregation or morphological differences in the field or in the common garden. The results suggest that demographic stochasticity caused deviations of the morph ratio from unity in small populations. Demographic stochasticity was probably caused by the random elimination of plants during the fragmentation of formerly large continuous populations. Biased morph frequencies may be one of the reasons for the strongly reduced reproduction in small populations of P. veris.
Frequency domain laser velocimeter signal processor: A new signal processing scheme
NASA Technical Reports Server (NTRS)
Meyers, James F.; Clemmons, James I., Jr.
1987-01-01
A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
Derkacz, Arkadiusz; Gawrys, Jakub; Gawrys, Karolina; Podgorski, Maciej; Magott-Derkacz, Agnieszka; Poreba, Rafał; Doroszko, Adrian
2018-06-01
The effect of electromagnetic fields on the cardiovascular system is described ambiguously in the literature. The aim of this study was to evaluate the effect of the electromagnetic field on heart rate variability (HRV) during examination with magnetic resonance. Forty-two patients underwent Holter ECG heart monitoring for 30 minutes twice: immediately before and after the examination with magnetic resonance imaging (MRI). HRV was analysed by assessing a few selected time and spectral parameters. It has been shown that the standard deviation of NN intervals (SDNN) and very-low-frequency rates increased, whereas the low frequency:high frequency parameter significantly decreased following the MRI examination. These results show that MRI may affect the HRV, most likely by changing the sympathetic-parasympathetic balance.
Yang, Lijun; Wu, Xuejian; Wei, Haoyun; Li, Yan
2017-04-10
The absolute group refractive index of air at 194061.02 GHz is measured in real time using frequency-sweeping interferometry calibrated by an optical frequency comb. The group refractive index of air is calculated from the calibration peaks of the laser frequency variation and the interference signal of the two beams passing through the inner and outer regions of a vacuum cell when the frequency of a tunable external cavity diode laser is scanned. We continuously measure the refractive index of air for 2 h, which shows that the difference between measured results and Ciddor's equation is less than 9.6 × 10⁻⁸, and the standard deviation of that difference is 5.9 × 10⁻⁸. The relative uncertainty of the measured refractive index of air is estimated to be 8.6 × 10⁻⁸. The data update rate is 0.2 Hz, making it applicable under conditions in which the air refractive index fluctuates fast.
2011-01-01
Background We studied the worst-case radiated radiofrequency (RF) susceptibility of automated external defibrillators (AEDs) based on the electromagnetic compatibility (EMC) requirements of a current standard for cardiac defibrillators, IEC 60601-2-4. Square wave modulation was used to mimic cardiac physiological frequencies of 1 - 3 Hz. Deviations from the IEC standard were a lower frequency limit of 30 MHz to explore frequencies where the patient-connected leads could resonate. Also testing up to 20 V/m was performed. We tested AEDs with ventricular fibrillation (V-Fib) and normal sinus rhythm signals on the patient leads to enable testing for false negatives (inappropriate "no shock advised" by the AED). Methods We performed radiated exposures in a 10 meter anechoic chamber using two broadband antennas to generate E fields in the 30 - 2500 MHz frequency range at 1% frequency steps. An AED patient simulator was housed in a shielded box and delivered normal and fibrillation waveforms to the AED's patient leads. We developed a technique to screen ECG waveforms stored in each AED for electromagnetic interference at all frequencies without waiting for the long cycle times between analyses (normally 20 to over 200 s). Results Five of the seven AEDs tested were susceptible to RF interference, primarily at frequencies below 80 MHz. Some induced errors could cause AEDs to malfunction and effectively inhibit operator prompts to deliver a shock to a patient experiencing lethal fibrillation. Failures occurred in some AEDs exposed to E fields between 3 V/m and 20 V/m, in the 38 - 50 MHz range. These occurred when the patient simulator was delivering a V-Fib waveform to the AED. Also, we found it is not possible to test modern battery-only-operated AEDs for EMI using a patient simulator if the IEC 60601-2-4 defibrillator standard's simulated patient load is used. Conclusions AEDs experienced potentially life-threatening false-negative failures from radiated RF, primarily below the lower frequency limit of present AED standards. Field strengths causing failures were at levels as low as 3 V/m at frequencies below 80 MHz where resonance of the patient leads and the AED input circuitry occurred. This plus problems with the standard's prescribed patient load make changes to the standard necessary. PMID:21801368
Umberger, Ken; Bassen, Howard I
2011-07-29
We studied the worst-case radiated radiofrequency (RF) susceptibility of automated external defibrillators (AEDs) based on the electromagnetic compatibility (EMC) requirements of a current standard for cardiac defibrillators, IEC 60601-2-4. Square wave modulation was used to mimic cardiac physiological frequencies of 1-3 Hz. Deviations from the IEC standard were a lower frequency limit of 30 MHz to explore frequencies where the patient-connected leads could resonate. Also testing up to 20 V/m was performed. We tested AEDs with ventricular fibrillation (V-Fib) and normal sinus rhythm signals on the patient leads to enable testing for false negatives (inappropriate "no shock advised" by the AED). We performed radiated exposures in a 10 meter anechoic chamber using two broadband antennas to generate E fields in the 30-2500 MHz frequency range at 1% frequency steps. An AED patient simulator was housed in a shielded box and delivered normal and fibrillation waveforms to the AED's patient leads. We developed a technique to screen ECG waveforms stored in each AED for electromagnetic interference at all frequencies without waiting for the long cycle times between analyses (normally 20 to over 200 s). Five of the seven AEDs tested were susceptible to RF interference, primarily at frequencies below 80 MHz. Some induced errors could cause AEDs to malfunction and effectively inhibit operator prompts to deliver a shock to a patient experiencing lethal fibrillation. Failures occurred in some AEDs exposed to E fields between 3 V/m and 20 V/m, in the 38 - 50 MHz range. These occurred when the patient simulator was delivering a V-Fib waveform to the AED. Also, we found it is not possible to test modern battery-only-operated AEDs for EMI using a patient simulator if the IEC 60601-2-4 defibrillator standard's simulated patient load is used. AEDs experienced potentially life-threatening false-negative failures from radiated RF, primarily below the lower frequency limit of present AED standards. Field strengths causing failures were at levels as low as 3 V/m at frequencies below 80 MHz where resonance of the patient leads and the AED input circuitry occurred. This plus problems with the standard's prescribed patient load make changes to the standard necessary.
2014-06-01
[Fragmentary figure and table captions only: means ± standard deviation of SLP listed for each step of the transformation stage; contours labeled every 6 dam with solid SLP contours every 4 hPa (unpublished figure provided by H. Archambault); a 1-day SLP anomaly (color shading, hPa) from the NCEP operational dataset for 31 Oct 2010 (image provided by the NOAA/ESRL Physical Sciences ...).]
1981-01-14
[Report table-of-contents residue only: cumulative percentage frequency distribution tables of dry-bulb versus wet-bulb temperature and of wet-bulb depression versus dry-bulb temperature; means and standard deviations of dry-bulb, wet-bulb, and dew point temperatures; relative humidity; snowfall and snow depth (mean and standard deviation); precipitation and surface sections.]
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and, even more so, variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
Shared Components of Rhythm Generation for Locomotion and Scratching Exist Prior to Motoneurons
Hao, Zhao-Zhe; Berkowitz, Ari
2017-01-01
Does the spinal cord use a single network to generate locomotor and scratching rhythms or two separate networks? Previous research showed that simultaneous swim and scratch stimulation (“dual stimulation”) in immobilized, spinal turtles evokes a single rhythm in hindlimb motor nerves with a frequency often greater than during swim stimulation alone or scratch stimulation alone. This suggests that the signals that trigger swimming and scratching converge and are integrated within the spinal cord. However, these results could not determine whether the integration occurs in motoneurons themselves or earlier, in spinal interneurons. Here, we recorded intracellularly from hindlimb motoneurons during dual stimulation. Motoneuron membrane potentials displayed regular oscillations at a higher frequency during dual stimulation than during swim or scratch stimulation alone. In contrast, arithmetic addition of the oscillations during swimming alone and scratching alone with various delays always generated irregular oscillations. Also, the standard deviation of the phase-normalized membrane potential during dual stimulation was similar to those during swimming or scratching alone. In contrast, the standard deviation was greater when pooling cycles of swimming alone and scratching alone for two of the three forms of scratching. This shows that dual stimulation generates a single rhythm prior to motoneurons. Thus, either swimming and scratching largely share a rhythm generator or the two rhythms are integrated into one rhythm by strong interactions among interneurons. PMID:28848402
Liu, Xiaoyan; Yu, Yijun; Zeng, Xiaoyun; Li, Huanhuan
2018-01-01
Non-pharmacological therapies, especially the physical maneuvers, are viewed as important and promising strategies for reducing syncope recurrences in vasovagal syncope (VVS) patients. We observed the efficacy of a modified Valsalva maneuver (MVM) in VVS patients. 72 VVS patients with syncope history and positive head-up tilt table testing (HUTT) results were randomly divided into a conventional treatment group (NVM group, n = 36) and a conventional treatment plus standard MVM for 30 days group (MVM group, n = 36). Incidence of recurrent syncope after 12 months (6.5% vs. 41.2%, P<0.01) and rate of positive HUTT after 30 days (9.7% vs. 79.4%, P<0.01) were significantly lower in the MVM group than in the NVM group. HRV results showed that low frequency (LF), LF/high frequency (HF), standard deviation of NN intervals (SDNN) and standard deviation of all 5-min average NN intervals (SDANN) values were significantly lower in the NVM and MVM groups than in the control group at baseline. After 30 days of treatment, LF, LF/HF, SDNN and SDANN values were significantly higher compared to baseline in the MVM group. Results of the Cox proportional hazard model showed that higher SDNN and SDANN values at 30 days after intervention were protective factors, while positive HUTT at 30 days after intervention was a risk factor for recurrent syncope. Our results indicate that 30 days of MVM intervention could effectively reduce the incidence of recurrent syncope up to 12 months in VVS patients, possibly through improving sympathetic function of VVS patients. PMID:29381726
He, Li; Wang, Lan; Li, Lun; Liu, Xiaoyan; Yu, Yijun; Zeng, Xiaoyun; Li, Huanhuan; Gu, Ye
2018-01-01
Non-pharmacological therapies, especially the physical maneuvers, are viewed as important and promising strategies for reducing syncope recurrences in vasovagal syncope (VVS) patients. We observed the efficacy of a modified Valsalva maneuver (MVM) in VVS patients. 72 VVS patients with syncope history and positive head-up tilt table testing (HUTT) results were randomly divided into a conventional treatment group (NVM group, n = 36) and a conventional treatment plus standard MVM for 30 days group (MVM group, n = 36). Incidence of recurrent syncope after 12 months (6.5% vs. 41.2%, P<0.01) and rate of positive HUTT after 30 days (9.7% vs. 79.4%, P<0.01) were significantly lower in the MVM group than in the NVM group. HRV results showed that low frequency (LF), LF/high frequency (HF), standard deviation of NN intervals (SDNN) and standard deviation of all 5-min average NN intervals (SDANN) values were significantly lower in the NVM and MVM groups than in the control group at baseline. After 30 days of treatment, LF, LF/HF, SDNN and SDANN values were significantly higher compared to baseline in the MVM group. Results of the Cox proportional hazard model showed that higher SDNN and SDANN values at 30 days after intervention were protective factors, while positive HUTT at 30 days after intervention was a risk factor for recurrent syncope. Our results indicate that 30 days of MVM intervention could effectively reduce the incidence of recurrent syncope up to 12 months in VVS patients, possibly through improving sympathetic function of VVS patients.
A population-based job exposure matrix for power-frequency magnetic fields.
Bowman, Joseph D; Touchstone, Jennifer A; Yost, Michael G
2007-09-01
A population-based job exposure matrix (JEM) was developed to assess personal exposures to power-frequency magnetic fields (MF) for epidemiologic studies. The JEM compiled 2,317 MF measurements taken on or near workers by 10 studies in the United States, Sweden, New Zealand, Finland, and Italy. A database was assembled from the original data for six studies plus summary statistics grouped by occupation from four other published studies. The job descriptions were coded into the 1980 Standard Occupational Classification system (SOC) and then translated to the 1980 job categories of the U.S. Bureau of the Census (BOC). For each job category, the JEM database calculated the arithmetic mean, standard deviation, geometric mean, and geometric standard deviation of the workday-average MF magnitude from the combined data. Analysis of variance demonstrated that the combining of MF data from the different sources was justified, and that the homogeneity of MF exposures in the SOC occupations was comparable to JEMs for solvents and particulates. BOC occupation accounted for 30% of the MF variance (p < 10⁻⁶), and the contrast (ratio of the between-job variance to the total of within- and between-job variances) was 88%. Jobs lacking data had their exposures inferred from measurements on similar occupations. The JEM provided MF exposures for 97% of the person-months in a population-based case-control study and 95% of the jobs on death certificates in a registry study covering 22 states. Therefore, we expect this JEM to be useful in other population-based epidemiologic studies.
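A minimal sketch of the per-occupation summary statistics compiled in the JEM (arithmetic mean and standard deviation, geometric mean and geometric standard deviation of workday-average MF), assuming one record per worker measurement keyed by occupation code; it is not the study's own database code.

import math
from collections import defaultdict
from statistics import mean, stdev

def jem_summaries(records):
    # records: iterable of (occupation_code, workday_average_mf) with positive MF values.
    grouped = defaultdict(list)
    for code, mf in records:
        grouped[code].append(mf)
    summaries = {}
    for code, values in grouped.items():
        logs = [math.log(v) for v in values]
        summaries[code] = {
            "n": len(values),
            "arithmetic_mean": mean(values),
            "arithmetic_sd": stdev(values) if len(values) > 1 else 0.0,
            "geometric_mean": math.exp(mean(logs)),
            "geometric_sd": math.exp(stdev(logs)) if len(values) > 1 else 1.0,
        }
    return summaries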
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean Temperature Standard Deviation; (2) Mean Geopotential Height Standard Deviation; (3) Mean Density Standard Deviation; (4) Height and Vector Standard Deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean Dew Point Standard Deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-01-01
Objective Newcomb-Benford’s Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare public available waiting lists (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Design Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson’s χ2, mean absolute deviation and Kuiper tests. Setting/participants Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ2 test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ2 test). Conclusions Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
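The first-digit screening described above can be reproduced with a few lines of Python; the sketch below compares observed leading-digit frequencies with the Newcomb-Benford expectation using Pearson's chi-squared test (only one of the three tests used in the study).

import math
import numpy as np
from scipy.stats import chisquare

def first_digit(value):
    return int(str(abs(int(value)))[0])

def benford_first_digit_test(values):
    # Chi-squared test of observed leading digits against the NBL expectation log10(1 + 1/d).
    digits = [first_digit(v) for v in values if int(v) != 0]
    observed = np.bincount(digits, minlength=10)[1:10].astype(float)
    expected = np.array([math.log10(1 + 1 / d) for d in range(1, 10)]) * observed.sum()
    return chisquare(observed, expected)

# A small p-value flags a deviation from NBL and, as in the study, a need for further checks
# of data trustworthiness.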
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
Crescenti, Remo A; Bamber, Jeffrey C; Partridge, Mike; Bush, Nigel L; Webb, Steve
2007-11-21
Research on polymer-gel dosimetry has been driven by the need for three-dimensional dosimetry, and because alternative dosimeters are unsatisfactory or too slow for that task. Magnetic resonance tomography is currently the most well-developed technique for determining radiation-induced changes in polymer structure, but quick low-cost alternatives remain of significant interest. In previous work, ultrasound attenuation and speed of sound were found to change as a function of absorbed radiation dose in polymer-gel dosimeters, although the investigations were restricted to one ultrasound frequency. Here, the ultrasound attenuation coefficient μ in one polymer gel (MAGIC) was investigated as a function of radiation dose D and as a function of ultrasonic frequency f in a frequency range relevant for imaging dose distributions. The nonlinearity of the frequency dependence was characterized, fitting a power-law model μ = a f^b; the fitting parameters were examined for potential use as additional dose readout parameters. In the observed relationship between the attenuation coefficient and dose, the slopes in a quasi-linear dose range from 0 to 30 Gy were found to vary with the gel batch but lie between 0.0222 and 0.0348 dB cm⁻¹ Gy⁻¹ at 2.3 MHz, between 0.0447 and 0.0608 dB cm⁻¹ Gy⁻¹ at 4.1 MHz and between 0.0663 and 0.0880 dB cm⁻¹ Gy⁻¹ at 6.0 MHz. The mean standard deviation of the slope for all samples and frequencies was 15.8%. The slope was greater at higher frequencies, but so were the intra-batch fluctuations and intra-sample standard deviations. Further investigations are required to overcome the observed variability, which was largely associated with the sample preparation technique, before it can be determined whether any frequency is superior to others in terms of accuracy and precision in dose determination. Nevertheless, lower frequencies will allow measurements through larger samples. The fit parameter a of the frequency dependence, describing the attenuation coefficient at 1 MHz, was found to be dose dependent, which is consistent with our expectations, as polymerization is known to be associated with increased absorption of ultrasound. No significant dose dependence was found for the fit parameter b, which describes the nonlinearity with frequency. This is consistent with the increased absorption being due to the introduction of new relaxation processes with characteristic frequencies similar to those of existing processes. The data presented here will help with optimizing the design of future 3D dose-imaging systems using ultrasound methods.
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Gu, Yu; Wang, Yang-Fu; Li, Qiang; Liu, Zu-Wu
2016-10-20
Chinese liquors can be classified according to their flavor types. Accurate identification of Chinese liquor flavors is not always possible through professional sommeliers' subjective assessment. A novel polymer piezoelectric sensor electric nose (e-nose) can be applied to distinguish Chinese liquors because of its excellent ability to imitate human senses by using sensor arrays and pattern recognition systems. The sensor, based on the quartz crystal microbalance (QCM) principle, comprises a quartz piezoelectric crystal plate sandwiched between two specific gas-sensitive polymer coatings. Chinese liquors are identified by obtaining the resonance frequency changes of each sensor using the e-nose. However, the QCM principle failed to completely account for a particular phenomenon: we found that the resonance frequency values fluctuated in the stable state. To better understand this phenomenon, a 3D Computational Fluid Dynamics (CFD) simulation using the finite volume method is employed to study the influence of the flow-induced forces on the resonance frequency fluctuation of each sensor in the sensor box. A dedicated procedure was developed for modeling the flow of volatile gas from Chinese liquors in a realistic scenario, giving reasonably good results with fair accuracy. The flow-induced forces on the sensors are displayed from the perspective of their spatial-temporal and probability density distributions. To evaluate the influence of the fluctuation of the flow-induced forces on each sensor and ensure the serviceability of the e-nose, the standard deviation of the resonance frequency value (SD_F) and the standard deviation of the resultant force (SD_Fy) in the y-direction (F_y) are compared. Results show that the fluctuations of F_y are closely linked to the fluctuations of the resonance frequency values. To ensure that the sensors' resonance frequency values are steady and fluctuate only slightly, and thus to improve the identification accuracy of Chinese liquors using the e-nose, the sensors should be placed at proper positions in the sensor box, i.e., where the fluctuations of the flow-induced forces are relatively small. This provides a significant reference for determining the optimum design of the e-nose for accurately identifying Chinese liquors.
Low altitude wind shear statistics derived from measured and FAA proposed standard wind profiles
NASA Technical Reports Server (NTRS)
Dunham, R. E., Jr.; Usry, J. W.
1984-01-01
Wind shear statistics were calculated for a simulated data set using wind profiles proposed as a standard and compared to statistics derived from measured wind profile data. Wind shear values were grouped in altitude bands of 100 ft between 100 and 1400 ft, and in wind shear increments of 0.025 kt/ft between ±0.600 kt/ft for the simulated data set and between ±0.200 kt/ft for the measured set. No values existed outside the ±0.200 kt/ft boundaries for the measured data. Frequency distributions, means, and standard deviations were derived for each altitude band for both data sets, and compared. Also, frequency distributions were derived for the total sample for both data sets and compared. The frequency of occurrence of a given wind shear was about the same for both data sets for wind shears less than ±0.10 kt/ft, but the simulated data set had larger values outside these boundaries. Neglecting the vertical wind component did not significantly affect the statistics for these data sets. The frequency of occurrence of wind shears for the flight measured data was essentially the same for each altitude band and the total sample, but the simulated data distributions were different for each altitude band. The larger wind shears for the flight measured data were found to have short durations.
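The banded statistics described above are straightforward to reproduce on any set of (altitude, wind shear) samples. The sketch below uses synthetic data and the band/increment sizes quoted in the abstract; it is a generic illustration, not the NASA data reduction.

```python
import numpy as np

# Synthetic (altitude, wind-shear) samples standing in for a measured profile set.
rng = np.random.default_rng(0)
altitude = rng.uniform(100, 1400, 5000)                  # ft
shear = np.clip(rng.normal(0.0, 0.05, 5000), -0.2, 0.2)  # kt/ft

band_edges = np.arange(100, 1500, 100)          # 100-ft altitude bands
shear_edges = np.arange(-0.200, 0.225, 0.025)   # 0.025 kt/ft shear increments

for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    in_band = (altitude >= lo) & (altitude < hi)
    counts, _ = np.histogram(shear[in_band], bins=shear_edges)  # per-band frequency distribution
    print(f"{lo:4.0f}-{hi:4.0f} ft: n={in_band.sum():4d}, "
          f"mean={shear[in_band].mean():+.4f} kt/ft, "
          f"sd={shear[in_band].std(ddof=1):.4f} kt/ft")
```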
Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C
2013-12-01
A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. To test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression. Permutation of pointwise linear regression is individualized to each participant, in contrast to current analyses in which the statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
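The core idea of permutation of pointwise linear regression — per-location regression slopes combined into one statistic whose null distribution comes from permuting the visit order — can be sketched in a few lines. The code below is an assumed, simplified illustration (Fisher-style combination of one-sided slope p-values), not the published implementation.

```python
import numpy as np
from scipy import stats

def ppr_p_value(series, n_perm=1000, seed=0):
    """Permutation of pointwise linear regression (simplified sketch).
    series: (n_visits, n_locations) array of sensitivities over time.
    Returns one p-value for overall deterioration of this visual field."""
    rng = np.random.default_rng(seed)
    visits = np.arange(series.shape[0])

    def combined_stat(order):
        stat = 0.0
        for loc in range(series.shape[1]):
            res = stats.linregress(order, series[:, loc])
            # one-sided p-value for a negative (deteriorating) slope
            p_one = res.pvalue / 2 if res.slope < 0 else 1 - res.pvalue / 2
            stat += -np.log(p_one)          # Fisher-style combination
        return stat

    observed = combined_stat(visits)
    perms = [combined_stat(rng.permutation(visits)) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in perms)) / (n_perm + 1)

# Tiny demo: 8 visits, 52 locations, mild global decline (synthetic data)
fields = np.random.default_rng(1).normal(28, 2, (8, 52)) - 0.3 * np.arange(8)[:, None]
print(ppr_p_value(fields, n_perm=200))
```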
Effect of laser frequency noise on fiber-optic frequency reference distribution
NASA Technical Reports Server (NTRS)
Logan, R. T., Jr.; Lutes, G. F.; Maleki, L.
1989-01-01
The effect of the linewidth of a single longitudinal-mode laser on the frequency stability of a frequency reference transmitted over a single-mode optical fiber is analyzed. The interaction of the random laser frequency deviations with the dispersion of the optical fiber is considered to determine theoretically the effect on the Allan deviation (square root of the Allan variance) of the transmitted frequency reference. It is shown that the magnitude of this effect may determine the limit of the ultimate stability possible for frequency reference transmission on optical fiber, but is not a serious limitation to present system performance.
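For reference, the (non-overlapping) Allan deviation used in this kind of analysis can be computed directly from fractional-frequency data; the sketch below is a textbook implementation applied to synthetic white frequency noise, not the paper's model of the fiber link.

```python
import numpy as np

def allan_deviation(y, tau0, m):
    """Non-overlapping Allan deviation of fractional-frequency data y sampled
    at interval tau0, for averaging factor m (averaging time tau = m * tau0)."""
    n = len(y) // m
    y_avg = y[:n * m].reshape(n, m).mean(axis=1)        # averages over tau
    return np.sqrt(0.5 * np.mean(np.diff(y_avg) ** 2)), m * tau0

# White frequency noise should give sigma_y(tau) proportional to tau**(-1/2)
y = np.random.default_rng(1).normal(0.0, 1e-12, 100_000)
for m in (1, 10, 100, 1000):
    adev, tau = allan_deviation(y, tau0=1.0, m=m)
    print(f"tau = {tau:7.1f} s   sigma_y(tau) = {adev:.3e}")
```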
Summary of Meteorological Observations, Surface (SMOS), Barbers Point, Hawaii.
1984-09-01
available. Also provided are the means and standard deviations for each month and annual (all months). The extremes for a month are not printed nor...January 1964. When 90 or more of the daily observations of peak gust wind data are available for a month, the extreme is selected and printed. These...[Table: Asheville, NC — percentage frequency of wind direction and speed (from hourly observations); remaining table columns not recoverable.]
Trends in New U.S. Marine Corps Accessions During the Recent Conflicts in Iraq and Afghanistan
2014-01-01
modest changes over the study period. Favorable trends included recent (2009-2010) improvements in body mass index and physical activity levels...height, body mass index (BMI) in kg/m² was calculated. Frequency of physical activity before service entry was assessed from self-report. Initial run...Test; BMI, body mass index; mph, miles per hour; SD, standard deviation. "Numbers (n) may not add up to 131,961 because of missing self-reported data for
NASA Astrophysics Data System (ADS)
Lu, Xian; Chu, Xinzhao; Li, Haoyu; Chen, Cao; Smith, John A.; Vadas, Sharon L.
2017-09-01
We present the first statistical study of gravity waves with periods of 0.3-2.5 h that are persistent and dominant in the vertical winds measured with the University of Colorado STAR Na Doppler lidar in Boulder, CO (40.1°N, 105.2°W). The probability density functions of the wave amplitudes in temperature and vertical wind, ratios of these two amplitudes, phase differences between them, and vertical wavelengths are derived directly from the observations. The intrinsic period and horizontal wavelength of each wave are inferred from its vertical wavelength, amplitude ratio, and a designated eddy viscosity by applying the gravity wave polarization and dispersion relations. The amplitude ratios are positively correlated with the ground-based periods with a coefficient of 0.76. The phase differences between the vertical winds and temperatures (φ_W − φ_T) follow a Gaussian distribution with 84.2±26.7°, which has a much larger standard deviation than that predicted for non-dissipative waves (∼3.3°). The deviations of the observed phase differences from their predicted values for non-dissipative waves may indicate wave dissipation. The shorter-vertical-wavelength waves tend to have larger phase difference deviations, implying that the dissipative effects are more significant for shorter waves. The majority of these waves have the vertical wavelengths ranging from 5 to 40 km with a mean and standard deviation of 18.6 and 7.2 km, respectively. For waves with similar periods, multiple peaks in the vertical wavelengths are identified frequently and the ones peaking in the vertical wind are statistically longer than those peaking in the temperature. The horizontal wavelengths range mostly from 50 to 500 km with a mean and median of 180 and 125 km, respectively. Therefore, these waves are mesoscale waves with high-to-medium frequencies. Since they have recently become resolvable in high-resolution general circulation models (GCMs), this statistical study provides an important and timely reference for them.
Modeling and monitoring of tooth fillet crack growth in dynamic simulation of spur gear set
NASA Astrophysics Data System (ADS)
Guilbault, Raynald; Lalonde, Sébastien; Thomas, Marc
2015-05-01
This study integrates a linear elastic fracture mechanics analysis of the tooth fillet crack propagation into a nonlinear dynamic model of spur gear sets. An original formulation establishes the rigidity of sound and damaged teeth. The formula incorporates the contribution of the flexible gear body and real crack trajectories in the fillet zone. The work also develops a K_I prediction formula. A validation of the equation estimates shows that the predicted K_I values are in close agreement with published numerical and experimental values. The representation also relies on the Paris-Erdogan equation completed with crack closure effects. The analysis considers that during dN fatigue cycles, a harmonic mean of ΔK assures optimal evaluations. The paper evaluates the influence of the distance between the mesh frequency and the resonances of the system. The obtained results indicate that while the dependence may demonstrate obvious nonlinearities, the crack progression rate increases with a mesh frequency augmentation. The study develops a tooth fillet crack propagation detection procedure based on residual signals (RS) prepared in the frequency domain. The proposed approach accepts any gear condition as the reference signature. The standard deviation and mean values of the RS are evaluated as gear condition descriptors. A trend tracking of their responses obtained from a moving linear regression completes the analysis. Globally, the results show that, regardless of the reference signal, both descriptors are sensitive to the tooth fillet crack and sharply react to tooth breakage. On average, the mean value detected the crack propagation after a size increase of 3.69 percent as compared to the reference condition, whereas the standard deviation required crack progressions of 12.24 percent. Moreover, the mean descriptor shows evolutions closer to the crack size progression.
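For readers unfamiliar with the Paris-Erdogan relation mentioned above, the sketch below numerically integrates da/dN = C·(ΔK)^m for a generic crack under constant-amplitude loading. The constants, geometry factor and units are hypothetical placeholders; the paper's model additionally uses its own K_I formula, crack-closure corrections and a harmonic mean of ΔK over each block of cycles.

```python
import numpy as np

# Hypothetical Paris-Erdogan constants and loading (not the paper's calibration)
C, m = 1e-11, 3.0            # da/dN = C * (delta_K)**m, a in mm, delta_K in MPa*sqrt(mm)
delta_sigma, Y = 120.0, 1.1  # stress range (MPa) and geometry factor (assumed)

a, a_final, dN = 0.1, 2.0, 1000.0   # initial/final crack length (mm), cycle block size
N = 0.0
while a < a_final:
    delta_K = Y * delta_sigma * np.sqrt(np.pi * a)   # stress-intensity factor range
    a += C * delta_K ** m * dN                       # crack growth over dN cycles
    N += dN
print(f"Approximate cycles to grow the crack from 0.1 mm to {a_final} mm: {N:.0f}")
```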
Lin, Lung-Chang; Ouyang, Chen-Sen; Chiang, Ching-Tai; Wu, Rong-Ching; Wu, Hui-Chuan
2014-01-01
Summary. Objective: Listening to Mozart K.448 has been demonstrated to improve spatial task scores, leading to what is known as the Mozart Effect. However, most of these reports only describe the phenomena but lack the scientific evidence needed to properly investigate the mechanism of the Mozart Effect. In this study, we used electroencephalography (EEG) and heart rate variability (HRV) to evaluate the effects of Mozart K.448 on healthy volunteers to explore the Mozart Effect. Design: An EEG-based post-intervention analysis. Setting: Kaohsiung Medical University Hospital, Kaohsiung, Taiwan. Participants: Twenty-nine college students were enrolled. They received EEG and electrocardiogram examinations simultaneously before, during and after listening to the first movement of Mozart K.448. Main outcome measure: EEG alpha, theta and beta power and HRV were compared in each stage. Results: The results showed a significant decrease in alpha, theta and beta power when they listened to Mozart K.448. In addition, the average root mean square successive difference, the proportion derived by dividing NN50 by the total number of NN intervals, standard deviations of NN intervals and standard deviations of differences between adjacent NN intervals showed a significant decrease, while the high frequency revealed a significant decrease with a significantly elevated low-frequency/high-frequency ratio. Conclusion: Listening to Mozart K.448 significantly decreased EEG alpha, theta and beta power and HRV. This study indicates that there is brain cortical function and sympathetic tone activation in healthy adults when listening to Mozart K.448, which may play an important role in the mechanism of the Mozart Effect. PMID:25383198
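The time-domain HRV indices named in the results (SDNN, SDSD, RMSSD and the NN50 proportion) follow directly from a list of NN intervals; the sketch below is a generic illustration on synthetic intervals, not the study's analysis pipeline, and the LF/HF ratio would additionally require a spectral estimate of the interval series.

```python
import numpy as np

def hrv_time_domain(nn_ms):
    """Standard time-domain HRV indices from NN intervals in milliseconds
    (illustrative sketch; clinical software adds artifact handling)."""
    nn = np.asarray(nn_ms, dtype=float)
    diff = np.diff(nn)
    return {
        "SDNN": nn.std(ddof=1),                         # SD of NN intervals
        "SDSD": diff.std(ddof=1),                       # SD of successive differences
        "RMSSD": np.sqrt(np.mean(diff ** 2)),           # root mean square of successive differences
        "pNN50": 100.0 * np.mean(np.abs(diff) > 50.0),  # % of successive differences > 50 ms
    }

# Synthetic NN series (ms) standing in for a short recording
nn = 800 + np.cumsum(np.random.default_rng(2).normal(0, 5, 300))
print(hrv_time_domain(nn))
```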
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
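In standard notation, the identity behind this interpretation can be written as follows (stated for the unbiased sample variance, with each pair counted once; this is a general algebraic fact rather than anything specific to the article):

```latex
s^2 \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2
    \;=\; 2\cdot\frac{1}{\binom{n}{2}}\sum_{i<j}\left(\frac{x_i-x_j}{2}\right)^2,
\qquad\text{since}\qquad
\sum_{i=1}^{n}\sum_{j=1}^{n}\left(x_i-x_j\right)^2 \;=\; 2n\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2 .
```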
Ramalingam, S; Jayaprakash, A; Mohan, S; Karabacak, M
2011-11-01
FT-IR and FT-Raman (4000-100 cm⁻¹) spectral measurements of 3-methyl-1,2-butadiene (3M12B) have been attempted in the present work. Ab-initio HF and DFT (LSDA/B3LYP/B3PW91) calculations have been performed giving energies, optimized structures, harmonic vibrational frequencies, IR intensities and Raman activities. Complete vibrational assignments of the observed spectra are made with vibrational frequencies obtained by HF and DFT (LSDA/B3LYP/B3PW91) with the 6-31G(d,p) and 6-311G(d,p) basis sets. The results of the calculations have been used to simulate IR and Raman spectra for the molecule that showed good agreement with the observed spectra. The potential energy distribution (PED) corresponding to each of the observed frequencies is calculated, which confirms the reliability and precision of the assignment and analysis of the vibrational fundamental modes. The oscillation of the vibrational frequencies of butadiene due to the coupling of the methyl group is also discussed. A study of the electronic properties, such as HOMO and LUMO energies, was performed by the time-dependent DFT (TD-DFT) approach. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The thermodynamic properties of the title compound at different temperatures reveal the correlations between standard heat capacities (C), standard entropies (S), and standard enthalpy changes (H). Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chalyy, V.
2011-01-01
A bilateral regional comparison of national microphone standards from 2 Hz to 250 Hz was carried out between the DP NDI 'Systema' (Ukraine) and the VNIIFTRI (Russia) from July to September 2009. The comparison, COOMET.AUV.A-K2, was based on the pressure calibration of laboratory standard microphones type LSIP. The comparison results have been linked to the established key comparison reference value (KCRV) of CCAUV.A-K2. The degrees of equivalence, expressed as the deviation from the established KCRV and its expanded uncertainty (k = 2), have been determined, and the comparison result is in agreement with the KCRV within the estimated uncertainties at all employed frequencies. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCAUV, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
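The reported link between lure-evidence variability and the z-ROC slope can be illustrated with a small unequal-variance signal-detection simulation; the parameters below are hypothetical and the sketch is not the authors' model-fitting procedure.

```python
import numpy as np
from scipy.stats import norm

def zroc_slope(mu_t=1.0, sd_t=1.25, sd_l=1.0, n=200_000, seed=7):
    """Simulate a UVSD model and return the z-ROC slope, which tracks the
    ratio of lure to target evidence standard deviations (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    targets = rng.normal(mu_t, sd_t, n)
    lures = rng.normal(0.0, sd_l, n)
    criteria = np.linspace(-1.0, 2.0, 7)               # confidence-rating cut points
    z_hit = norm.ppf([(targets > c).mean() for c in criteria])
    z_fa = norm.ppf([(lures > c).mean() for c in criteria])
    return np.polyfit(z_fa, z_hit, 1)[0]

print(zroc_slope(sd_l=1.0))   # baseline lure variability
print(zroc_slope(sd_l=1.3))   # more variable lures -> steeper z-ROC slope
```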
Janssen, Paddy K C; Olivier, Berend; Zwinderman, Aeilko H; Waldinger, Marcel D
2014-01-01
To analyze a recently published meta-analysis of six studies on 5-HTTLPR polymorphism and lifelong premature ejaculation (PE). Calculation of the fraction of observed and expected genotype frequencies and Hardy-Weinberg equilibrium (HWE) of cases and controls. LL, SL and SS genotype frequencies of patients were subtracted from the genotype frequencies of an ideal population (LL 25%, SL 50%, SS 25%; p = 1 for HWE). Analysis of the PCRs of the six studies and re-analysis of the analysis and odds ratios (ORs) reported in the recently published meta-analysis. Three studies deviated from HWE in patients and one study deviated from HWE in controls. In the three studies in HWE, the mean deviation of genotype frequencies from a theoretical population not deviating from HWE was small: LL (1.7%), SL (-2.3%), SS (0.6%). In the three studies not in HWE, the mean deviation of genotype frequencies was high: LL (-3.3%), SL (-18.5%) and SS (21.8%), with a very low percentage of the SL genotype concurrent with a very high percentage of the SS genotype. The most serious PCR deviations were reported in the three not-in-HWE studies. The three in-HWE studies had a normal OR. In contrast, the three not-in-HWE studies had a low OR. In the three studies not in HWE and with very low OR, inadequate PCR analysis and/or inadequate interpretation of its gel electrophoresis resulted in a very low SL and a consequent shift to a very high SS genotype frequency outcome. Consequently, the PCRs of these three studies are not reliable. Failure to note the inadequacy of PCR tests makes such PCRs a confounding factor in the clinical interpretation of genetic studies. Currently, a meta-analysis can only be performed on the three studies in HWE. However, based on the three studies in HWE with an OR of about 1, there is no indication that in men with lifelong PE the frequencies of the LL, SL and SS genotypes deviate from the general male population and/or that the SL or SS genotype is in any way associated with lifelong PE.
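The HWE check at the heart of this re-analysis can be reproduced from raw genotype counts with a one-degree-of-freedom chi-square test; the counts below are made up for illustration and the sketch is not a re-analysis of the six studies.

```python
import numpy as np
from scipy.stats import chi2

def hwe_chi_square(n_LL, n_SL, n_SS):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
    from biallelic genotype counts (illustrative sketch)."""
    n = n_LL + n_SL + n_SS
    p = (2 * n_LL + n_SL) / (2 * n)            # frequency of the L allele
    q = 1.0 - p
    expected = np.array([p * p, 2 * p * q, q * q]) * n
    observed = np.array([n_LL, n_SL, n_SS], dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)           # df = 3 classes - 1 - 1 estimated allele frequency

# Hypothetical genotype counts (not taken from the studies discussed above)
print(hwe_chi_square(60, 120, 70))
```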
Pádua, Gisley de; Martinez, Edson Zangiacomi; Brunherotti, Marisa Afonso de Andrade
2009-01-01
The increase in gastric volume during gavage feeding is likely to have consequences for the premature newborn, modifying respiratory indicators. To investigate the alterations in the cardiorespiratory system of premature newborns submitted to an increase of gastric volume by gavage feeding, according to four different body positions. The study is a randomized crossover trial using a sample of 16 newborns with gestational age from 31 to 34 weeks and birth weight less than or equal to 2,500 g. The newborns were included in the study if they were 7 to 10 days old, fed by orogastric tube with a total volume of 150 mL/kg/day, and receiving no supplemental oxygen therapy. A different position was used at each gavage (all raised to 30 degrees), namely the right lateral, left lateral, prone and supine positions. The following response variables were considered: respiratory and cardiac frequencies, oxygen saturation, intercostal retraction, nasal flaring and grunting. These measures were collected at 2-minute intervals during the 5 minutes after the gavage feeding, during the whole period of the gavage feeding, and during the 5 minutes before the gavage feeding. The mean gestational age was 32 weeks (standard deviation 1.3) and the mean weight of the newborns was 1,722 g (standard deviation 276.3). The newborns presented higher mean respiratory frequencies in the supine and left lateral positions. In the right lateral and prone positions, the newborns presented a lower mean cardiac frequency. The mean oxygen saturation had the lowest values in the left lateral and supine positions. The right lateral and prone positions presented low frequencies of intercostal retraction, nasal flaring and grunting. Our results indicate that the right lateral and prone positions influence the cardiorespiratory response, with the left lateral and supine positions presenting the greater negative effect in newborns submitted to an increase of gastric volume.
New Evidence on the Relationship Between Climate and Conflict
NASA Astrophysics Data System (ADS)
Burke, M.
2015-12-01
We synthesize a large new body of research on the relationship between climate and conflict. We consider many types of human conflict, ranging from interpersonal conflict -- domestic violence, road rage, assault, murder, and rape -- to intergroup conflict -- riots, coups, ethnic violence, land invasions, gang violence, and civil war. After harmonizing statistical specifications and standardizing estimated effect sizes within each conflict category, we implement a meta-analysis that allows us to estimate the mean effect of climate variation on conflict outcomes as well as quantify the degree of variability in this effect size across studies. Looking across more than 50 studies, we find that deviations from moderate temperatures and precipitation patterns systematically increase the risk of conflict, often substantially, with average effects that are highly statistically significant. We find that contemporaneous temperature has the largest average effect by far, with each 1 standard deviation increase toward warmer temperatures increasing the frequency of contemporaneous interpersonal conflict by 2% and of intergroup conflict by more than 10%. We also quantify substantial heterogeneity in these effect estimates across settings.
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62% while the optimal parameter set varies among the instrument classes.
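The pooling step compared in this work reduces a frame-by-feature activation matrix to one clip-level vector. The sketch below contrasts max-, average- and standard-deviation pooling on synthetic activations; it only illustrates the aggregation step, not the sparse dictionary learning or the classifier.

```python
import numpy as np

def pool_features(activations, method="std"):
    """Aggregate frame-level activations (n_frames, n_features) into a single
    clip-level feature vector (illustrative sketch of the pooling step)."""
    if method == "max":
        return activations.max(axis=0)
    if method == "mean":
        return activations.mean(axis=0)
    if method == "std":
        return activations.std(axis=0, ddof=1)   # standard-deviation pooling
    raise ValueError(f"unknown pooling method: {method}")

# 500 frames of 128 sparse-coded activations for one hypothetical recording
acts = np.abs(np.random.default_rng(3).normal(size=(500, 128)))
print(pool_features(acts, "std").shape)   # -> (128,)
```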
Energy Content & Spectral Energy Representation of Wave Propagation in a Granular Chain
NASA Astrophysics Data System (ADS)
Shrivastava, Rohit; Luding, Stefan
2017-04-01
A mechanical wave is propagation of vibration with transfer of energy and momentum. Studying the energy as well as spectral energy characteristics of a propagating wave through disordered granular media can assist in understanding the overall properties of wave propagation through materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral or gas exploration (seismic prospecting) or non-destructive testing for the study of internal structure of solids. Wave propagation through granular materials is often accompanied by energy attenuation which is quantified by Quality factor and this parameter has often been used to characterize material properties, hence, determining the Quality factor (energy attenuation parameter) can also help in determining the properties of the material [3], studied experimentally in [2]. The study of Energy content (Kinetic, Potential and Total Energy) of a pulse propagating through an idealized one-dimensional discrete particle system like a mass disordered granular chain can assist in understanding the energy attenuation due to disorder as a function of propagation distance. The spectral analysis of the energy signal can assist in understanding dispersion as well as attenuation due to scattering in different frequencies (scattering attenuation). The selection of one-dimensional granular chain also helps in studying only the P-wave attributes of the wave and removing the influence of shear or rotational waves. Granular chains with different mass distributions have been studied, by randomly selecting masses from normal, binary and uniform distributions and the standard deviation of the distribution is considered as the disorder parameter, higher standard deviation means higher disorder and lower standard deviation means lower disorder [1]. For obtaining macroscopic/continuum properties, ensemble averaging has been invoked. Instead of analyzing deformation-, velocity- or stress-signals, interpreting information from a Total Energy signal turned out to be much easier in comparison to displacement, velocity or acceleration signals of the wave, hence, indicating a better analysis method for wave propagation through granular materials. Increasing disorder decreases the Energy of higher frequency signals transmitted, but at the same time the energy of spatially localized high frequencies increases. References: [1] Brian P. Lawney and Stefan Luding. Mass-disorder effects on the frequency filtering in one-dimensional discrete particle systems. AIP Conference Proceedings, 1542(1), 2013. [2] Ibrahim Guven. Hydraulical and acoustical properties of porous sintered glass bead systems: experiments, theory and simulations (Doctoral dissertation). [3] Rainer Tonn. Comparison of seven methods for the computation of Q. Physics of the Earth and Planetary Interiors, 55(3):259-268, 1989. [4] Rohit Kumar Shrivastava and Stefan Luding. Effect of Disorder on Bulk Sound Wave Speed: A Multiscale Spectral Analysis, Nonlin. Processes Geophys. Discuss., doi:10.5194/npg-2016-83, in review, 2017.
Improving operating room turnover time: a systems based approach.
Bhatt, Ankeet S; Carlson, Grant W; Deckers, Peter J
2014-12-01
Operating room (OR) turnover time (TT) has a broad and significant impact on hospital administrators, providers, staff and patients. Our objective was to identify current problems in TT management and implement a consistent, reproducible process to reduce average TT and process variability. Initial observations of TT were made to document the existing process at a 511 bed, 24 OR, academic medical center. Three control groups, including one consisting of Orthopedic and Vascular Surgery, were used to limit potential confounders such as case acuity/duration and equipment needs. A redesigned process based on observed issues, focusing on a horizontally structured, systems-based approach has three major interventions: developing consistent criteria for OR readiness, utilizing parallel processing for patient and room readiness, and enhancing perioperative communication. Process redesign was implemented in Orthopedics and Vascular Surgery. Comparisons of mean and standard deviation of TT were made using an independent 2-tailed t-test. Using all surgical specialties as controls (n = 237), mean TT (hh:mm:ss) was reduced by 0:20:48 min (95 % CI, 0:10:46-0:30:50), from 0:44:23 to 0:23:25, a 46.9 % reduction. Standard deviation of TT was reduced by 0:10:32 min, from 0:16:24 to 0:05:52 and frequency of TT≥30 min was reduced from 72.5% to 11.7%. P < 0.001 for each. Using Vascular and Orthopedic surgical specialties as controls (n = 13), mean TT was reduced by 0:15:16 min (95 % CI, 0:07:18-0:23:14), from 0:38:51 to 0:23:35, a 39.4 % reduction. Standard deviation of TT reduced by 0:08:47, from 0:14:39 to 0:05:52 and frequency of TT≥30 min reduced from 69.2% to 11.7%. P < 0.001 for each. Reductions in mean TT present major efficiency, quality improvement, and cost-reduction opportunities. An OR redesign process focusing on parallel processing and enhanced communication resulted in greater than 35 % reduction in TT. A systems-based focus should drive OR TT design.
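The before/after comparison of mean turnover time rests on an independent two-tailed t-test. The sketch below reproduces the idea on synthetic timing data whose means and spreads are chosen to resemble the reported figures; the numbers are illustrative, not the study's raw data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic turnover times in minutes (illustrative only)
before = rng.normal(loc=44.4, scale=16.4, size=237)
after = rng.normal(loc=23.4, scale=5.9, size=237)

# Welch's variant is used here because the spreads clearly differ
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"mean reduction = {before.mean() - after.mean():.1f} min, "
      f"t = {t_stat:.2f}, p = {p_value:.2e}")
```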
Down-Looking Interferometer Study II, Volume I,
1980-03-01
Z = (standard deviation of ΔN)/(standard deviation of …) (3), where T_rm is the "reference spectrum", an estimate of the actual spectrum T_ν c_gν. If |ρ| … spectrum T_ν c_gν. According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system…
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
Briehl, Margaret M.; Nelson, Mark A.; Krupinski, Elizabeth A.; Erps, Kristine A.; Holcomb, Michael J.; Weinstein, John B.
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, “Mechanisms of Human Disease.” Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master’s: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises. PMID:28725783
Study on characteristics of chirp about Doppler wind lidar system
NASA Astrophysics Data System (ADS)
Du, Li-fang; Yang, Guo-tao; Wang, Ji-hong; Yue, Chuan; Chen, Lin-xiang
2016-11-01
In a 532 nm Doppler wind lidar, a frequency error of 4 MHz will typically produce a wind error of about 1 m/s. In this Doppler lidar system, frequency stabilization was achieved through iodine molecular absorption. Commands that control the instrument system were based on a PID algorithm and coded in the VB language. The frequency of the seed laser was locked to iodine molecular absorption line 1109, which is close to the upper edge of the absorption range, with a long-term (>4 h) frequency-locking accuracy of ≤0.5 MHz and a long-term frequency stability of 10⁻⁹. The experimental results indicated that there is a deviation between the seed frequency and the pulsed laser frequency, an effect known as laser chirp. Finally, a chirp test system was constructed to measure this frequency offset over time; such frequency deviation is known as the chirp of the laser pulse. The real-time measured frequency difference between the continuous and pulsed light was about 10 MHz, and the long-term stability deviation was around 5 MHz. Once the measurement technique matures, it can be used to monitor this signal over the long term and to correct the retrieved wind speed.
On the variability of the Priestley-Taylor coefficient over water bodies
NASA Astrophysics Data System (ADS)
Assouline, Shmuel; Li, Dan; Tyler, Scott; Tanny, Josef; Cohen, Shabtai; Bou-Zeid, Elie; Parlange, Marc; Katul, Gabriel G.
2016-01-01
Deviations in the Priestley-Taylor (PT) coefficient αPT from its accepted 1.26 value are analyzed over large lakes, reservoirs, and wetlands where stomatal or soil controls are minimal or absent. The data sets feature wide variations in water body sizes and climatic conditions. Neither surface temperature nor sensible heat flux variations alone, which proved successful in characterizing αPT variations over some crops, explain measured deviations in αPT over water. It is shown that the relative transport efficiency of turbulent heat and water vapor is key to explaining variations in αPT over water surfaces, thereby offering a new perspective over the concept of minimal advection or entrainment introduced by PT. Methods that allow the determination of αPT based on low-frequency sampling (i.e., 0.1 Hz) are then developed and tested, which are usable with standard meteorological sensors that filter some but not all turbulent fluctuations. Using approximations to the Gram determinant inequality, the relative transport efficiency is derived as a function of the correlation coefficient between temperature and water vapor concentration fluctuations (RTq). The proposed approach reasonably explains the measured deviations from the conventional αPT = 1.26 value even when RTq is determined from air temperature and water vapor concentration time series that are Gaussian-filtered and subsampled to a cutoff frequency of 0.1 Hz. Because over water bodies, RTq deviations from unity are often associated with advection and/or entrainment, linkages between αPT and RTq offer both a diagnostic approach to assess their significance and a prognostic approach to correct the 1.26 value when using routine meteorological measurements of temperature and humidity.
Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-05-09
Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting lists (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in the χ² test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in the χ² test). Testing deviations from the NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
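A first-digit NBL screen of a list of reported waiting-list figures takes only a few lines; the sketch below uses synthetic counts spanning several orders of magnitude and Pearson's chi-square test, and is an illustration of the screening idea rather than the published analysis (which also used mean absolute deviation and Kuiper tests).

```python
import numpy as np
from scipy.stats import chisquare

def benford_first_digit_test(values):
    """Compare observed first-digit frequencies with Newcomb-Benford's Law
    using Pearson's chi-square test (illustrative sketch)."""
    digits = np.array([int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0])
    observed = np.bincount(digits, minlength=10)[1:10]
    benford_p = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(first digit = d)
    expected = benford_p * observed.sum()
    return chisquare(observed, expected)

# Synthetic waiting-list counts spanning several orders of magnitude
rng = np.random.default_rng(5)
counts = np.round(10 ** rng.uniform(1, 5, 2000)).astype(int)
print(benford_first_digit_test(counts))
```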
Hou, Yubin; Zhang, Qian; Qi, Shuxian; Feng, Xian; Wang, Pu
2018-03-15
We demonstrate a polarization-maintaining (PM) dual-wavelength (DW) single-frequency Er-doped distributed Bragg reflection (DBR) fiber laser with 28 GHz stable frequency difference. A homemade PM low-reflectivity superimposed fiber Bragg grating (SFBG) is employed as the output port of the DBR fiber laser. The SFBG has two reflection wavelengths located in the same grating region. The reflectivity of both DWs is around 85%. The achieved linear polarization extinction ratio is more than 20 dB. The DWs of the laser output are located at 1552.2 nm and 1552.43 nm, respectively. The optical signal-to-noise ratio (SNR) is above 60 dB. For each wavelength, only one longitudinal mode exists. The beat frequency of the two longitudinal modes is measured to be 28.4474 GHz, with the SNR of more than 65 dB and the linewidth less than 300 Hz. During a 60-min-long measurement, the standard deviation of the frequency fluctuation is 58.592 kHz.
Anderson, Jeffrey S; Zielinski, Brandon A; Nielsen, Jared A; Ferguson, Michael A
2014-04-01
Very low-frequency blood oxygen level-dependent (BOLD) fluctuations have emerged as a valuable tool for describing brain anatomy, neuropathology, and development. Such fluctuations exhibit power law frequency dynamics, with largest amplitude at lowest frequencies. The biophysical mechanisms generating such fluctuations are poorly understood. Using publicly available data from 1,019 subjects of age 7-30, we show that BOLD fluctuations exhibit temporal complexity that is linearly related to local connectivity (regional homogeneity), consistently and significantly covarying across subjects and across gray matter regions. This relationship persisted independently of covariance with gray matter density or standard deviation of BOLD signal. During late neurodevelopment, BOLD fluctuations were unchanged with age in association cortex while becoming more random throughout the rest of the brain. These data suggest that local interconnectivity may play a key role in establishing the complexity of low-frequency BOLD fluctuations underlying functional magnetic resonance imaging connectivity. Stable low-frequency power dynamics may emerge through segmentation and integration of connectivity during development of distributed large-scale brain networks. Copyright © 2013 Wiley Periodicals, Inc.
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution in the range of 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved using LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
High-Resolution FTIR Spectrum of the ν12 Band of trans-d2-Ethylene
NASA Astrophysics Data System (ADS)
Teo, H. H.; Ong, P. P.; Tan, T. L.; Goh, K. L.
2000-11-01
The ν12 band of trans-d2-ethylene (trans-C2H2D2) has been recorded with an unapodized resolution of 0.0024 cm⁻¹ in the frequency range of 1240-1360 cm⁻¹ by Fourier transform infrared (FTIR) spectroscopy. This band was found to be relatively free from any local frequency perturbations. By fitting a total of 1185 infrared transitions of ν12 with a standard deviation of 0.00043 cm⁻¹ using Watson's A-reduced Hamiltonian in the Ir representation, a set of accurate rovibrational constants for the v12 = 1 state was derived. The ν12 band is A-type with a band center at 1298.03797 ± 0.00004 cm⁻¹.
NASA Astrophysics Data System (ADS)
Pengvanich, Phongphaeth
In this thesis, several contemporary issues on coherent radiation sources are examined. They include the fast startup and the injection locking of microwave magnetrons, and the effects of random manufacturing errors on phase and small signal gain of terahertz traveling wave amplifiers. In response to the rapid startup and low noise magnetron experiments performed at the University of Michigan that employed periodic azimuthal perturbations in the axial magnetic field, a systematic study of single particle orbits is performed for a crossed electric and periodic magnetic field. A parametric instability in the orbits, which brings a fraction of the electrons from the cathode toward the anode, is discovered. This offers an explanation of the rapid startup observed in the experiments. A phase-locking model has been constructed from circuit theory to qualitatively explain various regimes observed in kilowatt magnetron injection-locking experiments, which were performed at the University of Michigan. These experiments utilize two continuous-wave magnetrons; one functions as an oscillator and the other as a driver. Time and frequency domain solutions are developed from the model, allowing investigations into growth, saturation, and frequency response of the output. The model qualitatively recovers many of the phase-locking frequency characteristics observed in the experiments. Effects of frequency chirp and frequency perturbation on the phase and lockability have also been quantified. Development of traveling wave amplifier operating at terahertz is a subject of current interest. The small circuit size has prompted a statistical analysis of the effects of random fabrication errors on phase and small signal gain of these amplifiers. The small signal theory is treated with a continuum model in which the electron beam is monoenergetic. Circuit perturbations that vary randomly along the beam axis are introduced through the dimensionless Pierce parameters describing the beam-wave velocity mismatch (b), the gain parameter (C), and the cold tube circuit loss ( d). Our study shows that perturbation in b dominates the other two in terms of power gain and phase shift. Extensive data show that standard deviation of the output phase is linearly proportional to standard deviation of the individual perturbations in b, C and d.
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
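For completeness, the textbook relationship the note addresses is:

```latex
\mathrm{SE}(\bar{x}) \;=\; \frac{s}{\sqrt{n}},
\qquad
s \;=\; \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2},
```

so the SD describes the spread of the observations themselves, while the SE describes the uncertainty of the sample mean and shrinks as n grows.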
Isaksen, Jonas; Leber, Remo; Schmid, Ramun; Schmid, Hans-Jakob; Generali, Gianluca; Abächerli, Roger
2017-02-01
The first-order high-pass filter (AC coupling) has previously been shown to affect the ECG for higher cut-off frequencies. We seek to find a systematic deviation in computer measurements of the electrocardiogram when AC coupling with a 0.05 Hz first-order high-pass filter is used. The standard 12-lead electrocardiograms from 1248 patients and the automated measurements of their DC- and AC-coupled versions were used. We expect a large unipolar QRS-complex to produce a deviation in the opposite direction in the ST-segment. We found a strong correlation between the QRS integral and the offset throughout the ST-segment. The coefficient for J amplitude deviation was found to be -0.277 µV/(µV⋅s). Potentially dangerous alterations to the diagnostically important ST-segment were found. Medical professionals and software developers for electrocardiogram interpretation programs should be aware of such high-pass filter effects since they could be misinterpreted as pathophysiology or some pathophysiology could be masked by these effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
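The mechanism described — a large unipolar QRS integral leaving an opposite-direction offset in the following segment after first-order AC coupling — can be reproduced on a crude synthetic waveform. The sketch below uses a first-order 0.05 Hz Butterworth high-pass as a stand-in for the AC coupling; the waveform, amplitudes and segment window are invented for illustration.

```python
import numpy as np
from scipy import signal

fs = 500.0                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
for beat in np.arange(0.5, 10, 1.0):         # one tall 80 ms "QRS" pulse per second
    ecg[(t >= beat) & (t < beat + 0.08)] = 1000.0   # µV

# First-order 0.05 Hz high-pass filter (AC-coupling stand-in)
b, a = signal.butter(1, 0.05 / (fs / 2), btype="highpass")
filtered = signal.lfilter(b, a, ecg)

# Mean level 120-200 ms after one pulse, a stand-in for the ST segment
seg = (t >= 3.5 + 0.12) & (t < 3.5 + 0.20)
print(f"post-QRS offset after AC coupling: {filtered[seg].mean():.1f} µV")
```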
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
A fatigue monitoring system based on time-domain and frequency-domain analysis of pulse data
NASA Astrophysics Data System (ADS)
Shen, Jiaai
2018-04-01
Fatigue is a problem that almost everyone faces, and a condition that everyone dislikes. If we can assess people's fatigue state and remind them of their tiredness, dangers in daily life, for instance traffic accidents and sudden death, will be effectively reduced, and fatigued operation will be avoided. People can also be helped to keep track of their own and others' physical condition in time, so as to alternate work with rest. This article develops a wearable bracelet based on FFT pulse frequency-spectrum analysis and on the standard deviation and range of the inter-beat interval (IBI), using people's heart rate (BPM) and IBI while they are tired and while they are alert. The hardware part is based on an Arduino, a pulse rate sensor, and a Bluetooth module, and the software part relies on a networked micro database and an app. By carrying out a sample experiment to obtain a more accurate reference value for judging tiredness, we show that people's fatigue state can be judged from heart rate (BPM) and inter-beat interval (IBI).
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
NASA Astrophysics Data System (ADS)
Zhou, Yan; Liu, Cheng-hui; Pu, Yang; Cheng, Gangge; Zhou, Lixin; Chen, Jun; Zhu, Ke; Alfano, Robert R.
2016-03-01
Raman spectroscopy has become widely used for the diagnosis of breast, lung and brain cancers. This report introduces a new approach based on spatial frequency spectral analysis of the underlying tissue structure at different stages of brain tumor. A combined spatial frequency spectroscopy (SFS) and resonance Raman (RR) spectroscopic method is used to discriminate human brain metastases of lung cancer from normal tissues for the first time. A total of thirty-one label-free micrographic images of normal and metastatic brain cancer tissues, obtained from a confocal micro-Raman spectroscopic system synchronously with the RR spectra of the corresponding samples, were collected from identical tissue sites. The difference in the randomness of tissue structure between micrograph images of metastatic brain tumor tissues and normal tissues can be recognized by analyzing the spatial frequency content. By fitting the distribution of the spatial frequency spectra of human brain tissues with a Gaussian function, the standard deviation, σ, can be obtained, which was used to generate a criterion to differentiate human brain cancerous tissues from normal ones using a Support Vector Machine (SVM) classifier. This SFS-SVM analysis of micrograph images gives good results, with sensitivity of 85% and specificity of 75% in comparison with gold-standard pathology and immunology reports. The dual-modal advantages of the combined SFS and RR spectroscopy method may open a new way to neuropathology applications.
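One plausible way to obtain the σ criterion described above is to radially average the FFT magnitude spectrum of each micrograph and fit a Gaussian to that profile; the sketch below does exactly that on a synthetic image. This is an assumption about the processing, not the authors' exact implementation, and the per-image σ values would subsequently feed a classifier such as sklearn.svm.SVC.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

def spatial_frequency_sigma(image):
    """Fit a Gaussian to the radially averaged spatial-frequency magnitude
    spectrum of an image and return its width sigma (illustrative sketch)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = np.array(spectrum.shape) // 2
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=spectrum.ravel()) / np.maximum(counts, 1)

    freqs = np.arange(1, len(radial))                  # radial frequency index, DC bin skipped
    gauss = lambda f, A, sigma: A * np.exp(-f ** 2 / (2.0 * sigma ** 2))
    (_, sigma), _ = curve_fit(gauss, freqs, radial[1:], p0=(radial[1], 10.0), maxfev=5000)
    return abs(sigma)

# A smoothed random field stands in for a tissue micrograph
img = gaussian_filter(np.random.default_rng(6).random((128, 128)), sigma=3)
print(spatial_frequency_sigma(img))
```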
Strategies to Prevent MRSA Transmission in Community-Based Nursing Homes: A Cost Analysis.
Roghmann, Mary-Claire; Lydecker, Alison; Mody, Lona; Mullins, C Daniel; Onukwugha, Eberechukwu
2016-08-01
OBJECTIVE To estimate the costs of 3 MRSA transmission prevention scenarios compared with standard precautions in community-based nursing homes. DESIGN Cost analysis of data collected from a prospective, observational study. SETTING AND PARTICIPANTS Care activity data from 401 residents from 13 nursing homes in 2 states. METHODS Cost components included the quantities of gowns and gloves, time to don and doff gown and gloves, and unit costs. Unit costs were combined with information regarding the type and frequency of care provided over a 28-day observation period. For each scenario, the estimated costs associated with each type of care were summed across all residents to calculate an average cost and standard deviation for the full sample and for subgroups. RESULTS The average cost for standard precautions was $100 (standard deviation [SD], $77) per resident over a 28-day period. If gown and glove use for high-risk care was restricted to those with MRSA colonization or chronic skin breakdown, average costs increased to $137 (SD, $120) and $125 (SD, $109), respectively. If gowns and gloves were used for high-risk care for all residents in addition to standard precautions, the average cost per resident increased substantially to $223 (SD, $127). CONCLUSIONS The use of gowns and gloves for high-risk activities with all residents increased the estimated cost by 123% compared with standard precautions. This increase was ameliorated if specific subsets (eg, those with MRSA colonization or chronic skin breakdown) were targeted for gown and glove use for high-risk activities. Infect Control Hosp Epidemiol 2016;37:962-966.
The optimal input optical pulse shape for the self-phase modulation based chirp generator
NASA Astrophysics Data System (ADS)
Zachinyaev, Yuriy; Rumyantsev, Konstantin
2018-04-01
This work aims to obtain the optimal shape of the input optical pulse for the proper functioning of a chirp generator based on self-phase modulation, allowing high values of chirp frequency deviation to be achieved. During the research, the structure of a device exploiting the self-phase modulation effect has been analyzed. The influence of the shape of the input optical pulse from the transmitting optical module on the chirp frequency deviation has been studied. The relationship between the frequency deviation of the generated chirp and its frequency linearity has also been estimated for three options for implementing the pulse shape. The results are related to the development of the theory of radio processors based on fiber-optic structures and can be used in radar, secure communications, geolocation and tomography.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
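The reported reductions follow from simple variance bookkeeping: square the total standard deviation, subtract the variance of the repeatable component(s) that a single-site (or single-path) analysis no longer treats as random, and take the square root. The sketch below uses made-up component values chosen only to land in the reported ranges; which components are removed in each case is an assumption here, not taken from the paper.

```python
import numpy as np

# Hypothetical standard deviations (log units) of the residual components.
sigma_total = 0.70     # total aleatory standard deviation
phi_site    = 0.30     # repeatable site-specific component
phi_path    = 0.49     # repeatable path/source-location component (assumed grouping)

sigma_single_site = np.sqrt(sigma_total**2 - phi_site**2)
sigma_single_path = np.sqrt(sigma_total**2 - phi_site**2 - phi_path**2)

for name, s in [("single-site", sigma_single_site), ("single-path", sigma_single_path)]:
    print(f"{name}: sigma = {s:.3f} ({100 * (1 - s / sigma_total):.0f}% smaller than total)")
```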
Ultrasonic test of resistance spot welds based on wavelet package analysis.
Liu, Jing; Xu, Guocheng; Gu, Xiaopeng; Zhou, Guanghao
2015-02-01
In this paper, ultrasonic testing of spot welds in stainless steel sheets has been studied. It is indicated that traditional ultrasonic signal analysis in either the time domain or the frequency domain remains inadequate for evaluating the nugget diameter of spot welds. However, a method based on wavelet packet analysis in the time-frequency domain can easily distinguish the nugget from the corona bond by extracting high-frequency signals at different positions of the spot weld, thereby quantitatively evaluating the nugget diameter. The results of the ultrasonic test fit the actual measured values well. The mean of the normal distribution fitted to the error statistics is 0.00187, and the standard deviation is 0.1392. Furthermore, the quality of the spot welds was evaluated, and it was shown that ultrasonic nondestructive testing based on wavelet packet analysis can be used to evaluate the quality of spot welds and is more reliable than a single tensile destructive test. Copyright © 2014 Elsevier B.V. All rights reserved.
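As a rough illustration of the time-frequency step, the sketch below uses the PyWavelets package to decompose toy A-scan signals into wavelet packet nodes and compare the energy in a high-frequency band at two probe positions; the wavelet family, decomposition depth, node choice, and signal model are assumptions, not the parameters used in the paper.

```python
import numpy as np
import pywt

fs = 10e6                      # assumed sampling rate, Hz
t = np.arange(0, 2e-4, 1 / fs)

def echo(freq, decay, delay):
    """Toy ultrasonic echo: a decaying tone burst starting at `delay`."""
    s = np.zeros_like(t)
    m = t >= delay
    s[m] = np.exp(-decay * (t[m] - delay)) * np.sin(2 * np.pi * freq * (t[m] - delay))
    return s

# Signals standing in for two probe positions: nugget vs. corona bond.
nugget_signal = echo(2.0e6, 4e4, 2e-5) + 0.05 * np.random.randn(t.size)
corona_signal = echo(2.0e6, 4e4, 2e-5) + echo(4.0e6, 8e4, 3e-5) + 0.05 * np.random.randn(t.size)

def high_freq_energy(x, wavelet="db4", level=3):
    """Energy in one high-frequency wavelet packet node at the given level."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    node = "d" * level                    # 'ddd' at level 3: a high-frequency packet node
    return float(np.sum(np.asarray(wp[node].data) ** 2))

print("nugget high-frequency energy:", high_freq_energy(nugget_signal))
print("corona high-frequency energy:", high_freq_energy(corona_signal))
```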
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately computed by the Jensen-Shannon divergence of two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies. That is, the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, while the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
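The low-frequency fusion rule (choose coefficients by regional standard deviation and Shannon entropy) can be sketched as follows. No standard Python NSCT implementation is assumed, so the code operates on two generic low-frequency subbands; the window size, quantization, and equal weighting of the two activity measures are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_std(band, size=7):
    """Local standard deviation of a subband in a size x size window."""
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def regional_entropy(band, size=7, bins=32):
    """Crude local Shannon entropy of window-quantized subband values."""
    q = np.digitize(band, np.linspace(band.min(), band.max(), bins))
    ent = np.zeros_like(band, dtype=float)
    for b in range(1, bins + 1):
        p = uniform_filter((q == b).astype(float), size)   # local probability of label b
        ent -= np.where(p > 0, p * np.log2(p), 0.0)
    return ent

def fuse_lowpass(band_a, band_b, size=7):
    """Pick, pixel-wise, the coefficient whose combined activity
    (regional std + entropy) is larger; one plausible reading of the rule."""
    act_a = regional_std(band_a, size) + regional_entropy(band_a, size)
    act_b = regional_std(band_b, size) + regional_entropy(band_b, size)
    return np.where(act_a >= act_b, band_a, band_b)

# Toy subbands standing in for the NSCT low-frequency coefficients of two images.
rng = np.random.default_rng(1)
A = uniform_filter(rng.normal(size=(64, 64)), 5)
B = uniform_filter(rng.normal(size=(64, 64)), 9)
print(fuse_lowpass(A, B).shape)
```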
Spectroscopic pulsational frequency identification and mode determination of γ Doradus star HD 12901
NASA Astrophysics Data System (ADS)
Brunsden, E.; Pollard, K. R.; Cottrell, P. L.; Wright, D. J.; De Cat, P.
2012-12-01
Using multisite spectroscopic data collected from three sites, the frequencies and pulsational modes of the γ Doradus star HD 12901 were identified. A total of six frequencies in the range 1-2 d-1 were observed, their identifications supported by multiple line-profile measurement techniques and previously published photometry. Five frequencies were of sufficient signal-to-noise ratio for mode identification, and all five displayed similar three-bump standard deviation profiles which were fitted well with (l,m) = (1,1) modes. These fits had reduced χ2 values of less than 18. We propose that this star is an excellent candidate to test models of non-radially pulsating γ Doradus stars as a result of the presence of multiple (1,1) modes. This paper includes data taken at the Mount John University Observatory of the University of Canterbury (New Zealand), the McDonald Observatory of the University of Texas at Austin (Texas, USA) and the European Southern Observatory at La Silla (Chile).
Jia, Tao; Gao, Di
2018-04-03
Molecular dynamics simulation is employed to investigate the microscopic heat current inside an argon-copper nanofluid. Wavelet analysis of the microscopic heat current inside the nanofluid system is conducted. The signal of the microscopic heat current is decomposed into two parts: the approximation part and the detail part. The approximation part is associated with the low-frequency content of the signal, and the detail part is associated with the high-frequency content. The probability distributions of both the high-frequency and low-frequency parts demonstrate Gaussian-like characteristics. Curves are fitted to the probability distribution of the microscopic heat current, and the parameters in the mathematical formulas of the curves, including the mean value and the standard deviation, show dramatic changes between the cases before and after adding copper nanoparticles to the argon base fluid.
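A brief sketch of the decomposition and distribution-fitting steps, with synthetic data and a single-level discrete wavelet transform from PyWavelets standing in for the simulation output and the authors' wavelet analysis.

```python
import numpy as np
import pywt
from scipy.stats import norm

rng = np.random.default_rng(2)
heat_current = rng.normal(0.0, 1.0, 4096) + 0.3 * np.sin(np.linspace(0, 40 * np.pi, 4096))

# Single-level discrete wavelet transform: approximation (low-frequency)
# and detail (high-frequency) parts of the signal.
approx, detail = pywt.dwt(heat_current, "db4")

for name, part in [("approximation", approx), ("detail", detail)]:
    mu, sigma = norm.fit(part)          # Gaussian mean and standard deviation
    print(f"{name:14s} mean = {mu:+.3f}, std = {sigma:.3f}")
```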
Introducing Discrete Frequency Infrared Technology for High-Throughput Biofluid Screening
NASA Astrophysics Data System (ADS)
Hughes, Caryn; Clemens, Graeme; Bird, Benjamin; Dawson, Timothy; Ashton, Katherine M.; Jenkinson, Michael D.; Brodbelt, Andrew; Weida, Miles; Fotheringham, Edeline; Barre, Matthew; Rowlette, Jeremy; Baker, Matthew J.
2016-02-01
Accurate early diagnosis is critical to patient survival, management and quality of life. Biofluids are key to early diagnosis due to their ease of collection and intimate involvement in human function. Large-scale mid-IR imaging of dried fluid deposits offers a high-throughput molecular analysis paradigm for the biomedical laboratory. The exciting advent of tuneable quantum cascade lasers allows for the collection of discrete frequency infrared data on clinically relevant timescales. By scanning targeted frequencies, spectral quality, reproducibility and diagnostic potential can be maintained while significantly reducing acquisition time and processing requirements, sampling 16 serum spots with 0.6, 5.1 and 15% relative standard deviation (RSD) for 199, 14 and 9 discrete frequencies, respectively. We use this reproducible methodology to show proof-of-concept rapid diagnostics; 40 unique dried liquid biopsies from brain, breast, lung and skin cancer patients were classified in 2.4 cumulative seconds against 10 non-cancer controls with accuracies of up to 90%.
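For reference, the relative standard deviation quoted above is simply 100 times the standard deviation divided by the mean across the replicate serum spots; a minimal sketch with made-up absorbance values follows.

```python
import numpy as np

# Hypothetical integrated absorbance for 16 replicate serum spots at one frequency.
spots = np.array([0.92, 0.91, 0.93, 0.92, 0.90, 0.93, 0.92, 0.91,
                  0.92, 0.94, 0.91, 0.92, 0.93, 0.92, 0.91, 0.92])

rsd = 100.0 * spots.std(ddof=1) / spots.mean()
print(f"relative standard deviation: {rsd:.2f} %")
```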
Andrzejak, Ryszard; Poreba, Rafal; Poreba, Malgorzata; Derkacz, Arkadiusz; Skalik, Robert; Gac, Pawel; Beck, Boguslaw; Steinmetz-Beck, Aleksandra; Pilecki, Witold
2008-08-01
It is possible that the electromagnetic field (EMF) generated by mobile phones (MP) may influence the autonomic nervous system (ANS) and modulate the function of the circulatory system. The aim of the study was to estimate the influence of a mobile phone call on heart rate variability (HRV) in young healthy people. Time and frequency domain HRV analyses were performed to assess changes in sympathovagal balance in a group of 32 healthy students with normal electrocardiogram (ECG) and echocardiogram at rest. The frequency domain variables were computed: ultra low frequency (ULF) power, very low frequency (VLF) power, low frequency (LF) power and high frequency (HF) power, and the LF/HF ratio was determined. ECG Holter monitoring was recorded under standardized conditions, from 08:00 to 09:00 in the morning in a sitting position, within 20 min periods: before the telephone call (period I), during the call with the mobile phone (period II), and after the telephone call (period III). During the 20 min call with a mobile phone, time domain parameters such as the standard deviation of all normal sinus RR intervals (SDNN [ms]--period I: 73.94+/-25.02, period II: 91.63+/-35.99, period III: 75.06+/-27.62; I-II: p<0.05, II-III: p<0.05) and the standard deviation of the averaged normal sinus RR intervals for all 5-min segments (SDANN [ms]--period I: 47.78+/-22.69, period II: 60.72+/-27.55, period III: 47.12+/-23.21; I-II: p<0.05, II-III: p<0.05) were significantly increased. The very low frequency (VLF [ms2]--period I: 456.62+/-214.13, period II: 566.84+/-216.99, period III: 477.43+/-203.94; I-II: p<0.05), low frequency (LF [ms2]--period I: 607.97+/-201.33, period II: 758.28+/-307.90, period III: 627.09+/-220.33; I-II: p<0.01, II-III: p<0.05) and high frequency (HF [ms2]--period I: 538.44+/-290.63, period II: 730.31+/-445.78, period III: 590.94+/-301.64; I-II: p<0.05) components were also highest, and the LF/HF ratio (period I: 1.48+/-0.38, period II: 1.16+/-0.35, period III: 1.46+/-0.40; I-II: p<0.05, II-III: p<0.05) was lowest, during the call with the mobile phone. The tone of the parasympathetic system, measured indirectly by analysis of heart rate variability, was increased while sympathetic tone was lowered during the call with the mobile phone. It was shown that a call with a mobile phone may change the autonomic balance in healthy subjects. The changes in heart rate variability during the call could be caused by the electromagnetic field, but the influence of speaking cannot be excluded.
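The time-domain indices reported above can be computed directly from a sequence of normal-to-normal RR intervals. The sketch below shows SDNN, SDANN (over 5-min segments), and RMSSD on synthetic data; segment handling is simplified relative to clinical Holter practice.

```python
import numpy as np

rng = np.random.default_rng(3)
rr_ms = rng.normal(800.0, 50.0, 1500)          # synthetic NN intervals in ms

def time_domain_hrv(rr, segment_ms=5 * 60 * 1000):
    sdnn = np.std(rr, ddof=1)                  # SD of all NN intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) # root mean square of successive differences

    # SDANN: SD of the segment means, segments defined by cumulative time.
    edges = np.arange(0, np.sum(rr) + segment_ms, segment_ms)
    seg_idx = np.digitize(np.cumsum(rr), edges)
    seg_means = [rr[seg_idx == k].mean() for k in np.unique(seg_idx)]
    sdann = np.std(seg_means, ddof=1)
    return sdnn, sdann, rmssd

sdnn, sdann, rmssd = time_domain_hrv(rr_ms)
print(f"SDNN = {sdnn:.1f} ms, SDANN = {sdann:.1f} ms, RMSSD = {rmssd:.1f} ms")
```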
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance in scores between students. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the proportion of students included within a grade. Results of the top 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating a grade to students more in keeping with their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
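A small sketch of the idea with hypothetical exam scores: each student's distance from the class mean in standard deviation units (a z-score) is mapped to a grade through fixed cutoffs, and the normal cumulative probability indicates roughly what share of a class each band would contain. The cutoffs are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

scores = np.array([92, 88, 85, 84, 82, 80, 79, 77, 75, 72, 68, 61], dtype=float)
z = (scores - scores.mean()) / scores.std(ddof=1)

# Illustrative cutoffs in standard-deviation units.
cutoffs = [(1.0, "A"), (0.0, "B"), (-1.0, "C")]

def grade(zi):
    for c, g in cutoffs:
        if zi >= c:
            return g
    return "D"

for s, zi in zip(scores, z):
    print(f"score {s:5.1f}  z = {zi:+.2f}  grade {grade(zi)}")

# Expected share of a normally distributed class above each cutoff, for calibration.
for c, g in cutoffs:
    print(f"P(z >= {c:+.1f}) = {1 - norm.cdf(c):.2f}  ->  grade {g} or better")
```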
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. One standard deviation increases in PBPS risks (p < 0.05) multiplied odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above mean were 216% to 250% those one standard deviation below mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% those for non-URMS one standard deviation below mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
Investigation of the relationship between ionospheric foF2 and earthquakes
NASA Astrophysics Data System (ADS)
Karaboga, Tuba; Canyilmaz, Murat; Ozcan, Osman
2018-04-01
Variations of the ionospheric F2 region critical frequency (foF2) were investigated statistically before earthquakes in the Japan area during the period 1980-2008. Ionosonde data were taken from the Kokubunji station, which lies within the earthquake preparation zone of all the earthquakes considered. Standard deviation and inter-quartile range methods were applied to the foF2 data. Anomalous variations in foF2 were observed before earthquakes. These variations can be regarded as ionospheric precursors and may be used for earthquake prediction.
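Studies of this kind typically flag an foF2 anomaly when an observation falls outside bounds built from a running median and inter-quartile range (or mean and standard deviation). The window length and multiplier below are assumptions for illustration, not the thresholds used by the authors.

```python
import numpy as np

rng = np.random.default_rng(4)
fof2 = 7.0 + 0.8 * np.sin(np.linspace(0, 6 * np.pi, 240)) + rng.normal(0, 0.3, 240)
fof2[150] += 2.5                       # injected anomaly for demonstration

def iqr_anomalies(x, window=27, k=1.5):
    """Flag points outside median +/- k * IQR of the trailing window."""
    flags = np.zeros(x.size, dtype=bool)
    for i in range(window, x.size):
        w = x[i - window:i]
        q1, med, q3 = np.percentile(w, [25, 50, 75])
        iqr = q3 - q1
        flags[i] = (x[i] > med + k * iqr) or (x[i] < med - k * iqr)
    return flags

print("anomalous samples:", np.where(iqr_anomalies(fof2))[0])
```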
Secular Variation and Physical Characteristics Determination of the HADS Star EH Lib
NASA Astrophysics Data System (ADS)
Pena, J. H.; Villarreal, C.; Pina, D. S.; Renteria, A.; Soni, A.; Guillen, J.; Calderon, J.
2017-12-01
Physical parameters of EH Lib have been determined based on photometric observations carried out in 2015. These observations have also served, along with data samples from 1969 and 1986, to analyse the frequency content of EH Lib with Fourier transforms. Recent CCD observations added twelve new times of maximum, which helped us study the secular variation of the period with a method based on minimization of the standard deviation of the O-C residuals. It is concluded that there may be a long-term period change.
How Much Does Inbreeding Reduce Heterozygosity? Empirical Results from Aedes aegypti
Powell, Jeffrey R.; Evans, Benjamin R.
2017-01-01
Deriving strains of mosquitoes with reduced genetic variation is useful, if not necessary, for many genetic studies. Inbreeding is the standard way of achieving this. Full-sib inbreeding the mosquito Aedes aegypti for seven generations reduced heterozygosity to 72% of the initial heterozygosity in contrast to the expected 13%. This deviation from expectations is likely due to high frequencies of deleterious recessive alleles that, given the number of markers studied (27,674 single nucleotide polymorphisms [SNPs]), must be quite densely spread in the genome. PMID:27799643
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
ACRONYMS AND ABBREVIATIONS (front-matter fragment): % RSD, percent relative standard deviation; 12DCA, 1,2-dichloroethane; 112TCA, 1,1,2-trichloroethane; 1122TetCA, ...; Analysis of Variance; ROD, Record of Decision; RSD, relative standard deviation; SBR, Southern Bush River; SVOC, semi-volatile organic compound. ... replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
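Both variability measures used above are straightforward to compute per participant; the sketch below assumes each row holds one participant's SBP readings across visits.

```python
import numpy as np

# Hypothetical SBP (mmHg) for three participants across five visits.
sbp = np.array([
    [118, 122, 120, 119, 121],
    [142, 150, 138, 155, 146],
    [112, 108, 118, 105, 115],
], dtype=float)

sd = sbp.std(axis=1, ddof=1)                 # visit-to-visit standard deviation
cv = 100.0 * sd / sbp.mean(axis=1)           # coefficient of variation, %
for i, (s, c) in enumerate(zip(sd, cv), 1):
    print(f"participant {i}: SD = {s:.1f} mmHg, CV = {c:.1f} %")
```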
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
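As a rough sketch of the kind of estimator discussed above, the snippet converts a reported median, minimum, maximum, and sample size into an approximate mean and standard deviation using a commonly cited range-based form from this literature; the exact constants are assumptions here and should be checked against the paper's summary table before use in an actual meta-analysis.

```python
from scipy.stats import norm

def estimate_mean_sd_from_range(a, m, b, n):
    """Approximate sample mean and SD from min (a), median (m), max (b), size n.
    Mean ~ (a + 2m + b) / 4; SD ~ (b - a) / (2 * Phi^-1((n - 0.375) / (n + 0.25))).
    The constants follow commonly cited range-based formulas and are assumptions here."""
    mean = (a + 2.0 * m + b) / 4.0
    xi = 2.0 * norm.ppf((n - 0.375) / (n + 0.25))
    return mean, (b - a) / xi

mean, sd = estimate_mean_sd_from_range(a=10.0, m=22.0, b=45.0, n=50)
print(f"estimated mean = {mean:.2f}, estimated SD = {sd:.2f}")
```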
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students’ expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762
A low-cost acoustic permeameter
NASA Astrophysics Data System (ADS)
Drake, Stephen A.; Selker, John S.; Higgins, Chad W.
2017-04-01
Intrinsic permeability is an important parameter that regulates air exchange through porous media such as snow. Standard methods of measuring snow permeability are inconvenient to perform outdoors, are fraught with sampling errors, and require specialized equipment, while bringing intact samples back to the laboratory is also challenging. To address these issues, we designed, built, and tested a low-cost acoustic permeameter that allows computation of volume-averaged intrinsic permeability for a homogenous medium. In this paper, we validate acoustically derived permeability of homogenous, reticulated foam samples by comparison with results derived using a standard flow-through permeameter. Acoustic permeameter elements were designed for use in snow, but the measurement methods are not snow-specific. The electronic components - consisting of a signal generator, amplifier, speaker, microphone, and oscilloscope - are inexpensive and easily obtainable. The system is suitable for outdoor use when it is not precipitating, but the electrical components require protection from the elements in inclement weather. The permeameter can be operated with a microphone either internally mounted or buried a known depth in the medium. The calibration method depends on choice of microphone positioning. For an externally located microphone, calibration was based on a low-frequency approximation applied at 500 Hz that provided an estimate of both intrinsic permeability and tortuosity. The low-frequency approximation that we used is valid up to 2 kHz, but we chose 500 Hz because data reproducibility was maximized at this frequency. For an internally mounted microphone, calibration was based on attenuation at 50 Hz and returned only intrinsic permeability. We found that 50 Hz corresponded to a wavelength that minimized resonance frequencies in the acoustic tube and was also within the response limitations of the microphone. We used reticulated foam of known permeability (ranging from 2 × 10-7 to 3 × 10-9 m2) and estimated tortuosity of 1.05 to validate both methods. For the externally mounted microphone the mean normalized standard deviation was 6 % for permeability and 2 % for tortuosity. The mean relative error from known measurements was 17 % for permeability and 2 % for tortuosity. For the internally mounted microphone the mean normalized standard deviation for permeability was 10 % and the relative error was also 10 %. Permeability determination for an externally mounted microphone is less sensitive to environmental noise than is the internally mounted microphone and is therefore the recommended method. The approximation using the internally mounted microphone was developed as an alternative for circumstances in which placing the microphone in the medium was not feasible. Environmental noise degrades precision of both methods and is recognizable as increased scatter for replicate data points.
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic amplitudes can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in the estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
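A quick numerical check in the spirit of the argument above: superpose biphasic pulses at Poisson times and compare the empirical standard deviation with the shot-noise (Campbell) prediction, variance = rate x integral of h(t)^2 dt. The pulse shape, firing rate, and duration are arbitrary choices for this sketch, not a physiological sEMG model.

```python
import numpy as np

rng = np.random.default_rng(10)
fs, T, rate = 10_000.0, 200.0, 40.0         # sample rate (Hz), duration (s), pulses per second
t = np.arange(0, 0.02, 1 / fs)              # 20 ms pulse support

# Biphasic pulse (one positive and one negative lobe), loosely like an action potential.
h = np.sin(2 * np.pi * t / 0.02) * np.exp(-t / 0.01)

n = int(fs * T)
signal = np.zeros(n)
spike_times = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))
for st in spike_times:
    i = int(st * fs)
    seg = min(h.size, n - i)
    signal[i:i + seg] += h[:seg]            # pulses overlap and interfere freely

empirical_sd = signal.std()
campbell_sd = np.sqrt(rate * np.sum(h ** 2) / fs)   # sqrt(rate * integral of h^2 dt)
print(f"empirical SD = {empirical_sd:.4f}, Campbell prediction = {campbell_sd:.4f}")
```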
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through the use of manually tying general suture. A novel semiautomated device is proposed that may be advantageous to the current standard. Comparison testing in an excised caprine spine and simulated bench top model was performed. Three tests were performed: 1) perpendicular pull from fascia of caprine spine; 2) axial pull from fascia of caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39 whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55 whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56 whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. Data suggest the novel semiautomated device in fact may provide a more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
Green, Scott R.; Gianchandani, Yogesh B.
2017-01-01
Resonant magnetoelastic devices are widely used as anti-theft tags and are also being investigated for a range of sensing applications. The vast majority of magnetoelastic devices are operated at resonance, and rely upon an external interface to wirelessly detect the resonant frequency, and other characteristics. For micromachined devices, this detection method must accommodate diminished signal strength and elevated resonant frequencies. Feedthrough of the interrogating stimulus to the detector also presents a significant challenge. This paper describes a method of interrogating wireless magnetoelastic strain sensors using a new frequency-lock approach. Following a brief excitation pulse, the sensor ring-down is analyzed and a feedback loop is used to match the excitation frequency and the resonant frequency. Data acquisition hardware is used in conjunction with custom software to implement the frequency-lock loop. Advantages of the method include temporal isolation of interrogating stimulus from the sensor response and near real-time tracking of resonant frequencies. The method was investigated using a family of wireless strain sensors with resonant frequencies ranging from 120 to 240 kHz. Strain levels extending to 3.5 mstrain and sensitivities up to 14300 ppm/mstrain were measured with response times faster than 0.5 s. The standard deviation of the locked frequency did not exceed 0.1%. PMID:28713873
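A schematic version of the frequency-lock idea: after each simulated excitation, the dominant frequency of the ring-down is estimated from an FFT peak and the excitation frequency is nudged toward it. The signal model, loop gain, and noise level are assumptions, not the hardware loop described in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 2.0e6                         # sample rate, Hz
t = np.arange(0, 2e-3, 1 / fs)
f_res = 180_000.0                  # "true" sensor resonance (unknown to the loop)

def ring_down(f_exc):
    """Toy ring-down: decaying sinusoid at the resonance, with amplitude reduced
    when the excitation is far from resonance, plus measurement noise."""
    amp = 1.0 / (1.0 + ((f_exc - f_res) / 5e3) ** 2)
    return amp * np.exp(-t * 3e3) * np.sin(2 * np.pi * f_res * t) + 0.01 * rng.normal(size=t.size)

def dominant_frequency(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return freqs[np.argmax(spec[1:]) + 1]      # skip the DC bin

f_exc, gain = 150_000.0, 0.8
for step in range(6):
    f_meas = dominant_frequency(ring_down(f_exc))
    f_exc += gain * (f_meas - f_exc)           # move excitation toward the measured resonance
    print(f"step {step}: excitation = {f_exc / 1e3:8.2f} kHz")
```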
Cappell, M S; Spray, D C; Bennett, M V
1988-06-28
Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
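For readers who want to experiment with the model, the sketch below simulates event times as a non-stationary Poisson process by thinning a homogeneous process with a time-varying rate; the rate function is an arbitrary example, not fitted to the Navanax data.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 120.0                                   # seconds of simulated recording

def rate(t):
    """Example time-varying mean frequency (events per second)."""
    return 2.0 + 1.5 * np.sin(2 * np.pi * t / 60.0)

lam_max = 3.5                               # upper bound on rate(t) over [0, T]

# Thinning (Lewis-Shedler): draw candidate events at rate lam_max, keep each
# with probability rate(t) / lam_max.
candidates = np.cumsum(rng.exponential(1.0 / lam_max, size=int(3 * lam_max * T)))
candidates = candidates[candidates < T]
kept = candidates[rng.uniform(size=candidates.size) < rate(candidates) / lam_max]

print(f"{kept.size} events in {T:.0f} s; empirical mean rate = {kept.size / T:.2f} /s")
```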
On-line determination of transient stability status using multilayer perceptron neural network
NASA Astrophysics Data System (ADS)
Frimpong, Emmanuel Asuming; Okyere, Philip Yaw; Asumadu, Johnson
2018-01-01
A scheme to predict transient stability status following a disturbance is presented. The scheme is activated upon the tripping of a line or bus and operates as follows: Two samples of frequency deviation values at all generator buses are obtained. At each generator bus, the maximum frequency deviation within the two samples is extracted. A vector is then constructed from the extracted maximum frequency deviations. The Euclidean norm of the constructed vector is calculated and then fed as input to a trained multilayer perceptron neural network which predicts the stability status of the system. The scheme was tested using data generated from the New England test system. The scheme successfully predicted the stability status of all two hundred and five disturbance test cases.
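The feature construction above reduces each disturbance to a single scalar (the Euclidean norm of the per-bus maximum frequency deviations) before classification. The sketch below mirrors that pipeline on synthetic data with scikit-learn's multilayer perceptron; the network size and the data generator are placeholders, not the trained scheme from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

def build_feature(freq_dev_samples):
    """freq_dev_samples: array of shape (2, n_buses), i.e. two post-trip samples
    of frequency deviation at each generator bus."""
    per_bus_max = np.max(np.abs(freq_dev_samples), axis=0)   # max deviation per bus
    return np.linalg.norm(per_bus_max)                       # Euclidean norm -> scalar

# Synthetic disturbances: unstable cases tend to show larger deviations.
n_cases, n_buses = 200, 10
labels = rng.integers(0, 2, n_cases)                         # 0 = stable, 1 = unstable
cases = [rng.normal(0, 0.02 + 0.08 * y, size=(2, n_buses)) for y in labels]
X = np.array([[build_feature(c)] for c in cases])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```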
Mercury Human Exposure in Populations Living Around Lake Tana (Ethiopia).
Habiba, G; Abebe, G; Bravo, Andrea G; Ermias, D; Staffan, Ǻ; Bishop, K
2017-02-01
A survey carried out in Lake Tana in 2015 found that Hg levels in some fish species exceeded internationally accepted safe levels for fish consumption. The current study assesses human exposure to Hg through fish consumption around Lake Tana. Of particular interest was whether dietary intake of fish currently poses a health risk for Bahir Dar residents and anglers. Hair samples were collected from three different groups: anglers, college students and teachers, and daily laborers. A questionnaire covering gender, age, weight, activity, frequency of fish consumption, and origin of the fish eaten was completed by each participant. Mercury concentrations in hair were significantly higher (P value <0.05) for anglers (mean ± standard deviation 0.120 ± 0.199 μg/g) than for college students (mean ± standard deviation 0.018 ± 0.039 μg/g) or daily workers (mean ± standard deviation 16 ± 9.5 ng/g). Anglers consumed fish more often than the daily workers and the college group. Moreover, there was a strong correlation (P value <0.05) between the logarithm of total mercury concentration in scalp hair and age. Mercury concentrations in the hair of men were on average twice those of the women. Also, users of skin-lightening soap on a daily basis had 2.5 times more mercury in scalp hair than non-users. Despite the different sources of mercury exposure mentioned above, the mercury concentrations in the scalp hair of the participants of this study were below levels deemed to pose a threat to health.
Cramer, Richard D.
2015-01-01
The possible applicability of the new template CoMFA methodology to the prediction of unknown biological affinities was explored. For twelve selected targets, all ChEMBL binding affinities were used as training and/or prediction sets, making these 3D-QSAR models the most structurally diverse and among the largest ever. For six of the targets, X-ray crystallographic structures provided the aligned templates required as input (BACE, cdk1, chk2, carbonic anhydrase-II, factor Xa, PTP1B). For all targets including the other six (hERG, cyp3A4 binding, endocrine receptor, COX2, D2, and GABAa), six modeling protocols applied to only three familiar ligands provided six alternate sets of aligned templates. The statistical qualities of the six or seven models thus resulting for each individual target were remarkably similar. Also, perhaps unexpectedly, the standard deviations of the errors of cross-validation predictions accompanying model derivations were indistinguishable from the standard deviations of the errors of truly prospective predictions. These standard deviations of prediction ranged from 0.70 to 1.14 log units and averaged 0.89 (8x in concentration units) over the twelve targets, representing an average reduction of almost 50% in uncertainty, compared to the null hypothesis of “predicting” an unknown affinity to be the average of known affinities. These errors of prediction are similar to those from Tanimoto coefficients of fragment occurrence frequencies, the predominant approach to side effect prediction, which template CoMFA can augment by identifying additional active structural classes, by improving Tanimoto-only predictions, by yielding quantitative predictions of potency, and by providing interpretable guidance for avoiding or enhancing any specific target response. PMID:26065424
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Deviations from LTE in a stellar atmosphere
NASA Technical Reports Server (NTRS)
Kalkofen, W.; Klein, R. I.; Stein, R. F.
1979-01-01
Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature.
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD(R),% = 100·s(R)/y), where s(R) is the sample reproducibility standard deviation, defined as the square root of the sum of the sample repeatability variance (s(r)^2) and the sample laboratory-to-laboratory variance (s(L)^2), i.e., s(R) = sqrt(s(r)^2 + s(L)^2), and y is the sample mean. The future RSD(R),% is expected to arise from a population of potential RSD(R),% values whose true mean is ζ(R),% = 100·σ(R)/μ, where σ(R) and μ are the population reproducibility standard deviation and mean, respectively.
NASA Astrophysics Data System (ADS)
Sternkopf, Christian; Manske, Eberhard
2018-06-01
We report on the enhancement of a previously-presented heterodyne laser source on the basis of two phase-locked loop (PLL) frequency coupled internal-mirror He–Ne lasers. Our new system consists of two digitally controlled He–Ne lasers with slightly different wavelengths, and offers high-frequency stability and very narrow optical linewidth. The digitally controlled system has been realized by using a FPGA controller and transconductance amplifiers. The light of both lasers was coupled into separate fibres for heterodyne interferometer applications. To enhance the laser performance we observed the sensitivity of both laser tubes to electromagnetic noise from various laser power supplies and frequency control systems. Furthermore, we describe how the linewidth of a frequency-controlled He–Ne laser can be reduced during precise frequency stabilisation. The digitally controlled laser source reaches a standard beat frequency deviation of less than 20 Hz (with 1 s gate time) and a spectral full width at half maximum (FWHM) of the beat signal less than 3 kHz. The laser source has enough optical output power to serve a fibre-coupled multi axis heterodyne interferometer. The system can be adjusted to output beat frequencies in the range of 0.1 MHz–20 MHz.
Phase noise characterization of a QD-based diode laser frequency comb.
Vedala, Govind; Al-Qadi, Mustafa; O'Sullivan, Maurice; Cartledge, John; Hui, Rongqing
2017-07-10
We measure, simultaneously, the phases of a large set of comb lines from a passively mode locked, InAs/InP, quantum dot laser frequency comb (QDLFC) by comparing the lines to a stable comb reference using multi-heterodyne coherent detection. Simultaneity permits the separation of differential and common mode phase noise and a straightforward determination of the wavelength corresponding to the minimum width of the comb line. We find that the common mode and differential phases are uncorrelated, and measure for the first time for a QDLFC that the intrinsic differential-mode phase (IDMP) between adjacent subcarriers is substantially the same for all subcarrier pairs. The latter observation supports an interpretation of 4.4 ps as the standard deviation of IDMP on a 200 µs time interval for this laser.
Time delay between cardiac and brain activity during sleep transitions
NASA Astrophysics Data System (ADS)
Long, Xi; Arends, Johan B.; Aarts, Ronald M.; Haakma, Reinder; Fonseca, Pedro; Rolink, Jérôme
2015-04-01
Human sleep consists of wake, rapid-eye-movement (REM) sleep, and non-REM (NREM) sleep that includes light and deep sleep stages. This work investigated the time delay between changes of cardiac and brain activity for sleep transitions. Here, the brain activity was quantified by electroencephalographic (EEG) mean frequency and the cardiac parameters included heart rate, standard deviation of heartbeat intervals, and their low- and high-frequency spectral powers. Using a cross-correlation analysis, we found that the cardiac variations during wake-sleep and NREM sleep transitions preceded the EEG changes by 1-3 min but this was not the case for REM sleep transitions. These important findings can be further used to predict the onset and ending of some sleep stages in an early manner.
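A compact illustration of the cross-correlation step: two synthetic signals, one a delayed and noisier copy of the other, and the lag at which their correlation peaks; the sampling interval, delay, and noise level are arbitrary.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(8)
dt = 1.0                                    # sampling interval in seconds (illustrative)
n, delay = 600, 120                         # true lead of cardiac over EEG: 120 samples
base = np.convolve(rng.normal(size=n + delay + 49), np.ones(49) / 49, mode="valid")

cardiac = base[delay:delay + n]                          # changes appear here first
eeg_freq = base[:n] + 0.05 * rng.normal(size=n)          # same changes, delayed and noisy

c = correlate(eeg_freq - eeg_freq.mean(), cardiac - cardiac.mean(), mode="full")
lags = correlation_lags(eeg_freq.size, cardiac.size, mode="full")
best_lag = lags[np.argmax(c)]
print(f"cardiac leads EEG by about {best_lag * dt / 60:.1f} min")
```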
Frequency modulation television analysis: Distortion analysis
NASA Technical Reports Server (NTRS)
Hodge, W. H.; Wong, W. H.
1973-01-01
Computer simulation is used to calculate the time-domain waveform of standard T-pulse-and-bar test signal distorted in passing through an FM television system. The simulator includes flat or preemphasized systems and requires specification of the RF predetection filter characteristics. The predetection filters are modeled with frequency-symmetric Chebyshev (0.1-db ripple) and Butterworth filters. The computer was used to calculate distorted output signals for sixty-four different specified systems, and the output waveforms are plotted for all sixty-four. Comparison of the plotted graphs indicates that a Chebyshev predetection filter of four poles causes slightly more signal distortion than a corresponding Butterworth filter and the signal distortion increases as the number of poles increases. An increase in the peak deviation also increases signal distortion. Distortion also increases with the addition of preemphasis.
Stewart, J M
2000-02-01
Invasive arterial monitoring alters autonomic tone. The effects of intravenous (i.v.) insertion are less clear. The author assessed the effects of i.v. insertion on autonomic activity in patients aged 11 to 19 years prior to head-up tilt by measuring heart rate, blood pressure, heart rate variability, blood pressure variability, and baroreceptor gain before and after i.v. insertion with continuous electrocardiography and arterial tonometry in patients with orthostatic tachycardia syndrome (OTS, N = 21), in patients who experienced simple fainting (N = 14), and in normal control subjects (N = 6). Five-minute samples were collected after 30 minutes supine. Fifteen minutes after i.v. insertion, data were collected again. These 5-minute samples were also collected in a separate control population without i.v. insertion after 30 minutes supine and again 30 minutes later. This population included 12 patients with OTS, 13 patients who experienced simple fainting, and 6 normal control subjects. Heart rate variability included the mean RR, the standard deviation of the RR interval (SDNN), and the root mean square of successive RR differences (RMSSD). Autoregressive spectral modeling was used. Low-frequency power (LFP, 0.04-0.15 Hz), high-frequency power (HFP, 0.15-0.40 Hz), and total power (TP, 0.01-0.40 Hz) were compared. Blood pressure variability included standard deviation of systolic blood pressure, LFP, and HFP. Baroreceptor gain at low frequency and high frequency was calculated from cross-spectral transfer function magnitudes when coherence was greater than 0.5. In patients with OTS, RR (790 +/- 50 msec), SDNN (54 +/- 6 msec), RMSSD (55 +/- 5 msec), LFP (422 +/- 200 ms2/Hz), HFP (846 +/- 400 ms2/Hz), and TP (1550 +/- 320 ms2/Hz) were less than in patients who experienced simple fainting (RR, 940 +/- 50 msec; SDNN, 84 +/- 10 msec; RMSSD, 91 +/- 7 msec; LFP, 880 +/- 342 ms2/Hz; HFP, 1720 +/- 210 ms2/Hz; and TP, 3228 +/- 490 ms2/Hz) or normal control subjects (RR, 920 +/- 30 msec; SDNN, 110 +/- 29 msec; RMSSD, 120 +/- 16 msec; LFP, 1600 +/- 331 ms2/Hz; HFP, 2700 +/- 526 ms2/Hz; and TP, 5400 +/- 1017 ms2/Hz). Blood pressure and blood pressure variability were not different in any group. Standard deviation, LFP, and HFP were, respectively, 5.24 +/- 0.8 mm Hg, 1.2 +/- 0.2, and 1.5 +/- 0.3 for patients with OTS; 4.6 +/- 0.4 mm Hg, 1.2 +/- 0.2, and 1.4 +/- 0.3 for patients who experienced simple fainting; and 5.55 +/- 1.0 mm Hg, 1.4 +/- 0.2, and 1.6 +/- 0.3 for normal control subjects. Baroreceptor gain at low frequency and high frequency in patients with OTS (16 +/- 4 msec/mm Hg, 17 +/- 5) was comparable to that in patients who experienced simple fainting (33 +/- 4, 32 +/- 3) and that in normal control subjects (31 +/- 8, 37 +/- 9). Heart rate variability differed between patients with OTS and patients who experienced simple fainting or normal control subjects, and blood pressure and blood pressure variability were not different, but no parameter changed after i.v. insertion. There were no differences from the groups that did not receive i.v. insertions. Data suggest, at most, a limited effect of i.v. insertion on autonomic function in adolescents.
Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.
Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G
2012-11-13
The molecular ion H3+ is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H3+ is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm^-1 relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm^-1. Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm^-1 is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Ambulatory blood pressure profiles in familial dysautonomia.
Goldberg, Lior; Bar-Aluma, Bat-El; Krauthammer, Alex; Efrati, Ori; Sharabi, Yehonatan
2018-02-12
Familial dysautonomia (FD) is a rare genetic disease that involves extreme blood pressure fluctuations secondary to afferent baroreflex failure. The diurnal blood pressure profile, including the average, variability, and day-night difference, may have implications for long-term end organ damage. The purpose of this study was to describe the circadian pattern of blood pressure in the FD population and relationships with renal and pulmonary function, use of medications, and overall disability. We analyzed 24-h ambulatory blood pressure monitoring recordings in 22 patients with FD. Information about medications, disease severity, renal function (estimated glomerular filtration, eGFR), pulmonary function (forced expiratory volume in 1 s, FEV1) and an index of blood pressure variability (standard deviation of systolic pressure) were analyzed. The mean (± SEM) 24-h blood pressure was 115 ± 5.6/72 ± 2.0 mmHg. The diurnal blood pressure variability was high (daytime systolic pressure standard deviation 22.4 ± 1.5 mmHg, nighttime 17.2 ± 1.6), with a high frequency of a non-dipping pattern (16 patients, 73%). eGFR, use of medications, FEV1, and disability scores were unrelated to the degree of blood pressure variability or to dipping status. This FD cohort had normal average 24-h blood pressure, fluctuating blood pressure, and a high frequency of non-dippers. Although there was evidence of renal dysfunction based on eGFR and proteinuria, the ABPM profile was unrelated to the measures of end organ dysfunction or to reported disability.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0 × σ, and is a function of the standard deviation, σ. σ = the sample standard deviation and is ... individual engine. FEL = Family Emission Limit (the standard if no FEL). F = 0.25 × σ. (2) After each test pursuant...
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
The spatiotemporal dynamics of the standard deviations of the three wind velocity components measured with a mini-sodar in the atmospheric boundary layer are analyzed. During the day on September 16 and at night on September 12, the standard deviation varied from 0.5 to 4 m/s for the x- and y-components and from 0.2 to 1.2 m/s for the z-component. An analysis of the vertical profiles of the standard deviations of the three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz depends linearly on altitude. The approximation constants have been found and their errors estimated. The established physical regularities and the approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for application in ABL models.
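The altitude dependence described above can be recovered from profile data with an ordinary least-squares fit in log-log coordinates; the synthetic profile and noise level below are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(9)
z = np.arange(10.0, 210.0, 10.0)                    # altitude, m
sigma_x = 0.9 * (z / 10.0) ** 0.6 * np.exp(rng.normal(0, 0.05, z.size))  # synthetic profile

# Fit sigma(z) = a * z**b, i.e. log(sigma) = log(a) + b * log(z).
b, log_a = np.polyfit(np.log(z), np.log(sigma_x), 1)
a = np.exp(log_a)
print(f"power-law fit: sigma_x(z) = {a:.2f} * z^{b:.2f}")
```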
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This research study provides a proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable to the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe.
Takamizawa, Akifumi; Yanagimachi, Shinya; Tanabe, Takehiko; Hagimoto, Ken; Hirano, Iku; Watabe, Ken-ichi; Ikegami, Takeshi; Hartnett, John G
2014-09-01
The frequency stability of an atomic fountain clock was significantly improved by employing an ultra-stable local oscillator and increasing the number of atoms detected after the Ramsey interrogation, resulting in a measured Allan deviation of 8.3 × 10^-14 τ^-1/2. A cryogenic sapphire oscillator using an ultra-low-vibration pulse-tube cryocooler and cryostat, without the need for refilling with liquid helium, was applied as a local oscillator and a frequency reference. A high atom number was achieved by the high power of the cooling laser beams and by optical pumping to the Zeeman sublevel m_F = 0 employed for the frequency measurement, although vapor-loaded optical molasses with the simple (001) configuration was used for the atomic fountain clock. The resulting stability is not limited by the Dick effect as it is when a BVA quartz oscillator is used as the local oscillator. The stability reached the quantum projection noise limit to within 11%. Using a combination of a cryocooled sapphire oscillator and techniques to enhance the atom number, the frequency stability of any atomic fountain clock already established as a primary frequency standard may be improved without opening its vacuum chamber.
N2/O2/H2 Dual-Pump CARS: Validation Experiments
NASA Technical Reports Server (NTRS)
O'Byrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6 % of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.
Integrated optics reflectometer
Couch, Philip R; Murphy, Kent A.; Gunther, Michael F; Gause, Charles B
2017-01-31
An apparatus includes a laser source configured to output laser light at a target frequency, and a measurement unit configured to measure a deviation between an actual frequency outputted by the laser source at a current period of time and the target frequency of the laser source. The apparatus includes a feedback control unit configured to, based on the measured deviation between the actual and target frequencies, control the laser source to maintain a constant frequency of laser output from the laser source so that the frequency of laser light transmitted from the laser source is adjusted to the target frequency. The feedback control unit can control the laser source to maintain a linear rate of change in the frequency of its laser light output, and compensate for characteristics of the measurement unit utilized for frequency measurement. A method is provided for performing the feedback control of the laser source.
NASA Astrophysics Data System (ADS)
Purinton, Benjamin; Bookhagen, Bodo
2017-04-01
In this study, we validate and compare elevation accuracy and geomorphic metrics of satellite-derived digital elevation models (DEMs) on the southern Central Andean Plateau. The plateau has an average elevation of 3.7 km and is characterized by diverse topography and relief, lack of vegetation, and clear skies that create ideal conditions for remote sensing. At 30 m resolution, SRTM-C, ASTER GDEM2, stacked ASTER L1A stereopair DEM, ALOS World 3D, and TanDEM-X have been analyzed. The higher-resolution datasets include 12 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X DEMs, and 5 m ALOS World 3D. These DEMs are state of the art for optical (ASTER and ALOS) and radar (SRTM-C and TanDEM-X) spaceborne sensors. We assessed vertical accuracy by comparing standard deviations of the DEM elevation versus 307 509 differential GPS measurements across 4000 m of elevation. For the 30 m DEMs, the ASTER datasets had the highest vertical standard deviation at > 6.5 m, whereas the SRTM-C, ALOS World 3D, and TanDEM-X were all < 3.5 m. Higher-resolution DEMs generally had lower uncertainty, with both the 12 m TanDEM-X and 5 m ALOS World 3D having < 2 m vertical standard deviation. Analysis of vertical uncertainty with respect to terrain elevation, slope, and aspect revealed the low uncertainty across these attributes for SRTM-C (30 m), TanDEM-X (12-30 m), and ALOS World 3D (5-30 m). Single-CoSSC TerraSAR-X/TanDEM-X 10 m DEMs and the 30 m ASTER GDEM2 displayed slight aspect biases, which were removed in their stacked counterparts (TanDEM-X and ASTER Stack). Based on low vertical standard deviations and visual inspection alongside optical satellite data, we selected the 30 m SRTM-C, 12-30 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X, and 5 m ALOS World 3D for geomorphic metric comparison in a 66 km2 catchment with a distinct river knickpoint. Consistent m/n values were found using chi plot channel profile analysis, regardless of DEM type and spatial resolution. Slope, curvature, and drainage area were calculated and plotting schemes were used to assess basin-wide differences in the hillslope-to-valley transition related to the knickpoint. While slope and hillslope length measurements vary little between datasets, curvature displays higher magnitude measurements with fining resolution. This is especially true for the optical 5 m ALOS World 3D DEM, which demonstrated high-frequency noise in 2-8 pixel steps through a Fourier frequency analysis. The improvements in accurate space-radar DEMs (e.g., TanDEM-X) for geomorphometry are promising, but airborne or terrestrial data are still necessary for meter-scale analysis.
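A minimal sketch of the vertical-accuracy summary used in such DEM validations: the mean bias and standard deviation of DEM-minus-dGPS elevation differences, optionally binned by terrain slope. The arrays below are placeholders, not the study's check points.

```python
import numpy as np

def vertical_accuracy(dem_elev, gps_elev, slope=None, slope_bins=None):
    """Return the mean bias and standard deviation of DEM - dGPS elevation
    differences, overall and optionally per slope bin."""
    diff = np.asarray(dem_elev, float) - np.asarray(gps_elev, float)
    summary = {"overall": (diff.mean(), diff.std(ddof=1))}
    if slope is not None and slope_bins is not None:
        idx = np.digitize(slope, slope_bins)
        for k in np.unique(idx):
            d = diff[idx == k]
            if d.size > 1:
                summary[f"slope_bin_{k}"] = (d.mean(), d.std(ddof=1))
    return summary

# Placeholder check points: DEM elevations vs. differential GPS (metres).
dem = np.array([3701.2, 3850.9, 4012.4, 4433.0])
gps = np.array([3700.0, 3852.5, 4010.1, 4431.8])
print(vertical_accuracy(dem, gps))
```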
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees; range, 1-9 degrees) versus 12 degrees (standard deviation, 5.5 degrees; range, 5-24 degrees) using freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees; range, 0-9 degrees) versus 10.7 degrees (standard deviation, 4.9 degrees; range, 2-17 degrees) in freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees; range, 1-9 degrees) versus 10.6 degrees (standard deviation, 4.4 degrees; range, 3-17 degrees) with freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using a freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
Roy, Tapta Kanchan; Carrington, Tucker; Gerber, R Benny
2014-08-21
Anharmonic vibrational spectroscopy calculations using MP2 and B3LYP computed potential surfaces are carried out for a series of molecules, and frequencies and intensities are compared with those from experiment. The vibrational self-consistent field with second-order perturbation correction (VSCF-PT2) is used in computing the spectra. The test calculations have been performed for the molecules HNO3, C2H4, C2H4O, H2SO4, CH3COOH, glycine, and alanine. Both MP2 and B3LYP give results in good accord with experimental frequencies, though, on the whole, MP2 gives very slightly better agreement. A statistical analysis of deviations in frequencies from experiment is carried out that gives interesting insights. The most probable percentage deviation from experimental frequencies is about -2% (to the red of the experiment) for B3LYP and +2% (to the blue of the experiment) for MP2. There is a higher probability for relatively large percentage deviations when B3LYP is used. The calculated intensities are also found to be in good accord with experiment, but the percentage deviations are much larger than those for frequencies. The results show that both MP2 and B3LYP potentials, used in VSCF-PT2 calculations, account well for anharmonic effects in the spectroscopy of molecules of the types considered.
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo
2015-03-01
This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
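A minimal sketch of the pre-processing step described above: applying Gaussian blurring filters of increasing standard deviation to a synthetic speckle image before correlation. The regular-dot pattern generator and noise level are stand-ins, not the paper's designer pattern or camera model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Stand-in "designer" pattern: regularly spaced circular particles.
y, x = np.mgrid[0:256, 0:256]
pattern = np.zeros((256, 256))
for cy in range(8, 256, 16):
    for cx in range(8, 256, 16):
        pattern += ((x - cx) ** 2 + (y - cy) ** 2 < 9).astype(float)
pattern = 255 * pattern / pattern.max()

# Sweep the standard deviation of the Gaussian pre-filter (in pixels).
for sigma in (0.0, 0.5, 1.0, 2.0):
    blurred = gaussian_filter(pattern, sigma=sigma) if sigma > 0 else pattern
    noisy = blurred + rng.normal(0, 2.0, blurred.shape)  # simple camera-noise model
    print(f"sigma = {sigma:3.1f}   image std = {noisy.std():6.2f}")
# In the paper's simulations the DIC uncertainty is then evaluated for each
# sigma; here only the blurred, noisy images themselves are produced.
```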
Chuang, Shin-Shin; Wu, Kung-Tai; Lin, Chen-Yang; Lee, Steven; Chen, Gau-Yang; Kuo, Cheng-Deng
2014-08-01
The Poincaré plot of RR intervals (RRI) is obtained by plotting RRIn+1 against RRIn. The Pearson correlation coefficient (ρRRI), slope (SRRI), Y-intercept (YRRI), standard deviation of instantaneous beat-to-beat RRI variability (SD1RR), and standard deviation of continuous long-term RRI variability (SD2RR) can be defined to characterize the plot. Similarly, the Poincaré plot of autocorrelation function (ACF) of RRI can be obtained by plotting ACFk+1 against ACFk. The corresponding Pearson correlation coefficient (ρACF), slope (SACF), Y-intercept (YACF), SD1ACF, and SD2ACF can be defined similarly to characterize the plot. By comparing the indices of Poincaré plots of RRI and ACF between patients with acute myocardial infarction (AMI) and patients with patent coronary artery (PCA), we found that the ρACF and SACF were significantly larger, whereas the RMSSDACF/SDACF and SD1ACF/SD2ACF were significantly smaller in AMI patients. The ρACF and SACF correlated significantly and negatively with normalized high-frequency power (nHFP), and significantly and positively with normalized very low-frequency power (nVLFP) of heart rate variability in both groups of patients. On the contrary, the RMSSDACF/SDACF and SD1ACF/SD2ACF correlated significantly and positively with nHFP, and significantly and negatively with nVLFP and low-/high-frequency power ratio (LHR) in both groups of patients. We concluded that the ρACF, SACF, RMSSDACF/SDACF, and SD1ACF/SD2ACF, among many other indices of ACF Poincaré plot, can be used to differentiate between patients with AMI and patients with PCA, and that the increase in ρACF and SACF and the decrease in RMSSDACF/SDACF and SD1ACF/SD2ACF suggest an increased sympathetic and decreased vagal modulations in both groups of patients.
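A minimal sketch of the standard Poincaré descriptors of a series x[n] plotted against x[n+1] (Pearson ρ, slope, intercept, SD1, SD2); the same function could be applied to an autocorrelation sequence ACF_k to obtain the ACF-plot indices. The RR intervals below are illustrative.

```python
import numpy as np

def poincare_indices(x):
    """Pearson r, regression slope, Y-intercept, SD1, and SD2 of the
    Poincaré plot of x[n+1] versus x[n]."""
    x0, x1 = np.asarray(x[:-1], float), np.asarray(x[1:], float)
    r = np.corrcoef(x0, x1)[0, 1]
    slope, intercept = np.polyfit(x0, x1, 1)
    sd1 = np.std(x1 - x0, ddof=1) / np.sqrt(2)   # short-term variability
    sd2 = np.std(x1 + x0, ddof=1) / np.sqrt(2)   # long-term variability
    return r, slope, intercept, sd1, sd2

# Illustrative RR intervals in milliseconds.
rri = [812, 805, 820, 798, 790, 803, 815, 808, 795, 801]
r, slope, intercept, sd1, sd2 = poincare_indices(rri)
print(f"rho={r:.2f} slope={slope:.2f} SD1={sd1:.1f} ms SD2={sd2:.1f} ms")
```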
NASA Astrophysics Data System (ADS)
Shen, Qin; Gao, Guangyao; Hu, Wei; Fu, Bojie
2016-09-01
Knowledge of the spatial-temporal variability of soil water content (SWC) is critical for understanding a range of hydrological processes. In this study, the spatial variance and temporal stability of SWC were investigated in a cropland-shelterbelt-desert site at the oasis-desert ecotone in the middle of the Heihe River Basin, China. The SWC was measured on 65 occasions to a depth of 2.8 m at 45 locations during two growing seasons from 2012 to 2013. The standard deviation of the SWC versus the mean SWC exhibited a convex upward relationship in the shelterbelt with the greatest spatial variation at the SWC of around 22.0%, whereas a linearly increasing relationship was observed for the cropland, desert, and land use pattern. The standard deviation of the relative difference was positively linearly correlated with the SWC (p < 0.05) for the land use pattern, whereas such a relationship was not found in the three land use types. The spatial pattern of the SWC was more time stable for the land use pattern, followed by desert, shelterbelt, and cropland. The spatial pattern of SWC changed dramatically among different soil layers. The locations representing the mean SWC varied with the depth, and no location could represent the whole soil profile due to different soil texture, root distribution and irrigation management. The representative locations of each soil layer could be used to estimate the mean SWC well. The statistics of temporal stability of the SWC could be presented equally well with a low frequency of observation (30-day interval) as with a high frequency (5-day interval). Sampling frequency had little effect on the selection of the representative locations of the field mean SWC. This study provides useful information for designing the optimal strategy for sampling SWC at the oasis-desert ecotone in the arid inland river basin.
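A minimal sketch of the temporal-stability statistics referred to above, i.e. the mean relative difference and the standard deviation of the relative difference per location, following the usual definitions; the SWC matrix is a placeholder, not the Heihe measurements.

```python
import numpy as np

def temporal_stability(swc):
    """swc: 2-D array, rows = sampling occasions, columns = locations.
    Returns the mean relative difference (MRD) and the standard deviation
    of the relative difference (SDRD) for each location."""
    swc = np.asarray(swc, float)
    field_mean = swc.mean(axis=1, keepdims=True)   # spatial mean per occasion
    rel_diff = (swc - field_mean) / field_mean     # relative difference
    return rel_diff.mean(axis=0), rel_diff.std(axis=0, ddof=1)

# Placeholder SWC (%) for 4 occasions at 5 locations.
swc = np.array([[20.1, 22.4, 18.9, 25.0, 21.3],
                [17.8, 19.9, 16.5, 22.8, 18.6],
                [23.0, 25.1, 21.2, 27.9, 23.8],
                [15.2, 17.0, 14.1, 19.6, 16.0]])
mrd, sdrd = temporal_stability(swc)
# The most time-stable, representative location has MRD near zero and a small SDRD.
print(np.round(mrd, 3), np.round(sdrd, 3))
```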
Mass and Double-Beta-Decay Q Value of Xe136
NASA Astrophysics Data System (ADS)
Redshaw, Matthew; Wingfield, Elizabeth; McDaniel, Joseph; Myers, Edmund G.
2007-02-01
The atomic mass of Xe136 has been measured by comparing cyclotron frequencies of single ions in a Penning trap. The result, with 1 standard deviation uncertainty, is M(Xe136) = 135.907 214 484 (11) u. Combined with previous results for the mass of Ba136 [Audi, Wapstra, and Thibault, Nucl. Phys. A 729, 337 (2003), doi:10.1016/j.nuclphysa.2003.11.003], this gives a Q value (M[Xe136]-M[Ba136])c^2 = 2457.83(37) keV, sufficiently precise for ongoing searches for the neutrinoless double-beta decay of Xe136.
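A minimal worked check of the Q-value arithmetic using only the Xe-136 mass and Q value quoted above plus the standard conversion 1 u·c^2 ≈ 931494.1 keV (an assumption outside the abstract); it back-calculates the Ba-136 mass implied by those numbers.

```python
# Q = (M[Xe136] - M[Ba136]) * c^2, with masses in unified atomic mass units.
U_TO_KEV = 931494.1         # 1 u * c^2 in keV (standard conversion, assumed)
M_XE136 = 135.907214484      # u, from the measurement above
Q_KEV = 2457.83              # keV, quoted double-beta-decay Q value

delta_m = Q_KEV / U_TO_KEV               # mass difference in u
m_ba136_implied = M_XE136 - delta_m
print(f"Delta m = {delta_m:.9f} u")
print(f"Implied M(Ba136) = {m_ba136_implied:.6f} u")   # roughly 135.904576 u
```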
Mass and Double-Beta-Decay Q Value of {sup 136}Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redshaw, Matthew; Wingfield, Elizabeth; McDaniel, Joseph
The atomic mass of {sup 136}Xe has been measured by comparing cyclotron frequencies of single ions in a Penning trap. The result, with 1 standard deviation uncertainty, is M({sup 136}Xe)=135.907 214 484 (11) u. Combined with previous results for the mass of {sup 136}Ba [Audi, Wapstra, and Thibault, Nucl. Phys. A 729, 337 (2003)], this gives a Q value (M[{sup 136}Xe]-M[{sup 136}Ba])c{sup 2}=2457.83(37) keV, sufficiently precise for ongoing searches for the neutrinoless double-beta decay of {sup 136}Xe.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
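A minimal sketch of why reported correlation matrices and standard deviations enable secondary analyses: together they reconstruct the covariance matrix (Sigma = D R D with D = diag(SD)), from which, for example, regression slopes can be recovered. The numbers are illustrative, not drawn from any published study.

```python
import numpy as np

# Reported summary statistics (illustrative): correlations and SDs for
# predictor X1, predictor X2, and outcome Y.
R = np.array([[1.00, 0.30, 0.50],
              [0.30, 1.00, 0.40],
              [0.50, 0.40, 1.00]])
sd = np.array([1.2, 0.8, 2.0])

D = np.diag(sd)
cov = D @ R @ D                      # covariance matrix Sigma = D R D

# Secondary analysis: OLS slopes of Y on (X1, X2) from the covariance matrix.
Sxx, sxy = cov[:2, :2], cov[:2, 2]
betas = np.linalg.solve(Sxx, sxy)
print("Reconstructed covariance matrix:\n", cov)
print("Regression slopes of Y on X1, X2:", np.round(betas, 3))
```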
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
Jiménez-Castellanos, Emilio; Orozco-Varo, Ana; Arroyo-Cruz, Gema; Iglesias-Linares, Alejandro
2016-06-01
Deviation from the facial midline and inclination of the dental midline or occlusal plane has been described as extremely influential in the layperson's perceptions of the overall esthetics of the smile. The purpose of this study was to determine the prevalence of deviation from the facial midline and inclination of the dental midline or occlusal plane in a selected sample. White participants from a European population (N=158; 93 women, 65 men) who met specific inclusion criteria were selected for the present study. Standardized 1:1 scale frontal photographs were made, and 3 variables were measured for all participants: midline deviation, midline inclination, and inclination of the occlusal plane. Software was used to measure midline deviation and inclination, taking the bipupillary line and the facial midline as references. Tests for normality of the sample were explored and descriptive statistics (means ±SD) were calculated. The chi-square test was used to evaluate differences in midline deviation, midline inclination, and occlusal plane (α=.05). Frequencies of midline deviation (>2 mm), midline inclination (>3.5 degrees), and occlusal plane inclination (>2 degrees) were 31.64% (mean 2.7±1.23 mm), 10.75% (mean 7.9±3.57 degrees), and 25.9% (mean 9.07±3.16 degrees), respectively. No statistically significant differences (P>.05) were found between sexes for any of the esthetic smile values. The incidence of alterations with at least 1 altered parameter affecting smile esthetics was 51.9% in a population from southern Europe. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Changgang; Sun, Yanli; Yu, Yawei
2017-05-01
Under-frequency load shedding (UFLS) is an important measure for tackling the frequency drop caused by load-generation imbalance. In existing schemes, loads are shed by relays in a discontinuous way, which is the major reason for under-shedding and over-shedding problems. With the application of power electronics technology, some loads can be controlled continuously, and it is possible to improve UFLS with continuous loads. This paper proposes a UFLS scheme that sheds loads continuously. The load shedding amount is proportional to the frequency deviation before the frequency reaches its minimum during the transient process. The feasibility of the proposed scheme is analysed with an analytical system frequency response model. The impacts of governor droop, system inertia, and frequency threshold on the performance of the proposed UFLS scheme are discussed. Cases are demonstrated to validate the proposed scheme by comparing it with conventional UFLS schemes.
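A minimal sketch of the continuous shedding rule described above (shed load in proportion to the frequency deviation until the frequency minimum is reached), driven by a toy one-bus swing model rather than the paper's analytical system frequency response model; all constants are illustrative placeholders.

```python
# Toy illustration of continuous under-frequency load shedding (UFLS):
# load is shed proportionally to the frequency deviation while the
# frequency is still falling, then frozen once the minimum is reached.
F_NOM = 50.0      # Hz, nominal frequency
H = 5.0           # s, inertia constant (placeholder)
D = 1.0           # p.u./p.u., load damping (placeholder)
K_SHED = 0.3      # p.u. load shed per Hz of frequency deviation (placeholder)
F_THRESH = 49.5   # Hz, shedding armed below this threshold
P_DEFICIT = 0.2   # p.u., generation-load imbalance applied at t = 0

dt, f, shed, falling = 0.01, F_NOM, 0.0, True
for _ in range(3000):                                 # 30 s of simulated time
    dev = F_NOM - f                                   # positive when frequency is low
    if falling and f < F_THRESH:
        shed = min(K_SHED * dev, P_DEFICIT)           # continuous proportional shedding
    imbalance = -P_DEFICIT + shed + D * dev / F_NOM   # load damping relieves the deficit
    dfdt = imbalance * F_NOM / (2 * H)                # simplified swing equation
    if dfdt >= 0:
        falling = False                               # frequency minimum reached; freeze shed
    f += dfdt * dt
print(f"frequency settles near {f:.3f} Hz with {shed:.3f} p.u. load shed")
```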
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
NASA Astrophysics Data System (ADS)
Yi, Chen; Isaev, A. E.; Yuebing, Wang; Enyakov, A. M.; Teng, Fei; Matveev, A. N.
2011-01-01
A description is given of the COOMET project 473/RU-a/09: a pilot comparison of hydrophone calibrations at frequencies from 250 Hz to 200 kHz between Hangzhou Applied Acoustics Research Institute (HAARI, China)—pilot laboratory—and Russian National Research Institute for Physicotechnical and Radio Engineering Measurements (VNIIFTRI, Designated Institute of Russia of the CIPM MRA). Two standard hydrophones, B&K 8104 and TC 4033, were calibrated and compared to assess the current state of hydrophone calibration of HAARI (China) and Russia. Three different calibration methods were applied: a vibrating column method, a free-field reciprocity method and a comparison method. The standard facilities of each laboratory were used, and three different sound fields were applied: pressure field, free-field and reverberant field. The maximum deviation of the sensitivities of two hydrophones between the participants' results was 0.36 dB. The final report has been peer-reviewed and approved for publication by the CCAUV-KCWG.
Nunes, J M; Riccio, M E; Buhler, S; Di, D; Currat, M; Ries, F; Almada, A J; Benhamamouch, S; Benitez, O; Canossi, A; Fadhlaoui-Zid, K; Fischer, G; Kervaire, B; Loiseau, P; de Oliveira, D C M; Papasteriades, C; Piancatelli, D; Rahal, M; Richard, L; Romero, M; Rousseau, J; Spiroski, M; Sulcebe, G; Middleton, D; Tiercy, J-M; Sanchez-Mazas, A
2010-07-01
During the 15th International Histocompatibility and Immunogenetics Workshop (IHIWS), 14 human leukocyte antigen (HLA) laboratories participated in the Analysis of HLA Population Data (AHPD) project where 18 new population samples were analyzed statistically and compared with data available from previous workshops. To that aim, an original methodology was developed and used (i) to estimate frequencies by taking into account ambiguous genotypic data, (ii) to test for Hardy-Weinberg equilibrium (HWE) by using a nested likelihood ratio test involving a parameter accounting for HWE deviations, (iii) to test for selective neutrality by using a resampling algorithm, and (iv) to provide explicit graphical representations including allele frequencies and basic statistics for each series of data. A total of 66 data series (1-7 loci per population) were analyzed with this standard approach. Frequency estimates were compliant with HWE in all but one population of mixed stem cell donors. Neutrality testing confirmed the observation of heterozygote excess at all HLA loci, although a significant deviation was established in only a few cases. Population comparisons showed that HLA genetic patterns were mostly shaped by geographic and/or linguistic differentiations in Africa and Europe, but not in America where both genetic drift in isolated populations and gene flow in admixed populations led to a more complex genetic structure. Overall, a fruitful collaboration between HLA typing laboratories and population geneticists allowed finding useful solutions to the problem of estimating gene frequencies and testing basic population diversity statistics on highly complex HLA data (high numbers of alleles and ambiguities), with promising applications in either anthropological, epidemiological, or transplantation studies.
Thermal stability control system of photo-elastic interferometer in the PEM-FTs
NASA Astrophysics Data System (ADS)
Zhang, M. J.; Jing, N.; Li, K. W.; Wang, Z. B.
2018-01-01
A drifting model for the resonant frequency and retardation amplitude of a photo-elastic modulator (PEM) in the photo-elastic modulated Fourier transform spectrometer (PEM-FTs) is presented. A multi-parameter broadband-matching driving control method is proposed to improve the thermal stability of the PEM interferometer. The automatically frequency-modulated technology of the driving signal based on digital phase-locked technology is used to track the PEM's changing resonant frequency. Simultaneously the maximum optical-path-difference of a laser's interferogram is measured to adjust the amplitude of the PEM's driving signal so that the spectral resolution is stable. In the experiment, the multi-parameter broadband-matching control method is applied to the driving control system of the PEM-FTs. Control of resonant frequency and retardation amplitude stabilizes the maximum optical-path-difference to approximately 236 μm and results in a spectral resolution of 42 cm-1. This corresponds to a relative error smaller than 2.16% (4.28 standard deviation). The experiment shows that the method can effectively stabilize the spectral resolution of the PEM-FTs.
Nine months in space: effects on human autonomic cardiovascular regulation.
Cooke, W H; Ames JE, I V; Crossman, A A; Cox, J F; Kuusela, T A; Tahvanainen, K U; Moon, L B; Drescher, J; Baisch, F J; Mano, T; Levine, B D; Blomqvist, C G; Eckberg, D L
2000-09-01
We studied three Russian cosmonauts to better understand how long-term exposure to microgravity affects autonomic cardiovascular control. We recorded the electrocardiogram, finger photoplethysmographic pressure, and respiratory flow before, during, and after two 9-mo missions to the Russian space station Mir. Measurements were made during four modes of breathing: 1) uncontrolled spontaneous breathing; 2) stepwise breathing at six different frequencies; 3) fixed-frequency breathing; and 4) random-frequency breathing. R wave-to-R wave (R-R) interval standard deviations decreased in all and respiratory frequency R-R interval spectral power decreased in two cosmonauts in space. Two weeks after the cosmonauts returned to Earth, R-R interval spectral power was decreased, and systolic pressure spectral power was increased in all. The transfer function between systolic pressures and R-R intervals was reduced in-flight, was reduced further the day after landing, and had not returned to preflight levels by 14 days after landing. Our results suggest that long-duration spaceflight reduces vagal-cardiac nerve traffic and decreases vagal baroreflex gain and that these changes may persist as long as 2 wk after return to Earth.
High-precision and low-cost vibration generator for low-frequency calibration system
NASA Astrophysics Data System (ADS)
Li, Rui-Jun; Lei, Ying-Jun; Zhang, Lian-Sheng; Chang, Zhen-Xin; Fan, Kuang-Chao; Cheng, Zhen-Ying; Hu, Peng-Hao
2018-03-01
Low-frequency vibration is one of the harmful factors that affect the accuracy of micro-/nano-measuring machines because its amplitude is significantly small and it is very difficult to avoid. In this paper, a low-cost and high-precision vibration generator was developed to calibrate an optical accelerometer, which is self-designed to detect low-frequency vibration. A piezoelectric actuator is used as vibration exciter, a leaf spring made of beryllium copper is used as an elastic component, and a high-resolution, low-thermal-drift eddy current sensor is applied to investigate the vibrator’s performance. Experimental results demonstrate that the vibration generator can achieve steady output displacement with frequency range from 0.6 Hz to 50 Hz, an analytical displacement resolution of 3.1 nm and an acceleration range from 3.72 mm s-2 to 1935.41 mm s-2 with a relative standard deviation less than 1.79%. The effectiveness of the high-precision and low-cost vibration generator was verified by calibrating our optical accelerometer.
Diffraction-limited 577 nm true-yellow laser by frequency doubling of a tapered diode laser
NASA Astrophysics Data System (ADS)
Christensen, Mathias; Vilera, Mariafernanda; Noordegraaf, Danny; Hansen, Anders K.; Buß, Thomas; Jensen, Ole B.; Skovgaard, Peter M. W.
2018-02-01
A wide range of laser medical treatments are based on coagulation of blood by absorption of the laser radiation. It has, therefore, always been a goal of these treatments to maximize the ratio of absorption in the blood to that in the surrounding tissue. For this purpose lasers at 577 nm are ideal since this wavelength is at the peak of the absorption in oxygenated hemoglobin. Furthermore, 577 nm has a lower absorption in melanin when compared to green wavelengths (515 - 532 nm), giving it an advantage when treating at greater penetration depth. Here we present a laser system based on frequency doubling of an 1154 nm Distributed Bragg Reflector (DBR) tapered diode laser, emitting 1.1 W of single frequency and diffraction limited yellow light at 577 nm, corresponding to a conversion efficiency of 30.5%. The frequency doubling is performed in a single pass configuration using a cascade of two bulk non-linear crystals. The system is power stabilized over 10 hours with a standard deviation of 0.13% and the relative intensity noise is measured to be 0.064 % rms.
Yoshikura, Hiroshi; Takeuchi, Fumihiko
2017-11-22
In norovirus and Campylobacter food poisonings, the frequencies of the number of patients per incident and that of the number of eaters per incident followed a lognormal distribution, with medians of 12-27 and 23-48 for norovirus and 5-8 and 9-21 for Campylobacter food poisonings, respectively. The lognormal frequency distribution of eaters could be simulated by assuming that people find a dish more appealing if that dish has already been found to be appealing to others. The numbers of patients and eaters per incident were not necessarily inter-correlated; the frequencies of the attack rates (number of patients/number of eaters) were distributed evenly from 0.01 to 1; that is, the attack rates of these food poisonings could not be represented by means and standard deviations. The frequency distributions of the attack rates were nevertheless not entirely disordered; plotting the attack rate against the number of patients in individual incidents produced fingerprint-like patterns that were repeatedly produced at the prefectural and national levels.
Trait mindfulness helps shield decision-making from translating into health-risk behavior.
Black, David S; Sussman, Steve; Johnson, C Anderson; Milam, Joel
2012-12-01
The cognitive tendency toward mindfulness may influence the enactment of health and risk behaviors by its bringing increased attention to and awareness of decision-making processes underlying behavior. The present study examined the moderating effect of trait mindfulness on associations between intentions to smoke (ITS)/smoking refusal self-efficacy (SRSE) and smoking frequency. Self-reports from Chinese adolescents (N = 5,287; mean age = 16.2 years, standard deviation = .7; 48.8% female) were collected in 24 schools. Smoking frequency was regressed on latent factor interactions Mindful Attention Awareness Scale*ITS and Mindful Attention Awareness Scale*SRSE, adjusting for school clustering effects and covariates. Both interaction terms were significant in cross-sectional analyses and showed that high ITS predicted higher smoking frequency among those low, relative to high, in trait mindfulness, whereas low SRSE predicted higher smoking frequency among those low, relative to high, in trait mindfulness. Findings suggest trait mindfulness possibly shields against decision-making processes that place adolescents at risk for smoking. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Speech waveform perturbation analysis: a perceptual-acoustical comparison of seven measures.
Askenfelt, A G; Hammarberg, B
1986-03-01
The performance of seven acoustic measures of cycle-to-cycle variations (perturbations) in the speech waveform was compared. All measures were calculated automatically and applied on running speech. Three of the measures refer to the frequency of occurrence and severity of waveform perturbations in special selected parts of the speech, identified by means of the rate of change in the fundamental frequency. Three other measures refer to statistical properties of the distribution of the relative frequency differences between adjacent pitch periods. One perturbation measure refers to the percentage of consecutive pitch period differences with alternating signs. The acoustic measures were tested on tape recorded speech samples from 41 voice patients, before and after successful therapy. Scattergrams of acoustic waveform perturbation data versus an average of perceived deviant voice qualities, as rated by voice clinicians, are presented. The perturbation measures were compared with regard to the acoustic-perceptual correlation and their ability to discriminate between normal and pathological voice status. The standard deviation of the distribution of the relative frequency differences was suggested as the most useful acoustic measure of waveform perturbations for clinical applications.
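A minimal sketch of two statistics of the kind described above: the standard deviation of the relative differences between adjacent pitch periods and the percentage of consecutive period differences with alternating signs; the exact definitions used by the authors may differ, and the pitch-period sequence is illustrative.

```python
import numpy as np

def perturbation_measures(periods_ms):
    """Relative differences between adjacent pitch periods, their standard
    deviation, and the percentage of consecutive differences that alternate
    in sign (a rough proxy for the measures discussed above)."""
    p = np.asarray(periods_ms, float)
    rel_diff = np.diff(p) / p[:-1] * 100.0            # percent change per cycle
    sd_rel = rel_diff.std(ddof=1)
    signs = np.sign(np.diff(p))
    alternating = np.mean(signs[1:] * signs[:-1] < 0) * 100.0
    return sd_rel, alternating

periods = [8.0, 8.1, 7.9, 8.2, 8.0, 8.1, 7.8, 8.0]    # ms, illustrative
sd_rel, alt = perturbation_measures(periods)
print(f"SD of relative period differences: {sd_rel:.2f}%   alternating: {alt:.0f}%")
```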
HF radar transmissions that deviate from great-circle paths: new insight from e-POP RRI
NASA Astrophysics Data System (ADS)
Perry, G. W.; Miller, E. S.; James, H. G.; Howarth, A. D.; St-Maurice, J. P.; Yau, A. W.
2016-12-01
Significant deviations of SuperDARN radar transmissions from their expected great-circle paths have been detected at ionospheric altitudes using the Radio Receiver Instrument (RRI) on the Enhanced Polar Outflow Probe (e-POP). Experiments between SuperDARN Rankin Inlet and e-POP RRI were conducted at similar local times over consecutive days. Customized experiment modes which incorporated the agile frequency switching capabilities of each system were used. The RRI measurements show deviations of radar transmissions from their expected paths by as much as 2 or 3 SuperDARN beam widths, equivalent to 6° - 10° in bearing from Rankin Inlet. The deviations displayed a dependence on the radar carrier frequency and a day-to-day variability, suggesting that the deviations were transient in nature. We will discuss the deviations in the context of 3D ray trace modeling and measurements from the Resolute Bay Incoherent Scatter Radar - North (RISR-N). The latter provided diagnostic information of the ionosphere along the ray path between RRI and Rankin Inlet during the experiments.
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.
2013-01-01
In 2011 the noise generating capabilities in the reverberation chamber of the Structural Acoustic Loads and Transmission (SALT) facility at NASA Langley Research Center were enhanced with two fiberglass reinforced polyester resin exponential horns, each coupled to Wyle Acoustic Source WAS-3000 airstream modulators. This report describes the characterization of the reverberation chamber in terms of the background noise, diffusivity, sound pressure levels, the reverberation times and the related overall acoustic absorption in the empty chamber and with the acoustic horn(s) installed. The frequency range of interest includes the 80 Hz to 8000 Hz one-third octave bands. Reverberation time and sound pressure level measurements were conducted and standard deviations from the mean were computed. It was concluded that a diffuse field could be produced above the Schroeder frequency in the 400 Hz one-third octave band and higher for all applications. This frequency could be lowered by installing panel diffusers or moving vanes to improve the acoustic modal overlap in the chamber. In the 80 Hz to 400 Hz one-third octave bands a successful measurement will be dependent on the type of measurement, the test configuration, the source and microphone locations and the desired accuracy. It is recommended that qualification measurements endorsed in the International Standards be conducted for each particular application.
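A minimal worked example of the Schroeder-frequency criterion behind the 400 Hz statement above, f_S ≈ 2000·sqrt(T60/V); the reverberation time and chamber volume are assumed round numbers for illustration, not the SALT chamber's measured figures.

```python
from math import sqrt

def schroeder_frequency(t60_s, volume_m3):
    """Schroeder cross-over frequency in Hz: above it the modal overlap is
    high enough for the sound field to be treated as diffuse."""
    return 2000.0 * sqrt(t60_s / volume_m3)

# Assumed values for illustration only (not the SALT chamber specification).
t60 = 8.0        # s, reverberation time
volume = 250.0   # m^3, chamber volume
print(f"Schroeder frequency ~ {schroeder_frequency(t60, volume):.0f} Hz")
```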
Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching
NASA Astrophysics Data System (ADS)
Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest
2017-09-01
A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 10^6 (k = 2) at 1 MHz and 0.5 part in 10^6 (k = 2) at 100 kHz is within reach.
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).
Lee, Jong-Ho; Kim, Kyu-Hyeong; Hong, Jin-Woo; Lee, Won-Chul; Koo, Sungtae
2011-06-01
This study aimed to compare the effects of high-frequency electroacupuncture (EA) and low-frequency EA on the autonomic nervous system by using a heart rate variability measuring device in normal individuals. Fourteen participants were recruited and each participated in the high-frequency and low-frequency sessions (crossover design). The order of sessions was randomized and the interval between the two sessions was over 2 weeks. Participants received needle insertion with 120-Hz stimulation during the high-frequency session (high-frequency EA group), and with 2-Hz stimulation during the low-frequency session (low-frequency EA group). Acupuncture needles were directly inserted perpendicularly to LI 4 and LI 11 acupoints followed by delivery of electric pulses to these points for 15 minutes. Heart rate variability was measured 5 minutes before and after EA stimulation by a heart rate variability measuring system. We found a significant increase in the standard deviation of the normal-to-normal interval in the high-frequency EA group, with no change in the low-frequency EA group. Both the high-frequency and low-frequency EA groups showed no significant differences in other parameters including high-frequency power, low-frequency power, and the ratio of low-frequency power to high-frequency power. Based on these findings, we concluded that high-frequency EA stimulation is more effective than low-frequency EA stimulation in increasing autonomic nervous activity and there is no difference between the two EA frequencies in enhancing sympathovagal balance. Copyright © 2011 Korean Pharmacopuncture Institute. All rights reserved.
Subjective age-of-acquisition norms for 600 Turkish words from four age groups.
Göz, İlyas; Tekcan, Ali I; Erciyes, Aslı Aktan
2017-10-01
The main purpose of this study was to report age-based subjective age-of-acquisition (AoA) norms for 600 Turkish words. A total of 115 children, 100 young adults, 115 middle-aged adults, and 127 older adults provided AoA estimates for 600 words on a 7-point scale. The intraclass correlations suggested high reliability, and the AoA estimates were highly correlated across the four age groups. Children gave earlier AoA estimates than the three adult groups; this was true for high-frequency as well as low-frequency words. In addition to the means and standard deviations of the AoA estimates, we report word frequency, concreteness, and imageability ratings, as well as word length measures (numbers of syllables and letters), for the 600 words as supplemental materials. The present ratings represent a potentially useful database for researchers working on lexical processing as well as other aspects of cognitive processing, such as autobiographical memory.
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2018-01-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
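A minimal sketch of the Hershfield-type statistical PMP estimate that a frequency-factor envelope like the one above feeds into, PMP = mean + k_m·SD of the annual-maximum series; the rainfall values and the k_m value are placeholders, not the Retiro data or the Spanish envelope curve.

```python
import numpy as np

def pmp_estimate(annual_maxima_mm, k_m):
    """Statistical probable maximum precipitation for one duration:
    PMP = mean + k_m * standard deviation of the annual-maximum series."""
    x = np.asarray(annual_maxima_mm, float)
    return x.mean() + k_m * x.std(ddof=1)

# Placeholder 24 h annual maxima (mm) and an assumed envelope frequency factor.
annual_max_24h = [48, 61, 39, 75, 55, 83, 47, 66, 58, 91]
k_m = 15.0                      # hypothetical envelope value, not from the study
print(f"PMP(24 h) ~ {pmp_estimate(annual_max_24h, k_m):.0f} mm")
```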
Stabilizing low-frequency oscillation with two-stage filter in Hall thrusters
NASA Astrophysics Data System (ADS)
Wei, Liqiu; Han, Liang; Ding, Yongjie; Yu, Daren; Zhang, Chaohai
2017-07-01
The use of a filter is the most common method to suppress low-frequency discharge current oscillation in Hall thrusters. The only form of filter in actual use involves RLC networks, which serve to reduce the level of conducted electromagnetic interference returning to the power processing unit, which is the function of a filter. Recently, the role of the filter in oscillation control was introduced. It has been noted that the filter regulates the voltage across itself according to the variation of the discharge current so as to decrease its fluctuation in the discharge circuit, which is the function of a controller. Therefore, a two-stage filter is proposed to fulfill these two purposes, filtering and controlling, and the detailed design methods are discussed and verified. A current oscillation attenuation ratio of 10 was achieved by different capacitance and inductance combinations of the filter stage, and the standard deviation of the low-frequency oscillations was decreased from 3 A to 1 A by the control stage in our experiment.
Demodulation RFI statistics for a 3-stage op amp LED circuit
NASA Astrophysics Data System (ADS)
Whalen, James J.
An experiment has been performed to demonstrate the feasibility of combining several methods of electromagnetic-compatibility analysis. The part of the experiment that demonstrates how RF signals cause interference in an audio-frequency (AF) circuit and how the interference can be suppressed is described. The circuit includes three operational amplifiers (op amps) and a light-emitting diode (LED). A 50 percent amplitude-modulated (AM) radio-frequency-interference (RFI) signal is used, varied over the range from 0.1 to 400 MHz. The AM frequency is 1 kHz. The RFI is injected into the inverting input of the first op amp, and the 1-kHz demodulation response of the amplifier is amplified by the second and third op amps and lights the LED to provide a visual display of the existence of RFI. An RFI suppression capacitor was added to reduce the RFI. The demodulation RFI results are presented as scatter plots for 35 741-type op amps. Mean values and standard deviations are also shown.
Celestial Reference Frames at Multiple Radio Wavelengths
NASA Technical Reports Server (NTRS)
Jacobs, Christopher S.
2012-01-01
In 1997 the IAU adopted the International Celestial Reference Frame (ICRF) built from S/X VLBI data. In response to IAU resolutions encouraging the extension of the ICRF to additional frequency bands, VLBI frames have been made at 24, 32, and 43 gigahertz. Meanwhile, the 8.4 gigahertz work has been greatly improved with the 2009 release of the ICRF-2. This paper discusses the motivations for extending the ICRF to these higher radio bands. Results to date will be summarized, including evidence that the high-frequency frames are rapidly approaching the accuracy of the 8.4 gigahertz ICRF-2. We discuss current limiting errors and prospects for the future accuracy of radio reference frames. We note that comparison of multiple radio frames is characterizing the frequency-dependent systematic noise floor from extended source morphology and core shift. Finally, given Gaia's potential for high-accuracy optical astrometry, we have simulated the precision of a radio-optical frame tie to be approximately 10-15 microarcseconds (1 standard deviation, per component).
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
[Figure 3-1: Time axis diagram of single runway operations.] Glossary fragments: t(k) - the time at which departure k is released; ... - the standard deviation of the interarrival time; SIGMAR - the standard deviation of the arrival runway occupancy time; SINGLE - program subroutine for ...
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
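A minimal sketch of the thresholding rule implied above: flag pixels whose mid-infrared digital counts lie more than 3.5 scene standard deviations on the cold side of the scene mean; the count array is synthetic, and the cold side is assumed here to correspond to lower counts.

```python
import numpy as np

def flag_cold_pixels(counts, n_sigma=3.5):
    """Flag pixels more than n_sigma scene standard deviations on the cold
    side of the scene mean (assumed here to be the low-count side)."""
    counts = np.asarray(counts, float)
    threshold = counts.mean() - n_sigma * counts.std(ddof=1)
    return counts < threshold

rng = np.random.default_rng(2)
scene = rng.normal(180.0, 5.0, size=(64, 64))    # synthetic MIR digital counts
scene[10:14, 20:30] -= 40.0                      # synthetic SCi-affected patch
mask = flag_cold_pixels(scene)
print("flagged pixels:", int(mask.sum()))
```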
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Tree-ring-based drought reconstruction in the Iberian Range (east of Spain) since 1694
NASA Astrophysics Data System (ADS)
Tejedor, Ernesto; de Luis, Martín; Cuadrat, José María; Esper, Jan; Saz, Miguel Ángel
2016-03-01
Droughts are a recurrent phenomenon in the Mediterranean basin with negative consequences for society, economic activities, and natural systems. Nevertheless, the study of drought recurrence and severity in Spain has been limited so far due to the relatively short instrumental period. In this work, we present a reconstruction of the standardized precipitation index (SPI) for the Iberian Range. Growth variations and climatic signals within the network are assessed developing a correlation matrix and the data combined to a single chronology integrating 336 samples from 169 trees of five different pine species distributed throughout the province of Teruel. The new chronology, calibrated against regional instrumental climatic data, shows a high and stable correlation with the July SPI integrating moisture conditions over 12 months forming the basis for a 318-year drought reconstruction. The climate signal contained in this reconstruction is highly significant ( p < 0.05) and spatially robust over the interior areas of Spain located above 1000 meters above sea level (masl). According to our SPI reconstruction, seven substantially dry and five wet periods are identified since the late seventeenth century considering ≥±1.76 standard deviations. Besides these, 36 drought and 28 pluvial years were identified. Some of these years, such as 1725, 1741, 1803, and 1879, are also revealed in other drought reconstructions in Romania and Turkey, suggesting that coherent larger-scale synoptic patterns drove these extreme deviations. Since regional drought deviations are also retained in historical documents, the tree-ring-based reconstruction presented here will allow us to cross-validate drought frequency and magnitude in a highly vulnerable region.
Tree-ring-based drought reconstruction in the Iberian Range (east of Spain) since 1694.
Tejedor, Ernesto; de Luis, Martín; Cuadrat, José María; Esper, Jan; Saz, Miguel Ángel
2016-03-01
Droughts are a recurrent phenomenon in the Mediterranean basin with negative consequences for society, economic activities, and natural systems. Nevertheless, the study of drought recurrence and severity in Spain has been limited so far due to the relatively short instrumental period. In this work, we present a reconstruction of the standardized precipitation index (SPI) for the Iberian Range. Growth variations and climatic signals within the network are assessed developing a correlation matrix and the data combined to a single chronology integrating 336 samples from 169 trees of five different pine species distributed throughout the province of Teruel. The new chronology, calibrated against regional instrumental climatic data, shows a high and stable correlation with the July SPI integrating moisture conditions over 12 months forming the basis for a 318-year drought reconstruction. The climate signal contained in this reconstruction is highly significant (p < 0.05) and spatially robust over the interior areas of Spain located above 1000 meters above sea level (masl). According to our SPI reconstruction, seven substantially dry and five wet periods are identified since the late seventeenth century considering ≥±1.76 standard deviations. Besides these, 36 drought and 28 pluvial years were identified. Some of these years, such as 1725, 1741, 1803, and 1879, are also revealed in other drought reconstructions in Romania and Turkey, suggesting that coherent larger-scale synoptic patterns drove these extreme deviations. Since regional drought deviations are also retained in historical documents, the tree-ring-based reconstruction presented here will allow us to cross-validate drought frequency and magnitude in a highly vulnerable region.
Frequency-controlled wireless shape memory polymer microactuator for drug delivery application.
Zainal, M A; Ahmad, A; Mohamed Ali, M S
2017-03-01
This paper reports a wireless Shape-Memory-Polymer actuator operated by external radio frequency magnetic fields and its application in a drug delivery device. The actuator is driven by a frequency-sensitive wireless resonant heater which is bonded directly to the Shape-Memory-Polymer and is activated only when the field frequency is tuned to the resonant frequency of the heater. The heater is fabricated using a double-sided Cu-clad Polyimide with much simpler fabrication steps compared with previously reported methods. An actuation range of 140 μm, measured as the tip opening distance, is achieved at a device temperature of 44 °C in 30 s using 0.05 W RF power. A repeatability test shows that the actuator's average maximum displacement is 110 μm with a standard deviation of 12 μm. An experiment is conducted to demonstrate drug release with 5 μL of an acidic solution loaded in the reservoir and the device immersed in DI water. The actuator is successfully operated in water through wireless activation. The acidic solution is released and diffused in water with an average release rate of 0.172 μL/min.
Würbach, Ariane; Zellner, Konrad; Kromeyer-Hauschild, Katrin
2009-08-01
To describe the meal patterns of Jena schoolchildren and their associations with children's weight status and parental characteristics. Cross-sectional study. Twenty schools in Jena (100,000 inhabitants), south-east Germany. A total of 2054 schoolchildren aged 7-14 years with information on BMI standard deviation score (BMI-SDS) and weight status (based on German reference values), of whom 1571 had additional information about their parents (parental education and employment status, weight status according to WHO guidelines) and meal patterns (school lunch participation rate, meal frequencies, breakfast consumption and frequency of family meals). Weight status of the children was associated with weight status, education and employment status of the parents. Meal patterns were strongly dependent on children's age and parental employment. As age increased, the frequency of meal consumption, participation rate in school lunches and the number of family meals decreased. Using linear regression analysis, a high inverse association between BMI-SDS and meal frequency was observed, in addition to relationships with parental weight status and paternal education. Age-specific prevention programmes should encourage greater meal frequency. The close involvement of parents is essential in any strategy for improving children's (families') diets.
47 CFR Appendix 1 to Subpart E of... - Glossary of Terms
Code of Federal Regulations, 2012 CFR
2012-10-01
... Part 95 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO... frequencies. Reference frequencies from which the carrier frequency, suppressed or otherwise, may not deviate by more than the specified frequency tolerance. Crystal. Quartz piezo-electric element. Crystal...
Directional synthetic aperture flow imaging.
Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov
2004-09-01
A method for flow estimation using synthetic aperture imaging and focusing along the flow direction is presented. The method can find the correct velocity magnitude for any flow angle, and full color flow images can be measured using only 32 to 128 pulse emissions. The approach uses spherical wave emissions with a number of defocused elements and a linear frequency-modulated pulse (chirp) to improve the signal-to-noise ratio. The received signals are dynamically focused along the flow direction, and these signals are used in a cross-correlation estimator for finding the velocity magnitude. The flow angle is manually determined from the B-mode image. The approach can be used for both tissue and blood velocity determination. The approach was investigated using both simulations and a flow system with a laminar flow. The flow profile was measured with a commercial 7.5 MHz linear array transducer. A plastic tube with an internal diameter of 17 mm was used with an EcoWatt 1 pump generating a laminar, stationary flow. The velocity profile was measured for flow angles of 90 and 60 degrees. The RASMUS research scanner was used for acquiring radio frequency (RF) data from 128 elements of the array, using 8 emissions with 11 elements in each emission. A 20-μs chirp was used during emission. The RF data were subsequently beamformed off-line and stationary echo canceling was performed. The 60-degree flow with a peak velocity of 0.15 m/s was determined using 16 groups of 8 emissions, and the relative standard deviation was 0.36% (0.65 mm/s). Using the same setup for purely transverse flow gave a standard deviation of 1.2% (2.1 mm/s). Variation of the different parameters revealed the sensitivity to the number of lines, angle deviations, the length of the correlation interval, and the sampling interval. An in vivo image of the carotid artery and jugular vein of a healthy 29-year-old volunteer was acquired. A full color flow image using only 128 emissions could be made with high velocity precision.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
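As a rough illustration of the rejection-ABC idea, the sketch below assumes a study that reports only the sample size, median, minimum, and maximum, and assumes a normal data-generating model; the priors, tolerance, and reported values are placeholders rather than the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reported summary statistics from a hypothetical study (assumed values).
n, rep_median, rep_min, rep_max = 50, 12.0, 4.0, 31.0

def summaries(x):
    return np.array([np.median(x), x.min(), x.max()])

target = np.array([rep_median, rep_min, rep_max])
scale = np.abs(target) + 1.0          # crude scaling for the distance metric
tol = 0.5                             # acceptance tolerance (illustrative)

accepted = []
for _ in range(200_000):
    mu = rng.uniform(0.0, 40.0)       # flat priors over a plausible range
    sigma = rng.uniform(0.1, 20.0)
    sim = rng.normal(mu, sigma, n)    # simulate a study under the candidate (mu, sigma)
    dist = np.max(np.abs(summaries(sim) - target) / scale)
    if dist < tol:                    # keep parameters whose simulated summaries are close
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print("ABC posterior mean of mu:    %.2f" % accepted[:, 0].mean())
print("ABC posterior mean of sigma: %.2f" % accepted[:, 1].mean())
```

The accepted (mu, sigma) pairs approximate the posterior for the study-specific mean and standard deviation, which can then feed into the meta-analysis.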
Ibrahim, Shewkar E; Sayed, Tarek; Ismail, Karim
2012-11-01
Several earlier studies have noted the shortcomings with existing geometric design guides which provide deterministic standards. In these standards the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from the standards. To mitigate these shortcomings, probabilistic geometric design has been advocated where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a mechanism for risk measurement to evaluate the safety impact of deviations from design standards. This paper applies reliability analysis for optimizing the safety of highway cross-sections. The paper presents an original methodology to select a suitable combination of cross-section elements with restricted sight distance to result in reduced collisions and consistent risk levels. The purpose of this optimization method is to provide designers with a proactive approach to the design of cross-section elements in order to (i) minimize the risk associated with restricted sight distance, (ii) balance the risk across the two carriageways of the highway, and (iii) reduce the expected collision frequency. A case study involving nine cross-sections that are parts of two major highway developments in British Columbia, Canada, was presented. The results showed that an additional reduction in collisions can be realized by incorporating the reliability component, P(nc) (denoting the probability of non-compliance), in the optimization process. The proposed approach results in reduced and consistent risk levels for both travel directions in addition to further collision reductions. Copyright © 2012 Elsevier Ltd. All rights reserved.
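A minimal Monte Carlo sketch of the probability of non-compliance P(nc) used above, assuming lognormal models for the sight distance supplied by the cross-section and the sight distance demanded by drivers; the distribution parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Available sight distance permitted by the cross-section geometry (placeholder model).
supplied = rng.lognormal(mean=np.log(180.0), sigma=0.10, size=N)   # metres
# Required stopping sight distance, driven by speed, reaction time and friction (placeholder model).
required = rng.lognormal(mean=np.log(120.0), sigma=0.20, size=N)   # metres

p_nc = np.mean(supplied < required)   # probability of non-compliance
print(f"Estimated P(nc) = {p_nc:.4f}")
```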
Safety analytics for integrating crash frequency and real-time risk modeling for expressways.
Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung
2017-07-01
To find crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently. Until this point, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and the real-time models by adding the logarithm of the estimated expected crash frequency to the real-time model; the non-integrated model independently estimated the crash frequency and the real-time crash risk. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked the two models and provided useful information to both. This study uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, which was aggregated based on speed at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggested a methodology to improve the crash frequency and real-time models by integrating them, and it might inspire future researchers to understand crash mechanisms better. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ren, Yan; Zheng, Shuwen; Wei, Wei; Wu, Bingui; Zhang, Hongsheng; Cai, Xuhui; Song, Yu
2018-02-01
We analyzed the structure and evolution of turbulent transfer and the wind profile in the atmospheric boundary layer in relation to aerosol concentrations during an episode of heavy haze pollution from 6 December 2016 to 9 January 2017. The turbulence data were recorded at Peking University's atmospheric science and environment observation station. The results showed a negative correlation between the wind speed and the PM2.5 concentration. The turbulence kinetic energy was large and showed obvious diurnal variations during unpolluted (clean) weather, but was small during episodes of heavy haze pollution. Under both clean and heavy haze conditions, the relation between the non-dimensional wind components and the stability parameter z/L followed a 1/3 power law, but the normalized standard deviations of the wind speed were smaller during heavy pollution events than during clean periods under near-neutral conditions. Under unstable conditions, the normalized standard deviation of the potential temperature σθ/|θ*| was related to z/L, roughly following a -1/3 power law, and the ratio during pollution days was greater than that during clean days. The three-dimensional turbulence energy spectra satisfied a -2/3 power exponent rate in the high-frequency band. In the low-frequency band, the wind velocity spectrum curve was related to the stability parameters under clear conditions, but was not related to atmospheric stratification under polluted conditions. In the dissipation stage of the heavy pollution episode, the horizontal wind speed first started to increase at high altitudes and then gradually decreased at lower altitudes. The strong upward motion during this stage was an important dynamic factor in the dissipation of the heavy haze.
Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A
2017-07-01
Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease positive (D+) populations (MCF7 cells) and pure disease negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and Cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
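The Bayesian point can be illustrated with a short sketch: for a given sensitivity and specificity, the positive predictive value falls as the target cell becomes rarer, so one can ask at what prevalence the PPV drops to some threshold. The sensitivity, specificity, and the PPV = 0.5 criterion below are assumptions for illustration, not the exact definition of detectable rarity used in the paper.

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value from Bayes' theorem."""
    tp = sens * prevalence
    fp = (1.0 - spec) * (1.0 - prevalence)
    return tp / (tp + fp)

def detectable_rarity(sens, spec, ppv_target=0.5):
    """Prevalence at which the PPV reaches ppv_target (bisection on prevalence)."""
    lo, hi = 1e-9, 0.5
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        if ppv(sens, spec, mid) < ppv_target:
            lo = mid          # too rare: PPV below target, raise prevalence
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical biomarker performance (illustrative values only).
sens, spec = 0.93, 0.995
r = detectable_rarity(sens, spec)
print(f"PPV = 0.5 is reached at a prevalence of about 1 in {1.0 / r:,.0f}")
```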
Balaguer Martínez, Josep Vicent; Del Castillo Aguas, Guadalupe; Gallego Iborra, Ana
2017-12-30
To assess whether the prescription of antibiotics and the performance of complementary tests are related to frequency of use and loyalty in Primary Care. Analytical descriptive study performed through a network of Primary Care sentinel paediatricians (PAPenRed). Each paediatrician reviewed the spontaneous visits (in Primary Care and in Emergency Departments) of 15 patients for 12 months, randomly chosen from their quota. The prescription of antibiotics and the complementary tests performed on these patients were also collected. A total of 212 paediatricians took part and reviewed 2,726 patients. It was found that 8.3% were moderate over-users (mean + 1-2 standard deviations) and 5.2% extreme over-users (mean + 2 standard deviations). Almost half (49.6%) were high-loyalty patients (more than 75% of visits with their doctor). The incidence ratio of antibiotic prescriptions for moderate over-users was 2.13 (1.74-2.62) and 3.25 (2.55-4.13) for extreme over-users, compared to non-over-user children. The incidence ratios for the diagnostic tests were 2.25 (1.86-2.73) and 3.48 (2.78-4.35), respectively. The incidence ratios for antibiotic prescription were 1.34 (1.16-1.55) in patients with medium-high loyalty, 1.45 (1.15-1.83) for medium-low loyalty, and 1.08 (0.81-1.44) for those with low loyalty, compared to patients with high loyalty. The incidence ratios for diagnostic tests were 1.46 (1.27-1.67), 1.60 (1.28-2.00), and 0.84 (0.63-1.12), respectively. Antibiotic prescription and complementary tests were significantly related to medical overuse. They were also related to loyalty, but less significantly. Copyright © 2017. Publicado por Elsevier España, S.L.U.
NASA Astrophysics Data System (ADS)
Nåvik, Petter; Rønnquist, Anders; Stichel, Sebastian
2017-09-01
The contact force between the pantograph and the contact wire ensures energy transfer between the two. Too small a force leads to arcing and unstable energy transfer, while too large a force leads to unnecessary wear on both parts. Thus, obtaining the correct contact force is important for both field measurements and estimates using numerical analysis. The field contact force time series is derived from measurements performed by a self-propelled diagnostic vehicle containing overhead line recording equipment. The measurements are not sampled at the actual contact surface of the interaction but by force transducers beneath the collector strips. Methods exist for obtaining more realistic measurements by adding inertia and aerodynamic effects to the measurements. The variation in predicting the pantograph-catenary interaction contact force is studied in this paper by evaluating the effect of the force sampling location and the effects of signal processing such as filtering. A numerical model validated by field measurements is used to study these effects. First, this paper shows that the numerical model can reproduce a train passage with high accuracy. Second, this study introduces three different options for contact force predictions from numerical simulations. Third, this paper demonstrates that the standard deviation and the maximum and minimum values of the contact force are sensitive to a low-pass filter. For a specific case, an 80 Hz cut-off frequency is compared to a 20 Hz cut-off frequency, as required by EN 50317:2012; the results show an 11% increase in standard deviation, a 36% increase in the maximum value and a 19% decrease in the minimum value.
Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.
Masalski, Marcin; Kręcicki, Tomasz
2013-04-12
Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on the group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), calibration error (6.19 dB), and additionally at the frequency of 250 Hz by frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to the decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
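For reference, the coefficient of variation is simply the standard deviation expressed as a fraction (or percentage) of the mean; a minimal example with made-up data:

```python
import statistics

data = [12.1, 11.8, 12.6, 12.0, 11.5]   # any sample (illustrative values)
mean = statistics.mean(data)
sd = statistics.stdev(data)             # sample standard deviation
cv = sd / mean                          # coefficient of variation
print(f"mean = {mean:.2f}, sd = {sd:.2f}, CV = {cv:.1%}")
```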
Standardizing clinical trials workflow representation in UML for international site comparison.
de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M O; Rodrigues, Maria J; Shah, Jatin; Loures, Marco R; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo
2010-11-09
With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.
Standardizing Clinical Trials Workflow Representation in UML for International Site Comparison
de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M. O.; Rodrigues, Maria J.; Shah, Jatin; Loures, Marco R.; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo
2010-01-01
Background With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Methods Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Results Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. Conclusions This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows. PMID:21085484
A stochastic approach to noise modeling for barometric altimeters.
Sabatini, Angelo Maria; Genovese, Vincenzo
2013-11-18
The question whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origin and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, the effects of which are prominent, respectively, for long-time and short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component, and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters used alone or in combination with whitening filters learnt from ARMA model parameters are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
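A minimal simulation of the three-component noise model described above (slowly varying mean, first-order Gauss-Markov process, and uncorrelated wideband noise), followed by an M-point moving average; the time constant, noise levels, and M are placeholders, not the identified values.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 25.0                     # sampling rate, Hz (illustrative)
n = 60 * int(fs)              # one minute of data
t = np.arange(n) / fs

# 1) deterministic, slowly varying mean (global pressure trend mapped to height)
trend = 0.02 * t              # m, placeholder drift

# 2) first-order Gauss-Markov process: x[k] = a*x[k-1] + w[k]
tau = 30.0                    # correlation time, s (placeholder)
sigma_gm = 0.15               # steady-state standard deviation, m (placeholder)
a = np.exp(-1.0 / (fs * tau))
w_std = sigma_gm * np.sqrt(1.0 - a**2)   # drives the process to the desired variance
gm = np.zeros(n)
for k in range(1, n):
    gm[k] = a * gm[k - 1] + rng.normal(0.0, w_std)

# 3) uncorrelated wideband (electronic + quantization) noise
white = rng.normal(0.0, 0.10, n)         # m, placeholder

height_noise = trend + gm + white

# M-point moving average of the kind tested in the paper (M is a placeholder here)
M = 25
smoothed = np.convolve(height_noise, np.ones(M) / M, mode="same")
print("raw std: %.3f m, smoothed std: %.3f m" % (height_noise.std(), smoothed.std()))
```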
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-01-01
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system. PMID:24566635
NASA Astrophysics Data System (ADS)
Kim, Ji-Soo; Han, Soo-Hyung; Ryang, Woo-Hun
2001-12-01
Electrical resistivity mapping was conducted to delineate the boundaries and architecture of the Cretaceous Eumsung Basin. Basin boundaries are effectively clarified in electrical dipole-dipole resistivity sections as high-resistivity contrast bands. High resistivities most likely originate from the basement of Jurassic granite and Precambrian gneiss, contrasting with the lower resistivities from infilled sedimentary rocks. The electrical properties of basin-margin boundaries are compatible with the results of vertical electrical soundings and very-low-frequency electromagnetic surveys. A statistical analysis of the resistivity sections is tested in terms of standard deviation and is found to be an effective scheme for the subsurface reconstruction of basin architecture as well as the surface demarcation of basin-margin faults and brittle fracture zones, characterized by much higher standard deviation. Pseudo three-dimensional architecture of the basin is delineated by integrating the composite resistivity structure information from two cross-basin E-W magnetotelluric lines and dipole-dipole resistivity lines. Based on statistical analysis, the maximum depth of the basin varies from about 1 km in the northern part to 3 km or more in the middle part. This strong variation supports the view that the basin experienced pull-apart opening with rapid subsidence of the central blocks and asymmetric cross-basinal extension.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
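Assuming the predicted relative standard deviation comes from the commonly used Horwitz equation, PRSD_R(%) = 2·C^(-0.1505) with C the concentration expressed as a mass fraction, the HORRAT calculation reduces to a one-line ratio; the numbers below are illustrative.

```python
def horwitz_prsd_percent(c_mass_fraction):
    """Predicted among-laboratories relative standard deviation (%) from the
    Horwitz equation, PRSD_R = 2 * C**(-0.1505), with C as a mass fraction."""
    return 2.0 * c_mass_fraction ** (-0.1505)

def horrat(found_rsd_percent, c_mass_fraction):
    """HORRAT = experimentally found among-laboratories RSD / predicted RSD."""
    return found_rsd_percent / horwitz_prsd_percent(c_mass_fraction)

# Example: analyte at 1 mg/kg (C = 1e-6 as a mass fraction), found RSD_R of 16%.
print(f"HORRAT = {horrat(16.0, 1e-6):.2f}")   # values near 1 indicate typical performance
```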
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently, no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 lung cancer; 22 inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest, typically located at a 3- to 5-mm radius from the probe, was selected from recorded EBUS images during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. Other characteristics investigated were inferior when compared to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
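A sketch of how such histogram features could be computed from a 400-pixel region of interest; the pixel values are random placeholders, and the exact definitions of histogram height and width used by the authors may differ from the simple ones below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
roi = rng.normal(120, 14, 400).clip(0, 255)   # placeholder grey levels for a 400-pixel ROI

counts, _ = np.histogram(roi, bins=np.arange(0, 257, 4))

features = {
    "height": counts.max(),                              # tallest histogram bin
    "width": np.count_nonzero(counts),                   # number of occupied bins
    "height/width": counts.max() / np.count_nonzero(counts),
    "std": roi.std(ddof=1),                              # brightness standard deviation
    "kurtosis": stats.kurtosis(roi),
    "skewness": stats.skew(roi),
}

# The abstract reports 81.7% accuracy using a standard-deviation cutoff of 10.5;
# which side of the cutoff corresponds to malignancy is not stated here.
print({k: round(float(v), 2) for k, v in features.items()})
```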
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
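A numeric sketch of the bias described above, assuming normally distributed responses, risk defined as exceeding the control 99th percentile, and a 10% benchmark risk; the standard deviations are illustrative. Using the overall standard deviation (inflated by measurement error) in place of the among-animal value s(a) makes the required mean shift, and hence the benchmark dose, look larger than it really is.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
z99 = nd.inv_cdf(0.99)        # control 99th percentile in SD units (~2.326)
bmr = 0.10                    # benchmark risk: 10% of animals above the control 99th percentile

s_a = 1.0                     # among-animal standard deviation (illustrative)
s_m = 0.6                     # within-animal measurement-error standard deviation (illustrative)
s_tot = sqrt(s_a**2 + s_m**2) # overall SD of single measurements per animal

def shift_for_risk(sd):
    """Mean shift needed to reach the benchmark risk if variability were sd."""
    return z99 * sd - nd.inv_cdf(1.0 - bmr) * sd

correct_shift = shift_for_risk(s_a)    # based on among-animal variation only
biased_shift = shift_for_risk(s_tot)   # incorrectly based on the overall SD

# True risk actually produced by the biased (too large) shift:
true_risk_at_biased = 1.0 - nd.cdf((z99 * s_a - biased_shift) / s_a)
print(f"shift for BMR (correct): {correct_shift:.3f}")
print(f"shift for BMR (biased):  {biased_shift:.3f}")
print(f"true risk at the biased shift: {true_risk_at_biased:.3f}  (exceeds the 10% target)")
```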
Climatology of Neutral vertical winds in the midlatitude thermosphere
NASA Astrophysics Data System (ADS)
Kerr, R.; Kapali, S.; Riccobono, J.; Migliozzi, M. A.; Noto, J.; Brum, C. G. M.; Garcia, R.
2017-12-01
More than one thousand measurements of neutral vertical winds, relative to an assumed average of 0 m/s during a nighttime period, have been made at Arecibo Observatory and the Millstone Hill Optical Facility since 2012, using imaging Fabry-Perot interferometers. These instruments, tuned to the 630 nm OI emission, are carefully calibrated for instrumental frequency drift using frequency stabilized lasers, allowing isolation of Doppler motion in the zenith with 1-2 m/s accuracy. As one example of the results, relative vertical winds at Arecibo during quiet geomagnetic conditions near winter solstice 2016, range ±70 m/s and have a one standard deviation statistical variability of ±34 m/s. This compares with a ±53 m/s deviation from the average meridional wind, and a ±56 m/s deviation from the average zonal wind measured during the same period. Vertical neutral wind velocities for all periods range from roughly 30% - 60% of the horizontal velocity domain at Arecibo. At Millstone Hill, the vertical velocities relative to horizontal velocities are similar, but slightly smaller. The midnight temperature maximum at Arecibo is usually correlated with a surge in the upward wind, and vertical wind excursions of more than 80 m/s are common during magnetic storms at both sites. Until this compilation of vertical wind climatology, vertical motions of the neutral atmosphere outside of the auroral zone have generally been assumed to be very small compared to horizontal transport. In fact, excursions from small vertical velocities in the mid-latitude thermosphere near the F2 ionospheric peak are common, and are not isolated events associated with unsettled geomagnetic conditions or other special dynamic conditions.
How well do the GCMs replicate the historical precipitation variability in the Colorado River Basin?
NASA Astrophysics Data System (ADS)
Guentchev, G.; Barsugli, J. J.; Eischeid, J.; Raff, D. A.; Brekke, L.
2009-12-01
Observed precipitation variability measures are compared to measures obtained using the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3) General Circulation Models (GCM) data from 36 model projections downscaled by Brekke et al. (2007) and 30 model projections downscaled by Jon Eischeid. Three groups of variability measures are considered in this historical period (1951-1999) comparison: a) basic variability measures, such as standard deviation, interdecadal standard deviation; b) exceedance probability values, i.e., 10% (extreme wet years) and 90% (extreme dry years) exceedance probability values of series of n-year running mean annual amounts, where n = 1, …, 12; 10% exceedance probability values of annual maximum monthly precipitation (extreme wet months); and c) runs variability measures, e.g., frequency of negative and positive runs of annual precipitation amounts, total number of the negative and positive runs. Two gridded precipitation data sets produced from observations are used: the Maurer et al. (2002) and the Daly et al. (1994) Precipitation Regression on Independent Slopes Method (PRISM) data sets. The data consist of monthly grid-point precipitation averaged on a United States Geological Survey (USGS) hydrological sub-region scale. The statistical significance of the obtained model minus observed measure differences is assessed using a block bootstrapping approach. The analyses were performed on annual, seasonal, and monthly scales. The results indicate that the interdecadal standard deviation is underestimated, in general, on all time scales by the downscaled model data. The differences are statistically significant at a 0.05 significance level for several Lower Colorado Basin sub-regions on annual and seasonal scales, and for several sub-regions located mostly in the Upper Colorado River Basin for the months of March, June, July and November. Although the models simulate drier extreme wet years, wetter extreme dry years and drier extreme wet months for the Upper Colorado basin, the differences are mostly not significant. Exceptions are the results about the extreme wet years for n=3 for sub-region White-Yampa, for n=6, 7, and 8 for sub-region Upper Colorado-Dolores, and about the extreme dry years for n=11 for sub-region Great Divide-Upper Green. None of the results for the sub-regions in the Lower Colorado Basin were significant. For most of the Upper Colorado sub-regions the models simulate a significantly lower frequency of negative and positive 4-6 year runs, while for several sub-regions a significantly higher frequency of 2-year negative runs is evident in the model versus the Maurer data comparisons. The model projections versus the PRISM data comparison reveals similar results for the negative runs, while for the positive runs the results indicate that the models simulate a higher frequency of the 2-6 year runs. The results for the Lower Colorado basin sub-regions are similar, in general, to those for the Upper Colorado sub-regions. The differences between the simulated and the observed total number of negative or positive runs were not significant for almost all of the sub-regions within the Colorado River Basin.
Marfeo, Elizabeth E; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K; Jette, Alan M
2014-07-01
The goal of this article was to investigate the optimal functioning of frequency vs. agreement rating scales in two subdomains of the newly developed Work Disability Functional Assessment Battery: the Mood & Emotions and Behavioral Control scales. A psychometric study comparing rating scale performance was embedded in a cross-sectional survey used for developing a new instrument to measure behavioral health functioning among adults applying for disability benefits in the United States. Within the sample of 1,017 respondents, the range of response category endorsement was similar for both frequency and agreement item types for both scales. There were fewer missing values in the frequency items than the agreement items. Both frequency and agreement items showed acceptable reliability. The frequency items demonstrated optimal effectiveness around the mean ± 1-2 standard deviation score range; the agreement items performed better at the extreme score ranges. Findings suggest an optimal response format requires a mix of both agreement-based and frequency-based items. Frequency items perform better in the normal range of responses, capturing specific behaviors, reactions, or situations that may elicit a specific response. Agreement items do better for those whose scores are more extreme and capture subjective content related to general attitudes, behaviors, or feelings of work-related behavioral health functioning. Copyright © 2014 Elsevier Inc. All rights reserved.
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different than the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
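A sketch of the final computation described above: given the three estimated statistics of the logarithms of annual peaks, the discharge at a selected exceedance probability follows from log Q = mean + K·(standard deviation), with the Pearson type III frequency factor K approximated here by the Wilson-Hilferty transformation; the input statistics are placeholders for an ungaged site.

```python
from statistics import NormalDist

def pearson3_frequency_factor(z, skew):
    """Wilson-Hilferty approximation of the Pearson type III frequency factor K."""
    if abs(skew) < 1e-6:
        return z
    k = skew / 6.0
    return (2.0 / skew) * ((1.0 + k * z - k * k) ** 3 - 1.0)

def lp3_quantile(mean_log, sd_log, skew, exceedance_prob):
    """Discharge with the given annual exceedance probability (log10 units assumed)."""
    z = NormalDist().inv_cdf(1.0 - exceedance_prob)
    return 10.0 ** (mean_log + pearson3_frequency_factor(z, skew) * sd_log)

# Placeholder statistics for an ungaged site (log10 of discharge in ft^3/s).
mean_log, sd_log, gen_skew = 3.2, 0.25, -0.2
q2pct = lp3_quantile(mean_log, sd_log, gen_skew, 0.02)   # 2% exceedance (~50-year) flood
print(f"Estimated 2%-exceedance discharge: {q2pct:,.0f} ft^3/s")
```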
Observed hierarchy of student proficiency with period, frequency, and angular frequency
NASA Astrophysics Data System (ADS)
Young, Nicholas T.; Heckler, Andrew F.
2018-01-01
In the context of a generic harmonic oscillator, we investigated students' accuracy in determining the period, frequency, and angular frequency from mathematical and graphical representations. In a series of studies including interviews, free response tests, and multiple choice tests developed in an iterative process, we assessed students in both algebra-based and calculus-based, traditionally instructed university-level introductory physics courses. Using the results, we categorized nine skills necessary for proficiency in determining period, frequency, and angular frequency. Overall results reveal that, postinstruction, proficiency is quite low: only about 20%-40% of students mastered most of the nine skills. Next, we used a semiquantitative, intuitive method to investigate the hierarchical structure of the nine skills. We also employed the more formal item tree analysis method to verify this structure and found that the skills form a multilevel, nonlinear hierarchy, with mastery of some skills being prerequisite for mastery in other skills. Finally, we implemented a targeted, 30-min group-work activity to improve proficiency in these skills and found a 1 standard deviation gain in accuracy. Overall, the results suggest that many students currently lack these essential skills, targeted practice may lead to required mastery, and that the observed hierarchical structure in the skills suggests that instruction should especially attend to the skills lower in the hierarchy.
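For reference, the three quantities the assessment targets are linked by T = 1/f and ω = 2πf; a minimal example reading all three from an oscillator of the form x(t) = A cos(ωt + φ):

```python
import math

omega = 4.0 * math.pi          # rad/s, e.g. from x(t) = A*cos(4*pi*t + phi)
f = omega / (2.0 * math.pi)    # frequency in Hz
T = 1.0 / f                    # period in seconds
print(f"omega = {omega:.3f} rad/s, f = {f:.1f} Hz, T = {T:.2f} s")   # 2.0 Hz, 0.50 s
```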
Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations
NASA Astrophysics Data System (ADS)
von Martens, Hans-Jürgen
2010-05-01
The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41) is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s²). The relative deviations between the amplitude measurement results of the different interferometer methods, which were applied simultaneously, differed by less than 1% in all cases.
Optical frequency transfer via a 660 km underground fiber link using a remote Brillouin amplifier.
Raupach, S M F; Koczwara, A; Grosche, G
2014-11-03
In long-distance, optical continuous-wave frequency transfer via fiber, remote bidirectional Er³⁺-doped fiber amplifiers are commonly used to mitigate signal attenuation. We demonstrate for the first time the ultrastable transfer of an optical frequency using a remote fiber Brillouin amplifier, placed in a server room along the link. Using it as the only means of remote amplification, on a 660 km loop of installed underground fiber we bridge distances of 250 km and 160 km between amplifications. Over several days of uninterrupted measurement, we find an instability of the frequency transfer (Allan deviation of Λ-weighted data with 1 s gate time) of around 1 × 10⁻¹⁹ and less for averaging times longer than 3000 s. The modified Allan deviation reaches 3 × 10⁻¹⁹ at an averaging time of 100 s. Beyond 100 s it follows the interferometer noise floor, and for averaging times longer than 1000 s the modified Allan deviation is in the 10⁻²⁰ range. A conservative value of the overall accuracy is 1 × 10⁻¹⁹.
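For context, a minimal sketch of the plain (non-overlapping) Allan deviation computed from fractional frequency data; the Λ-weighted and modified Allan deviations quoted above require more elaborate estimators than this, and the noise level below is illustrative.

```python
import numpy as np

def allan_deviation(y, tau0, m):
    """Non-overlapping Allan deviation of fractional frequency data y sampled
    every tau0 seconds, evaluated at averaging time tau = m * tau0."""
    n_blocks = len(y) // m
    ybar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)   # block averages
    diffs = np.diff(ybar)
    avar = 0.5 * np.mean(diffs ** 2)
    return np.sqrt(avar)

rng = np.random.default_rng(0)
tau0 = 1.0                            # s
y = 1e-15 * rng.normal(size=100_000)  # white frequency noise, illustrative level

for m in (1, 10, 100, 1000):
    print(f"tau = {m * tau0:6.0f} s   sigma_y = {allan_deviation(y, tau0, m):.2e}")
```

For white frequency noise this estimate falls roughly as the square root of the averaging time, which is the usual sanity check for such a routine.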
CODATA recommended values of the fundamental constants
NASA Astrophysics Data System (ADS)
Mohr, Peter J.; Taylor, Barry N.
2000-11-01
A review is given of the latest Committee on Data for Science and Technology (CODATA) adjustment of the values of the fundamental constants. The new set of constants, referred to as the 1998 values, replaces the values recommended for international use by CODATA in 1986. The values of the constants, and particularly the Rydberg constant, are of relevance to the calculation of precise atomic spectra. The standard uncertainty (estimated standard deviation) of the new recommended value of the Rydberg constant, which is based on precision frequency metrology and a detailed analysis of the theory, is approximately 1/160 times the uncertainty of the 1986 value. The new set of recommended values as well as a searchable bibliographic database that gives citations to the relevant literature is available on the World Wide Web at physics.nist.gov/constants and physics.nist.gov/constantsbib, respectively.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of all variations of observed dependent variables (adjusted R square), and the model is significant (P < 0.001). Total health expenditure increased by 1.21 standard deviations for a 1-standard-deviation increase in the health workforce growth rate. Furthermore, this rate decreased by 1.12 standard deviations for a 1-standard-deviation increase in the (negative) population growth rate. Finally, the growth rate increased by 0.38 standard deviations for a 1-standard-deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). Study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causality relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
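The coefficients quoted above are standardized (beta) coefficients: the change in the outcome, in standard deviations, per one-standard-deviation change in a predictor. A sketch with synthetic growth-rate series (not the Serbian data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9                                     # nine annual observations, as in 2003-2011

# Synthetic growth-rate series (placeholders, not the actual Serbian data).
workforce = rng.normal(0.01, 0.02, n)
population = rng.normal(-0.004, 0.001, n)
discharges = rng.normal(0.02, 0.03, n)
the_growth = 1.2 * workforce - 1.1 * population + 0.4 * discharges + rng.normal(0, 0.005, n)

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

# Regress the z-scored outcome on z-scored predictors: the slopes are beta coefficients.
X = np.column_stack([zscore(workforce), zscore(population), zscore(discharges)])
beta, *_ = np.linalg.lstsq(X, zscore(the_growth), rcond=None)
print("standardized coefficients:", np.round(beta, 2))
```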
Pinto Pereira, Snehal M; Li, Leah; Power, Chris
2014-12-01
Much adult physical inactivity research ignores early-life factors from which later influences may originate. In the 1958 British birth cohort (followed from 1958 to 2008), leisure-time inactivity, defined as activity frequency of less than once a week, was assessed at ages 33, 42, and 50 years (n = 12,776). Early-life factors (at ages 0-16 years) were categorized into 3 domains (i.e., physical, social, and behavioral). We assessed associations of adult inactivity 1) with factors within domains, 2) with the 3 domains combined, and 3) allowing for adult factors. At each age, approximately 32% of subjects were inactive. When domains were combined, factors associated with inactivity (e.g., at age 50 years) were prepubertal stature (5% lower odds per 1-standard deviation higher height), hand control/coordination problems (14% higher odds per 1-point increase on a 4-point scale), cognition (10% lower odds per 1-standard deviation greater ability), parental divorce (21% higher odds), institutional care (29% higher odds), parental social class at child's birth (9% higher odds per 1-point reduction on a 4-point scale), minimal parental education (13% higher odds), household amenities (2% higher odds per increase (representing poorer amenities) on a 19-point scale), inactivity (8% higher odds per 1-point reduction in activity on a 4-point scale), low sports aptitude (13% higher odds), and externalizing behaviors (i.e., conduct problems) (5% higher odds per 1-standard deviation higher score). Adjustment for adult covariates weakened associations slightly. Factors from early life were associated with adult leisure-time inactivity, allowing for early identification of groups vulnerable to inactivity. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Age estimates for the late quaternary high sea-stands
NASA Astrophysics Data System (ADS)
Smart, Peter L.; Richards, David A.
A database of more than 300 published alpha-counted uranium-series ages has been compiled for coral reef terraces formed by Late Pleistocene high sea-stands. The database was screened to eliminate unreliable age estimates (²³⁰Th/²³²Th < 20, calcite > 5%) and those without quoted errors, and a distributed error frequency curve was produced. This curve can be considered as a finite mixture model comprising k component normal distributions, each with a weighting α. By using an expectation maximising algorithm, the mean and standard deviation of the component distributions, each corresponding to a high sea level event, were estimated. Eight high sea-stands with mean and standard deviations of 129.0 ± 33.0, 123.0 ± 13.0, 102.5 ± 2.0, 81.5 ± 5.0, 61.5 ± 6.0, 50.0 ± 1.0, 40.5 ± 5.0 and 33.0 ± 2.5 ka were resolved. The standard deviations are generally larger than the values quoted for individual age estimates. Whilst this may be due to diagenetic effects, especially for the older corals, it is argued that in many cases geological evidence clearly indicates that the high stands are multiple events, often not resolvable at sites with low rates of uplift. The uranium-series dated coral-reef terrace chronology shows good agreement with independent chronologies derived for Antarctic ice cores, although the resolution for the latter is better. Agreement with orbitally-tuned deep-sea core records is also good, but it is argued that Isotope Stage 5e is not a single event, as recorded in the cores, but a multiple event spanning some 12 ka. The much earlier age for Isotope Stage 5e given by Winograd et al. (1988) is not supported by the coral reef data, but further mass-spectrometric uranium-series dating is needed to permit better chronological resolution.
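A compact sketch of the expectation-maximisation step described above for a one-dimensional normal mixture with k components; the synthetic ages stand in for the screened uranium-series dates, and the initialization is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ages" (ka) drawn from three high-stand events (placeholders).
ages = np.concatenate([rng.normal(123, 4, 60), rng.normal(102.5, 2, 40), rng.normal(81.5, 5, 30)])

k = 3
mu = np.quantile(ages, np.linspace(0.1, 0.9, k))   # crude initialization
sigma = np.full(k, ages.std())
alpha = np.full(k, 1.0 / k)                        # mixing weights

for _ in range(200):
    # E-step: responsibility of each component for each age
    dens = alpha * np.exp(-0.5 * ((ages[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations
    nk = resp.sum(axis=0)
    alpha = nk / len(ages)
    mu = (resp * ages[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (ages[:, None] - mu) ** 2).sum(axis=0) / nk)

for a, m, s in sorted(zip(alpha, mu, sigma), key=lambda t: -t[1]):
    print(f"event: mean {m:6.1f} ka, sd {s:4.1f} ka, weight {a:.2f}")
```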
Interaction of the Radar Waves with the Capillary Waves on the Ocean.
1983-05-01
[Scanned figure residue; only the captions and a prose fragment are recoverable.] Figure 3.1: Comparison between theoretical and measured … for vertical and … versus wind speed (m/sec). For wind speed cases, say, between 8 and 10 m/sec, one would not expect such a high value of standard deviation. Figure 8.6a illustrates a sample set of … (downwind, 24 Sept 1979).
A generalization of the theory of fringe patterns containing displacement information
NASA Astrophysics Data System (ADS)
Sciammarella, C. A.; Bhat, G.
The theory that provides the interpretation of interferometric fringes as frequency-modulated signals is used to show that the electro-optical system used to analyze fringe patterns can be considered a simultaneous Fourier spectrum analyzer. This interpretation generalizes the quasi-heterodyning techniques. It is pointed out that the same equations that yield the discrete Fourier transform as summations yield correct values for a reduced number of steps. Examples of application of the proposed technique to electronic holography are given. It is found that for a uniform field the standard deviation of the individual readings is 1/20 of the fringe spacing.
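A sketch of the reduced-step idea alluded to above: treating N uniformly phase-stepped intensity readings as one period of a sampled signal, the first discrete Fourier coefficient recovers the fringe phase even for small N (the classic N-bucket algorithm). The simulated pattern and sign convention are illustrative.

```python
import math

def fringe_phase(intensities):
    """Recover the fringe phase from N uniformly phase-stepped intensity samples
    using the first discrete Fourier coefficient (N-bucket algorithm)."""
    n = len(intensities)
    s = sum(I * math.sin(2.0 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2.0 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)   # sign convention matches the simulated pattern below

# Simulate a 4-step measurement of I_k = A + B*cos(phi + 2*pi*k/N) with phi = 0.7 rad.
A, B, phi, N = 5.0, 2.0, 0.7, 4
samples = [A + B * math.cos(phi + 2.0 * math.pi * k / N) for k in range(N)]
print(f"true phase {phi:.3f} rad, recovered {fringe_phase(samples):.3f} rad")
```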
An Output Approach to Incentive Reimbursement for Hospitals
Ro, Kong-kyun; Auster, Richard
1969-01-01
A method of incentive reimbursement for health care institutions is described that is designed to stimulate the providers' efficiency. The two main features are: (1) reimbursement based on a weighted average of actual cost and mean cost plus or minus an appropriate number of standard deviations; (2) output defined as episodes of illness given adequate treatment instead of days of hospitalization. It is suggested that despite the operational difficulties involved in a method of payment based on an output approach, the flexibility incorporated into the determination of reimbursement by use of the properties of a normal frequency distribution would make the system workable. PMID:5349002
Estimates of primary productivity over the Thar Desert based upon Nimbus-7 37 GHz data - 1979-1985
NASA Technical Reports Server (NTRS)
Choudhury, B. J.
1987-01-01
An empirical relationship has been determined between the difference of vertically and horizontally polarized brightness temperatures noted at the 37 GHz frequency of the Nimbus-7 SMMR and primary productivity over hot arid and semiarid regions of Africa and Australia. This empirical relationship is applied to estimate the primary productivity over the Thar Desert between 1979 and 1985, giving an average value of 0.271 kg/sq m per yr. The spatial variability of the productivity values is found to be quite significant, with a standard deviation about the mean of 0.08 kg/sq m per yr.
Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation
Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.
2000-01-01
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
Simultaneous measurements of L- and S-band tree shadowing for space-Earth communications
NASA Technical Reports Server (NTRS)
Vogel, Wolfhard J.; Torrence, Geoffrey W.; Lin, Hsin P.
1995-01-01
We present results from simultaneous L- and S-Band slant-path fade measurements through trees. One circularly-polarized antenna was used at each end of the dual-frequency link to provide information on the correlation of tree shadowing at 1620 and 2500 MHz. Fades were measured laterally in the shadow region with 5 cm spacing. Fade differences between L- and S-Band had a normal distribution with low means and standard deviations from 5.2 to 7.5 dB. Spatial variations occurred with periods larger than 1-2 wavelengths. Swept measurements over 160 MHz spans showed that the standard deviation of power as a function of frequency increased from approximately 1 to 6 dB at locations with mean fades of 4 and 20 dB, respectively. At a 5 dB fade, the central 90% of fade slopes were within a range of 0.7 (1.9) dB/MHz at L-(S-) Band.
Self-mixing instrument for simultaneous distance and speed measurement
NASA Astrophysics Data System (ADS)
Norgia, Michele; Melchionni, Dario; Pesatori, Alessandro
2017-12-01
A novel instrument based on self-mixing interferometry is proposed to simultaneously measure absolute distance and velocity. The measurement method is designed to work directly on any kind of surface in an industrial environment, also overcoming problems due to the speckle-pattern effect. The laser pump current is modulated at a fairly high frequency (40 kHz), and the estimation of the induced fringe frequency allows an almost instantaneous measurement (measurement time equal to 25 μs). A real-time digital elaboration processes the measurement data and discards unreliable measurements. The simultaneous measurement reaches a relative standard deviation of about 4·10⁻⁴ in absolute distance and 5·10⁻³ in velocity. Three different laser sources are tested and compared. The instrument also shows good performance in harsh environments, for example when measuring the movement of an opaque iron tube rotating under a running water flow.
The effects of body position on distortion-product otoacoustic emission testing.
Driscoll, Carlie; Kei, Joseph; Shyu, Jenny; Fukai, Natasha
2004-09-01
Otoacoustic emissions are frequently acquired from patients in a variety of body positions aside from the standard, seated orientation. Yet little knowledge is available regarding whether these deviations will produce nonpathological changes to the clinical results obtained. The present study aimed to describe the effects of body position on the distortion-product otoacoustic emissions of 60 normal-hearing adults. With particular attention given to common clinical practice, the Otodynamics ILO292, and the measurement parameters of amplitude, signal-to-noise ratio, and noise were utilized. Significant position-related effects and interactions were revealed for all parameters. Specifically, stronger emissions in the mid frequencies and higher noise levels at the extreme low and high frequencies were produced by testing subjects while lying on their side compared with the seated position. Further analysis of body position effects on emissions is warranted, in order to determine the need for clinical application of position-dependent normative data.
NASA Astrophysics Data System (ADS)
Angerer, Andreas; Astner, Thomas; Wirtitsch, Daniel; Sumiya, Hitoshi; Onoda, Shinobu; Isoya, Junichi; Putz, Stefan; Majer, Johannes
2016-07-01
We design and implement 3D lumped-element microwave cavities that spatially focus magnetic fields to a small mode volume. They allow coherent and uniform coupling to electron spins hosted by nitrogen vacancy centers in diamond. We achieve large homogeneous single-spin coupling rates, with an enhancement of more than one order of magnitude compared to standard 3D cavities with a fundamental resonance at 3 GHz. Finite element simulations confirm that the magnetic field distribution is homogeneous throughout the entire sample volume, with a root mean square deviation of 1.54%. With a sample containing 10¹⁷ nitrogen vacancy electron spins, we achieve a collective coupling strength of Ω = 12 MHz, a cooperativity factor C = 27, and clearly enter the strong coupling regime. This allows us to interface a macroscopic spin ensemble with microwave circuits, and the homogeneous Rabi frequency paves the way to manipulating the full ensemble population in a coherent way.
Conjunction Assessment Late-Notice High-Interest Event Investigation: Space Weather Aspects
NASA Technical Reports Server (NTRS)
Pachura, D.; Hejduk, M. D.
2016-01-01
Late-notice events are usually driven by large changes in the primary (protected) object or secondary object state. The main parameter used to represent the size of a state change is the component position difference divided by the associated standard deviation (epsilon divided by sigma) from the covariance. The investigation determined the actual frequency of large state changes, in both individual and combined states, and compared them to theoretically expected frequencies. It found that large changes (epsilon divided by sigma greater than 3) in individual object states occur much more frequently than theory dictates. The effect is less pronounced in radial components and in events with probability of collision (Pc) greater than 1e-5. The combined state matched much closer to theoretical expectation, especially for radial and cross-track. In-track is expected to be the most vulnerable to modeling errors, so it is not surprising that non-compliance is largest in this component.
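The comparison against theory amounts to checking how often a standardized change exceeds 3 in magnitude. A minimal sketch of that check, using synthetic Gaussian values in place of the real conjunction-assessment history:

    import math
    import random

    # Theoretical frequency of a >3-sigma change for a single Gaussian component
    p_theory = math.erfc(3.0 / math.sqrt(2.0))            # about 0.0027

    # Empirical frequency of |epsilon / sigma| > 3; random normals stand in
    # for the real standardized state changes here.
    z = [random.gauss(0.0, 1.0) for _ in range(100_000)]
    p_empirical = sum(abs(v) > 3.0 for v in z) / len(z)
    # The paper's finding is that the observed rate far exceeds p_theory.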
Zolgharni, M; Griffiths, H; Ledger, P D
2010-08-01
The feasibility of detecting a cerebral haemorrhage with a hemispherical MIT coil array consisting of 56 exciter/sensor coils of 10 mm radius and operating at 1 and 10 MHz was investigated. A finite difference method combined with an anatomically realistic head model comprising 12 tissue types was used to simulate the strokes. Frequency-difference images were reconstructed from the modelled data with different levels of added phase noise and two types of a priori boundary errors: a displacement of the head and a size scaling error. The results revealed that a noise level of 3 millidegrees (standard deviation) was adequate for obtaining good visualization of a peripheral stroke (volume approximately 49 ml). The simulations further showed that the displacement error had to be within 3-4 mm and the scaling error within 3-4% so as not to cause unacceptably large artefacts on the images.
FTIR Spectrum of the ν4 Band of DCOOD
NASA Astrophysics Data System (ADS)
Tan, T. L.; Goh, K. L.; Ong, P. P.; Teo, H. H.
1999-06-01
The FTIR spectrum of the ν4 band of deuterated formic acid (DCOOD) has been measured with a resolution of 0.004 cm-1 in the frequency range of 1120 to 1220 cm-1. A total of 1866 assigned transitions have been analyzed and fitted using a Watson's A-reduced Hamiltonian in the Ir representation to derive rovibrational constants for the upper state (v4 = 1) with a standard deviation of 0.00036 cm-1. In the course of the analysis, the constants for the ground state were improved by a simultaneous fit of microwave frequencies and combination differences from the infrared measurements. Due to the relatively unperturbed nature of the band, the constants can be used to accurately calculate the infrared line positions for the whole band. Although the band is a hybrid of types A and B, only a-type transitions were strong enough to be observed. The band center is at 1170.79980 ± 0.00002 cm-1.
Quantifying the Influence of Climate on Human Conflict
NASA Astrophysics Data System (ADS)
Hsiang, S. M.; Burke, M.; Miguel, E.
2014-12-01
A rapidly growing body of research examines whether human conflict can be affected by climatic changes. Drawing from archaeology, criminology, economics, geography, history, political science, and psychology, we assemble and analyze the most rigorous quantitative studies and document, for the first time, a striking convergence of results. We find strong causal evidence linking climatic events to human conflict across a range of spatial and temporal scales and across all major regions of the world. The magnitude of climate's influence is substantial: for each one standard deviation (1sd) change in climate toward warmer temperatures or more extreme rainfall, median estimates indicate that the frequency of interpersonal violence rises 4% and the frequency of intergroup conflict rises 14%. Because locations throughout the inhabited world are expected to warm 2sd to 4sd by 2050, amplified rates of human conflict could represent a large and critical impact of anthropogenic climate change.
The truly remarkable universality of half a standard deviation: confirmation through another look.
Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W
2004-10-01
In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than 0.5. Nonetheless, despite their extensive wranglings with the exclusion of many articles that we included in our review; the inclusion of articles that we did not include in our review; and the recalculation of effect sizes using the absolute value of the mean differences, in our opinion the results of the 'Another look' article confirm the same findings as the 'Remarkable' paper.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
...Error, NUC Non-Uniformity Correction, RMSE Root Mean Squared Error, RSD Relative Standard Deviation, S3NUC Static Scene Statistical Non-Uniformity... Deviation (RSD), which normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain... estimates is shown in Figure 4.1(b). The RSD plot shows that after a sample size of approximately 10, the different photocount values and the inclusion
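A minimal helper for the quantity used in that snippet (relative standard deviation as a percentage of the mean):

    import statistics

    def relative_standard_deviation(values):
        """RSD = (sigma / mu) * 100: sample standard deviation normalized to the mean."""
        mu = statistics.mean(values)
        sigma = statistics.stdev(values)
        return 100.0 * sigma / mu

    # relative_standard_deviation([98, 102, 100, 101, 99]) -> about 1.58 (percent)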
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were not found in PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by the Humphrey 24-2 SITA SAP.
Vocal singing by prelingually-deafened children with cochlear implants.
Xu, Li; Zhou, Ning; Chen, Xiuwu; Li, Yongxin; Schultz, Heather M; Zhao, Xiaoyan; Han, Demin
2009-09-01
The coarse pitch information in cochlear implants might hinder the development of singing in prelingually-deafened pediatric users. In the present study, seven prelingually-deafened children with cochlear implants (5.4-12.3 years old) sang one song that was the most familiar to him or her. The control group consisted of 14 normal-hearing children (4.1-8.0 years old). The fundamental frequencies (F0) of each note in the recorded songs were extracted. The following five metrics were computed based on the reference music scores: (1) F0 contour direction of the adjacent notes, (2) F0 compression ratio of the entire song, (3) mean deviation of the normalized F0 across the notes, (4) mean deviation of the pitch intervals, and (5) standard deviation of the note duration differences. Children with cochlear implants showed significantly poorer performance in the pitch-based assessments than the normal-hearing children. No significant differences were seen between the two groups in the rhythm-based measure. Prelingually-deafened children with cochlear implants have significant deficits in singing due to their inability to manipulate pitch in the correct directions and to produce accurate pitch height. Future studies with a large sample size are warranted in order to account for the large variability in singing performance.
Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J
2017-02-01
Random effects in the repeatability of refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4×10⁻⁴ and 4×10⁻³ have been obtained. The lowest standard deviations in refractive index, close to our detection threshold, were achieved by both ion beam sputtering and plasma-ion-assisted deposition. In relation to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
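The recovery-of-variance idea can be sketched for one of the functions mentioned, a normal percentile μ + z_p·σ. The exact formulas and coverage claims are the paper's; the code below is only an illustrative MOVER-style construction with hypothetical inputs:

    import math
    from scipy import stats

    def ci_normal_percentile(x, p=0.975, alpha=0.05):
        """MOVER-style confidence interval for the normal percentile mu + z_p*sigma,
        assembled from separate closed-form CIs for the mean and the SD."""
        n = len(x)
        xbar = sum(x) / n
        s = math.sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1))
        zp = stats.norm.ppf(p)

        # closed-form CI for the mean
        tcrit = stats.t.ppf(1 - alpha / 2, n - 1)
        l1 = xbar - tcrit * s / math.sqrt(n)
        u1 = xbar + tcrit * s / math.sqrt(n)

        # closed-form (chi-square) CI for the standard deviation
        l2 = s * math.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
        u2 = s * math.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))

        # recover variance estimates from the separate limits (MOVER)
        theta = xbar + zp * s
        lower = theta - math.sqrt((xbar - l1) ** 2 + (zp * (s - l2)) ** 2)
        upper = theta + math.sqrt((u1 - xbar) ** 2 + (zp * (u2 - s)) ** 2)
        return lower, upper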
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (sr and sR) such that the actual errors in sr and sR relative to their respective true values, σr and σR, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of sr and sR were derived and are provided as supporting documentation.
Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
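The distinction in that choice reduces to a square-root-of-n factor; a tiny illustration with hypothetical biomarker values:

    import math
    import statistics

    data = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]        # hypothetical biomarker results
    sd = statistics.stdev(data)                   # spread of individual measurements
    sem = sd / math.sqrt(len(data))               # uncertainty of the sample mean
    # Report sd to describe variability among measurements; use sem when
    # comparing or predicting mean values.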
Evaluation of a fever-management algorithm in a pediatric cancer center in a low-resource setting.
Mukkada, Sheena; Smith, Cristel Kate; Aguilar, Delta; Sykes, April; Tang, Li; Dolendo, Mae; Caniza, Miguela A
2018-02-01
In low- and middle-income countries (LMICs), inconsistent or delayed management of fever contributes to poor outcomes among pediatric patients with cancer. We hypothesized that standardizing practice with a clinical algorithm adapted to local resources would improve outcomes. Therefore, we developed a resource-specific algorithm for fever management in Davao City, Philippines. The primary objective of this study was to evaluate adherence to the algorithm. This was a prospective cohort study of algorithm adherence to assess the types of deviation, reasons for deviation, and pathogens isolated. All pediatric oncology patients who were admitted with fever (defined as an axillary temperature >37.7°C on one occasion or ≥37.4°C on two occasions 1 hr apart) or who developed fever within 48 hr of admission were included. Univariate and multiple linear regression analyses were used to determine the relation between clinical predictors and length of hospitalization. During the study, 93 patients had 141 qualifying febrile episodes. Even though the algorithm was designed locally, deviations occurred in 70 (50%) of 141 febrile episodes on day 0, reflecting implementation barriers at the patient, provider, and institutional levels. There were 259 deviations during the first 7 days of admission in 92 (65%) of 141 patient episodes. Failure to identify high-risk patients, missed antimicrobial doses, and pathogen isolation were associated with prolonged hospitalization. Monitoring algorithm adherence helps in assessing the quality of pediatric oncology care in LMICs and identifying opportunities for improvement. Measures that decrease high-frequency/high-impact algorithm deviations may shorten hospitalizations and improve healthcare use in LMICs. © 2017 Wiley Periodicals, Inc.
Horton, Stephen P.; Norris, Robert D.; Moran, Seth C.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
From October 2004 to May 2005, the Center for Earthquake Research and Information of the University of Memphis operated two to six broadband seismometers within 5 to 20 km of Mount St. Helens to help monitor recent seismic and volcanic activity. Approximately 57,000 earthquakes identified during the 7-month deployment had a normal magnitude distribution with a mean magnitude of 1.78 and a standard deviation of 0.24 magnitude units. Both the mode and range of earthquake magnitude and the rate of activity varied during the deployment. We examined the time domain and spectral characteristics of two classes of events seen during dome building. These include volcano-tectonic earthquakes and lower-frequency events. Lower-frequency events are further classified into hybrid earthquakes, low-frequency earthquakes, and long-duration volcanic tremor. Hybrid and low-frequency earthquakes showed a continuum of characteristics that varied systematically with time. A progressive loss of high-frequency seismic energy occurred in earthquakes as magma approached and eventually reached the surface. The spectral shape of large and small earthquakes occurring within days of each other did not vary with magnitude. Volcanic tremor events and lower-frequency earthquakes displayed consistent spectral peaks, although higher frequencies were more favorably excited during tremor than earthquakes.
NASA Technical Reports Server (NTRS)
Davarian, F.
1994-01-01
The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
Introducing the Mean Absolute Deviation "Effect" Size
ERIC Educational Resources Information Center
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
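A sketch of an effect size built on the mean absolute deviation rather than the standard deviation (scaling by the control group's MAD is an assumption here, not necessarily the article's exact definition):

    def mean_absolute_deviation(values):
        m = sum(values) / len(values)
        return sum(abs(v - m) for v in values) / len(values)

    def mad_effect_size(treatment, control):
        """Difference in group means divided by the mean absolute deviation
        of the control group (one common convention; an assumption here)."""
        diff = sum(treatment) / len(treatment) - sum(control) / len(control)
        return diff / mean_absolute_deviation(control)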
Hopper, John L.
2015-01-01
How can the “strengths” of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors—and that is how risk gradients are interpreted—so should the presentation of risk gradients. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
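For a hypothetical illustration of the formula: if a factor confers RR = 2.0 over A = 4 adjusted standard deviations, then OPERA = exp[ln(2.0)/4] = 2.0^(1/4) ≈ 1.19, i.e. roughly 19% higher odds per adjusted standard deviation.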
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices for plant mapping is needed to provide the best information on plant conditions. The methods used in this research are the standard deviation and linear regression. This research tried to determine the vegetation indices suitable for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS. The standard deviation analysis of the 23 vegetation indices with 27 samples resulted in the six highest standard deviations of vegetation indices, namely GRVI, SR, NLI, SIPI, GEMI, and LAI. The standard deviation values are 0.47, 0.43, 0.30, 0.17, 0.16, and 0.13. The regression correlation analysis of the 23 vegetation indices with 280 samples resulted in six vegetation indices, namely NDVI, ENDVI, GDVI, VARI, LAI, and SIPI; this selection was based on the regression correlation with an R² threshold of 0.8. The combined analysis of the standard deviation and the regression correlation yielded five vegetation indices, namely NDVI, ENDVI, GDVI, LAI, and SIPI. The results of the analysis of both methods show that a combination of the two methods is needed to produce a good analysis of sugarcane conditions. This was verified through field surveys, which showed good results for the prediction of microseepages.
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.; Kwan, H. W.
1999-01-01
The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two-tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations as Fail (Out of Tolerance). Results: To date, the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from the ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ). Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously, with the TLD audit, the Pass (Optimal Level) and Fail (Out of Tolerance) levels were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty-budget-derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit, 94% of the audits have resulted in Pass (Optimal Level) and 6% in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit or an on-site ion chamber measurement.
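A compact way to express the two-tiered action-level logic described above, using the 1.3% combined standard uncertainty quoted for the OSLD audit (the function name is ours and purely illustrative):

    def audit_outcome(deviation_pct, sigma=1.3):
        """Two-tier classification of an audit deviation (in percent), with
        sigma the combined standard uncertainty (1.3% for the OSLD audit)."""
        d = abs(deviation_pct)
        if d <= 2 * sigma:                  # within 2 sigma (<= 2.6%)
            return "Pass (Optimal Level)"
        if d <= 3 * sigma:                  # within 3 sigma (<= 3.9%)
            return "Pass (Action Level)"
        return "Fail (Out of Tolerance)"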
Ponikowski, P; Anker, S D; Chua, T P; Szelemej, R; Piepoli, M; Adamopoulos, S; Webb-Peploe, K; Harrington, D; Banasiak, W; Wrabec, K; Coats, A J
1997-06-15
After acute myocardial infarction, depressed heart rate variability (HRV) has been proven to be a powerful independent predictor of a poor outcome. Although patients with chronic congestive heart failure (CHF) also have markedly impaired HRV, the prognostic value of HRV analysis in these patients remains unknown. The aim of this study was to investigate whether HRV parameters could predict survival in 102 consecutive patients with moderate to severe CHF (90 men, mean age 58 years, New York Heart Association [NYHA] class II to IV, CHF due to idiopathic dilated cardiomyopathy in 24 patients and ischemic heart disease in 78 patients, ejection fraction [EF] 26%, peak oxygen consumption 16.9 ml/kg/min) after exclusion of patients with atrial fibrillation, diabetes, or chronic renal failure. In the prognostic analysis (Cox proportional-hazards model, Kaplan-Meier survival analysis), the following factors were investigated: age, CHF etiology, NYHA class, EF, peak oxygen consumption, presence of ventricular tachycardia on Holter monitoring, and HRV measures derived from 24-hour electrocardiography monitoring, calculated in the time domain (standard deviation of all normal RR intervals [SDNN], standard deviation of 5-minute mean RR intervals [SDANN], mean of all 5-minute standard deviations of RR intervals [SD], root-mean-square of differences of successive RR intervals [rMSSD], and percentage of adjacent RR intervals differing by >50 ms [pNN50]) and the frequency domain (total power [TP], power within the low-frequency band [LF], and power within the high-frequency band [HF]). During follow-up of 584 +/- 405 days (365 days in all who survived), 19 patients (19%) died (mean time to death 307 +/- 315 days, range 3 to 989). Cox univariate analysis identified the following factors as predictors of death: NYHA class (p = 0.003), peak oxygen consumption (p = 0.01), EF (p = 0.02), ventricular tachycardia on Holter monitoring (p = 0.05), and, among HRV measures, SDNN (p = 0.004), SDANN (p = 0.003), SD (p = 0.02), and LF (p = 0.003). In multivariate analysis, HRV parameters (SDNN, SDANN, LF) were found to predict survival independently of NYHA functional class, EF, peak oxygen consumption, and ventricular tachycardia on Holter monitoring. The Kaplan-Meier survival curves revealed SDNN < 100 ms to be a useful risk factor; 1-year survival in patients with SDNN < 100 ms was 78%, compared with 95% in those with SDNN > 100 ms (p = 0.008). The coexistence of SDNN < 100 ms and a peak oxygen consumption < 14 ml/kg/min allowed identification of a group of 18 patients with a particularly poor prognosis (1-year survival 63% vs 94% in the remaining patients, p < 0.001). We conclude that depressed HRV on 24-hour ambulatory electrocardiography monitoring is an independent risk factor for a poor prognosis in patients with CHF. Whether analysis of HRV could be recommended in the risk stratification for better management of patients with CHF needs further investigation.
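For reference, the time-domain indices named above can be computed from a list of normal-to-normal RR intervals roughly as follows (SDANN and SD additionally require splitting the recording into 5-minute segments, which this sketch omits):

    import math

    def time_domain_hrv(rr_ms):
        """SDNN, rMSSD and pNN50 from normal-to-normal RR intervals in ms."""
        n = len(rr_ms)
        mean_rr = sum(rr_ms) / n
        sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        pnn50 = 100.0 * sum(abs(d) > 50 for d in diffs) / len(diffs)
        return {"SDNN": sdnn, "rMSSD": rmssd, "pNN50": pnn50}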
Inferences on hydrogen bond networks in water from isopermitive frequency investigations.
Geethu, P M; Ranganathan, Venketesh T; Satapathy, Dillip K Kumar
2018-06-26
Intermolecular hydrogen bonds play a crucial role in determining the unique characteristics of liquid water. We present low-frequency (1 Hz - 40 MHz) dielectric spectroscopic investigations on water in the presence and absence of added solutes at different temperatures from 10°C to 60°C. The intersection points of temperature-dependent permittivity contours in the vicinity of the isopermitive frequency (IPF) in water are recorded, and their properties are presumed to be related to the extent of hydrogen bond networks in water. The IPF is defined as the frequency at which the relative permittivity of water is almost independent of temperature. The set of intersection points of temperature-dependent permittivity contours in the vicinity of the IPF is characterized by the mean (M_IPF) and the root-mean-square deviation/standard deviation (σ_IPF) associated with the IPF. The tunability of M_IPF by the addition of NaCl salt emphasizes the strong correlation between the concentration of ions in water and M_IPF. The σ_IPF is surmised to be related to the orientational correlations of water dipoles as well as to the intermolecular hydrogen bond networks in water. Further, alterations in σ_IPF are observed with the addition of kosmotropic and chaotropic solutes into water and are thought to arise from the restructuring of hydrogen bond networks in water in the presence of added solutes. Notably, the solute-induced reconfiguration of hydrogen bond networks in water, or the often-discussed structure making/breaking effects of the added solutes, can be inferred, albeit qualitatively, by examining M_IPF and σ_IPF. Further, the Gaussian-deconvoluted OH-stretching modes present in the Raman spectra of water and aqueous solutions of IPA and DMF strongly endorse the structural rearrangements occurring in water in the presence of kosmotropes and chaotropes and are in line with the results derived from the root-mean-square deviation in IPF. © 2018 IOP Publishing Ltd.
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
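One pairwise-difference identity of this kind (a standard result, though not necessarily the exact representation used in the article) writes the sample variance without the mean:

    s^2 = \frac{1}{n(n-1)} \sum_{i<j} (x_i - x_j)^2

so for n = 3 observations a, b, c, s^2 = [(a-b)^2 + (b-c)^2 + (c-a)^2] / 6. For example, for the integers 2, 5, 9 this gives (9 + 16 + 49)/6 = 74/6 ≈ 12.33, matching the usual definition.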
Estimating maize water stress by standard deviation of canopy temperature in thermal imagery
USDA-ARS?s Scientific Manuscript database
A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...
Analysis of the Precision of Pulsar Time Clock Modeltwo
NASA Astrophysics Data System (ADS)
Zhao, Cheng-shi; Tong, Ming-lei; Gao, Yu-ping; Yang, Ting-gao
2018-04-01
Millisecond pulsars have a very high rotation stability, which can be applied to many research fields, such as the establishment of a pulsar time standard, the detection of gravitational waves, spacecraft navigation using X-ray pulsars, and so on. In this paper, we employ two millisecond pulsars, PSR J0437-4715 and J1713+0747, observed by the International Pulsar Timing Array (IPTA), to analyze the precision of the pulsar clock parameters and the prediction accuracy of the pulse time of arrival (TOA). It is found that the uncertainty of the spin frequency is 10⁻¹⁵ Hz, the uncertainty of the first derivative of the spin frequency is 10⁻²³ s⁻², and the precision of the measured rotational parameters increases by one order of magnitude with the observational data accumulated every 4∼5 years. In addition, the errors of the TOAs predicted over 4.8 yr by the clock model established from 10 yr of J0437-4715 data are less than 1 μs. Therefore, one can use the pulsar time standard to calibrate an atomic clock and keep the atomic time from deviating from TT (Terrestrial Time) by more than 1 μs within 4.8 yr.
Functional Covariance Networks: Obtaining Resting-State Networks from Intersubject Variability
Gohel, Suril; Di, Xin; Walter, Martin; Biswal, Bharat B.
2012-01-01
In this study, we investigate a new approach for examining the separation of the brain into resting-state networks (RSNs) on a group level using resting-state parameters (amplitude of low-frequency fluctuation [ALFF], fractional ALFF [fALFF], the Hurst exponent, and signal standard deviation). Spatial independent component analysis is used to reveal covariance patterns of the relevant resting-state parameters (not the time series) across subjects that are shown to be related to known, standard RSNs. As part of the analysis, non-resting-state parameters are also investigated, such as the mean of the blood oxygen level-dependent time series and gray matter volume from anatomical scans. We hypothesize that meaningful RSNs will primarily be elucidated by analysis of the resting-state functional connectivity (RSFC) parameters and not by non-RSFC parameters. First, this shows the presence of a common influence underlying individual RSFC networks revealed through low-frequency fluctuation (LFF) parameter properties. Second, this suggests that the LFFs and RSFC networks have neurophysiological origins. Several of the components determined from resting-state parameters in this manner correlate strongly with known resting-state functional maps, and we term these “functional covariance networks”. PMID:22765879
Scaling laws and fluctuations in the statistics of word frequencies
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words as a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of the lexical richness of texts with different lengths.
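A toy version of the sampling baseline the paper argues against can be simulated directly: draw fixed-length texts from a Zipf-like frequency distribution and look at the mean and standard deviation of the vocabulary size (the vocabulary size, text length, and exponent below are arbitrary choices, not the paper's settings):

    import random
    from collections import Counter

    ranks = list(range(1, 50_001))                 # hypothetical vocabulary of 50,000 types
    weights = [1.0 / r for r in ranks]             # Zipf-like word frequencies

    def vocabulary_size(n_words):
        """Number of distinct word types in a random text of n_words tokens."""
        tokens = random.choices(ranks, weights=weights, k=n_words)
        return len(Counter(tokens))

    sizes = [vocabulary_size(20_000) for _ in range(50)]
    mean_v = sum(sizes) / len(sizes)
    sd_v = (sum((v - mean_v) ** 2 for v in sizes) / (len(sizes) - 1)) ** 0.5
    # Heaps' law: mean_v grows sublinearly with text length; in this simple
    # model sd_v stays small relative to mean_v, whereas the paper finds
    # sd_v proportional to mean_v (Taylor's law) in real corpora.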
Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.
Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan
2016-07-01
This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.
McCormick, Matthew M.; Madsen, Ernest L.; Deaner, Meagan E.; Varghese, Tomy
2011-01-01
Absolute backscatter coefficients in tissue-mimicking phantoms were experimentally determined in the 5–50 MHz frequency range using a broadband technique. A focused broadband transducer from a commercial research system, the VisualSonics Vevo 770, was used with two tissue-mimicking phantoms. The phantoms differed regarding the thin layers covering their surfaces to prevent desiccation and regarding glass bead concentrations and diameter distributions. Ultrasound scanning of these phantoms was performed through the thin layer. To avoid signal saturation, the power spectra obtained from the backscattered radio frequency signals were calibrated by using the signal from a liquid planar reflector, a water-brominated hydrocarbon interface with acoustic impedance close to that of water. Experimental values of absolute backscatter coefficients were compared with those predicted by the Faran scattering model over the frequency range 5–50 MHz. The mean percent difference and standard deviation was 54% ± 45% for the phantom with a mean glass bead diameter of 5.40 μm and was 47% ± 28% for the phantom with 5.16 μm mean diameter beads. PMID:21877789
Frequency assignments for HFDF receivers in a search and rescue network
NASA Astrophysics Data System (ADS)
Johnson, Krista E.
1990-03-01
This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined into a single additive objective function. The resulting single-objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation of the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
EFFECTIVE INDICES FOR MONITORING MENTAL WORKLOAD WHILE PERFORMING MULTIPLE TASKS.
Hsu, Bin-Wei; Wang, Mao-Jiun J; Chen, Chi-Yuan; Chen, Fang
2015-08-01
This study identified several physiological indices that can accurately monitor mental workload while participants performed multiple tasks with the strategy of maintaining stable performance and maximizing accuracy. Thirty male participants completed three 10-min. simulated multitasks: MATB (Multi-Attribute Task Battery) with three workload levels. Twenty-five commonly used mental workload measures were collected, including heart rate, 12 HRV (heart rate variability), 10 EEG (electroencephalography) indices (α, β, θ, α/θ, θ/β from O1-O2 and F4-C4), and two subjective measures. Analyses of index sensitivity showed that two EEG indices, θ and α/θ (F4-C4), one time-domain HRV-SDNN (standard deviation of inter-beat intervals), and four frequency-domain HRV: VLF (very low frequency), LF (low frequency), %HF (percentage of high frequency), and LF/HF were sensitive to differentiate high workload. EEG α/θ (F4-C4) and LF/HF were most effective for monitoring high mental workload. LF/HF showed the highest correlations with other physiological indices. EEG α/θ (F4-C4) showed strong correlations with subjective measures across different mental workload levels. Operation strategy would affect the sensitivity of EEG α (F4-C4) and HF.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
NASA Technical Reports Server (NTRS)
Whealton, J. H.; Mason, E. A.
1973-01-01
An asymptotic solution of the Boltzmann equation is developed for ICR absorption, without restrictions on the ion-neutral collision frequency or mass ratio. Velocity dependence of the collision frequency causes deviations from Lorentzian line shape.
A 920-kilometer optical fiber link for frequency metrology at the 19th decimal place.
Predehl, K; Grosche, G; Raupach, S M F; Droste, S; Terra, O; Alnis, J; Legero, Th; Hänsch, T W; Udem, Th; Holzwarth, R; Schnatz, H
2012-04-27
Optical clocks show unprecedented accuracy, surpassing that of previously available clock systems by more than one order of magnitude. Precise intercomparisons will enable a variety of experiments, including tests of fundamental quantum physics and cosmology and applications in geodesy and navigation. Well-established, satellite-based techniques for microwave dissemination are not adequate to compare optical clocks. Here, we present phase-stabilized distribution of an optical frequency over 920 kilometers of telecommunication fiber. We used two antiparallel fiber links to determine their fractional frequency instability (modified Allan deviation) to 5 × 10(-15) in a 1-second integration time, reaching 10(-18) in less than 1000 seconds. For long integration times τ, the deviation from the expected frequency value has been constrained to within 4 × 10(-19). The link may serve as part of a Europe-wide optical frequency dissemination network.
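For orientation, the (unmodified, overlapping) Allan deviation quoted in such link characterizations can be estimated from fractional-frequency samples roughly as below; the modified Allan deviation reported in the paper adds a further phase-averaging step not shown here:

    import math

    def overlapping_allan_deviation(y, m):
        """Overlapping Allan deviation of fractional-frequency data y taken
        every tau0 seconds, evaluated at averaging time tau = m * tau0."""
        # averages over windows of length m, starting at every sample
        ybar = [sum(y[i:i + m]) / m for i in range(len(y) - m + 1)]
        diffs = [ybar[i + m] - ybar[i] for i in range(len(ybar) - m)]
        avar = sum(d * d for d in diffs) / (2.0 * len(diffs))
        return math.sqrt(avar)

    # For white frequency noise, sigma_y(tau) should fall roughly as 1/sqrt(m):
    # import random; y = [random.gauss(0.0, 1e-15) for _ in range(10_000)]
    # overlapping_allan_deviation(y, 1), overlapping_allan_deviation(y, 100)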
Electromagnetic Compatibility Testing Studies
NASA Technical Reports Server (NTRS)
Trost, Thomas F.; Mitra, Atindra K.
1996-01-01
This report discusses results on analytical models and on the measurement and simulation of statistical properties from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna to D-dot sensor, were experimentally validated in our chamber. Two examples are presented of the measurement and calculation of chamber Q, one for each of the models. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points validate a theoretical spatial correlation function on the chamber walls and also yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation, employing a rectangular cavity with a moving wall, shows agreement with the measurements. It was determined that the lowest frequency at which the theoretical spatial correlation function is valid in our chamber is considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions have been made for future studies related to EMC testing.
A single-frequency double-pulse Ho:YLF laser for CO2-lidar
NASA Astrophysics Data System (ADS)
Kucirek, P.; Meissner, A.; Eiselt, P.; Höfer, M.; Hoffmann, D.
2016-03-01
A single-frequency Q-switched Ho:YLF laser oscillator with a bow-tie ring resonator, specifically designed for high spectral stability, is reported. It is pumped with a dedicated Tm:YLF laser at 1.9 μm. The ramp-and-fire method with a DFB diode laser as a reference is employed for generating single-frequency emission at 2051 nm. The laser is tested with different operating modes, including cw pumping at different pulse repetition frequencies and gain-switched pumping. The standard deviation of the emission wavelength of the laser pulses is measured with the heterodyne technique at the different operating modes. Its dependence on the single-pass gain in the crystal and on the cavity finesse is investigated. At specific operating points the spectral stability of the laser pulses is 1.5 MHz (rms over 10 s). Under gain-switched pumping with a 20% duty cycle and 2 W of average pump power, stable single-frequency pulse pairs with a temporal separation of 580 μs are produced at a repetition rate of 50 Hz. The measured pulse energy is 2 mJ (<2% rms error on the pulse energy over 10 s) and the measured pulse duration is approximately 20 ns for each of the two pulses in the burst.
Effect of HeartMate left ventricular assist device on cardiac autonomic nervous activity.
Kim, S Y; Montoya, A; Zbilut, J P; Mawulawde, K; Sullivan, H J; Lonchyna, V A; Terrell, M R; Pifarré, R
1996-02-01
Clinical performance of a left ventricular assist device is assessed via hemodynamic parameters and end-organ function. This study examined the effect of a left ventricular assist device on human neurophysiology. It evaluated the time course of change in the cardiac autonomic activity of 3 patients during support with a left ventricular assist device before cardiac transplantation. Cardiac autonomic activity was determined by power spectral analysis of short-term heart rate variability. The heart rate variability before cardiac transplantation was compared with that on the day before left ventricular assist device implantation. The standard deviation of the mean of the R-R intervals of the electrocardiogram, an index of vagal activity, increased to 27 +/- 7 ms from 8 +/- 0.6 ms. The modulus of the power spectral components increased. Low-frequency (sympathetic activity) and high-frequency (vagal activity) power increased by a mean of 9 and 22 times their respective baseline values (low-frequency power, 5.2 +/- 3.0 ms²; high-frequency power, 2.1 +/- 0.7 ms²). The low- over high-frequency power ratio decreased substantially, indicating an improvement of cardiac sympatho-vagal balance. The study results suggest that left ventricular assist device support before cardiac transplantation may exert a favorable effect on cardiac autonomic control in patients with severe heart failure.
Antenna pattern measurements to characterize the out-of-band behavior of reflector antennas
NASA Astrophysics Data System (ADS)
Cown, B. J.; Weaver, E. E.; Ryan, C. E., Jr.
1983-12-01
Research was conducted to collect and describe out-of-band antenna pattern data. The research efforts were devoted (1) to deriving valid measured data for a reflector antenna at out-of-band frequencies spanning intervals around the second and third harmonics of the in-band design frequency, and (2) to statistically characterizing the measured data. The second harmonic data were collected for both polarization senses at the out-of-band frequencies of 5.5 GHz to 7.5 GHz in steps of 0.1 GHz. The third harmonic data were collected for both polarization senses at the out-of-band frequencies of 8.0 GHz to 10.0 GHz in steps of 0.1 GHz. Additionally, in-band data were collected at 2.9, 3.0, and 3.1 GHz for both polarization senses. The measured data were collected on the Georgia Tech compact antenna range test facility with the aid of an automated data logger system designed expressly for efficient collection of broadband antenna data. The pattern data, recorded directly on magnetic disks, were analyzed (1) to compute average gain and standard deviation over selected angular sectors, (2) to construct cumulative probability curves, and (3) to specify the peak gain and the angular coordinates of the peak at each frequency.
YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuiver, M.; Deevey, E.S.
1961-01-01
Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of C14 in lake waters and other lacustrine materials, now normalized for C13 content. The newly accepted convention is followed in expressing normalized C14 values as Δ = δC14 − (2 δC13 + 50)[1 + (δC14/1000)], where Δ is the per mil deviation of the C14 of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of the C14 content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; δC14 is the measured deviation from 95% of the NBS standard, and δC13 is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial C14 resulting from nuclear tests. (auth)
Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio
2014-06-01
Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.
The Primordial Inflation Explorer (PIXIE)
NASA Technical Reports Server (NTRS)
Kogut, Alan; Chluba, Jens; Fixsen, Dale J.; Meyer, Stephan; Spergel, David
2016-01-01
The Primordial Inflation Explorer is an Explorer-class mission to open new windows on the early universe through measurements of the polarization and absolute frequency spectrum of the cosmic microwave background. PIXIE will measure the gravitational-wave signature of primordial inflation through its distinctive imprint in linear polarization, and characterize the thermal history of the universe through precision measurements of distortions in the blackbody spectrum. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning over 7 octaves in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded non-imaging optics feed a polarizing Fourier Transform Spectrometer to produce a set of interference fringes, proportional to the difference spectrum between orthogonal linear polarizations from the two input beams. Multiple levels of symmetry and signal modulation combine to reduce systematic errors to negligible levels. PIXIE will map the full sky in Stokes I, Q, and U parameters with angular resolution 2.6 degrees and sensitivity 70 nK per 1 degree square pixel. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10⁻³ at 5 standard deviations. The PIXIE mission complements anticipated ground-based polarization measurements such as CMB-S4, providing a cosmic-variance-limited determination of the large-scale E-mode signal to measure the optical depth, constrain models of reionization, and provide a firm detection of the neutrino mass (the last unknown parameter in the Standard Model of particle physics). In addition, PIXIE will measure the absolute frequency spectrum to characterize deviations from a blackbody with sensitivity 3 orders of magnitude beyond the seminal COBE/FIRAS limits. The sky cannot be black at this level; the expected results will constrain physical processes ranging from inflation to the nature of the first stars and the physical conditions within the interstellar medium of the Galaxy. We describe the PIXIE instrument and mission architecture required to measure the CMB to the limits imposed by astrophysical foregrounds.
Passive correlation ranging of a geostationary satellite using DVB-S payload signals.
NASA Astrophysics Data System (ADS)
Shakun, Leonid; Shulga, Alexandr; Sybiryakova, Yevgeniya; Bushuev, Felix; Kaliuzhnyi, Mykola; Bezrukovs, Vladislavs; Moskalenko, Sergiy; Kulishenko, Vladislav; Balagura, Oleg
2016-07-01
Passive correlation ranging (PaCoRa) of geostationary satellites is now considered an alternative to tone ranging (https://artes.esa.int/search/node/PaCoRa). The PaCoRa method has been employed at the Research Institute "Nikolaev Astronomical Observatory" since the first experiment in August 2011 with two stations spatially separated by 150 km. PaCoRa has been considered as an independent method for tracking the future Ukrainian geostationary satellite "Lybid'". At present, the radio engineering complex (RC) for passive ranging consists of five spatially separated stations receiving digital satellite television and a data processing center located in Mykolaiv. The stations are located in Kyiv, Kharkiv, Mukacheve, Mykolaiv (Ukraine) and in Ventspils (Latvia). Each station has identical equipment, which allows synchronous recording of fragments of the DVB-S signal from the quadrature detector output of a satellite television receiver. The fragments are recorded every second. Synchronization of the stations is performed using GPS receivers. Samples of the complex signal obtained in this way are archived and sent to the data processing center over the Internet. There, the time differences of arrival (TDOA) for pairs of stations are determined by correlation processing of the received signals. The TDOA values, measured every second, are used for orbit determination (OD) of the satellite. The results of orbit determination of the geostationary telecommunication satellite "Eutelsat-13B" (13° East), obtained during about four months of observations in 2015, are presented in the report, together with the TDOA and OD accuracies. The single-measurement error (1 sigma) of the TDOA is about 8.7 ns for all pairs of stations. Standard deviations and average values of the residuals between the observed TDOA and the TDOA computed using orbit elements obtained from optical measurements are estimated for the pairs Kharkiv-Mykolaiv and Mukacheve-Mykolaiv. The standard deviations do not exceed 10 ns for both pairs, and the average values are +10 ns and -106 ns for Kharkiv-Mykolaiv and Mukacheve-Mykolaiv, respectively. We discuss the residuals between the observed TDOA and TDOA estimates calculated from fitted models of satellite motion: the SGP4/SDP4 model and a model based on numerical integration of the equations of motion that accounts for the geopotential and the perturbations from the Moon and the Sun. The residuals from the SGP4/SDP4 model show periodic deviations due to the inaccuracy of that model; as a result, the estimated standard deviation of the satellite position is about 60 m at the epoch of the SGP4/SDP4 orbit elements. The residuals for the numerical model over a one-day interval show no low-frequency deviation; in this case the estimated standard deviation of the satellite position is about 12 m at the epoch of the numerical orbit elements. Keywords: DVB-S, geostationary satellite, orbit determination, passive ranging.
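The correlation processing step can be sketched in a few lines. The snippet below is illustrative only, with made-up sampling parameters and synthetic data rather than real DVB-S recordings; it estimates the TDOA between two stations' complex baseband fragments as the lag that maximizes the magnitude of their cross-correlation.

    import numpy as np

    def estimate_tdoa(x, y, sample_rate_hz):
        """Return the delay of y relative to x, in seconds, taken from the
        peak magnitude of the complex cross-correlation."""
        corr = np.correlate(y, x, mode="full")        # complex cross-correlation
        lags = np.arange(-len(x) + 1, len(y))         # lag axis in samples
        return lags[np.argmax(np.abs(corr))] / sample_rate_hz

    # Illustrative test: a noise-like signal fragment delayed by 25 samples.
    rng = np.random.default_rng(0)
    fs = 10e6                                         # assumed 10 MHz sampling
    x = rng.normal(size=4096) + 1j * rng.normal(size=4096)
    y = np.roll(x, 25)
    print(estimate_tdoa(x, y, fs) * 1e9, "ns")        # about 2500 ns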
User's Manual for Program PeakFQ, Annual Flood-Frequency Analysis Using Bulletin 17B Guidelines
Flynn, Kathleen M.; Kirby, William H.; Hummel, Paul R.
2006-01-01
Estimates of flood flows having given recurrence intervals or probabilities of exceedance are needed for design of hydraulic structures and floodplain management. Program PeakFQ provides estimates of instantaneous annual-maximum peak flows having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (annual-exceedance probabilities of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002, respectively). As implemented in program PeakFQ, the Pearson Type III frequency distribution is fit to the logarithms of instantaneous annual peak flows following Bulletin 17B guidelines of the Interagency Advisory Committee on Water Data. The parameters of the Pearson Type III frequency curve are estimated by the logarithmic sample moments (mean, standard deviation, and coefficient of skewness), with adjustments for low outliers, high outliers, historic peaks, and generalized skew. This documentation provides an overview of the computational procedures in program PeakFQ, provides a description of the program menus, and provides an example of the output from the program.
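A minimal sketch of the Bulletin 17B moment-fitting idea, without the low-outlier, historic-peak, and generalized-skew adjustments that PeakFQ applies, can be written with scipy's Pearson Type III distribution; the peak-flow array here is hypothetical.

    import numpy as np
    from scipy.stats import pearson3

    # Hypothetical annual peak flows in cubic feet per second.
    peaks = np.array([1200., 3400., 980., 2500., 4100., 1800., 2900., 1500., 3600., 2200.])

    logq = np.log10(peaks)
    n = logq.size
    mean = logq.mean()
    std = logq.std(ddof=1)
    # Sample coefficient of skewness of the logarithms (station skew).
    skew = n * np.sum((logq - mean) ** 3) / ((n - 1) * (n - 2) * std ** 3)

    # Flood quantiles for selected annual exceedance probabilities.
    aep = np.array([0.50, 0.10, 0.01])                  # 2-, 10-, and 100-year events
    q_log = pearson3.ppf(1.0 - aep, skew, loc=mean, scale=std)
    print(10.0 ** q_log)                                # estimated peak flows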
NASA Astrophysics Data System (ADS)
Rana, K. P. S.; Kumar, Vineet; Mendiratta, Jatin
2017-11-01
One of the most elementary concepts in the freshman Electrical Engineering subject is the Resistance-Inductance-Capacitance (RLC) circuit, that is, its time and frequency domain responses. For a beginner, it is generally difficult to understand and appreciate the step and frequency responses, particularly the resonance. This paper proposes a student-friendly teaching and learning approach by incorporating the versatile LabVIEW software along with the Educational Laboratory Virtual Instrumentation Suite hardware for studying the RLC circuit time and frequency domain responses. The proposed approach offers an interactive laboratory experiment where students can model circuits in simulation, build hardware circuits on a prototyping board, and then compare their performances. The theoretical simulations and the obtained experimental data are found to be in very close agreement, thereby enhancing the conviction of students. Finally, the proposed methodology was also subjected to an assessment of learning outcomes based on student feedback, and an average score of 8.05 out of 10 with a standard deviation of 0.471 was received, indicating the overall satisfaction of the students.
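As a rough companion to the frequency response discussed above (not part of the paper's LabVIEW material; component values are arbitrary examples), the short sketch below computes the magnitude response of a series RLC circuit measured across the resistor, whose peak marks the resonance near f0 = 1/(2*pi*sqrt(L*C)).

    import numpy as np

    R, L, C = 100.0, 10e-3, 100e-9          # example values: 100 ohm, 10 mH, 100 nF
    f = np.logspace(2, 6, 500)              # 100 Hz to 1 MHz
    w = 2 * np.pi * f

    # Series RLC, output taken across R: H(jw) = R / (R + j*w*L + 1/(j*w*C))
    H = R / (R + 1j * w * L + 1.0 / (1j * w * C))
    f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))

    print("resonant frequency ~ %.0f Hz" % f0)
    print("peak |H| = %.3f near %.0f Hz" % (np.abs(H).max(), f[np.argmax(np.abs(H))]))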
DOE Office of Scientific and Technical Information (OSTI.GOV)
Argence, B.; Halloin, H.; Jeannin, O.
We have developed a 532 nm iodine-stabilized laser system that may be suitable for the LISA mission (Laser Interferometer Space Antenna) or other future spaceborne missions. This system is based on an externally frequency-doubled Nd:YAG laser source and uses the molecular transfer spectroscopy technique for the frequency stabilization. This technique has been optimized for LISA: compactness (less than 1.1 × 1.1 m²), vacuum compatibility, ease of use and initialization, and minimization of the number of active components (acousto-optic modulators are used both for frequency shifting and for phase modulating the pump beam). By locking on the a10 hyperfine component of the R(56)32-0 transition, we find an Allan standard deviation (σ) of 3 × 10⁻¹⁴ at 1 s and σ < 2 × 10⁻¹⁴ for 20 s ≤ τ ≤ 10³ s. In terms of linear spectral density, this roughly corresponds to a stability better than 30 Hz/√Hz between 10⁻² and 1 Hz, with a stability decrease close to 1/f below 10 mHz.
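For context (this is not the authors' code and the data are synthetic), the Allan standard deviation quoted above can be computed from a series of fractional frequency averages y_i taken at sampling interval tau0 with the standard non-overlapping estimator; a minimal numpy sketch follows.

    import numpy as np

    def allan_deviation(y, m=1):
        """Non-overlapping Allan deviation at averaging time tau = m*tau0,
        given fractional frequency samples y averaged over tau0."""
        n = (len(y) // m) * m
        y_avg = y[:n].reshape(-1, m).mean(axis=1)     # averages over tau = m*tau0
        diffs = np.diff(y_avg)
        return np.sqrt(0.5 * np.mean(diffs ** 2))

    # Illustrative white-frequency-noise data: sigma_y(tau) falls as 1/sqrt(tau).
    rng = np.random.default_rng(1)
    y = 1e-13 * rng.normal(size=100_000)
    for m in (1, 10, 100):
        print(m, allan_deviation(y, m))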
NASA Astrophysics Data System (ADS)
Kelly, Brandon C.; Hughes, Philip A.; Aller, Hugh D.; Aller, Margo F.
2003-07-01
We introduce an algorithm for applying a cross-wavelet transform to analysis of quasi-periodic variations in a time series and introduce significance tests for the technique. We apply a continuous wavelet transform and the cross-wavelet algorithm to the Pearson-Readhead VLBI survey sources using data obtained from the University of Michigan 26 m paraboloid at observing frequencies of 14.5, 8.0, and 4.8 GHz. Thirty of the 62 sources were chosen to have sufficient data for analysis, having at least 100 data points for a given time series. Of these 30 sources, a little more than half exhibited evidence for quasi-periodic behavior in at least one observing frequency, with a mean characteristic period of 2.4 yr and standard deviation of 1.3 yr. We find that out of the 30 sources, there were about four timescales for every 10 time series, and about half of those sources showing quasi-periodic behavior repeated the behavior in at least one other observing frequency.
Extremely high-accuracy correction of air refractive index using two-colour optical frequency combs
Wu, Guanhao; Takahashi, Mayumi; Arai, Kaoru; Inaba, Hajime; Minoshima, Kaoru
2013-01-01
Optical frequency combs have become an essential tool for distance metrology, showing great advantages compared with traditional laser interferometry. However, there is not yet an appropriate method for air refractive index correction to ensure the high performance of such techniques when they are applied in air. In this study, we developed a novel heterodyne interferometry technique based on two-colour frequency combs for air refractive index correction. In continuous 500-second tests, a stability of 1.0 × 10−11 was achieved in the measurement of the difference in the optical distance between two wavelengths. Furthermore, the measurement results and the calculations are in nearly perfect agreement, with a standard deviation of 3.8 × 10−11 throughout the 10-hour period. The final two-colour correction of the refractive index of air over a path length of 61 m was demonstrated to exhibit an uncertainty better than 1.4 × 10−8, which is the best result ever reported without precise knowledge of environmental parameters. PMID:23719387
Harnessing high-dimensional hyperentanglement through a biphoton frequency comb
NASA Astrophysics Data System (ADS)
Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee
2015-08-01
Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
Power supply conditioning circuit
NASA Technical Reports Server (NTRS)
Primas, L. E.; Loveland, R.
1987-01-01
A power supply conditioning circuit that can reduce Periodic and Random Deviations (PARD) on the output voltages of dc power supplies to -150 dBV from dc to several kHz with no measurable periodic deviations is described. The PARD for a typical commercial low noise power supply is -74 dBV for frequencies above 20 Hz and is often much worse at frequencies below 20 Hz. The power supply conditioning circuit described here relies on the large differences in the dynamic impedances of a constant current diode and a zener diode to establish a dc voltage with low PARD. Power supplies with low PARD are especially important in circuitry involving ultrastable frequencies for the Deep Space Network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. Here, in the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
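A compact illustration of the repeatability and reproducibility statistics named above is given below. It is a sketch of the usual E691-style consistency formulas with made-up solar reflectance values, not the actual ILS data.

    import numpy as np

    # Hypothetical aged solar reflectance: rows = laboratories, columns = replicates.
    data = np.array([
        [0.62, 0.63, 0.61, 0.62],
        [0.64, 0.63, 0.65, 0.64],
        [0.60, 0.61, 0.62, 0.61],
    ])
    n_labs, n_rep = data.shape

    lab_means = data.mean(axis=1)
    lab_vars = data.var(axis=1, ddof=1)

    s_r = np.sqrt(lab_vars.mean())                        # repeatability std dev
    s_xbar = lab_means.std(ddof=1)                        # std dev of lab means
    s_L_sq = max(s_xbar**2 - s_r**2 / n_rep, 0.0)         # between-lab component
    s_R = np.sqrt(s_L_sq + s_r**2)                        # reproducibility std dev

    print("sr = %.4f, sR = %.4f" % (s_r, s_R))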
Testing the Standard Model with the Primordial Inflation Explorer
NASA Technical Reports Server (NTRS)
Kogut, Alan J.
2011-01-01
The Primordial Inflation Explorer is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10⁻³ at 5 standard deviations. The rich PIXIE data set will also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy. I describe the PIXIE instrument and mission architecture needed to detect the inflationary signature using only 4 semiconductor bolometers.
Multifocal visual evoked potentials for early glaucoma detection.
Weizer, Jennifer S; Musch, David C; Niziol, Leslie M; Khan, Naheed W
2012-07-01
To compare multifocal visual evoked potentials (mfVEP) with other detection methods in early open-angle glaucoma. Ten patients with suspected glaucoma and 5 with early open-angle glaucoma underwent mfVEP, standard automated perimetry (SAP), short-wave automated perimetry, frequency-doubling technology perimetry, and nerve fiber layer optical coherence tomography. Nineteen healthy control subjects underwent mfVEP and SAP for comparison. Comparisons between groups involving continuous variables were made using independent t tests; for categorical variables, Fisher's exact test was used. Monocular mfVEP cluster defects were associated with an increased SAP pattern standard deviation (P = .0195). Visual fields that showed interocular mfVEP cluster defects were more likely to also show superior quadrant nerve fiber layer thinning by OCT (P = .0152). Multifocal visual evoked potential cluster defects are associated with a functional and an anatomic measure that both relate to glaucomatous optic neuropathy. Copyright 2012, SLACK Incorporated.
47 CFR 101.811 - Modulation requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... signaling on frequencies below 500 MHz is not authorized. (b) When amplitude modulation is used, the... frequency modulation is used for single channel radiotelephony on frequencies below 500 MHz, the deviation... 47 Telecommunication 5 2010-10-01 2010-10-01 false Modulation requirements. 101.811 Section 101...
Selection and Classification Using a Forecast Applicant Pool.
ERIC Educational Resources Information Center
Hendrix, William H.
The document presents a forecast model of the future Air Force applicant pool. By forecasting applicants' quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee could be compared to the forecasted pool. The data used to develop the model consisted of means, standard deviation, and…
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
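The bearing computation underlying a crossed-loop direction finder is essentially a two-argument arctangent of the simultaneously sampled field components. The schematic example below uses fabricated field samples and ignores the sign-convention and 180-degree ambiguity details that a real direction finder must resolve.

    import numpy as np

    def bearing_deg(b_ns, b_ew):
        """Bearing east of north, in degrees, from simultaneously sampled
        north-south and east-west loop magnetic-field components."""
        return np.degrees(np.arctan2(b_ew, b_ns)) % 360.0

    # Fabricated samples at a few microseconds after return-stroke initiation.
    print(bearing_deg(0.80, 0.46))   # about 30 degrees east of north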
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated through the tissue, within the therapeutic window. One of the significant shortcomings of the current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his or her health status which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with standard deviation minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.
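To make the definitions concrete, the short numpy sketch below (with arbitrary simulated data) computes the generalized variance and the Wilks standard deviation; as a stand-in for the paper's uncorrelation index, whose exact definition is not given here, it also reports the determinant of the correlation matrix, which equals 1 for uncorrelated components and 0 for linearly dependent ones.

    import numpy as np

    rng = np.random.default_rng(2)
    # Simulated 3-dimensional random vector with correlated components.
    cov_true = np.array([[1.0, 0.6, 0.2],
                         [0.6, 2.0, 0.5],
                         [0.2, 0.5, 1.5]])
    x = rng.multivariate_normal(np.zeros(3), cov_true, size=10_000)

    cov = np.cov(x, rowvar=False)
    generalized_variance = np.linalg.det(cov)
    wilks_std = np.sqrt(generalized_variance)

    # Illustrative overall-correlation measure: determinant of the correlation matrix.
    corr = np.corrcoef(x, rowvar=False)
    uncorrelation = np.linalg.det(corr)

    print(wilks_std, uncorrelation)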
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul to rest in peace. Bolton's ratios help in estimating overbite, overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships and identification of occlusal misfit produced by tooth size discrepancies. To determine any difference in tooth size discrepancy in anterior as well as overall ratio in different malocclusions and comparison with Bolton's study. After measuring the teeth on all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations. The results were also subjected to statistical analysis. Results show that the mean and standard deviations of ideal occlusion cases are comparable with those Bolton but, when the mean and standard deviation of malocclusion groups are compared with those of Bolton, the values of standard deviation are higher, though the mean is comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
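The estimator itself is a one-liner. The sketch below uses an invented partial maximum temperature series, not USGS data, and shows how the extreme estimate and its sensitivity to KE follow from the series mean and standard deviation.

    import numpy as np

    # Hypothetical partial maximum stream temperature series (deg C).
    t_max = np.array([29.1, 29.6, 28.8, 29.9, 29.4, 29.2, 30.1, 29.5])

    mean, std = t_max.mean(), t_max.std(ddof=1)
    for k_e in (7.0, 8.0):                   # recommended range of the standard deviate
        print(k_e, mean + k_e * std)         # estimated extreme stream temperature

    # A unit change in KE shifts the estimate by one standard deviation of the series.
    print("sensitivity per unit KE:", std, "deg C")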
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne;
2014-01-01
We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 micrometer emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS090122.37+181432.3 ("S0901") and SDSSJ120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin(i) ≈ 120 ± 7 km s⁻¹ and a gas velocity dispersion of σg < 23 km s⁻¹ (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin(i) ≈ 79 ± 11 km s⁻¹ and σg < 4 km s⁻¹ (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σg = 92 ± 20 km s⁻¹. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.
An International Menopause Society study of vasomotor symptoms in Bangkok and Chiang Mai, Thailand.
Sriprasert, I; Pantasri, T; Piyamongkol, W; Suwan, A; Chaikittisilpa, S; Sturdee, D; Gupta, P; Hunter, M S
2017-04-01
To examine relationships between location, demographics, lifestyle, beliefs, and experience of hot flushes and night sweats (HFNS) amongst women living in two cities in Thailand. Cross-sectional study of peri- and postmenopausal women, aged 45-55 years, from Bangkok and Chiang Mai. Participants completed questionnaires (demographics, health, HFNS (prevalence, frequency and problem-rating) and beliefs about menopause). A sub-sample of women from each location was interviewed. A total of 632 women (320 Bangkok and 312 Chiang Mai), aged 50.88 (standard deviation 3.06) years, took part. The prevalence of HFNS was 65%, average HFNS frequency 8.7 (10.8) per week and problem rating 4.3/10. Women from Chiang Mai had significantly more problematic HFNS, but prevalence and frequency were similar in both sites. Poor general health predicted HFNS prevalence and frequency, while Chiang Mai location, HFNS frequency, age, diet and beliefs about menopause were associated with problematic HFNS. Location remained significant after controlling for education, occupation and age; location was partially explained by beliefs. Qualitative interview responses illustrated the differences in beliefs about menopause between locations. HFNS reports are prevalent with moderate frequency and problem-ratings in these urban centers in Thailand. The results will be included in the broader International Menopause Society study of Climate, Altitude and Temperature (IMS-CAT) of the impact of climate on HFNS.
Muhamad, Hairul Masrini; Xu, Xiaomei; Zhang, Xuelei; Jaaman, Saifullah Arifin; Muda, Azmi Marzuki
2018-05-01
Studies of Irrawaddy dolphin acoustics assist in understanding the behaviour of the species and thereby its conservation. Whistle signals emitted by Irrawaddy dolphins within the Bay of Brunei in Malaysian waters were characterized. A total of 199 whistles were analysed from seven sightings between January and April 2016. Six types of whistle contours, named constant, upsweep, downsweep, concave, convex, and sine, were detected when the dolphins engaged in traveling, foraging, and socializing activities. The whistle durations ranged between 0.06 and 3.86 s. The minimum frequency recorded was 443 Hz [Mean = 6000 Hz, standard deviation (SD) = 2320 Hz] and the maximum frequency recorded was 16 071 Hz (Mean = 7139 Hz, SD = 2522 Hz). The mean frequency range (F.R.) for the whistles was 1148 Hz (Minimum F.R. = 0 Hz, Maximum F.R. = 4446 Hz; SD = 876 Hz). Whistles in the Bay of Brunei were compared with populations recorded in the waters of Matang and Kalimantan. The comparisons showed differences in whistle duration, minimum frequency, start frequency, and number of inflection points. Variation in whistle occurrence and frequency may be associated with surface behaviour, ambient noise, and recording limitations. This will be an important element when planning a monitoring program.
All-fiber upconversion high spectral resolution wind lidar using a Fabry-Perot interferometer.
Shangguan, Mingjia; Xia, Haiyun; Wang, Chong; Qiu, Jiawei; Shentu, Guoliang; Zhang, Qiang; Dou, Xiankang; Pan, Jian-Wei
2016-08-22
An all-fiber, micro-pulse and eye-safe high spectral resolution wind lidar (HSRWL) at 1.5 μm is proposed and demonstrated by using a pair of upconversion single-photon detectors and a fiber Fabry-Perot scanning interferometer (FFP-SI). In order to improve the optical detection efficiency, both the transmission spectrum and the reflection spectrum of the FFP-SI are used for spectral analyses of the aerosol backscatter and the reference laser pulse. Taking advantage of the high signal-to-noise ratio of the detectors and the high spectral resolution of the FFP-SI, the center frequencies and the bandwidths of the aerosol backscatter spectra are obtained simultaneously. Continuous LOS wind observations are carried out on two days at Hefei (31.843 °N, 117.265 °E), China. A horizontal detection range of 4 km is realized with a temporal resolution of 1 minute. The spatial resolution is switched from 30 m to 60 m at a distance of 1.8 km. In a comparison experiment, LOS wind measurements from the HSRWL show good agreement with the results from an ultrasonic wind sensor (Vaisala WindCap WMT52). An empirical method is adopted to evaluate the precision of the measurements. The standard deviation of the wind speed is 0.76 m/s at 1.8 km. The standard deviation of the bandwidth variation is 2.07 MHz at 1.8 km.
Stapedotomy in osteogenesis imperfecta: a prospective study of 32 consecutive cases.
Vincent, Robert; Wegner, Inge; Stegeman, Inge; Grolman, Wilko
2014-12-01
To prospectively evaluate hearing outcomes in patients with osteogenesis imperfecta undergoing primary stapes surgery and to isolate prognostic factors for success. A nonrandomized, open, prospective case series. A tertiary referral center. Twenty-five consecutive patients who underwent 32 primary stapedotomies for osteogenesis imperfecta with evidence of stapes fixation and available postoperative pure-tone audiometry. Primary stapedotomy with vein graft interposition and reconstruction with a regular Teflon piston or bucket handle-type piston. Preoperative and postoperative audiometric evaluation using conventional 4-frequency (0.5, 1, 2, and 4 kHz) audiometry. Air-conduction thresholds, bone-conduction thresholds, and air-bone gap were measured. The overall audiometric results as well as the results of audiometric evaluation at 3 months and at least 1 year after surgery were used. Overall, postoperative air-bone gap closure to within 10 dB was achieved in 88% of cases. Mean (standard deviation) gain in air-conduction threshold was 22 (9.4) dB for the entire case series, and mean (standard deviation) air-bone gap closure was 22 (9.0) dB. Backward multivariate logistic regression showed that a model with preoperative air-bone gap closure and intraoperatively established incus length accurately predicts success after primary stapes surgery. Stapes surgery is a feasible and safe treatment option in patients with osteogenesis imperfecta. Success is associated with preoperative air-bone gap and intraoperatively established incus length.
Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.
2005-01-01
A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used: ordinary least squares assuming a known error variance and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P>0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability. © 2005 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.
2016-03-01
Capacitive touch sensor screens with metal materials have recently become qualified as substitutes for ITO; however, several obstacles still have to be overcome. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is numerically considered in this paper. Based on the human eye contrast sensitivity function (CSF), the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8 inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters that provide the minimum moiré visibility. A rectangular function is used to create the screen pixel array and the mesh electrode. The filtered image, in the frequency domain, is obtained by multiplying the Fourier transform of the finite mesh pattern (the product of the screen pixel array and the mesh electrode) with the calculated CSF for three different observer distances (L = 200, 300 and 400 mm). It is observed that the discrepancy between analytical and numerical results is less than 0.6% for the 400 mm viewer distance. Moreover, in the case of oblique view, because the thickness of the finite film between the mesh electrodes and the screen is taken into account, different points of minimum standard deviation of the moiré pattern are predicted compared to normal view.
Mach wave properties in the presence of source and medium heterogeneity
NASA Astrophysics Data System (ADS)
Vyas, J. C.; Mai, P. M.; Galis, M.; Dunham, Eric M.; Imperatori, W.
2018-06-01
We investigate Mach wave coherence for kinematic supershear ruptures with spatially heterogeneous source parameters, embedded in 3D scattering media. We assess Mach wave coherence considering: 1) source heterogeneities in terms of variations in slip, rise time and rupture speed; 2) small-scale heterogeneities in Earth structure, parameterized from combinations of three correlation lengths and two standard deviations (assuming von Karman power spectral density with fixed Hurst exponent); and 3) joint effects of source and medium heterogeneities. Ground-motion simulations are conducted using a generalized finite-difference method, choosing a parameterization such that the highest resolved frequency is ˜5 Hz. We discover that Mach wave coherence is slightly diminished at near fault distances (< 10 km) due to spatially variable slip and rise time; beyond this distance the Mach wave coherence is more strongly reduced by wavefield scattering due to small-scale heterogeneities in Earth structure. Based on our numerical simulations and theoretical considerations we demonstrate that the standard deviation of medium heterogeneities controls the wavefield scattering, rather than the correlation length. In addition, we find that peak ground accelerations in the case of combined source and medium heterogeneities are consistent with empirical ground motion prediction equations for all distances, suggesting that in nature ground shaking amplitudes for supershear ruptures may not be elevated due to complexities in the rupture process and seismic wave-scattering.
Comparison of different functional EIT approaches to quantify tidal ventilation distribution.
Zhao, Zhanqi; Yun, Po-Jen; Kuo, Yen-Liang; Fu, Feng; Dai, Meng; Frerichs, Inez; Möller, Knut
2018-01-30
The aim of the study was to examine the pros and cons of different types of functional EIT (fEIT) to quantify tidal ventilation distribution in a clinical setting. fEIT images were calculated with (1) standard deviation of the pixel time curve, (2) regression coefficients of global and local impedance time curves, or (3) mean tidal variations. To characterize temporal heterogeneity of tidal ventilation distribution, another fEIT image of pixel inspiration times is also proposed. fEIT-regression is very robust to signals with different phase information. When the respiratory signal should be distinguished from the heart-beat related signal, or during high-frequency oscillatory ventilation, fEIT-regression is superior to other types. fEIT-tidal variation is the most stable image type regarding baseline shift. We recommend using this type of fEIT image for preliminary evaluation of the acquired EIT data. However, all these fEITs would be misleading in their assessment of ventilation distribution in the presence of temporal heterogeneity. The analysis software provided by the currently available commercial EIT equipment only offers either the fEIT of standard deviation or that of tidal variation. Considering the pros and cons of each fEIT type, we recommend embedding more types into the analysis software to allow physicians to deal with more complex clinical applications using on-line EIT measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using a FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2–3), subsets (21–24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread-functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61–92% and 88–99%, respectively. Voxel-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations was able to identify features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Metrics found to be reasonably robust (standard deviation ratios > 3) were observed for routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m⁻² and the inter-model standard deviation is 0.70 W m⁻², corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m⁻², and the standard deviation increases to 1.21 W m⁻², corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% clear-sky and 12% all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m⁻² in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini difference have asymptotically normal distributions and bounded influence functions and are B-robust estimators; hence, unlike the standard deviation estimator, they are protected from the presence of outliers in the sample. Results of a comparison of scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini difference is considered.
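For readers wanting a concrete comparison, the sketch below (illustrative only) computes the classical standard deviation, the normal-consistent median of absolute deviations, and a scale estimate based on the classical, unmodified Gini mean difference on a contaminated Gaussian sample; the consistency constants assume an underlying normal model, and the paper's modified Gini estimator is not reproduced here.

    import numpy as np

    def mad_scale(x):
        """Median of absolute deviations, scaled to estimate sigma under normality."""
        med = np.median(x)
        return 1.4826 * np.median(np.abs(x - med))

    def gini_scale(x):
        """Scale estimate from the Gini mean difference: E|Xi - Xj| = 2*sigma/sqrt(pi)."""
        diffs = np.abs(x[:, None] - x[None, :])
        n = len(x)
        gini_mean_diff = diffs.sum() / (n * (n - 1))   # mean over off-diagonal pairs
        return gini_mean_diff * np.sqrt(np.pi) / 2.0

    rng = np.random.default_rng(3)
    clean = rng.normal(0.0, 1.0, size=950)
    outliers = rng.normal(0.0, 10.0, size=50)          # 5% contamination
    x = np.concatenate([clean, outliers])

    print("std dev :", x.std(ddof=1))
    print("MAD     :", mad_scale(x))
    print("Gini    :", gini_scale(x))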
Frequency stabilization of a 2.05 μm laser using hollow-core fiber CO2 frequency reference cell
NASA Astrophysics Data System (ADS)
Meras, Patrick; Poberezhskiy, Ilya Y.; Chang, Daniel H.; Spiers, Gary D.
2010-04-01
We have designed and built a hollow-core fiber frequency reference cell, filled it with CO2, and used it to demonstrate frequency stabilization of a 2.05 μm Tm:Ho:YLF laser using the frequency modulation (FM) spectroscopy technique. The frequency reference cell is housed in a compact and robust hermetic package that contains a several-meter-long hollow-core photonic crystal fiber optically coupled to index-guiding fibers with a fusion splice on one end and a mechanical splice on the other end. The package has connectorized fiber pigtails and a valve used to evacuate it, refill it, or adjust the gas pressure. We have demonstrated the laser frequency standard deviation decreasing from >450 MHz (free-running) to <2.4 MHz (stabilized). The 2.05 μm laser wavelength is of particular interest for spectroscopic instruments due to the presence of many CO2 and H2O absorption lines in its vicinity. To our knowledge, this is the first reported demonstration of laser frequency stabilization at this wavelength using a hollow-core fiber reference cell. This approach enables all-fiber implementation of the optical portion of a laser frequency stabilization system, thus making it dramatically more lightweight, compact, and robust than the traditional free-space version that utilizes glass or metal gas cells. It can also provide a much longer interaction length of light with gas and does not require any alignment. The demonstrated frequency reference cell is particularly attractive for use in aircraft and space coherent lidar instruments for measuring the atmospheric CO2 profile.
40 CFR 63.7751 - What reports must I submit and when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... deviations from any emissions limitations (including operating limit), work practice standards, or operation and maintenance requirements, a statement that there were no deviations from the emissions limitations...-of-control during the reporting period. (7) For each deviation from an emissions limitation...
Adaptive convergence nonuniformity correction algorithm.
Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua
2011-01-01
Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. Then the convergence speed factor is presented, which can adaptively change the convergence speed with a change of the scene dynamic range. In fact, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The space relativity characteristic of the nonuniformity was summarized from a large amount of experimental statistical data. The space relativity characteristic was used to correct the convergence speed factor, which makes it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
Masses of Te130 and Xe130 and Double-β-Decay Q Value of Te130
NASA Astrophysics Data System (ADS)
Redshaw, Matthew; Mount, Brianna J.; Myers, Edmund G.; Avignone, Frank T., III
2009-05-01
The atomic masses of Te130 and Xe130 have been obtained by measuring cyclotron frequency ratios of pairs of triply charged ions simultaneously trapped in a Penning trap. The results, with 1 standard deviation uncertainty, are M(Te130)=129.906222744(16)u and M(Xe130)=129.903509351(15)u. From the mass difference the double-β-decay Q value of Te130 is determined to be Qββ(Te130)=2527.518(13)keV. This is a factor of 150 more precise than the result of AME2003 [G. Audi et al., Nucl. Phys. A729, 337 (2003), doi:10.1016/j.nuclphysa.2003.11.003].
Sentiment analysis of feature ranking methods for classification accuracy
NASA Astrophysics Data System (ADS)
Joseph, Shashank; Mugauri, Calvin; Sumathy, S.
2017-11-01
Text pre-processing and feature selection are important and critical steps in text mining. Text pre-processing of large volumes of datasets is a difficult task as unstructured raw data is converted into a structured format. Traditional methods of processing and weighting took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input for feature selection. Feature selection helps improve text classification accuracy. Of the three feature selection categories available, the filter category will be the focus. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analysed.
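As an illustration of filter-style ranking (a toy sketch with a fabricated term-document matrix, not the paper's pipeline), features can be scored by document frequency, by standard deviation, or by chi-square against class labels and then sorted:

    import numpy as np
    from sklearn.feature_selection import chi2

    # Fabricated term-document count matrix (documents x terms) and sentiment labels.
    X = np.array([[3, 0, 1, 0],
                  [2, 1, 0, 0],
                  [0, 2, 0, 3],
                  [0, 3, 1, 2],
                  [1, 0, 0, 1]])
    y = np.array([1, 1, 0, 0, 0])            # 1 = positive, 0 = negative

    doc_freq = (X > 0).sum(axis=0)            # number of documents containing each term
    std_dev = X.std(axis=0, ddof=1)           # spread of each term's counts
    chi2_score, _ = chi2(X, y)                # dependence of each term on the label

    for name, score in [("document frequency", doc_freq),
                        ("standard deviation", std_dev),
                        ("chi-square", chi2_score)]:
        print(name, "ranking:", np.argsort(score)[::-1])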
Statistics of biospeckles with application to diagnostics of periodontitis
NASA Astrophysics Data System (ADS)
Starukhin, Pavel Y.; Kharish, Natalia A.; Sedykh, Alexey V.; Ulyanov, Sergey S.; Lepilin, Alexander V.; Tuchin, Valery V.
1999-04-01
Results of Monte Carlo simulations of Doppler shift are presented for a model of a random medium that contains moving particles. The single-layered and two-layered configurations of the medium are considered. The Doppler shift of the frequency of laser light is investigated as a function of parameters such as the absorption coefficient, the scattering coefficient, and the thickness of the medium. The possibility of applying speckle interferometry for diagnostics in dentistry has been analyzed. The problem of standardization of the measuring procedure has been studied. The deviation of the output characteristics of a Doppler system for blood microcirculation measurements has been investigated. The dependence of the form of the Doppler spectrum on the number of speckles integrated by the aperture has been studied in experiments in vivo.
Wada, Kenji; Matsukura, Satoru; Tanaka, Amaka; Matsuyama, Tetsuya; Horinaka, Hiromichi
2015-09-07
A simple method to measure single-mode optical fiber lengths is proposed and demonstrated using a gain-switched 1.55-μm distributed feedback laser without a fast photodetector or an optical interferometer. From the variation in the amplified spontaneous emission noise intensity with respect to the modulation frequency of the gain switching, the optical length of a 1-km single-mode fiber immersed in water is found to be 1471.043915 m ± 33 μm, corresponding to a relative standard deviation of 2.2 × 10⁻⁸.
Spectral density measurements of gyro noise
NASA Technical Reports Server (NTRS)
Truncale, A.; Koenigsberg, W.; Harris, R.
1972-01-01
Power spectral density (PSD) was used to analyze the outputs of several gyros in the frequency range from 0.01 to 200 Hz. Data were accumulated on eight inertial quality instruments. The results are described in terms of input angle noise (arcsec²/Hz) and are presented on log-log plots of PSD. These data show that the standard deviation of measurement noise was 0.01 arcsec or less for some gyros in the passband from 1 Hz down to 0.01 Hz and probably down to 0.001 Hz for at least one gyro. For the passband between 1 and 100 Hz, uncertainties in the 0.01 to 0.05 arcsec region were observed.
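The same kind of power spectral density estimate can be reproduced with standard tools. The snippet below uses a synthetic gyro-like signal rather than the flight data, applies Welch's method, and reports the noise standard deviation contributed by a passband by integrating the PSD over it.

    import numpy as np
    from scipy.signal import welch

    fs = 400.0                                     # assumed sample rate, Hz
    t = np.arange(0, 600, 1 / fs)
    rng = np.random.default_rng(4)
    angle_noise = 0.01 * rng.normal(size=t.size)   # synthetic angle noise, arcsec

    freqs, psd = welch(angle_noise, fs=fs, nperseg=4096)   # PSD in arcsec^2/Hz

    # Standard deviation contributed by the 0.01-1 Hz passband (sqrt of integrated PSD).
    band = (freqs >= 0.01) & (freqs <= 1.0)
    sigma_band = np.sqrt(np.trapz(psd[band], freqs[band]))
    print("passband sigma ~ %.4f arcsec" % sigma_band)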
Multi-year slant path rain fade statistics at 28.56 and 19.04 GHz for Wallops Island, Virginia
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
Multiyear rain fade statistics at 28.56 GHz and 19.04 GHz were compiled for the region of Wallops Island, Virginia, covering the time periods 1 April 1977 through 31 March 1978 and 1 September 1978 through 31 August 1979. The 28.56 GHz attenuations were derived by monitoring the beacon signals from the COMSTAR geosynchronous satellite D2 during the first year, and satellite D3 during the second year. Although 19.04 GHz beacons exist aboard these satellites, statistics at this frequency were predicted using the 28 GHz fade data, the measured rain rate distribution, and effective path length concepts. The prediction method used was tested against radar-derived fade distributions and excellent comparisons were noted. For example, the rms deviations between the predicted and test distributions were less than or equal to 0.2 dB or 4% at 19.04 GHz. The average ratio between the 28.56 GHz and 19.04 GHz fades was also derived for equal percentages of time, resulting in a factor of 2.1 with a 0.05 standard deviation.
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
Screen Twice, Cut Once: Assessing the Predictive Validity of Teacher Selection Tools
ERIC Educational Resources Information Center
Goldhaber, Dan; Grout, Cyrus; Huntington-Klein, Nick
2015-01-01
It is well documented that teachers can have profound effects on student outcomes. Empirical estimates find that a one standard deviation increase in teacher quality raises student test achievement by 10 to 25 percent of a standard deviation. More recent evidence shows that the effectiveness of teachers can affect long-term student outcomes, such…
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
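As a worked illustration of the within-subject standard deviation and the 2.77 multiplier (approximately 1.96 × √2) mentioned above, the sketch below uses a small, made-up set of duplicate measurements; the data layout and values are assumptions.

```python
# Sketch of the repeatability computation described above (assumed layout:
# one row per subject, two repeated measurements per subject).
import numpy as np

measurements = np.array([
    [5.1, 5.3],
    [6.0, 5.8],
    [4.9, 5.2],
    [5.5, 5.5],
])  # hypothetical repeated measurements

# Within-subject variance = mean of the per-subject variances (ddof=1).
within_subject_var = np.mean(np.var(measurements, axis=1, ddof=1))
within_subject_sd = np.sqrt(within_subject_var)      # the measurement error

repeatability = 2.77 * within_subject_sd             # 2.77 ~= 1.96 * sqrt(2)
print(f"Sw = {within_subject_sd:.3f}, repeatability = {repeatability:.3f}")
```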
Parabolic trough receiver heat loss and optical efficiency round robin 2015/2016
NASA Astrophysics Data System (ADS)
Pernpeintner, Johannes; Schiricke, Björn; Sallaberry, Fabienne; de Jalón, Alberto García; López-Martín, Rafael; Valenzuela, Loreto; de Luca, Antonio; Georg, Andreas
2017-06-01
A round robin for parabolic trough receiver heat loss and optical efficiency in the laboratory was performed between five institutions using five receivers in 2015/2016. Heat loss testing was performed at three cartridge-heater test benches and one Joule-heating test bench in the temperature range between 100 °C and 550 °C. Optical efficiency testing was performed with two spectrometric test benches and one calorimetric test bench. Heat loss testing results showed standard deviations on the order of 6% to 12% for most temperatures and receivers, and a standard deviation of 17% for one receiver at 100 °C. Optical efficiency is presented normalized for the laboratories, showing standard deviations of 0.3% to 1.3% depending on the receiver.
Benign positional vertigo and hyperuricaemia.
Adam, A M
2005-07-01
To find out if there is any association between serum uric acid level and positional vertigo. A prospective, case-controlled study. A private neurological clinic. All patients presenting with vertigo. Ninety patients were seen in this period, with 78 males and 19 females. Mean age was 47 ± 3 years (at the 95% confidence level) with a standard deviation of 12.4. Their mean uric acid level was 442 ± 16 μmol/l (at the 95% confidence level) with a standard deviation of 79.6 μmol/l, as compared to 291 ± 17 μmol/l (at the 95% confidence level) with a standard deviation of 79.7 μmol/l in the control group. The P-value was less than 0.001. There is a significant association between high uric acid and benign positional vertigo.
NASA Technical Reports Server (NTRS)
Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.
1976-01-01
The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 one-degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of 0.78. Three distinct distributions of data were identified: (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions were found to occupy distinct geographic areas in the Palus Somni region.
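The classification step described above (points within one residual standard deviation of the regression line versus more than one standard deviation below or above it) can be sketched as follows; the albedo and Al/Si values here are synthetic stand-ins, not the Apollo 15 data.

```python
# Sketch: fit a line, compute residuals, and split points by whether they lie
# within one residual SD of the fit, more than one SD below, or above.
import numpy as np

rng = np.random.default_rng(1)
albedo = rng.uniform(0.08, 0.20, 246)                  # hypothetical albedo values
al_si = 0.9 + 4.0 * albedo + rng.normal(0, 0.05, 246)  # hypothetical Al/Si ratios

slope, intercept = np.polyfit(albedo, al_si, 1)
residuals = al_si - (slope * albedo + intercept)
sd = residuals.std(ddof=2)                             # residual standard deviation

within = np.abs(residuals) <= sd
below = residuals < -sd
above = residuals > sd
print(f"within 1 SD: {within.sum()}, >1 SD below: {below.sum()}, >1 SD above: {above.sum()}")
```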
Screening Samples for Arsenic by Inductively Coupled Plasma-Mass Spectrometry for Treaty Samples
2014-02-01
[Table excerpt: replicate recoveries, standard deviations, and relative standard deviations (% RSD, roughly 2.6% to 15.9%) for the arsenic screening measurements.]
A deviation display method for visualising data in mobile gamma-ray spectrometry.
Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer
2010-09-01
A real-time visualisation method, to be used in mobile gamma-spectrometric search operations with standard detector systems, is presented. The new method, called the deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded ¹³⁷Cs and ²⁴¹Am point sources and different natural background environments, the behaviour of the deviation display is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing natural background fluctuations. After an initialisation time of about 10 min, this technique leads to a homogeneous display dominated by the background colour, in which even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels, and with both tested detector systems.
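The abstract does not spell out the algorithm, so the sketch below is only a guess at the general idea: a per-channel z-score "waterfall" against a running background estimate, in which positive deviations are kept and negative fluctuations are clipped. All parameters and variable names are assumptions, not the authors' method.

```python
# Not the authors' algorithm: a minimal z-score waterfall in which each
# spectrum is compared channel-by-channel against a background estimate built
# during an initialisation period, so positive changes stand out.
import numpy as np

def deviation_rows(spectra, init_rows=60):
    """Yield one display row per spectrum (values in background SDs)."""
    spectra = np.asarray(spectra, dtype=float)
    bg_mean = spectra[:init_rows].mean(axis=0)         # initialisation period
    bg_sd = spectra[:init_rows].std(axis=0) + 1e-9
    for spec in spectra[init_rows:]:
        z = (spec - bg_mean) / bg_sd
        yield np.clip(z, 0.0, None)                    # keep only positive changes

# Usage with synthetic Poisson background counts (1-s spectra, 128 channels):
rng = np.random.default_rng(2)
background = rng.poisson(20.0, size=(300, 128))
rows = list(deviation_rows(background))
print(len(rows), rows[0].shape)
```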
Design concepts using ring lasers for frequency stabilization
NASA Technical Reports Server (NTRS)
Mocker, H.
1967-01-01
Laser frequency stabilization methods are based on a frequency discriminant which generates an unambiguous deviation signal used for automatic stabilization. Closed-loop control stabilizes cavity length at a null point. Some systems have a stabilized ring laser using a piezoelectric dither and others use a Doppler gain tube.
Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen
2011-01-01
Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.
Code of Federal Regulations, 2010 CFR
2010-07-01
Section 102-38.30, Public Contracts: How does an executive agency request a deviation from the provisions of this part? Refer to §§ 102-2.60 through 102-2... (The excerpt notes that such requests are distinct from the standard deviation process and specific to the requirements of the Federal agency.)
Yamaguchi, Tsuyoshi; Yonezawa, Takuya; Koda, Shinobu
2015-07-15
The frequency-dependent viscosity and conductivity of three imidazolium-based ionic liquids were measured at several temperatures in the MHz region, and the results are compared with the intermediate scattering functions determined by neutron spin echo spectroscopy. The relaxations of both the conductivity and the viscosity agree with that of the intermediate scattering function at the ionic correlation when the relaxation time is short. As the relaxation time increases, the relaxations of the two transport properties deviate to lower frequencies than that of the ionic structure. The deviation begins at a shorter relaxation time for viscosity than for conductivity, which explains the fractional Walden rule between the zero-frequency values of the shear viscosity and the molar conductivity.
Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918
Warren, John Robert
2017-01-01
Concern with childhood nutrition prompted numerous surveys of children’s growth in the United States after 1870. The Children’s Bureau’s 1918 “Weighing and Measuring Test” measured two million children to produce the first official American growth norms. Individual data for 14,000 children survive from the Saint Paul, Minnesota survey, whose stature closely approximated national norms. As well as anthropometry, the survey recorded exact ages, street address and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families, creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had a weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: −0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial: children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children’s stature. PMID:28943749
Physically Challenging Song Traits, Male Quality, and Reproductive Success in House Wrens
Cramer, Emily R. A.
2013-01-01
Physically challenging signals are likely to honestly indicate signaler quality. In trilled bird song two physically challenging parameters are vocal deviation (the speed of sound frequency modulation) and trill consistency (how precisely syllables are repeated). As predicted, in several species, they correlate with male quality, are preferred by females, and/or function in male-male signaling. Species may experience different selective pressures on their songs, however; for instance, there may be opposing selection between song complexity and song performance difficulty, such that in species where song complexity is strongly selected, there may not be strong selection on performance-based traits. I tested whether vocal deviation and trill consistency are signals of male quality in house wrens (Troglodytes aedon), a species with complex song structure. Males’ singing ability did not correlate with male quality, except that older males sang with higher trill consistency, and males with more consistent trills responded more aggressively to playback (although a previous study found no effect of stimulus trill consistency on males’ responses to playback). Males singing more challenging songs did not gain in polygyny, extra-pair paternity, or annual reproductive success. Moreover, none of the standard male quality measures I investigated correlated with mating or reproductive success. I conclude that vocal deviation and trill consistency do not signal male quality in this species. PMID:23527137
The Objective Assessment of Cough Frequency in Bronchiectasis.
Spinou, Arietta; Lee, Kai K; Sinha, Aish; Elston, Caroline; Loebinger, Michael R; Wilson, Robert; Chung, Kian Fan; Yousaf, Nadia; Pavord, Ian D; Matos, Sergio; Garrod, Rachel; Birring, Surinder S
2017-10-01
Cough in bronchiectasis is associated with significant impairment in health status. This study aimed to quantify cough frequency objectively with a cough monitor and investigate its relationship with health status. A secondary aim was to identify clinical predictors of cough frequency. Fifty-four patients with bronchiectasis were compared with thirty-five healthy controls. Objective 24-h cough frequency, health status (cough-specific: Leicester Cough Questionnaire, LCQ; bronchiectasis-specific: Bronchiectasis Health Questionnaire, BHQ), cough severity and lung function were measured. The clinical predictors of cough frequency in bronchiectasis were determined in a multivariate analysis. Objective cough frequency was significantly raised in patients with bronchiectasis compared to healthy controls [geometric mean (standard deviation)] 184.5 (4.0) vs. 20.6 (3.2) coughs/24-h; mean fold-difference (95% confidence interval) 8.9 (5.2, 15.2); p < 0.001, and they had impaired health status. There was a significant correlation between objective cough frequency and subjective measures; LCQ r = -0.52 and BHQ r = -0.62, both p < 0.001. Sputum production, exacerbations (over the previous 2 weeks to 12 months) and age were significantly associated with objective cough frequency in multivariate analysis, explaining 52% of the variance (p < 0.001). There was no statistically significant association between cough frequency and lung function. Cough is a common and significant symptom in patients with bronchiectasis. Sputum production, exacerbations and age, but not lung function, were independent predictors of cough frequency. Ambulatory objective cough monitoring provides novel insights and should be further investigated as an outcome measure in bronchiectasis.
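The geometric mean, geometric standard deviation, and fold-difference quoted above are simple to compute on the log scale; the sketch below uses made-up 24-h cough counts purely to show the arithmetic.

```python
# Sketch: geometric mean, geometric SD, and group fold-difference for
# positively skewed count data (values below are illustrative only).
import numpy as np

patients = np.array([150.0, 420.0, 95.0, 310.0, 180.0])
controls = np.array([12.0, 35.0, 18.0, 25.0, 22.0])

def geo_stats(x):
    logs = np.log(x)
    return np.exp(logs.mean()), np.exp(logs.std(ddof=1))  # geometric mean, geometric SD

gm_p, gsd_p = geo_stats(patients)
gm_c, gsd_c = geo_stats(controls)
print(f"patients: GM={gm_p:.1f} (GSD {gsd_p:.1f}); controls: GM={gm_c:.1f} (GSD {gsd_c:.1f})")
print(f"fold-difference: {gm_p / gm_c:.1f}")
```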
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. The mean deviation was found to be -0.4 percent and the standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, the mean deviation from 54 data points was -1.3 percent and the standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard deviation). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
NASA Astrophysics Data System (ADS)
Goad, Pamela Joy
The fusion of musical voices is an important aspect of musical blend, or the mixing of individual sounds. Yet little research has been done to explicitly determine the factors involved in fusion. In this study, the similarity of timbre and modulation were examined for their contribution to the fusion of sounds. It is hypothesized that similar timbres will fuse better than dissimilar timbres, and that voices with the same kind of modulation will fuse better than voices with different modulations. A perceptually based measure known as sharpness was investigated as a measure of timbre. The advantage of using sharpness is that it is based on the hearing sensitivities and masking phenomena of inner-ear processing. Five musical instrument families were digitally recorded in performances across a typical playing range at two extreme dynamic levels. Analyses reveal that sharpness is capable of uncovering subtle changes in timbre, including those found in musical dynamics, instrument design, and performer-specific variations. While these analyses alone are insufficient to address fusion, preliminary calculations of timbral combinations indicate that sharpness has the potential to predict the fusion of sounds used in musical composition. Three experiments investigated the effects of modulation on the fusion of a harmonic major sixth interval. In the first experiment, using frequency modulation, stimuli varied in deviation about a mean fundamental frequency and in relative modulation phase between the two tones. Results showed that smaller frequency deviations promoted fusion and that relative phase differences had a minimal effect. In a second experiment, using amplitude modulation, stimuli varied in deviation about a mean amplitude level and in relative phase of modulation. Results showed that smaller amplitude deviations promoted better fusion, but, unlike frequency modulation, relative phase differences were also important. In a third experiment, frequency modulation, amplitude modulation and mixed modulation were arranged in all possible voicings. Results showed that frequency modulation in the lower voice and less variance in amplitude envelopes contributed to an increase in fusion. The theory that similar modulations would promote better fusion was only marginally supported. Across these experiments, results revealed differences depending on modulation type and showed that a lesser amount of modulation fosters greater fusion.
Differential Deposition for Surface Figure Corrections in Grazing Incidence X-Ray Optics
NASA Technical Reports Server (NTRS)
Ramsey, Brian D.; Kilaru, Kiranmayee; Atkins, Carolyn; Gubarev, Mikhail V.; Broadway, David M.
2015-01-01
Differential deposition corrects the low- and mid-spatial-frequency deviations in the axial figure of Wolter-type grazing incidence X-ray optics. Figure deviation is one of the major contributors to the achievable angular resolution, so minimizing figure errors can significantly improve the imaging quality of X-ray optics. Material of varying thickness is selectively deposited, using DC magnetron sputtering, along the length of the optic to minimize figure deviations. Custom vacuum chambers have been built that can accommodate full-shell and segmented X-ray optics. Metrology data from preliminary corrections on a single meridian of a full-shell X-ray optic show an improvement in mid-spatial frequencies from 6.7 to 1.8 arcsec HPD. Efforts are in progress to correct full-shell and segmented optics and to verify the angular-resolution improvement with X-ray testing.
Post-Kerr black hole spectroscopy
NASA Astrophysics Data System (ADS)
Glampedakis, Kostas; Pappas, George; Silva, Hector O.; Berti, Emanuele
2017-09-01
One of the central goals of the newborn field of gravitational wave astronomy is to test gravity in the highly nonlinear, strong field regime characterizing the spacetime of black holes. In particular, "black hole spectroscopy" (the observation and identification of black hole quasinormal mode frequencies in the gravitational wave signal) is expected to become one of the main tools for probing the structure and dynamics of Kerr black holes. In this paper we take a significant step toward that goal by constructing a "post-Kerr" quasinormal mode formalism. The formalism incorporates a parametrized but general perturbative deviation from the Kerr metric and exploits the well-established connection between the properties of the spacetime's circular null geodesics and the fundamental quasinormal mode to provide approximate, eikonal limit formulas for the modes' complex frequencies. The resulting algebraic toolkit can be used in waveform templates for ringing black holes with the purpose of measuring deviations from the Kerr metric. As a first illustrative application of our framework, we consider the Johannsen-Psaltis deformed Kerr metric and compute the resulting deviation in the quasinormal mode frequency relative to the known Kerr result.
NASA Astrophysics Data System (ADS)
Giordano, V.; Grop, S.; Fluhr, C.; Dubois, B.; Kersalé, Y.; Rubiola, E.
2016-06-01
The Cryogenic Sapphire Oscillator (CSO) is the microwave oscillator that features the highest short-term stability. Our best units exhibit an Allan deviation σy(τ) of 4.5 × 10⁻¹⁶ at 1 s, ≈ 1.5 × 10⁻¹⁶ for 100 s ≤ τ ≤ 5,000 s (floor), and ≤ 5 × 10⁻¹⁵ at one day. The use of a Pulse-Tube cryocooler enables full two-year operation with virtually no maintenance. Starting with a short history of the CSO in our lab, we go through the architecture and provide more details about the resonator, the cryostat, the oscillator loop, and the servo electronics. We implemented three similar oscillators, which enables the evaluation of each with the three-cornered-hat method and provides the potential for Allan deviation measurements at the parts-in-10¹⁷ level. One of our CSOs (ULISS) is transportable and travels with a small customized truck. The unique feature of ULISS is that its σy(τ) can be validated at the destination by measuring before and after the round trip. To this extent, ULISS can be regarded as a traveling standard of frequency stability. The CSOs are part of the Oscillator IMP project, a platform dedicated to the measurement of noise and short-term stability of oscillators and devices across the whole radio spectrum (from MHz to THz), including microwave photonics. The scope spans from routine measurements to research on new oscillators, components, and measurement methods.
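For reference, the Allan deviation σy(τ) quoted above can be estimated from fractional frequency data with the standard two-sample formula; the sketch below is a minimal non-overlapping estimator run on synthetic white frequency noise, not the authors' measurement chain.

```python
# Sketch: non-overlapping Allan deviation, sigma_y(tau)^2 = 0.5 * <(ybar_{i+1} - ybar_i)^2>.
import numpy as np

def allan_deviation(y, m):
    """Allan deviation for tau = m * tau0, given fractional frequency values
    y averaged over the basic interval tau0."""
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    ybar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)  # tau-averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

# Usage with synthetic white frequency noise (sigma_y should fall as 1/sqrt(tau)):
rng = np.random.default_rng(3)
y = 1e-15 * rng.standard_normal(100_000)
for m in (1, 10, 100):
    print(f"tau = {m:4d} * tau0: sigma_y = {allan_deviation(y, m):.2e}")
```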
The kilometer-sized Main Belt asteroid population revealed by Spitzer
NASA Astrophysics Data System (ADS)
Ryan, E. L.; Mizuno, D. R.; Shenoy, S. S.; Woodward, C. E.; Carey, S. J.; Noriega-Crespo, A.; Kraemer, K. E.; Price, S. D.
2015-06-01
Aims: Multi-epoch Spitzer Space Telescope 24 μm data is utilized from the MIPSGAL and Taurus Legacy surveys to detect asteroids based on their relative motion. Methods: Infrared detections are matched to known asteroids and average diameters and albedos are derived using the near Earth asteroid thermal model (NEATM) for 1865 asteroids ranging in size from 0.2 to 169 km. A small subsample of these objects was also detected by IRAS or MSX and the single wavelength albedo and diameter fits derived from these data are within the uncertainties of the IRAS and/or MSX derived albedos and diameters and available occultation diameters, which demonstrates the robustness of our technique. Results: The mean geometric albedo of the small Main Belt asteroids in this sample is pV = 0.134 with a sample standard deviation of 0.106. The albedo distribution of this sample is far more diverse than the IRAS or MSX samples. The cumulative size-frequency distribution of asteroids in the Main Belt at small diameters is directly derived and a 3σ deviation from the fitted size-frequency distribution slope is found near 8 km. Completeness limits of the optical and infrared surveys are discussed. Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A42
Cochlear implanted children present vocal parameters within normal standards.
de Souza, Lourdes Bernadete Rocha; Bevilacqua, Maria Cecília; Brasolotto, Alcione Ghedini; Coelho, Ana Cristina
2012-08-01
To compare acoustic and perceptual voice parameters of cochlear-implanted children with those of children with normal hearing. This is a cross-sectional, quantitative and qualitative study. Thirty-six cochlear-implanted children aged from 3 y 3 m to 5 y 9 m and 25 children with normal hearing, aged from 3 y 11 m to 6 y 6 m, participated in this study. The recordings and the acoustic analysis of the sustained vowel /a/ and spontaneous speech were performed using the PRAAT program. The parameters analyzed for the sustained vowel were the mean fundamental frequency, jitter, shimmer and harmonics-to-noise ratio (HNR). For the spontaneous speech, the minimum and maximum frequencies and the number of semitones were extracted. The speech material was perceptually analyzed using visual-analogue scales of 100 points covering the overall severity of the vocal deviation, roughness, breathiness, strain, pitch, loudness and resonance deviation, and instability; this last parameter was only analyzed for the sustained vowel. The results demonstrated that the majority of the vocal parameters analyzed in the samples of the implanted children disclosed values similar to those obtained by the group of children with normal hearing. Implanted children who participate in a (re)habilitation and follow-up program can present vocal characteristics similar to those of children with normal hearing.
Seay, Joseph F.; Gregorczyk, Karen N.; Hasselquist, Leif
2016-01-01
Influences of load carriage and inclination on spatiotemporal parameters were examined during treadmill and overground walking. Ten soldiers walked on a treadmill and overground with three load conditions (0 kg, 20 kg, 40 kg) during level, uphill (6% grade) and downhill (-6% grade) inclinations at a self-selected speed, which was constant across conditions. Mean values and standard deviations for double support percentage, stride length and step rate were compared across conditions. Double support percentage increased with load and with the inclination change from uphill to level walking, with the increase being 0.4% of stance greater for the 20 kg condition than for the 0 kg condition. As inclination changed from uphill to downhill, step rate increased more overground (4.3 ± 3.5 steps/min) than during treadmill walking (1.7 ± 2.3 steps/min). For the 40 kg condition, the standard deviations were larger than for the 0 kg condition for both step rate and double support percentage. There was no change between modes in step rate standard deviation. For overground compared to treadmill walking, the standard deviations for stride length and double support percentage increased and decreased, respectively. Changes in load of up to 40 kg, inclination of 6% grade away from level (i.e., uphill or downhill) and mode (treadmill or overground) produced small yet statistically significant changes in spatiotemporal parameters. Variability, as assessed by standard deviation, was not systematically lower during treadmill walking than during overground walking. Due to the small magnitude of the changes, treadmill walking appears to replicate the spatiotemporal parameters of overground walking. PMID:28149338
Hopper, John L
2015-11-15
How can the "strengths" of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors (and that is how risk gradients are interpreted), so should the presentation of risk gradients. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, …, Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, …, Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations.
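A minimal sketch of the two steps implied above, under assumed simulated data and a hypothetical published risk gradient: the adjusted standard deviation of X0 is taken as the residual standard deviation from regressing X0 on the other covariates, and OPERA is then RR^(1/A).

```python
# Sketch: (1) adjusted SD of X0 = residual SD from X0 ~ X1 + X2,
#         (2) OPERA = exp(ln(RR)/A) = RR**(1/A) for a risk spanning A adjusted SDs.
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
x1 = rng.normal(size=n)                    # e.g. a standardised continuous covariate
x2 = rng.binomial(1, 0.4, size=n)          # e.g. a binary covariate
x0 = 0.5 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)   # risk factor of interest

# Step 1: adjusted SD of x0 from the regression residuals.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, x0, rcond=None)
adjusted_sd = (x0 - X @ beta).std(ddof=X.shape[1])

# Step 2: convert a hypothetical published risk gradient into OPERA.
RR, A = 2.0, 3.0                           # assumed: 2-fold risk over 3 adjusted SDs
opera = RR ** (1.0 / A)                    # = exp(log(RR) / A)
print(f"adjusted SD of X0: {adjusted_sd:.3f}, OPERA: {opera:.3f}")
```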
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, because WSD is the standard error of measurement of only a single assessment, it is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without limit to the extent that differential practice effects occur, which can even make RCI(WSD) unable to detect any reliable change.
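For context, the sketch below places the classical Jacobson-Truax RCI next to a difference score scaled by WSD; the WSD-based line is only an assumed general form (difference divided by WSD used as the standard error), not a quotation of Lewis and colleagues' formula, and all scores are hypothetical.

```python
# Sketch contrasting the classical Jacobson-Truax RCI with a WSD-based index.
import math

def rci_jacobson_truax(x1, x2, sd_baseline, reliability):
    sem = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2.0 * sem ** 2)                 # standard error of the difference
    return (x2 - x1) / s_diff

def rci_wsd(x1, x2, wsd):
    return (x2 - x1) / wsd                             # assumed general form only

x1, x2 = 24.0, 31.0                                    # hypothetical pre/post scores
print(f"classical RCI: {rci_jacobson_truax(x1, x2, sd_baseline=8.0, reliability=0.85):.2f}")
print(f"WSD-based:     {rci_wsd(x1, x2, wsd=3.5):.2f}")
```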
Simple programmable voltage reference for low frequency noise measurements
NASA Astrophysics Data System (ADS)
Ivanov, V. E.; Chye, En Un
2018-05-01
The paper presents a circuit design of a low-noise voltage reference based on an electric double-layer capacitor, a microcontroller and a general-purpose DAC. A large capacitance value (1 F or more) makes it possible to create a low-pass filter with a large time constant, effectively reducing low-frequency noise beyond its bandwidth. By choosing the optimum value of the resistor in the RC filter, one can achieve the best trade-off between the transient time, the deviation of the output voltage from the set point and the minimum noise cut-off frequency. As experiments have shown, the spectral density of the voltage at a frequency of 1 kHz does not exceed 1.2 nV/√Hz, and the maximum deviation of the output voltage from the predetermined value does not exceed 1.4% and depends on the holding time of the previous value. Subsequently, this error is reduced to a constant value and can be compensated.
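The trade-off between settling time and noise cut-off can be made concrete with the first-order RC relations f_c = 1/(2πRC) and τ = RC; the resistor values below are assumptions, not the authors' component choices.

```python
# Sketch: for a 1 F double-layer capacitor, the filter resistor sets both the
# noise cut-off frequency and the settling time after the DAC updates.
import math

C = 1.0                                   # F, double-layer capacitor
for R in (10.0, 100.0, 1_000.0):          # ohms, candidate filter resistors (assumed)
    f_c = 1.0 / (2.0 * math.pi * R * C)   # noise cut-off frequency (Hz)
    t_settle = 5.0 * R * C                # ~5 time constants settles to <1%
    print(f"R = {R:7.1f} ohm: f_c = {f_c * 1e3:6.2f} mHz, settle ~ {t_settle:7.1f} s")
```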
Crosstalk compensation in analysis of energy storage devices
Christophersen, Jon P; Morrison, John L; Morrison, William H; Motloch, Chester G; Rose, David M
2014-06-24
Estimating impedance of energy storage devices includes generating input signals at various frequencies with a frequency step factor therebetween. An excitation time record (ETR) is generated to include a summation of the input signals and a deviation matrix of coefficients is generated relative to the excitation time record to determine crosstalk between the input signals. An energy storage device is stimulated with the ETR and simultaneously a response time record (RTR) is captured that is indicative of a response of the energy storage device to the ETR. The deviation matrix is applied to the RTR to determine an in-phase component and a quadrature component of an impedance of the energy storage device at each of the different frequencies with the crosstalk between the input signals substantially removed. This approach enables rapid impedance spectra measurements that can be completed within one period of the lowest frequency or less.
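The sketch below is not the patented deviation-matrix formulation; it illustrates the same general idea under simplified assumptions by fitting the response record jointly against sine and cosine pairs at every excitation frequency, so that leakage (crosstalk) between tones is absorbed by the joint least-squares fit. All frequencies, impedances, and noise levels are made up.

```python
# Simplified multisine impedance sketch (not the patented method): excite with
# a sum of sines, then recover in-phase and quadrature components per tone by
# a joint least-squares fit over all tones at once.
import numpy as np

rng = np.random.default_rng(5)
fs, T = 1000.0, 2.0                               # sample rate (Hz), record length (s)
t = np.arange(0, T, 1.0 / fs)
freqs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # assumed excitation tones (Hz)

# Excitation time record: sum of unit-amplitude sines.
etr = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)

# Simulated device response: each tone scaled and phase-shifted, plus noise.
true_z = np.array([1.0 - 0.5j, 0.9 - 0.4j, 0.8 - 0.3j, 0.7 - 0.2j, 0.6 - 0.1j])
rtr = sum(np.abs(z) * np.sin(2 * np.pi * f * t + np.angle(z))
          for f, z in zip(freqs, true_z)) + 0.01 * rng.standard_normal(t.size)

# Joint least-squares fit: columns are sin/cos pairs at each excitation frequency.
design = np.column_stack([c for f in freqs
                          for c in (np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t))])
coef, *_ = np.linalg.lstsq(design, rtr, rcond=None)
in_phase, quadrature = coef[0::2], coef[1::2]
print(np.round(in_phase + 1j * quadrature, 3))    # should be close to true_z
```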
Gender Differences in Numeracy in Indonesia: Evidence from a Longitudinal Dataset
ERIC Educational Resources Information Center
Suryadarma, Daniel
2015-01-01
This paper uses a rich longitudinal dataset to measure the evolution of the gender differences in numeracy among school-age children in Indonesia. Girls outperformed boys by 0.08 standard deviations when the sample was around 11 years old. Seven years later, the gap has widened to 0.19 standard deviations, equivalent to around 18 months of…
A Survey Data Response to the Teaching of Utility Curves and Risk Aversion
ERIC Educational Resources Information Center
Hobbs, Jeffrey; Sharma, Vivek
2011-01-01
In many finance and economics courses as well as in practice, the concept of risk aversion is reduced to the standard deviation of returns, whereby risk-averse investors prefer to minimize their portfolios' standard deviations. In reality, the concept of risk aversion is richer and more interesting than this, and can easily be conveyed through…
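The quantity being minimised in that textbook framing is the portfolio standard deviation √(wᵀΣw); the weights and covariance matrix below are illustrative assumptions.

```python
# Sketch: portfolio standard deviation from asset weights and a return
# covariance matrix (values are purely illustrative).
import numpy as np

weights = np.array([0.6, 0.4])
cov = np.array([[0.04, 0.01],        # annualised return covariance (assumed)
                [0.01, 0.09]])

portfolio_var = weights @ cov @ weights
portfolio_sd = np.sqrt(portfolio_var)
print(f"portfolio standard deviation: {portfolio_sd:.3f}")   # ~0.183
```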
On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Brown, Scott
2007-01-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…
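One mechanism consistent with such a linear relation is shown as a toy simulation below (an illustration, not the authors' analysis): if RT distributions across conditions share a common shape and differ by a multiplicative scale plus a fixed shift, the standard deviation is an affine function of the mean.

```python
# Sketch: RT distributions that differ only by a scale factor (plus a fixed
# shift) produce standard deviations that grow linearly with the mean.
import numpy as np

rng = np.random.default_rng(6)
shift = 200.0                                     # ms, fixed non-decision time (assumed)
for scale in (100.0, 150.0, 200.0, 250.0):
    rts = shift + scale * rng.lognormal(mean=0.0, sigma=0.5, size=50_000)
    print(f"mean = {rts.mean():7.1f} ms, SD = {rts.std(ddof=1):6.1f} ms")
```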